11 June 2014

Liberté, égalité, fraternité! with Lemonade

Ligand efficiency has a long and glorious history in FBDD, discussed in depth.  Most recently, it was the subject of a LONG discussion.  Since its inception, FBDD has tried to fit in, to be like the cool kids as it were.  "Regular" medchem had the Rule of 5, so Astex gave us the Rule of 3, aka the Voldemort Rule.  The principles of FBDD just make sense, but overturning entrenched dogma is hard.  So simple metrics were devised to explain to people why smaller is better.  Then groups tried to create better and more predictive metrics: we can add lipophilicity, or log D, or make it empirical.  The more complicated it is, the better it must be, obviously.  The search for a GUM (Grand Unifying Metric) is ongoing.

Then Shultz had to come along and assault the Bastille.  So, to the walls, men!  Defend the King!  And defended he was, but there was no peace.  Trouble was brewing, and now another assault has come.  Peter Kenny and his compatriots have taken another shot at the King, with the completely unprovocative title "Ligand Efficiency Metrics Considered Harmful".  The paper reviews current LEMs (Ligand Efficiency Metrics) and proposes a better approach.  As Pete has said at length, one of his major problems is the choice of an arbitrary standard state, as he laid out in this comment.  Of course, the real issue is the intercept and what that means.

What are the key problems the authors point out?
  • Correlation Inflation
  • Defining assumptions are arbitrary
  • Scaling (dividing measured activity by a physicochemical property) and offsetting (subtracting a physicochemical property from measured activity) are both used, but no one has ever justified why one or the other is chosen for a given LEM
  • Scaling assumes a linear relationship between activity and property (zero intercept)
  • Offsetting assumes the trend of activity against the property has unit slope
  • LEMs are not quantitative, but are presented as such.
  • Units matter!  ΔG = RT ln(Kd/C), where C is the standard-state concentration.  Substituting IC50 into this equation is easy to do, but not at all correct.  A more subtle example of the problem is the definition of LLE_AT as the sum of a (dimensionless) number and a quantity with units of molar energy per heavy atom.  (A small numerical sketch of the standard-state issue follows below.)
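
To make the "arbitrary assumptions" and "units matter" points concrete, here is a minimal numerical sketch (mine, not the authors'; the two compounds and their affinities are invented) showing that the value of LE, and even which of two compounds looks "more efficient", changes with the choice of standard-state concentration C:

```python
# Minimal sketch (invented compounds, not from the paper): how LE, and even the
# rank ordering of two compounds, depends on the arbitrary choice of the
# standard-state concentration C in dG = RT*ln(Kd/C).
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.0      # temperature, K

def ligand_efficiency(kd_molar, heavy_atoms, c_standard=1.0):
    """LE = -dG / HA, with dG = RT*ln(Kd / C)."""
    dG = R * T * math.log(kd_molar / c_standard)   # kcal/mol
    return -dG / heavy_atoms                        # kcal/mol per heavy atom

# Two invented compounds: A is a weaker, smaller fragment; B is a larger, tighter lead.
A = dict(kd=1e-6, ha=20)   # 1 uM, 20 heavy atoms
B = dict(kd=1e-9, ha=36)   # 1 nM, 36 heavy atoms

for c in (1.0, 1e-3):      # standard state of 1 M vs 1 mM
    le_a = ligand_efficiency(A["kd"], A["ha"], c)
    le_b = ligand_efficiency(B["kd"], B["ha"], c)
    winner = "A" if le_a > le_b else "B"
    print(f"C = {c:g} M: LE(A) = {le_a:.2f}, LE(B) = {le_b:.2f} -> {winner} looks 'better'")
```

With a 1 M standard state the fragment wins; with a 1 mM standard state the lead does.  Nothing about the compounds changed, only the convention.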
So, this is a fun paper to read, and I highly recommend it.  The authors go into exquisite detail about what each and EVERY factor that goes into a LEM means, how it is used, and more importantly how it SHOULD be used.  The first conclusion is that there is no scientific basis for any of the assumptions behind scaling or offsetting.  Instead, they recommend that the data be modeled to determine the trend; only in this way can a given compound be said to have beaten the trend.  The King is dead; long live the King?
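
For those who want to see what that recommendation looks like in practice, here is a minimal sketch (my own illustration with invented data, not code from the paper): fit the observed trend of activity against the risk factor, then judge each compound by its residual rather than by a ratio with an assumed zero intercept.

```python
# Minimal sketch of "model the data, then use residuals": invented pIC50 values
# for a small series are fitted against heavy atom count, and each compound is
# judged by how far it sits above or below the fitted trend.
import numpy as np

# Invented data: heavy atom counts and measured pIC50 values for a small series.
heavy_atoms = np.array([12, 15, 18, 22, 25, 28, 32, 36])
pIC50       = np.array([4.1, 4.8, 5.0, 5.9, 6.1, 6.4, 7.3, 7.2])

# Model the trend in the data itself (straight line, slope AND intercept fitted,
# rather than forcing a zero intercept as scaling does).
slope, intercept = np.polyfit(heavy_atoms, pIC50, deg=1)
residuals = pIC50 - (slope * heavy_atoms + intercept)

print(f"fitted trend: pIC50 = {slope:.3f} * HA + {intercept:.2f}")
for ha, act, res in zip(heavy_atoms, pIC50, residuals):
    verdict = "beats the trend" if res > 0 else "below the trend"
    print(f"HA={ha:2d}  pIC50={act:.1f}  residual={res:+.2f}  ({verdict})")
```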

They are right, of course, and IMNSHO it doesn't matter.  I still aver that ligand efficiency metrics are useful.  I can measure accurately with a meter stick that is only 95 cm long, as long as I know it is 95 cm.  The same goes for any LEM: understand its limitations and use it appropriately.  And remember, it's a guide, not a hard and fast rule.

Pete and colleagues set up the obvious acronym, LEMONS (Ligand Efficiency Metrics of No Substance).  And when life gives you LEMONS, you make LEMONADE (Ligand Efficiency Metrics with No Additional Determinate Evaluation). 

3 comments:

Peter Kenny said...

I like LEMONS.

The main take-home message from 'Ligand efficiency metrics considered harmful' is that if you want to normalize activity by a risk factor then you really should use the trend observed in the data rather than an arbitrarily assumed trend.  If the trend in activity is a straight line passing through the origin, then LE is a perfectly acceptable measure of compound quality.  However, you still need to model the data to show that this is the case, and if you do that, a residual is a more useful measure of the extent to which activity beats the trend.  We discussed this last year in http://dx.doi.org/10.1007/s10822-013-9655-5 and it is unfortunate that neither of the opponents in the recent debate picked it up.

We also challenged the practice of combining results from multiple assays in LE-based analysis.  In some situations it may be OK to do this, but the burden of proof is on the person doing it to show that it is OK.  'Correcting' LE for molecular size looks distinctly shaky.

Anonymous said...

Bearing in mind we have spent decades driving med chem projects on that gloriously tweakable and biologically irrelevant value of IC50, it's amazing we discovered anything.

Teddy sums it up well with his 95 cm meter stick comment.  As long as you accept LEMs as guides and not facts, you are doing OK.  But that relies on scientists understanding data as opposed to blindly following a single value, and understanding data seems to have fallen out of fashion these days.


It does drive me mad when people start plugging their favourite loggable parameter into LE = -RT ln{x}/HA and 'aim for better than 0.3'.


Anonymous said...

So, have we figured out exactly how many fairies are on the head of this particular pin yet?

There is nothing "wrong" with LE, LLE, LELP, or even IC50. It's all in how one uses/interprets/takes action after obtaining these numbers.

And, unfortunately, I think this sort of gets lost in these debates.  All of these recent papers are interesting to read, but for your average (or even above-average) medicinal chemist, the math, and some of the arguments, quickly become a little boring.  (Sorry, I said it.  And my son is a mathematics major.)

I find some of these metrics useful on some projects.  On others, not so much.  LLE can certainly help identify interesting trends in certain series of molecules, but for another series against the same target it washes out.  Why?  Wish I knew.

To take a lesson from Wall Street, the trend is your friend. And when the trend is not your friend, move on to something else. Too often, trying to figure out why a metric isn't working is just not productive (although it can result in lots of papers).