Ligand efficiency (LE) has been
discussed repeatedly and extensively on Practical Fragments, most
recently in September. Two criticisms are its dependence on standard state and
the observation that larger molecules frequently have lower ligand efficiencies
than smaller molecules. In a just-published open-access ACS Med. Chem. Lett.
paper, Hongtao Zhao proposes a new metric, xLE, to address these concerns.
LE is defined as the negative Gibbs
free energy of binding (ΔG) divided by the number of non-hydrogen (or heavy)
atoms, and of course ΔG is state-dependent. The standard state assumes 298 K and a concentration of 1 M, choices that some people see as arbitrary since few
biologically relevant molecules ever achieve concentrations near 1 M. To remove
the dependence on standard state, Zhao proposes to remove the translational
entropy term of the unbound ligand from the free energy calculation.
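For concreteness, here is a minimal Python sketch of the standard LE calculation (my own illustration, not from the paper); expressing Kd in molar units is exactly where the 1 M standard state sneaks in:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)
T = 298.0     # standard temperature, K

def ligand_efficiency(kd_molar, n_heavy):
    """LE = -dG / N_heavy, with dG = RT*ln(Kd).

    Kd in molar units means ln(Kd) is implicitly ln(Kd / 1 M):
    change the standard state and the number changes."""
    dg = R * T * math.log(kd_molar)  # kcal/mol, negative for binders
    return -dg / n_heavy

print(round(ligand_efficiency(1e-3, 12), 2))  # 1 mM, 12 atoms -> 0.34
```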
Zhao also addresses the second criticism,
that larger molecules often have lower ligand efficiencies. This phenomenon was
observed in an (open-access) 1999 paper titled “The maximal affinity of
ligands,” which found that, beyond a certain threshold, larger ligands do not
have stronger affinities; there are very few femtomolar binders even among the
largest small molecules. Thus, Zhao proposes attenuating the size dependence.
The new metric, xLE, is defined
as follows:
xLE = (5.8 + 0.9*ln(Mw) - ΔG) / (a*N^α) - b
where N is the number of non-hydrogen atoms, Mw is the molecular weight, α is chosen to reduce size dependence, and a and b are
“scaling variables.” Zhao chooses α = 0.2, a = 10, and b = 0.5, with little explanation.
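In code, the definition looks like this (again my own sketch, with ΔG in kcal/mol and Mw in daltons; the defaults are Zhao's reported values):

```python
import math

def xle(dg_kcal, mw, n_heavy, alpha=0.2, a=10.0, b=0.5):
    """xLE as defined in the viewpoint.

    The 5.8 + 0.9*ln(Mw) term offsets the translational entropy of the
    unbound ligand, and N^alpha attenuates the size dependence."""
    return (5.8 + 0.9 * math.log(mw) - dg_kcal) / (a * n_heavy**alpha) - b
```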
To assess performance, Zhao examined
nearly 14,000 measured affinities from PDBbind. When plotted by number of
atoms, median affinity increased up to about 35 heavy atoms but then leveled
off. Median LE values decreased sharply from 6 to 12 heavy atoms and then
leveled off somewhere in the 20s. But median xLE values were consistent
regardless of ligand size.
Zhao also examined LE and xLE
changes for 175 successful fragment-to-lead studies from our annual series of J.
Med. Chem. perspectives. LE decreased from fragment to lead for 48% of
these, but xLE increased for all but a single pair.
And this, in my opinion, is a
problem.
In the seminal 2004 paper, LE was
proposed as "a simple ‘ready reckoner’, which could be used to assess the
potential of a weak lead to be optimized into a potent, orally bio-available
clinical candidate." The metric was particularly important before FBLD was
widely accepted, when chemists were even less inclined to work on weak binders.
Here is the situation for which
LE was devised. Imagine two molecules, compounds 1 and 2. The first has just 12 non-hydrogen
atoms, a molecular weight of 160, and a modest 1 mM affinity for a target,
similar to some fragments that have yielded clinical compounds. The second is
much larger: 38 non-hydrogen atoms, a molecular weight of 500, and 10 µM
affinity for the same target. Considering potency alone, compound 2 is the
winner.
However, the LE for compound 1 is
a respectable 0.34 kcal/mol/atom, while the LE for compound 2 is 0.18
kcal/mol/atom. So while a 10 µM HTS hit may initially look appealing, the LE
suggests that this is an inefficient binder, and further optimization may
require adding too much molecular weight to get to a desired low nanomolar
affinity.
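For those who want to check the arithmetic, a quick sketch at 298 K:

```python
import math

RT = 1.987e-3 * 298.0  # kcal/mol at standard temperature

for name, kd, n in [("compound 1", 1e-3, 12), ("compound 2", 1e-5, 38)]:
    le = -RT * math.log(kd) / n  # LE = -dG / N_heavy
    print(f"{name}: LE = {le:.2f} kcal/mol/atom")
# compound 1: LE = 0.34; compound 2: LE = 0.18
```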
In contrast, the xLE values for
both compounds are nearly identical, 0.38, and so this metric would not help a
chemist prioritize which hit to pursue. In other words, xLE does not provide
the insight for which LE was created. It
might even lead to suboptimal choices.
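And the corresponding xLE arithmetic, under the same assumptions:

```python
import math

RT = 1.987e-3 * 298.0  # kcal/mol at standard temperature

def xle(kd, mw, n):
    """xLE with Zhao's alpha=0.2, a=10, b=0.5; Kd in molar."""
    dg = RT * math.log(kd)
    return (5.8 + 0.9 * math.log(mw) - dg) / (10.0 * n**0.2) - 0.5

print(f"compound 1: xLE = {xle(1e-3, 160.0, 12):.2f}")  # 0.38
print(f"compound 2: xLE = {xle(1e-5, 500.0, 38):.2f}")  # 0.38
```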
Moreover, unlike LE, xLE is
non-intuitive. And finally, with three scaling or normalization factors,
xLE is arguably even more arbitrary than a metric dependent on the
widely accepted definition of standard state.
Personally I find the practical
applications of xLE limited, but I welcome your thoughts.
1 comment:
Thank you for commenting on the viewpoint—much appreciated. xLE actually has only one parameter; the other two are used solely to place its distribution on the same scale as LE. One can remove those two by setting a = 1 and b = 0 without changing the conclusions. The parameter α was determined empirically to minimize the dependence of the median xLE on molecular size; I realize the original wording may have been misleading.
In the context of LE for fragments with 12 heavy atoms, the “respectable 0.34 kcal/mol/atom” warrants a second thought. In contrast, xLE indicates that both compounds have relatively low binding efficiency compared with the median of 0.55. In the viewpoint, xLE is recommended as an efficiency metric to guide potency optimization rather than as an order-ranking tool for prioritizing starting points.
When xLE falls in the first quartile, further potency optimization will likely require adding more heavy atoms. For both compounds in your example, it may be necessary to identify any suboptimal interactions and optimize those before increasing heavy atom count. In HTS triage, ranking compounds by efficiency metrics is understandable: we prefer to start from compounds that make optimal interactions and are easy to elaborate by adding atoms. However, do we truly believe that 0.34 kcal/mol/atom for a 12-heavy-atom fragment is a good starting point? What would “fit quality” indicate for compound 1? If we argue it is always easier to start with smaller compounds, wouldn’t heavy atom count alone suffice (albeit with an implicit and arbitrary activity cutoff)?
One point I considered, but did not include in the viewpoint, is how to choose a dataset to empirically set the parameter α, and why the median is used instead of maximal affinity.