Ligand efficiency (LE) has been
discussed repeatedly and extensively on Practical Fragments, most
recently in September. Two criticisms are its dependence on standard state and
the observation that larger molecules frequently have lower ligand efficiencies
than smaller molecules. In a just-published open-access ACS Med. Chem. Lett.
paper, Hongtao Zhao proposes a new metric, xLE, to address these concerns.
LE is defined as the negative Gibbs
free energy of binding (ΔG) divided by the number of non-hydrogen (or heavy)
atoms, and of course ΔG is state-dependent. The standard state assumes 298 K
and 1 M concentrations, choices that some people see as arbitrary since few
biologically relevant molecules ever achieve concentrations near 1 M. To remove
the dependence on standard state, Zhao proposes to remove the translational
entropy term of the unbound ligand from the free energy calculation.
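For a concrete sense of the arithmetic, here is a minimal Python sketch of LE at the usual standard state (the function name and the example Kd are my own illustration, not anything from the paper; RT ≈ 0.593 kcal/mol at 298 K):

```python
import math

RT = 0.593  # kcal/mol at 298 K, the standard-state temperature

def ligand_efficiency(kd_molar, heavy_atoms):
    """LE = -dG / N, with dG = RT*ln(Kd) relative to a 1 M standard state."""
    delta_g = RT * math.log(kd_molar)  # kcal/mol; negative when Kd < 1 M
    return -delta_g / heavy_atoms

# A hypothetical 10 uM binder with 25 heavy atoms
print(round(ligand_efficiency(1e-5, 25), 2))  # ~0.27 kcal/mol per heavy atom
```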
Zhao also addresses the second criticism,
that larger molecules often have lower ligand efficiencies. This phenomenon was
observed in an (open-access) 1999 paper titled “The maximal affinity of
ligands,” which found that, beyond a certain threshold, larger ligands do not
have stronger affinities; there are very few femtomolar binders even among the
largest small molecules. Thus, Zhao proposes attenuating the size dependence.
The new metric, xLE, is defined
as follows:
xLE = (5.8 + 0.9 × ln(MW) − ΔG) / (a × N^α) − b
where N is the number of non-hydrogen atoms, MW is the molecular weight, α is
an exponent chosen to reduce the size dependence, and a and b are “scaling
variables.” He chooses α = 0.2, a = 10, and b = 0.5, with little explanation.
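To make the definition concrete, here is one way the formula might be coded, taking Zhao’s published constants at face value (the xle helper and the example ligand below are my own sketch, not code or data from the paper):

```python
import math

RT = 0.593  # kcal/mol at 298 K

def xle(kd_molar, mw, heavy_atoms, alpha=0.2, a=10.0, b=0.5):
    """xLE = (5.8 + 0.9*ln(MW) - dG) / (a * N**alpha) - b, as defined by Zhao."""
    delta_g = RT * math.log(kd_molar)  # kcal/mol
    return (5.8 + 0.9 * math.log(mw) - delta_g) / (a * heavy_atoms ** alpha) - b

# A hypothetical 1 uM ligand with MW 300 and 22 heavy atoms
print(round(xle(1e-6, 300, 22), 2))  # ~0.53
```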
To assess performance, Zhao examined
nearly 14,000 measured affinities from PDBbind. When plotted by number of
atoms, median affinity increased up to about 35 heavy atoms but then leveled
off. Median LE values decreased sharply from 6 to 12 heavy atoms and then
leveled off somewhere in the 20s. But median xLE values were consistent
regardless of ligand size.
Zhao also examined LE and xLE
changes for 175 successful fragment-to-lead studies from our annual series of J.
Med. Chem. perspectives. LE decreased from fragment to lead for 48% of
these, but xLE increased for all but a single pair.
And this, in my opinion, is a
problem.
In the seminal 2004 paper, LE was
proposed as "a simple ‘ready reckoner’, which could be used to assess the
potential of a weak lead to be optimized into a potent, orally bio-available
clinical candidate." The metric was particularly important before FBLD was
widely accepted, when chemists were even less inclined to work on weak binders.
Here is the situation for which
LE was devised. Imagine two molecules, compounds 1 and 2. The first has just 12 non-hydrogen
atoms, a molecular weight of 160, and a modest 1 mM affinity for a target -
similar to some fragments that have yielded clinical compounds. The second is
much larger: 38 non-hydrogen atoms, a molecular weight of 500, and 10 µM
affinity for the same target. Considering potency alone, compound 2 is the
winner.
However, the LE for compound 1 is
a respectable 0.34 kcal/mol/atom, while the LE for compound 2 is 0.18
kcal/mol/atom. So while a 10 µM HTS hit may initially look appealing, the LE
suggests that this is an inefficient binder, and further optimization may
require adding too much molecular weight to get to a desired low nanomolar
affinity.
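Running the numbers (ΔG = RT × ln(Kd) at 298 K; a quick back-of-the-envelope check of the values quoted above, not a calculation from the paper):

```python
import math

RT = 0.593  # kcal/mol at 298 K

def le(kd_molar, heavy_atoms):
    return -RT * math.log(kd_molar) / heavy_atoms

print(round(le(1e-3, 12), 2))  # compound 1: 1 mM, 12 heavy atoms -> 0.34
print(round(le(1e-5, 38), 2))  # compound 2: 10 uM, 38 heavy atoms -> 0.18
```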
In contrast, the xLE values for
both compounds are nearly identical, 0.38, and so this metric would not help a
chemist prioritize which hit to pursue. In other words, xLE does not provide
the insight for which LE was created. It
might even lead to suboptimal choices.
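The same back-of-the-envelope script, applied to xLE with Zhao’s α = 0.2, a = 10, and b = 0.5 and the molecular weights given above, reproduces the near-identical values:

```python
import math

RT = 0.593  # kcal/mol at 298 K

def xle(kd_molar, mw, heavy_atoms, alpha=0.2, a=10.0, b=0.5):
    dg = RT * math.log(kd_molar)
    return (5.8 + 0.9 * math.log(mw) - dg) / (a * heavy_atoms ** alpha) - b

print(round(xle(1e-3, 160, 12), 2))  # compound 1 -> 0.38
print(round(xle(1e-5, 500, 38), 2))  # compound 2 -> 0.38
```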
Moreover, unlike LE, xLE is
non-intuitive. And finally, with three scaling or normalization factors,
xLE is arguably even more arbitrary than a metric dependent on the
widely-accepted definition of standard state.
Personally, I find the practical
applications of xLE limited, but I welcome your thoughts.