Showing posts with label LLEAT. Show all posts

03 August 2015

Fragments and HTS vs BCATm

One of the themes throughout this blog is that fragments are useful not just in and of themselves, but as part of a broader tool kit, what Mark Whittaker referred to as fragment-assisted drug discovery, or FADD. A nice example of this has just been published in J. Med. Chem. by Sophie Bertrand and colleagues at GlaxoSmithKline and the University of Strathclyde.

The researchers were interested in mitochondrial branched-chain aminotransferase (BCATm), an enzyme that transforms leucine, isoleucine, and valine into their corresponding α-keto acids. Knockout mouse studies had suggested that this might be an attractive target for obesity and dyslipidemia, but there’s nothing like a chemical probe to really (in)validate a target. To find one, the researchers performed both fragment and high-throughput screens (HTS).

The full results from the fragment screen have not yet been published, but the current paper notes that the researchers screened 1056 fragments using biochemical, STD-NMR, and thermal shift assays. Compound 1 came up as a hit in all three assays, and despite modest potency and ligand efficiency, it did have impressive LLEAT. The researchers were unable to determine a crystal structure of this fragment bound to the protein, but STD-NMR screens of related fragments yielded very similar hits that could be successfully soaked into crystals of BCATm.


The HTS also produced hits, notably compound 4, which is clearly similar to compound 1. In addition to its increased biochemical potency, it also displayed good cell activity. Moreover, a crystal structure revealed that the bromobenzyl substituent bound in an induced pocket that did not appear in the structure with the fragment, or indeed in any other structures of BCATm.

The researchers merged the fragment hits with the HTS hits to get molecules such as compound 7, with a satisfying boost in potency. Interestingly, the fragment-derived core consistently gave a roughly 10-fold boost in potency compared to the triazolo compounds from HTS. Comparison of crystal structures suggested that this was due to the displacement of a high-energy water molecule by the nitrile.

Extensive SAR studies revealed that the propyl group could be extended slightly but most other changes at that position were deleterious. The bromobenzyl substituent was more tolerant of substitutions, including an aliphatic replacement, though this abolished cell activity. Compound 61 turned out to be among the best molecules in terms of potency and pharmaceutical properties, including an impressive 100% oral bioavailability and a 9.2 hour half-life in mice. Moreover, this compound led to higher levels of leucine, isoleucine, and valine when mice were fed these amino acids.

This is a lovely case study of using information from a variety of sources to enable medicinal chemistry. Like other examples of FADD, one could argue as to whether the final molecule would have been discovered without the fragment information, but it probably at least accelerated the process. More importantly, molecules such as compound 61 will help to answer the question of whether BCATm will be a viable drug target. 

06 October 2014

Physical properties in drug design

This is the title of a magisterial review by Rob Young of GlaxoSmithKline in Top. Med. Chem. At 68 pages it is not a quick read, but it does provide ample evidence that physical properties are ignored at one’s peril. It also offers a robust defense of metrics such as ligand efficiency.

The monograph begins with a restatement of the problem of molecular obesity: the tendency of drug leads to be too lipophilic. I think everyone – even Pete Kenny – agrees that lipophilicity is a quality best served in moderation. After this introduction Young provides a thorough review of physical properties including lipophilicity/hydrophobicity, pKa, and solubility. This is a great resource for people new to the field or those looking for a refresher.

In particular, Young notes the challenges of actually measuring qualities such as lipophilicity. Most people use log P, the partition coefficient of a molecule between water and 1-octanol. However, it turns out to be difficult to measure log P experimentally for highly lipophilic and/or insoluble compounds. Also, as Kenny has pointed out, the choice of octanol is somewhat arbitrary. Young argues that chromatographic methods for determining lipophilicity are operationally easier, more accurate, and more relevant. The idea is to measure the retention times of a series of compounds on a C-18 column eluted with buffer/acetonitrile at various pH values to generate “Chrom log D” values. Although a stickler could argue that this too relies on arbitrary choices (why acetonitrile? why a C-18 column?), it seems like a reasonable approach for rapidly assessing lipophilicity.

Next, Young discusses the influence of aromatic ring count on various properties. Although the strength of the correlation between Fsp3 and solubility has been questioned, what’s not up for debate is the fact that the majority of approved oral drugs have 3 or fewer aromatic rings.

Given that 1) lipophilicity should be minimized and 2) most drugs contain at most just a few aromatic rings, researchers at GlaxoSmithKline came up with what they call the Property Forecast Index, or PFI:

PFI = (Chrom log D7.4) + (# of aromatic rings)

An examination of internal programs suggested that molecules with PFI > 7 were much more likely to be problematic in terms of solubility, promiscuity, and overall development. PFI looks particularly predictive of solubility, whereas there is no correlation between molecular weight and solubility. In fact, a study of 240 oral drugs (all with bioavailability > 30%) revealed that 89% of them have PFI < 7.
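The PFI arithmetic above is simple enough to sketch in a few lines. This is just an illustrative helper, not code from the review, and the example values are hypothetical:

```python
# Property Forecast Index, as defined in the text:
#   PFI = Chrom logD(7.4) + number of aromatic rings
# Molecules with PFI > 7 were found to be more likely to be problematic.

def pfi(chrom_logd_74: float, n_aromatic_rings: int) -> float:
    """Return the Property Forecast Index for a compound."""
    return chrom_logd_74 + n_aromatic_rings

# A hypothetical compound with Chrom logD(7.4) = 3.2 and 2 aromatic rings:
print(pfi(3.2, 2))  # ~5.2, comfortably under the PFI > 7 risk threshold
```

In practice the aromatic ring count could come from a cheminformatics toolkit, while Chrom logD(7.4) is the measured chromatographic value described above.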

Young summarizes: the simple mantra should be to “escape from flatlands” in addition to minimising lipophilicity.

The next two sections discuss how the pharmacokinetic (PK) profile of a drug is affected by its physical properties. There is a nice summary of how various types of molecules are treated by relevant organs, plus a handy diagram of the human digestive tract, complete with volumes, transit times, and pH values. There is also an extensive discussion of the correlation between physical properties and permeability, metabolism, hERG binding, promiscuity, serum albumin binding, and intrinsic clearance. The literature is sometimes contradictory (see for example the recent discussion here), but in general higher lipophilicity and more aromatic rings are deleterious. Overall, PFI seems to be a good predictor.

The work concludes with a discussion of various metrics, arguing that drugs tend to have better ligand efficiency (LE) and LLE values than other inhibitors for a given target. For example, in an analysis of 46 oral drugs against 25 targets, only 2.7% of non-kinase inhibitors have better LE and LLE values than the drugs (the value is 22% for kinases). Similarly, the three approved Factor Xa inhibitors have among the highest LLEAT values of any compounds reported.

Some of the criticism of metrics has focused on their arbitrary nature; for example, the choice of standard state. However, if metrics correlate with a molecule's potential to become a drug, it doesn’t really matter precisely how they are defined.

The first word in the name of this blog is Practical. The statistician George Box once wrote, “essentially, all models are wrong, but some are useful.” Young provides compelling arguments that accounting for physical properties – even with imperfect models and metrics – is both practical and useful.

Young says essentially this as one sentence in a caveat-filled paragraph:

The complex requirements for the discovery of an efficacious drug molecule mean that it is necessary to maintain activity during the optimisation of pharmacokinetics, pharmacodynamics and toxicology; these are all multi-factorial processes. It is thus perhaps unlikely that a simple correlation between properties might be established; good properties alone are not a guarantee of success and some effective drugs have what might be described as sub-optimal properties. However, it is clear that the chances of success are much greater with better physical properties (solubility, shape and lower lipophilicity). These principles are evident in both the broader analyses with attrition/progression as a marker and also in the particular risk/activity values in various developability screens.

In other words, metrics and rules should not be viewed as laws of nature, but they can be useful guidelines to control physical properties.

03 February 2014

How weak is too weak for PPIs?

Ben Perry brought up an interesting question in a comment to a recent post about fragments that bind at a protein-protein interface: “At what level of binding potency does one accept that there may not be any functional consequence?” I suspect the answer will vary in part based on the difficulty and importance of the target, and many protein-protein interactions (PPIs) rank high on both counts. In a recent (and open-access!) paper in ACS Med. Chem. Lett., Alessio Ciulli and collaborators at the University of Dundee, the University of Cambridge, and the University of Coimbra (Portugal) ask how far NMR can be pushed to find weak fragments.

The researchers started with a low micromolar inhibitor of the interaction between the von Hippel-Lindau protein and the alpha subunit of hypoxia-inducible factor 1 (pVHL:HIF-1α), an interaction important in cellular oxygen sensing. The team had previously deconstructed this molecule into component fragments, but they were unable to detect binding of the smallest fragments.

In the new study, the researchers again deconstructed the inhibitor into differently sized fragments and used three ligand-detected NMR techniques (STD, CPMG, and WaterLOGSY) to try to identify binders. As before, under standard conditions of 1 mM ligand and 10 µM protein, none of the smallest fragments were detected. However, by maintaining ligand concentration and increasing the protein concentration to 40 µM (to increase the fraction of bound ligand) or increasing concentrations of both protein (to 30 µM) and ligand (to 3 mM), the researchers were able to detect binding of fragments that adhere to the rule of three.
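The logic here is that ligand-detected NMR signal depends on the fraction of ligand that is bound, which a simple 1:1 binding model makes explicit. The following sketch uses the exact binding quadratic; the concentrations and the ~3 mM Kd are illustrative values in the range discussed in the post, not data from the paper:

```python
import math

def fraction_ligand_bound(p_total: float, l_total: float, kd: float) -> float:
    """Fraction of total ligand in the protein-ligand complex (1:1 model).

    Solves [PL] from the quadratic form of Kd = [P][L]/[PL] using total
    concentrations. All inputs in the same units (here, uM).
    """
    b = p_total + l_total + kd
    pl = (b - math.sqrt(b * b - 4.0 * p_total * l_total)) / 2.0
    return pl / l_total

kd = 3000.0  # uM, i.e. a ~3 mM fragment
print(fraction_ligand_bound(10.0, 1000.0, kd))  # ~0.0025 at 10 uM protein, 1 mM ligand
print(fraction_ligand_bound(40.0, 1000.0, kd))  # ~0.0099 at 40 uM protein: ~4x more bound ligand
```

Quadrupling the protein concentration roughly quadruples the bound fraction for such a weak binder, which is why the higher-concentration conditions rescued the undetectable fragments.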

Of course, at these high concentrations, the potential for artifacts also increases, but the researchers were able to verify binding by isothermal titration calorimetry (ITC) and competition with a high-affinity peptide. They were also able to use STD data to show which regions of fragments bind to the protein, suggesting that the fragments bind similarly on their own as they do in the parent molecule. (Note that this is in contrast to a deconstruction study on a different PPI.) Even more impressively for a large (42 kDa) protein, the researchers were able to use 2-dimensional NMR (1H-15N HSQC) to confirm the binding sites.

Last year we highlighted a study that deconstructed an inhibitor of the p53/MDM2 interaction. In that case, the researchers were only able to find super-sized fragments, and they argued that for PPIs the rule of three should be relaxed. The current paper is a nice illustration that very small, weak fragments can in fact be detected for PPIs, though you may need to push your biophysical techniques to the limit.

But back to the original question of how weak is too weak. With Kd values from 2.7-4.9 mM, these are truly feeble fragments. Nonetheless, they could in theory have been viable starting points had they been found prospectively. That assumes, though, that these fragments would have been recognized as useful and properly prioritized. The ligand efficiencies (LE) of all the fragments, while not great, are not beyond the pale for PPIs. Previous research had suggested that much of the overall binding affinity in compound 1 comes from the hydroxyproline fragment (compound 6, which was originally derived from the natural substrate). Not discussed in the paper, but perhaps more significantly, the LLE (LipE) and LLEAT values are best for compound 6, which despite having the lowest affinity is the only compound that could be crystallographically characterized bound to the protein. In the Great Debate over metrics, this suggests that LLE and LLEAT may be more useful than simple LE for comparing very weak fragments.

09 September 2013

More thoughts on the Astex-Otsuka marriage

Teddy already highlighted the planned $866 million acquisition of Astex by Otsuka, and I thought I’d add a bit of context. Astex Therapeutics was founded in 1999, just three years after publication of the Abbott SAR by NMR paper that arguably launched widespread interest in fragment-based lead discovery. From the outset, Astex focused heavily on crystallography, which was somewhat unusual at the time; Vicki Nienaber’s seminal SAR by Crystallography paper only came out in 2000.

Astex researchers have made many practical contributions to FBLD, from the (sometimes controversial) rule of three to the LLEAT metric to the Astex Viewer familiar to anyone who has seen a presentation from the company. More than 100 publications have come from Astex, including one of the earliest comprehensive reviews of the field. And the company has also delivered: of 28 fragment-derived compounds to make it into the clinic, Astex has had a role in nearly a quarter, including AT13387, AT7519, AT9283, JNJ-42756493 (with J&J), LEE011 (with Novartis), AT13148, and AZD5363 (with AstraZeneca and ICR).

In terms of price, $866 million is indeed a tidy sum, more than the up-front Daiichi Sankyo paid for Plexxikon (though a bit under the total deal value of $935 million) and more than an order of magnitude higher than the $64 million Lilly paid for SGX back in the dark days of 2008. Even with close to a billion dollars on the table, some are calling the price too low, with one analyst suggesting Astex is worth $13 per share rather than the $8.50 offered by Otsuka.

Of course, the Astex pipeline is not entirely fragment-based; a merger with SuperGen in 2011 brought in a marketed product (decitabine) as well as other clinical compounds. Still, from what Otsuka has said publicly, it does appear that the FBLD technology was a major driver: it is the first item mentioned under the heading “Objectives of the Acquisition.”

As Derek Lowe pointed out over at In the Pipeline, Japanese firms have a good track record of not breaking or shuttering acquired companies; last I checked Plexxikon was still going strong. Hopefully this will hold true for Astex as well. Practical Fragments offers congratulations and wishes continued success to everyone involved.

25 August 2013

Myriad metrics – but which are useful?

Practical Fragments recently introduced WTF as a light-hearted jab at the continuing proliferation of metrics to evaluate molecules, but there is an underlying problem: which ones are useful? In a provocative paper just published online in Bioorg. Med. Chem. Lett. (and also discussed over at In the Pipeline) Michael Shultz asks:

If one molecular change can theoretically alter 18 parameters, two shapes, the rules of 5, 3/75, 4/400 and ‘two thumbs’ while simultaneously affecting at least nine composite parameters and countless different methods of representing data, how is a practicing medicinal chemist to know if any specific modification was actually beneficial?

Shultz focuses on three parameters in depth: ligand efficiency (LE), ligand-efficiency-dependent lipophilicity (LELP), and lipophilic ligand efficiency (LLE, also referred to as lipophilic efficiency or LipE). He conducts a number of thought experiments to see how these metrics change when, for example, a methyl group is changed to a t-butyl group or a methyl sulfone. He also examines how the metrics perform against historical data from Novartis lead-optimization programs.

One problem with LE is that, although it was introduced to normalize potency against size, it is still highly dependent on the number of heavy atoms (heavy atom count, or HAC): adding one atom to a small fragment has a more dramatic effect on LE than adding one atom to a larger molecule. This has led to metrics in which larger molecules are treated more leniently, but because of the way all these metrics are mathematically defined, none achieves completely size-independent normalization.
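This size dependence is easy to see numerically. A minimal sketch, using the common approximation LE = 1.37 × pKd / HAC (kcal/mol per heavy atom at ~298 K) and illustrative numbers:

```python
# LE's dependence on heavy atom count: adding one "silent" atom (no
# affinity change) costs a fragment far more LE than it costs a lead.

def ligand_efficiency(pkd: float, hac: int) -> float:
    """Approximate LE in kcal/mol per heavy atom at ~298 K."""
    return 1.37 * pkd / hac

# Same potency (pKd = 5), one extra heavy atom with no affinity gain:
fragment_drop = ligand_efficiency(5, 10) - ligand_efficiency(5, 11)
lead_drop = ligand_efficiency(5, 30) - ligand_efficiency(5, 31)
print(round(fragment_drop, 3))  # 0.062 -> the 10-atom fragment's LE falls sharply
print(round(lead_drop, 3))      # 0.007 -> the 30-atom lead barely notices
```

The same chemical change is penalized almost an order of magnitude more heavily for the fragment, which is the asymmetry Shultz objects to.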

More seriously, LE ignores lipophilicity, which seems to be correlated with all sorts of deleterious properties. With a nod to Mike Hann’s “molecular obesity,” Shultz notes that the widely used body mass index (BMI) “cannot distinguish between the truly obese and professional athletes of identical height and weight. Similarly, HAC based composite parameters such as LE cannot distinguish between ‘lean molecular mass’ and groups of real molecular obesity.”

LELP addresses this shortcoming by incorporating clogP, but it has problems of its own. For example, “the effects of lipophilicity are magnified as molecular size increases.” More alarmingly, as clogP approaches zero, LELP becomes increasingly insensitive to both size and potency; a femtomolar binder would have the same LELP as a millimolar binder when clogP = 0.
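The pathology follows directly from the definition LELP = clogP / LE: when the numerator is zero, everything else cancels out. A quick sketch with illustrative values:

```python
# LELP = clogP / LE. When clogP = 0, LELP is zero regardless of
# potency or size, so wildly different molecules become indistinguishable.

def lelp(clogp: float, pkd: float, hac: int) -> float:
    le = 1.37 * pkd / hac  # ligand efficiency, kcal/mol per heavy atom
    return clogp / le

# At clogP = 0, a femtomolar binder (pKd = 15) and a millimolar binder
# (pKd = 3) of the same size score identically:
print(lelp(0.0, 15, 30))  # 0.0
print(lelp(0.0, 3, 30))   # 0.0
```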

In contrast to both LE and LELP, LipE (or LLE) is size-independent, so a change in potency or lipophilicity will produce the same change in LipE no matter the size of the initial molecule. Shultz uses data from two lead optimization programs to show that LipE behaves better than LE or LELP. This is in contrast to a previous report that suggested LELP to be superior to LipE, albeit against a different data set.

Shultz further notes that LipE can be thought of as the tendency of a molecule to bind to a specific protein rather than to bulk octanol:

LipE = pKi - clogP = log([EI]/([E][I])) - log([I]octanol/[I]water)
where E stands for protein and I stands for inhibitor
Although this is a simple consequence of the math, it is a nice way of visualizing an otherwise abstract number. Moreover, it suggests that optimizing for LipE could optimize for enthalpic interactions, a topic Shultz explores in depth in a companion paper.
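The size independence claimed for LipE is also easy to verify: the same 10-fold potency gain at constant lipophilicity moves LipE by exactly one unit whether the starting point is a fragment or a lead. The numbers below are illustrative, not from the paper:

```python
# LipE (LLE) = pKi - clogP. Unlike LE, a given change in potency or
# lipophilicity shifts LipE by the same amount at any molecular size.

def lipe(pki: float, clogp: float) -> float:
    return pki - clogp

# Fragment: pKi 4 -> 5 at constant clogP = 1
# Lead:     pKi 8 -> 9 at constant clogP = 3
print(lipe(5, 1) - lipe(4, 1))  # 1 unit gained by the fragment
print(lipe(9, 3) - lipe(8, 3))  # 1 unit gained by the lead
```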

Overall Shultz raises some excellent points, but I still believe there is value in LE (and LLEAT), particularly in the context of fragments, which usually have low affinity. Ligand efficiency can prioritize molecules that might otherwise be overlooked. For example, it is hard to get too excited over a 1 mM binder, but if the hit has only 8 heavy atoms it could be valuable.

Turning to my own miniature thought experiment, fragments 1 and 2 have very similar LipE values, but the LE of fragment 1 is better, arguably making it the more attractive fragment hit.

Of course, in the end, rules should not be followed slavishly; the most lucrative drug of all time, Pfizer’s atorvastatin, violates Lipinski’s rule of five. Papers like this are important to highlight the problems and inconsistencies that underlie some of our metrics. Ultimately I’ll take biological data and the intuition of a good medicinal chemist over any and every rule of thumb.

What do you think? What role should LE, LELP, and LipE play in drug discovery?

25 August 2011

Journal of Computer-Aided Molecular Design 2011 Special FBDD Issue

The most recent issue of J. Comput. Aided Mol. Des. is entirely devoted to fragment-based drug discovery. This is the second special issue they’ve dedicated to this topic, the first one being in 2009.

Associate Editor Wendy Warr starts by interviewing Sandy Farmer of Boehringer Ingelheim. There are many insights and tips here, and I strongly recommend it for a view of how fragment-based approaches are practiced at one large company. A few quotes give a sense of the flavor.

On corporate environment:
In most cases, the difference between success and failure has little to do with the process and supporting technologies (they work!), but rather much more to do with the organizational structure to support FBDD and the organizational mindset to accept the different risk profile and resource model behind FBDD.
On success rates:
We have found that FBDD has truly failed in only 2-3 targets out of over a dozen or so.
On cost:
FBDD must be viewed as an investment opportunity, not a manufacturing process. And the business decisions surrounding FBDD should factor that in. FBDD is more about the opportunity cost (of not doing it) than the “run” cost (of doing it).
On expertise:
Successful FBDD still requires a strong gut feeling.
On small companies:
In the end, FBDD will always have a lower barrier to entry than HTS for a small company wanting to get into the drug-discovery space.

The key to success for such companies is to identify or construct some technology platform.
There’s a lot of other really great content in the issue, much of which has been covered in previous posts on fragment library design, biolayer interferometry, LLEAT, and companies doing FBLD. The other articles are described briefly below.

Jean-Louis Reymond and colleagues have two articles on mining chemical structures: one analyzes their enumerated set of all compounds with up to 13 heavy atoms (GDB-13), and the other visualizes the chemical space covered by molecules in PubChem. They have also put up a free web-based search tool (available here) for mining these databases.

Roland Bürli and colleagues at BioFocus describe their fragment library and its application to discover fragment hits against the kinase p38alpha. A range of techniques are used, with reasonably good correlation between them.

Finally, M. Catherine Johnson and colleagues present work they did at Pfizer on the anticancer target PDK1 (see here and here for other fragment-based approaches to this kinase). NMR screening provided a number of different fragment hits that were used to mine the corporate compound collection for more potent analogs, and crystallography-guided parallel chemistry ultimately led to low micromolar inhibitors.

03 August 2011

Ligand efficiency metrics poll results

Poll results are in, and not surprisingly, ligand efficiency (LE) comes out on top, with 86% of respondents using the metric. What was a surprise to me is how many folks use ligand lipophilic efficiency (LLE) (46%). Coming in a distant third at 15% is LLEAT, but given that this metric was just reported it has a pretty strong showing, and I wouldn't be surprised to see this increase. Binding efficiency index (BEI) comes in fourth with 12% of the vote, and Fsp3 is tied with "other" with 8% of the vote. The other metrics only received one or two votes each.

Since people could vote on multiple metrics, there were more responses than respondents. Subtracting those who voted for "none" leaves 124 data points, suggesting that the average researcher uses 1.9 of these metrics (though unfortunately we don't have the per-respondent data needed to compute a median).

Finally, for the 5 of you who selected "other", what else is out there that we've left out?