29 August 2013

3D Fragments...An Analysis

[**Programming Note**  Sorry about two posts in one day, but I thought this was too cool to wait.]
The 3D-arity of fragments is a common topic of discussion in this field.  ICYMI, Chris recently did an analysis of PPI interactions and the compounds that target them.  Even more recently, he put up his analysis of the 3DFrag consortium's fragment collection.  The 3DFrag collection does not look any more 3D than commercially available collections.  Justin Bower from the Beatson points out that this is because their fragment collection was largely assembled by culling from commercial collections.

Now, no one will argue that nPMI is an ideal metric for assessing 3D-arity, but it is the best we have so far.  So Chris has tried to improve the visualization of nPMI for very large libraries.  He has divided the PMI plot into disc-like, rod-like, and sphere-like regions (he details how he classifies them at his page).  The upshot of this is that he can then generate very simple plots like this:
I think this is a great leap forward.  Obviously, given the nature of the chemistry performed over the past umpteen years, this result is totally expected.  As Peter Kenny has pointed out previously, rods have volume, but I think that is not what the 3D-eers are aiming at.  What's really nice is that 3DFrag has chemists making fragments.  In May, they reported that they had added 221 synthesized fragments.  I would like to see how these 221 fragments differ from the commercially available ones.  The proof is always in the pudding, after all.
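Chris's exact region boundaries aren't stated here, but this kind of shape classification can be sketched with a simple nearest-vertex rule on the normalized PMI triangle, whose corners are rod (0, 1), disc (0.5, 0.5), and sphere (1, 1). This is my assumption, not necessarily his implementation:

```python
# Classify a molecule's shape from its normalized PMI ratios
# (npr1 = I1/I3, npr2 = I2/I3, where I1 <= I2 <= I3 are the principal
# moments of inertia). The nearest-vertex rule is an assumption here;
# the actual region boundaries used may differ.

def classify_shape(npr1, npr2):
    vertices = {"rod": (0.0, 1.0), "disc": (0.5, 0.5), "sphere": (1.0, 1.0)}
    d2 = lambda v: (npr1 - v[0]) ** 2 + (npr2 - v[1]) ** 2
    return min(vertices, key=lambda name: d2(vertices[name]))
```

For real molecules, npr1 and npr2 can be computed from a 3D conformer with RDKit's `rdkit.Chem.Descriptors3D.NPR1` and `NPR2`; binning a whole library through a function like this is what makes the simple bar-style plots possible.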

28 August 2013

NMR as an Impractical Tool, Again

When I was in grad school, I was faced with the choice of two NMR labs to join (after starting as an organic chemist and flirting with enzymology).  Both used NMR, but with very different goals.  One lab used NMR and found systems to study using NMR.  The other studied interesting problems.  That PI would say: if NMR is the most appropriate tool, then use it, but don't be a slave to it.  I have taken that attitude my entire career.  In industry, it also has to be the mantra: best tool for the problem.  Academics tend to have the opposite mindset: let's make my tool work for anything.

The Krimm lab has been cited here, here, here, here, and here on the blog, and I hold their approach to academic tool creation for drug discovery in high regard.  In this paper, they present a combined computational/NMR method for determining whether a fragment induces conformational changes in the target.  In their own words:
The approach relies on the comparison of experimental fragment-induced Chemical Shift Perturbation (CSP) of amine protons to CSP simulated for a set of docked fragment poses, considering the ring-current effect from fragment binding.
Sometimes good people do bad things

I am not going to get into the details, but rest assured the science is sound.  Their approach is to compare H-N (you could also use H-C, why not?) chemical shifts from titration data to simulated CSPs.  When they compared experimental with calculated CSPs, they could not explain some of the shifts, even after including ring current-induced effects.  To investigate this further, they used Residual Dipolar Couplings (RDCs) to ask whether the fragment induces a conformational change.  It does.

What are my problems with this paper?  Practicality, primarily.  Is this another anti-compchem rant?  Nope.  The problem here is that everything they propose to do, and they do it well, relies upon a whole sh!tpile of a priori knowledge: 1. the structure of the protein (typically from X-ray) and 2. the assignments of the protein (not trivial).  Additionally, the RDCs require acquiring two sets of data, aligned and unaligned; RDCs are wholly impractical.  All of this should red-flag this paper as an "Impractical" approach.  It also does not present a method for interrogating ligand-induced structural changes that is any better or more robust than the current standard of analysis.

25 August 2013

Myriad metrics – but which are useful?

Practical Fragments recently introduced WTF as a light-hearted jab at the continuing proliferation of metrics to evaluate molecules, but there is an underlying problem: which ones are useful? In a provocative paper just published online in Bioorg. Med. Chem. Lett. (and also discussed over at In the Pipeline) Michael Shultz asks:

If one molecular change can theoretically alter 18 parameters, two shapes, the rules of 5, 3/75, 4/400 and ‘two thumbs’ while simultaneously affecting at least nine composite parameters and countless different methods of representing data, how is a practicing medicinal chemist to know if any specific modification was actually beneficial?

Shultz focuses on three parameters in depth: ligand efficiency (LE), ligand-efficiency-dependent lipophilicity (LELP), and lipophilic ligand efficiency (LLE, also referred to as lipophilic efficiency or LipE). He conducts a number of thought experiments to see how these metrics change when, for example, a methyl group is changed to a t-butyl group or a methyl sulfone. He also examines how the metrics perform against historical data from Novartis lead-optimization programs.

One problem with LE is that, although it was introduced to normalize potency for size, it is still highly dependent on the number of heavy atoms (heavy atom count, or HAC): addition of one atom to a small fragment will have a more dramatic effect on LE than addition of one atom to a larger molecule.  This has led to metrics in which larger molecules are treated more leniently, but because of the way all these metrics are mathematically defined, none achieves completely size-independent normalization.
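A quick back-of-the-envelope illustration of that HAC dependence, using the standard LE ≈ 1.37 × pIC50 / HAC (kcal/mol per heavy atom; the numbers below are illustrative, not from the paper):

```python
# Ligand efficiency: LE = DeltaG / HAC, approximated as 1.37 * pIC50 / HAC
# (kcal/mol per heavy atom; 1.37 ~ 2.303*R*T at ~300 K).

def le(pic50, hac):
    return 1.37 * pic50 / hac

# Adding one heavy atom at constant potency (pIC50 = 5) costs far more LE
# for a small fragment than for a larger lead:
fragment_drop = le(5, 8) - le(5, 9)   # 8 -> 9 heavy atoms
lead_drop = le(5, 30) - le(5, 31)     # 30 -> 31 heavy atoms
```

The drop for the 8-atom fragment is more than ten times the drop for the 30-atom lead, which is exactly the asymmetry Shultz highlights.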

More seriously, LE ignores lipophilicity, which seems to be correlated with all sorts of deleterious properties. With a nod to Mike Hann’s “molecular obesity,” Shultz notes that the widely used body mass index (BMI) “cannot distinguish between the truly obese and professional athletes of identical height and weight. Similarly, HAC based composite parameters such as LE cannot distinguish between ‘lean molecular mass’ and groups of real molecular obesity.”

LELP addresses this shortcoming by incorporating clogP, but it has problems of its own. For example, “the effects of lipophilicity are magnified as molecular size increases.” More alarmingly, as clogP approaches zero, LELP becomes increasingly insensitive to both size and potency; a femtomolar binder would have the same LELP as a millimolar binder when clogP = 0.

In contrast to both LE and LELP, LipE (or LLE) is size-independent, so a change in potency or lipophilicity will produce the same change in LipE no matter the size of the initial molecule. Shultz uses data from two lead optimization programs to show that LipE behaves better than LE or LELP. This is in contrast to a previous report that suggested LELP to be superior to LipE, albeit against a different data set.

Shultz further notes that LipE can be thought of as the tendency of a molecule to bind to a specific protein rather than to bulk octanol:

LipE = pKi - clogP = log([EI]/([E][I])) - log([Ioctanol]/[Iwater])
where E stands for protein and I stands for inhibitor
Although this is a simple consequence of the math, it is a nice way of visualizing an otherwise abstract number. Moreover, it suggests that optimizing for LipE could optimize for enthalpic interactions, a topic Shultz explores in depth in a companion paper.
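A minimal sketch of the size-independence point, again with illustrative numbers rather than real data:

```python
# LipE = pKi - clogP; there is no size term, so a given modification
# shifts LipE by the same amount whatever the starting molecule.

def lipe(pki, clogp):
    return pki - clogp

# A change that adds 1.5 log units of clogP must buy more than
# 1.5 log units of potency to improve LipE:
before = lipe(6.0, 2.0)  # pKi 6, clogP 2
after = lipe(7.0, 3.5)   # +1.0 potency, +1.5 clogP: LipE goes down
```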

Overall Shultz raises some excellent points, but I still believe there is value in LE (and LLEAT), particularly in the context of fragments, which usually have low affinity. Ligand efficiency can prioritize molecules that might otherwise be overlooked. For example, it is hard to get too excited over a 1 mM binder, but if the hit has only 8 heavy atoms it could be valuable.

Turning to my own miniature thought experiment, fragments 1 and 2 have very similar LipE values, but the LE of Fragment 1 is better, arguably making it the more attractive fragment hit.

Of course, in the end, rules should not be followed slavishly; the most lucrative drug of all time, Pfizer’s atorvastatin, violates Lipinski’s rule of five. Papers like this are important to highlight the problems and inconsistencies that underlie some of our metrics. Ultimately I’ll take biological data and the intuition of a good medicinal chemist over any and every rule of thumb.

What do you think? What role should LE, LELP, and LipE play in drug discovery?

21 August 2013

Fragment Design Done Right

As many of you probably know, I am not a fan of virtual screening, computational design, or in silico much of anything.  I think it tends to be poorly applied, or academic.  Now, don't mark me as a Luddite: I think computational tools can be quite useful when appropriately applied.  What is appropriate?  Read on and let Hoffmann-La Roche (Nutley) show you in this beautiful paper.

This is one of a line of great papers coming out of the closing Nutley site, so that is the one upside.  In this paper, the authors present how they leveraged the expertise of their chemists to design fragments against HCV NS5B, a well-known drug target.  The story starts (I hesitate to say "their efforts start...") with a screen of 2700 fragments by SPR.  They identified 163 hits, of which 29 were selected (criteria unstated) for co-crystallization.  Only one fragment delivered: 1.

Fragment 1 had a 78 µM KD and a 130 µM IC50, but it could not be optimized for affinity, physicochemical, or ADME properties at all.  The one important discovery from the co-crystal of 1 was an unexpected and, they believe, first-of-its-kind interaction: the NH hydrogen bonding to Q446 (Figure 1).
Figure 1. 

Using this and the published structures, internal (2-3) and external (4-6), they built a model. The following guidelines were proposed for the new fragments: 1. Satisfy the carbonyl of Q446 and the NH of Y448, optionally displacing or engaging the conserved water molecule; 2. Occupy the large hydrophobic pocket, exploring its size; 3. Position an aromatic ring to make an edge-to-face interaction with Y448; 4. Make at least one hydrophobic interaction with G410 and/or M414.
Figure 2.
In a triumph of democracy and teamwork, each chemist would discuss his ideas with the computational chemist and have them modeled.  The ideas were presented to the team and the best selected for synthesis.  They also chose to avoid acidic functionality.  Since 1 was the only known binder in the region without acidic functionality, they focused on incorporating the unique Q446 interaction.  What they found was that compounds capable of 1,2 and 1,3 interactions were best.  Table 1 shows their SAR.
Their first two compounds were dead, dead as this parrot.  Compound 9 satisfied 3 of the 4 criteria they established, yet showed very poor activity and bad ligand efficiency.  Using LE was crucial, the authors state, because it allows them to distinguish affinity through bulk from affinity through efficiency.  Finally, adding substituents to the hydantoin to explore the hydrophobic pocket produced significant increases in activity, e.g. 9 → 12.  Fragment 12 was co-crystallized and confirmed the expected binding mode.

A 2-pyridone fragment (13) gave activity similar to 9, and starting with a Pfizer-inspired compound gave 14, which demonstrated an increase in potency, but NOT ligand efficiency.  They next tried 15 and, voila, a 100x increase in affinity with four fewer heavy atoms; the ligand efficiency went way up!  Co-crystallization showed that 15 bound as expected.  Adding two more heavy atoms to 15 gave 16, which showed increased potency and ligand efficiency while retaining the desirable physicochemical properties of 15.  Compound 16 was further optimized and entered clinical trials with all of the atoms presented here intact.

So, what makes this "Fragment Design Done Right" in my eyes?  In this case, they utilized fragment docking as an aid to the chemists' ligand design.  They used in silico tools, like modeling, to test the potential validity of the chemists' hypotheses.  And in the end, their computation was backed up by experimental follow-up, in this case X-ray.  What differentiates humans from the brute beasts (excepting all the cases out there where animal tool use has been shown) is that we use tools.  Using tools correctly is what differentiates the smart humans from the herd.

19 August 2013

Fragments vs CHK2: high-concentration screening comes through

Checkpoint Kinase 2 (CHK2) is an oncology target that has been kicking around for years. Its relevance is still debated, so having more small molecule inhibitors would go a long way toward assessing its therapeutic potential. In a recent paper in PLoS One, Rob van Montfort and colleagues at The Institute of Cancer Research (UK) present their fragment-based efforts on CHK2.

The researchers describe the design of their screening library in some detail, starting with a series of typical computational filters on commercially available molecules. Although most of the molecules had MW < 300, molecular weights up to 320 Da were allowed for fragments containing F, Cl, or SO2 moieties. Also, all fragments were required to have at least 10 heavy atoms, which is on the high side for a minimum. A total of 1869 fragments were purchased. All of these were analyzed for solubility and purity (by nephelometry and LC-MS, respectively), though unfortunately the researchers do not provide pass rates.
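A sketch of those selection criteria as described above (the function name and the exact inequalities are my paraphrase, not the paper's code):

```python
# Library filters as summarized in the text: MW cap of 300 Da, relaxed to
# 320 Da for fragments containing F, Cl, or an SO2 group, plus a minimum
# of 10 heavy atoms. Exact boundary conditions (< vs <=) are assumed.

def passes_filters(mw, heavy_atoms, has_f_cl_or_so2=False):
    mw_cap = 320 if has_f_cl_or_so2 else 300
    return mw <= mw_cap and heavy_atoms >= 10
```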

Having assembled the library, the researchers then screened each fragment at 300 micromolar against CHK2 in a biochemical assay (AlphaScreen). This led to 45 hits, but 25 of these showed some interference with the AlphaScreen assay itself. However, the remaining 20 all showed dose-response curves in a different assay format, giving IC50 values from 2.7 to 944 micromolar.

In parallel, the researchers screened CHK2 using a thermal shift assay, with each fragment present at 2 mM. Perhaps not surprisingly given the higher concentrations used, this led to 63 hits.

Where things got interesting – and encouraging – was when the researchers compared hits identified using the two methods. In contrast to others' experiences, there was reasonable overlap; of the 14 hits from both assays, 12 yielded measurable IC50 values when assessed using a microfluidic functional assay. Most of the AlphaScreen hits that didn’t produce thermal shifts came from the set of 25 that had previously been flagged as interfering with the AlphaScreen assay itself, and several were also insoluble. Of the 49 thermal shift hits that did not show up in the AlphaScreen assay, 13 were insoluble. Regarding the remaining 36, the researchers propose that they may bind to CHK2 outside its active site and thus don’t inhibit enzymatic activity.
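The hit-count bookkeeping above is easy to sanity-check:

```python
# Reported counts: 45 AlphaScreen hits (25 of them assay-interfering),
# 63 thermal shift hits, and 14 hits common to both assays.
alpha_hits, ts_hits, both = 45, 63, 14

ts_only = ts_hits - both                    # thermal-shift-only hits
insoluble_ts_only = 13                      # of these, flagged insoluble
unexplained = ts_only - insoluble_ts_only   # proposed non-active-site binders
alpha_only = alpha_hits - both              # mostly the interfering set
```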

Next, the researchers attempted to characterize the binding modes of the fragments crystallographically. Of the nine fragments that produced structures, eight came from the set of fragments confirmed using both AlphaScreen and thermal shift. Significantly, the only fragment to yield a structure that was identified solely from the thermal shift assay also produced the worst IC50 value (228 micromolar) and the lowest ligand efficiency. All nine fragments bind to the so-called hinge region of the kinase.

One interesting observation was that, although the library did contain larger molecules, 6 of the 9 fragments characterized crystallographically had MW < 200, and the other 3 were well under 300 Da. This is exactly what you would expect according to the concept of molecular complexity, and suggests that adding larger fragments to your library may actually lower your hit rate (though admittedly it may be a stretch to conclude too much from this one study).

Another interesting note is that co-crystallization was used in all cases. Folks sometimes believe that you need to grow vats of crystals for fragment soaking experiments, but co-crystallizing worked fine here, and in some cases may allow the protein to adopt different conformations than if grown in the absence of small molecules.

Overall, the paper presents some nice starting points against CHK2. Perhaps more important, this is a thorough, well-written, and open access account of fragment screening that is well worth perusing by anyone embarking on a fragment campaign.

14 August 2013

A library of fragment slides

Pete Kenny, of FBDD & Molecular Design fame, has generously uploaded 13 slide presentations on SlideShare. Several of these directly relate to fragment library design and screening, while others are broader overviews of drug discovery or touch on important topics such as lipophilicity and hydrogen bonding. Pete wrote recently about the danger of "correlation inflation," and you'll find a slide show on that too.

The presentations are ornamented with photos collected from Pete's extensive travels and suffused with his trademark sense of humor: where else can you see Carl von Clausewitz expounding on covalency?

There's a wealth of information and it's all free, so check it out!

07 August 2013

PAINS made painless

Practical Fragments has several entries on pan-assay interference compounds, or PAINS: see here for an introduction, here for their (mis)incorporation into a fragment library, here for the sad results of such misincorporations, here for a much longer review, and here for something that will hopefully bring a smile to your face after reading the previous tales of woe.

But artifacts are not only (or even primarily!) restricted to fragments, so Michael Walters at The University of Minnesota has established a blog devoted to PAINS (www.htspains.com).

As with any blog, success depends to a large degree on engagement with the broader community, so please check it out and leave comments!

05 August 2013

Click first, ask questions later

Three years ago Beat Ernst and his colleagues at the University of Basel described using NMR to identify two molecules that bind next to one another on the protein MAG. They then used in situ “click chemistry” to link these together to obtain a more potent binder. In a recent issue of J. Am. Chem. Soc. they have taken a similar approach to the protein E-selectin, but without the in situ part.

The selectins are cell adhesion proteins involved in a variety of biological processes, notably inflammation and tumor metastases. They bind to carbohydrates on the surface of leukocytes, but the affinities of any one selectin for a given carbohydrate tend to be low – often only millimolar. In the current paper, the researchers started with a reasonably potent modified carbohydrate, compound 3.

The researchers performed an NMR screen of 80 fragments in which they looked for increased relaxation of protons in the fragments upon binding to protein. This led to five hits. To determine whether these bound near compound 3, the researchers modified compound 3 with a “spin-label,” a moiety that would increase the rate of relaxation of nearby molecules and so make them detectable (see the previous post for more details). Two of the fragments appeared to bind near compound 3, and the researchers chose to pursue compound 4 – the very same fragment they had pursued previously for MAG.

At this point the researchers replaced the spin label with an alkyne (attached via spacers of various lengths) and added azide groups (again, with various spacers) to compound 4 and attempted to perform in situ click chemistry in the presence of the protein, as they had done previously. Nothing happened. Having come this far, they used more conventional conditions (i.e., without the protein present) to make a small library of 20 triazoles and tested these for binding, leading to 5 hits with nanomolar activity, such as compound 43.

Given the high affinity of compound 43 for E-selectin, why didn’t it form in situ? The researchers suggest that:

Given its flat binding site, E-selectin does not act as an effective supramolecular catalyst for the alkyne-azide cycloaddition, because even upon simultaneous binding of first- and second-site ligands their azide- and acetylene-substituted linkers are not sufficiently preorganized to accelerate the cycloaddition reaction.

This is an example of a false negative from in situ fragment assembly. We previously wrote about another case in which in situ click chemistry yielded the less potent of two regioisomers due to trace amounts of contaminating copper. There is something conceptually beautiful about having a protein template the formation of its own inhibitor, but how often does it really work?

Ending on a positive note, the NMR approach described here is an example of linking without the need for protein structure. You just might want to click before you assay.