30 September 2013

MIP and MDP

Dan and I were at the CHI Discovery on Target meeting last week.  It is highly focused on target validation and early-stage hit generation.  This is NOT a chemistry conference, although there were plenty of chemists and chemistry talks; the target audience is biologists.  As such, it was a great arena for teaching about fragments and educating a whole different phylum of FBDD consumers.  It was also nice to meet people and have them say, "Oh, I love the blog."  Of course, I just say, try commenting, that's manna to bloggers.  The nice thing is that people have generally heard of FBHG; unfortunately, I think they generally misunderstand it.  Why?

I think part of the problem is that the Most Impactful Papers (MIPs) in the field are also the Most Destructive Papers (MDPs) in the field.  So, what are the MIPs for this field?  For me, the criteria are pretty straightforward: one or two papers that are seminal to understanding the field.  As you may already be guessing, my list of MIPs intersects my MDPs.

Most Impactful Papers
  • The Rose Bowl of Fragment Papers: SAR by NMR.  This is the paper that showed that NMR was not limited to structure determination, but was a viable screening paradigm.  It started the whole "Fragment" thing. 
  • The Rationale Behind it All: The Leach and Hann Molecular Complexity paper.  If I had one paper to give to someone to explain why you should use fragments, this is it.  The first three graphs should be in every introduction to FBHG. 
  • The Voldemort Rule: The Rule of Three paper.  This paper has defined what a fragment is for a decade. 
  • Fragments get a name (that's never used): Fragonomics.  OK, self-referencing is not cool, so this should really be Dan's paper, the first review on FBDD. 
  • Pfizer lifts the curtain: Pfizer's fragment library paper. I love this paper because it gives a great overview of how Big Pharma put its fragment library together (think laser pointers!). 
This is obviously a very short and incomplete list, and totally my opinion.  Let me know in the comments what you think the MIPs are.

So, what are "destructive" papers?  Those are the papers that require me to spend a lot of time explaining why what people think they understand is not really a good, general, practical approach.  
Most Destructive Papers:
  • I think THE most destructive paper is also the most impactful: SAR by NMR.  How can that be, you say?  Easy.  Because of this paper, the vast majority of people who "have heard" of fragments think that you need to label protein to use NMR for fragments.  While target-based screening is really powerful, I don't think it should be the first thought for screening; it is more impactful for active follow-up.  I can hear the counterarguments coming, but WAIT, there's more.  They used linking, rather than growing.  I think most people would agree that this is the "Serendipity" approach.  I think this territory is well trod on this blog.  Lastly, warheads. 
  • While it had its place, the Rule of Three paper has also become destructive.  This is a "hot" topic, but my main problem is the slavish devotion to an empirical "Rule".
What are your thoughts?  

I am also cross-posting this on my site http://www.quantumtessera.com/325/ and the LI group to see if maybe a different venue will generate more comments.


23 September 2013

Programming Note

Dan and I will be co-teaching our award-winning (maybe not, but I bet our mothers think we are special) FBDD Short Course at CHI's Discovery On Target meeting.  It was standing room only in San Diego (probably because there was a shortage of chairs or something), but I bet you could squeeze in if you asked nicely.  If you are in Boston (Go Red Sox!), drop us a note; it would be great to catch up. 

19 September 2013

Fragments vs wild-type GPCRs by SPR

Membrane proteins such as G-protein coupled receptors (GPCRs) represent a large fraction of drug targets. These are mostly overlooked by the fragment community, for two reasons. First, assays for low affinity binders are difficult to develop. Second, the proteins usually lack structural information useful for advancing fragment hits. Earlier this year Heptares provided a lovely solution to both problems by generating stabilized mutant GPCRs, which could be screened using surface plasmon resonance (SPR) and characterized crystallographically. In a new paper in ACS Med. Chem. Lett., Iva Navratilova, Andrew Hopkins, Robert Lefkowitz, and a multinational team at the University of Dundee, Duke University, and the University of North Carolina Chapel Hill report using SPR to screen fragments against a wild-type GPCR.

The researchers chose the human β2 adrenergic receptor, which has served as a model GPCR for a variety of biological and biophysical studies. They expressed this with a His10 tag on the C-terminus and used a conventional nickel chip to immobilize the protein in the presence of detergent. The immobilized protein was able to bind to a known agonist and antagonist with dissociation constants similar to those reported in the literature, suggesting that it was folded correctly.

Next, 656 fragments were screened against the protein at 50 micromolar each. Using a surface containing β2 adrenergic receptor blocked with a known high-affinity, slowly dissociating agonist as a reference, the researchers looked for fragments that bound selectively to the surface containing the unblocked protein. A total of 81 fragments were then examined more closely in dose-response curves, yielding five confirmed hits, with dissociation constants ranging from 17 nM to 22 micromolar.
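The paper does not spell out the fitting procedure, but converting SPR dose-response data into a dissociation constant typically means fitting the equilibrium responses to a 1:1 Langmuir binding isotherm, R_eq = Rmax·C/(Kd + C). Here is a minimal sketch using synthetic data and a coarse grid search (all numbers and names are hypothetical, not taken from the paper):

```python
def langmuir(conc, rmax, kd):
    # Equilibrium SPR response for simple 1:1 binding
    return rmax * conc / (kd + conc)

def fit_kd(concs, responses, kd_grid, rmax_grid):
    # Coarse grid search minimizing the sum of squared residuals
    best = None
    for kd in kd_grid:
        for rmax in rmax_grid:
            sse = sum((r - langmuir(c, rmax, kd)) ** 2
                      for c, r in zip(concs, responses))
            if best is None or sse < best[0]:
                best = (sse, kd, rmax)
    return best[1], best[2]

# Synthetic two-fold dilution series: true Kd = 20 uM, Rmax = 50 RU
concs = [1.56, 3.125, 6.25, 12.5, 25.0, 50.0, 100.0]  # uM
responses = [langmuir(c, 50.0, 20.0) for c in concs]

kd_grid = [k / 2 for k in range(1, 201)]      # 0.5 .. 100 uM
rmax_grid = [r / 2 for r in range(60, 201)]   # 30 .. 100 RU
kd, rmax = fit_kd(concs, responses, kd_grid, rmax_grid)
print(kd, rmax)  # recovers the true values: 20.0 50.0
```

In practice one would use a proper nonlinear least-squares fit and reference-subtracted sensorgrams, but the model being fit is the same.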

All five of these hits were tested in a conventional radioligand competition assay, confirming their binding. Interestingly, four of the five ligands were N-arylpiperazines, a class of molecules that the Heptares team also found as ligands for the β1 adrenergic receptor. When tested against this GPCR most were not selective, but one did show some selectivity for β2 adrenergic receptor against a panel of 27 GPCRs.

The fragment hits were then tested for activity in a cell-based assay, and all of them inhibited signaling. This illustrates a general complication with binding (versus functional) assays: with simple enzymes, once you’ve found a binder, it is probably either an inhibitor or has no effect. With GPCRs, a binder could be an agonist, an antagonist, a partial agonist, an inverse agonist, a neutral antagonist, or something else entirely; you need to go into cells quickly to figure out what you’ve got.

I do wonder whether it would be possible to screen at higher concentrations to look for weaker ligands, particularly for more challenging GPCRs for which no small molecule ligands are known. Still, the fact that SPR works as well as it does for a native GPCR is quite impressive. I suspect that we’ll see more and more fragment screening by SPR on membrane proteins. Whether folks will be comfortable optimizing fragment hits in the absence of high-resolution structures, though, remains to be seen.

17 September 2013

Rule of five versus rule of three

Metrics (such as ligand efficiency) and rules (such as the rule of three) seem to be some of the more controversial topics around here. If you aren’t experiencing metric-fatigue, it’s worth checking out a recent (and free!) “Ask the Experts” feature at Future Med. Chem., in which four prominent scientists weigh in on the utility of the rules of five and three.

Monash University’s Jonathan Baell (of PAINS fame) notes that, as of early 2013, the original 1997 Lipinski et al. rule of five paper (and the 2001 reprint) had been cited more than 4600 times! Baell holds that, of the properties covered by the rules – molecular weight, lipophilicity, number of hydrogen-bond donors (HBD), and number of hydrogen-bond acceptors (HBA) – lipophilicity is probably the most important. Although he agrees that rules can be too strictly applied, he also asks:

What sum value is represented by the dead-end investment that the world never saw because of application of a Ro5 mentality?

I think this is a good, often-overlooked point. It is easy to find examples of drugs that violate the rule of five or programs that were killed by rule-bound managers with limited vision, but, as GlaxoSmithKline’s Paul Leeson says, “there is massive unexplored chemical space within the Ro5, which is available to innovative chemists.” Why not put much of the focus here?

Of course, readers of Practical Fragments are probably thinking as much about the rule of three as the rule of five, and one of the main criticisms of that rule, particularly by Pete Kenny, has been the fact that it is not clear how to define hydrogen-bond acceptors: do you count all nitrogen and oxygen atoms, including, for example, the nitrogen of an amide? I think the common-sense answer would be no, and Miles Congreve, the first author on the original rule of three paper, seems to agree. He also notes that the number of hydrogen bond acceptors seems to be less important in general than the number of hydrogen bond donors, which is negatively correlated with solubility, permeability, and bioavailability.

Given last year’s poll on the maximum size of fragments people allow in their libraries, it looks like most people are already capping molecular weight well below 300 Da, which skews the other parameters toward rule of three space. That said, Congreve does warn that commercial fragment libraries “contain too many compounds that are close to 300 Da, rather than containing a distribution of compounds in the range of 100 – 300 Da,” a statement borne out by Chris Swain’s analyses. Of course, the larger you get, the more possibilities there are, and the optimal property distribution of a fragment library is still a matter of debate.

Ultimately I think many people will agree with Leeson, who says that “there are probably sufficient metrics in the literature today,” and with Celerino Abad-Zapatero, who notes that “additional rules will not be the answer in the long run.” On this note I promise no more posts on metrics or rules – for at least a month!

09 September 2013

More thoughts on the Astex-Otsuka marriage

Teddy already highlighted the planned $866 million acquisition of Astex by Otsuka, and I thought I’d add a bit of context. Astex Therapeutics was founded in 1999, just three years after publication of the Abbott SAR by NMR paper that arguably launched widespread interest in fragment-based lead discovery. From the outset, Astex focused heavily on crystallography, which was somewhat unusual at the time; Vicki Nienaber’s seminal SAR by Crystallography paper only came out in 2000.

Astex researchers have made many practical contributions to FBLD, from the (sometimes controversial) rule of three to the LLEAT metric to the Astex Viewer familiar to anyone who has seen a presentation from the company. More than 100 publications have come from Astex, including one of the earliest comprehensive reviews of the field. And the company has also delivered: of 28 fragment-derived compounds to make it into the clinic, Astex has had a role in nearly a quarter, including AT13387, AT7519, AT9283, JNJ-42756493 (with J&J), LEE011 (with Novartis), AT13148, and AZD5363 (with AstraZeneca and ICR).

In terms of price, $866 million is indeed a tidy sum, more than the up-front Daiichi Sankyo paid for Plexxikon (though a bit under the total deal value of $935 million) and more than an order of magnitude higher than the $64 million Lilly paid for SGX back in the dark days of 2008. Even with close to a billion dollars on the table, some are calling the price too low, with one analyst suggesting Astex is worth $13 per share rather than the $8.50 offered by Otsuka.

Of course, the Astex pipeline is not entirely fragment-based; a merger with SuperGen in 2011 brought in a marketed product (decitabine) as well as other clinical compounds. Still, from what Otsuka has said publicly, it does appear that the FBLD technology was a major driver: it is the first item mentioned under the heading “Objectives of the Acquisition.”

As Derek Lowe pointed out over at In the Pipeline, Japanese firms have a good track record of not breaking or shuttering acquired companies; last I checked Plexxikon was still going strong. Hopefully this will hold true for Astex as well. Practical Fragments offers congratulations and wishes continued success to everyone involved.

05 September 2013

What Do Fragments Get You?

Parroting In the Pipeline, what do fragments get you?  866 million dollars, that's what!  As pointed out by the buying company:
"Astex's unique fragment-based drug discovery technology [Ed: emphasis added] and clinical oncology research and development capabilities, born out of the passion of its researchers, exemplify our corporate mottos and belief in "Sozosei (Creativity) and Jissho (Proof through Execution)." I would like Otsuka Pharmaceutical to continue to respect Astex's uniqueness and leverage it to bring further growth for Otsuka Pharmaceutical."
Congratulations to the folks at Astex; the next time you see them at a conference, make sure they pick up the check.

03 September 2013

Another NMR Tool...

There are many things that aid in the successful prosecution of fragments.  Most people would agree that structural information is one of them.  However, in many cases there is no structure, nor any hope of obtaining one.  Many different methods have been developed to try to address this gap.  Oftentimes they are impractical; sometimes they are useful.  In this paper, Gregg Siegal, Marcellus Ubbink, and co-workers from their academic labs present a new NMR-based structural tool.  [Editor's Note: I used to have a business relationship with Gregg's commercial side.]  So, is this a practical or impractical tool?  You can skip down to the bottom for the answer, or keep reading and follow me down the rabbit hole.

Their approach is not to generate high-resolution structures, but low-resolution models of how initial fragments bind to the target.  To accomplish this, they use pseudocontact shifts (PCS) induced by paramagnetic ions.  To those of you whose eyes just glazed over, let me explain.  We typically only use diamagnetic atoms in NMR, because paramagnetic atoms cause line broadening, sometimes to extinction.  For ease of explanation, the PCS is a through-space dipolar effect, like the NOE, but it shifts resonances rather than relaxing them, and it has a longer-range distance dependence: r^-3 (PCS) vs. r^-6 (NOE).  With good decisions, like the choice of the ion and the placement of the ion, you can get subtle effects on your ligand rather than wiping it out.  In the end, you need to know a few things: the actual fraction of ligand bound, the structure of the target (or a good homology model), and the PCS tensor (see below).  This work used rigid paramagnetic-ion-binding tags attached to the target via engineered disulfide linkages (CLaNP).
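To get a feel for what that difference in distance dependence means in practice, here is a quick numeric sketch (normalized, arbitrary units; the 3 Å reference distance is just an illustrative choice, not from the paper):

```python
def relative(r, exponent, r_ref=3.0):
    # Strength of a distance-dependent effect at distance r,
    # normalized to 1.0 at the reference distance r_ref
    return (r_ref / r) ** exponent

# Compare how fast PCS (r^-3) and NOE (r^-6) fall off with distance
for r in (3, 6, 12, 24):
    pcs = relative(r, 3)  # PCS scales as r^-3
    noe = relative(r, 6)  # NOE scales as r^-6
    print(f"r = {r:2d} A: PCS {pcs:.5f}  NOE {noe:.8f}")
```

Doubling the distance costs a factor of 8 in PCS but a factor of 64 in NOE, which is why PCS can report on much longer ligand-to-tag distances.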


In total, they made three different tagged proteins, using Yb3+ as the paramagnetic ion and Lu3+ as the diamagnetic reference. 
These data represent a mixture of bound and free ligand, so using the experimentally determined Kd and the known concentrations of ligand and target, the fraction of bound ligand can be determined.  The observed PCS can then be converted into the PCS of the bound state alone. 
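The arithmetic in that step is straightforward: solve the 1:1 binding quadratic for the complex concentration, then divide the observed PCS by the fraction of ligand bound. A minimal sketch, assuming fast exchange (so the observed PCS is the population-weighted average of free and bound, with the free-state PCS taken as zero); all numbers are hypothetical, not the paper's:

```python
import math

def fraction_ligand_bound(p_tot, l_tot, kd):
    # Solve [PL] from the 1:1 binding quadratic
    # [PL]^2 - (P + L + Kd)[PL] + P*L = 0, taking the physical root,
    # then express it as a fraction of total ligand
    b = p_tot + l_tot + kd
    pl = (b - math.sqrt(b * b - 4.0 * p_tot * l_tot)) / 2.0
    return pl / l_tot

# Hypothetical numbers: 50 uM protein, 500 uM ligand, Kd = 200 uM
fb = fraction_ligand_bound(50.0, 500.0, 200.0)

# Observed PCS is diluted by the free ligand; back out the bound-state value
pcs_obs = 0.020  # ppm, hypothetical observed shift
pcs_bound = pcs_obs / fb
print(f"fraction bound = {fb:.3f}, bound-state PCS = {pcs_bound:.3f} ppm")
```

Note how small the bound fraction is under typical fragment-screening conditions, which is exactly why the correction matters.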
However, the authors first calculated an approximate tensor, which is necessary to determine the orientation of the bound ligand.  When compared to the orientation of the ligand determined by NOE, there was a 4.7 A RMSD; this approach only gives the approximate location of the binding site.  When they formally calculated the PCS tensors, they were able to get a better match between the PCS-derived and NOE-derived orientations, but still not perfect agreement.  That is expected for different methods which can be considered orthogonal.  There ends up being a lengthy discussion of the shortcomings of this method and why it could possibly be better than NOE-based methods; in particular, it does not need labeled protein.  However, I would argue that if you are not producing your protein in E. coli, it is likely being made in insect cells or mammalian cells.  In the case of insect cells, why would you wait two months to get ligand orientation information on an initial hit?  The project has moved on from the initial screening hits by that time.

While this is an interesting approach academically, it is really impractical.  Why?  As the authors state, this method is best for ligands with high micromolar to low millimolar affinity.  This positions it firmly in the very early stages of FBHG.  You need to have the structure of the target, or a good homology model.  You need to generate multiple mutants (they do state that you can get by with only two positions, but three is better).  You need to do some seriously involved computation; something that is not at all routine.  This would be a much better tool if it could be robustly used at late hit expansion/early lead generation, but that doesn't seem likely.  So, you have what is largely an academic tool for generating models of ligand-target binding with fragments, but not something that would be routinely used.