The protein p53 is inactivated in a large fraction of cancers and has long been of interest in oncology. Mutations in the gene frequently lead to a destabilized form of the protein. For example, substitution of cysteine for tyrosine at position 220 (Y220C) causes the mutant protein to denature rapidly at body temperature, and at lower temperatures it opens a fairly large, hydrophobic crevice on the protein surface. Molecules that bind in this crevice might stabilize the protein and restore its function. In a recent paper* in Chemistry & Biology, Alan Fersht and colleagues at Cambridge University have targeted this crevice using fragment screening.
The researchers assembled a library of 1895 fragments from three commercial vendors (ChemBridge, Life Chemicals, and Maybridge). They then screened it with two orthogonal methods, NMR (WaterLOGSY) and thermal denaturation scanning fluorimetry, and confirmed the hits by two-dimensional HSQC NMR. WaterLOGSY identified 70 confirmed hits while thermal screening identified only 17, and, oddly, only three hits were common to both methods. The authors suggest that fluorescence quenching may give the thermal denaturation method a higher false-negative rate, but it is also possible that the NMR method is detecting fragments that bind too weakly to have any effect on protein stability.
Of the 84 unique hits (70 + 17, less the three in common), three fragments could subsequently be characterized crystallographically bound to p53. All three fit in the Y220C crevice, though each sits in a somewhat different location.
There is still a long way to go for these molecules: the most potent fragment has a Kd of 105 micromolar. Still, with a ligand efficiency of 0.33 kcal/mol per heavy atom, this compares favorably with the best molecule the authors had previously identified from an in silico screen of 2.7 million molecules (Kd roughly 150 micromolar, ligand efficiency 0.29 kcal/mol per heavy atom).
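For readers who want to check the arithmetic, ligand efficiency is simply the free energy of binding divided by the number of heavy (non-hydrogen) atoms. Here is a minimal Python sketch, assuming 298 K; the heavy-atom counts are back-calculated guesses consistent with the reported numbers, not values taken from the paper.

```python
import math

R = 0.0019872  # gas constant in kcal/(mol*K)
T = 298.0      # assumed temperature in kelvin

def ligand_efficiency(kd_molar, heavy_atoms):
    """Ligand efficiency = -deltaG / heavy-atom count, in kcal/mol per heavy atom."""
    delta_g = R * T * math.log(kd_molar)  # deltaG of binding (negative for Kd < 1 M)
    return -delta_g / heavy_atoms

# Kd = 105 uM with ~16 heavy atoms gives a value close to the reported 0.33
print(round(ligand_efficiency(105e-6, 16), 2))
# Kd = 150 uM with ~18 heavy atoms gives roughly 0.29, as for the in silico hit
print(round(ligand_efficiency(150e-6, 18), 2))
```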
Although it is still not clear that stabilizing mutant p53 will be a viable approach for treating cancer, the identification of a number of diverse fragments suggests that the Y220C site may be druggable. Moreover, the fragments themselves are potential starting points for developing more potent molecules.
*Thanks to Mauro Angiolini for bringing this publication to our attention on LinkedIn.
21 February 2010
17 February 2010
Isothermal titration calorimetry (ITC)
Our last post covered SPR. While we’re on the topic of biophysical methods, we should touch on isothermal titration calorimetry (ITC). A Perspective in last month’s issue of Nature Reviews Drug Discovery gives a very readable and concise summary of the technique, along with its applications for fragment-based drug discovery.
In ITC, one solution is titrated into another and the heat released or absorbed with each injection is precisely measured. If one solution contains a protein and the other a small molecule, one can determine the enthalpy of binding (deltaH) directly, as well as the overall free energy (deltaG, and thus the affinity), the entropy (deltaS), and the stoichiometry. In their article, John Ladbury, Gerhard Klebe, and Ernesto Freire, all long-time proponents of the technique, describe the importance of enthalpically driven versus entropically driven protein-ligand interactions.
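As a refresher on how these quantities relate: once Kd and deltaH come out of the titration, deltaG and -TdeltaS follow directly from deltaG = RT ln(Kd) and deltaG = deltaH - TdeltaS. A minimal sketch, assuming 298 K and an invented fragment (Kd = 200 micromolar, deltaH = -3.5 kcal/mol) purely for illustration:

```python
import math

R = 0.0019872  # gas constant in kcal/(mol*K)
T = 298.0      # assumed temperature in kelvin

def thermodynamic_signature(kd_molar, delta_h):
    """Split the free energy of binding into enthalpic and entropic contributions."""
    delta_g = R * T * math.log(kd_molar)  # deltaG = RT ln(Kd), negative for binding
    minus_t_delta_s = delta_g - delta_h   # from deltaG = deltaH - T*deltaS
    return delta_g, minus_t_delta_s

# Invented fragment: Kd = 200 uM, measured deltaH = -3.5 kcal/mol
dg, mtds = thermodynamic_signature(200e-6, -3.5)
print(f"deltaG = {dg:.1f} kcal/mol, -TdeltaS = {mtds:.1f} kcal/mol")
# deltaG is about -5.0 kcal/mol, so binding here is roughly -3.5 enthalpic and -1.5 entropic
```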
It turns out that compounds derived from medicinal chemistry efforts have a greater entropic component to their affinities than do natural ligands, which rely more heavily on enthalpy. This is because it is easier to improve entropy than enthalpy: enthalpy depends on the number and strength of non-covalent interactions between a protein and its ligand, and as anyone who has tried to engineer a specific hydrogen bond can attest, this is easier said than done. Entropy, on the other hand, can often be increased simply by making a compound more hydrophobic. However, increasing hydrophobicity too much decreases solubility and can cause other problems. The authors suggest that, while it may be easier to improve entropy, focusing on enthalpy will lead to better drugs. In fact, for statins and HIV protease inhibitors, the first-in-class compounds were largely entropically driven, while the best-in-class compounds have their affinities dominated by enthalpy. Just as natural ligands have evolved to rely more on enthalpy than entropy, drug developers end up selecting for enthalpically driven binders as they optimize other parameters. But this selection has been indirect, and the authors argue that researchers should deliberately select for enthalpic binders.
The authors acknowledge that commercially available ITC instruments are not sufficiently high-throughput for primary screening, and also that fragment interactions are sometimes so weak that dissociation constants may not be measurable with the technology. Nevertheless, it is possible to measure enthalpy of binding even for fragments, and, as we noted last year, this can lead to superior molecules.
Despite its power, ITC does not seem to be used often in fragment campaigns: at a roundtable discussion at the recent Tri-Conference, not one of the dozen or so participants had direct experience with the method. I suspect this has to do both with the availability of instruments and with perceived difficulties in running the experiments. Hopefully this will change, but whether the technique will become as popular as SPR remains to be seen.
15 February 2010
Surface Plasmon Resonance (SPR)
Fragment-based drug discovery took off with NMR in the 1990s and went mainstream with X-ray crystallography in the 2000s. Now surface plasmon resonance (SPR) is becoming increasingly popular as a primary means of identifying hits. The technique has been mentioned more than a dozen times on Practical Fragments, but we’ve never devoted an entire post to it until now.
This post follows up on two recent publications. The first is an excellent summary of SPR by our friends at FBDD-Lit. Peter Kenny gives an overview of the technique and reports on a workshop given by SPR mavens Dave Myszka and Rebecca Rich. He also covers some of the seminal papers in the field.
The second report is in the brand new journal ACS Medicinal Chemistry Letters. In it, Iva Navratilova and Andrew Hopkins of the University of Dundee provide practical advice on using SPR for fragment-screening.
The authors describe their work using SPR to identify fragments that bind to carbonic anhydrase II, a popular target for proof-of-concept studies. They screened a library of 656 fragments with molecular weights between 94 and 341 Da (average 187 Da, or about 13 non-hydrogen atoms). The entire screen, which was run at three concentrations (16.6, 50, and 150 micromolar), took four weeks from assay development to hit confirmation on a Biacore T100 and consumed a total of 27 micrograms of protein.
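The paper describes the actual workflow; purely as an illustration of how equilibrium responses at a handful of concentrations can be turned into an affinity estimate, here is a sketch of a 1:1 steady-state (Langmuir) fit. The concentrations match those reported, but the response values are invented, and three points only loosely constrain a two-parameter fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def steady_state_1to1(conc, rmax, kd):
    """1:1 Langmuir binding isotherm: R = Rmax * C / (Kd + C)."""
    return rmax * conc / (kd + conc)

conc = np.array([16.6e-6, 50e-6, 150e-6])  # screening concentrations, molar
resp = np.array([8.0, 17.0, 29.0])         # hypothetical equilibrium responses (RU)

popt, _ = curve_fit(steady_state_1to1, conc, resp, p0=[40.0, 50e-6])
rmax_fit, kd_fit = popt
print(f"Rmax ~ {rmax_fit:.0f} RU, Kd ~ {kd_fit * 1e6:.0f} uM")
```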
Importantly, Navratilova and Hopkins were keenly aware of the potential for false positives and nonspecific binders (of which there were 230 at the highest concentration!). One way they controlled for such artifacts was to include an unrelated reference protein; the data could be corrected by subtracting the response to the reference protein from the response to the target protein. Another way to reduce the number of false positives was to consider only compounds that exceeded a minimum threshold of ligand efficiency (a metric invented by Hopkins and co-workers), a decision justified here given the often high affinities of carbonic anhydrase inhibitors. After these filters, an examination of the binding stoichiometry revealed a dozen specific binders and four nonspecific binders, for a hit rate of 1.8%.
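Here is a bare-bones sketch of the kind of bookkeeping described, with invented numbers: subtract the reference-protein response, then compare the corrected response to the theoretical Rmax (which scales with the ratio of fragment to immobilized-protein molecular weight) to estimate the apparent stoichiometry and flag superstoichiometric, i.e. nonspecific, binders. The immobilization level and responses below are assumptions for illustration only.

```python
def theoretical_rmax(mw_analyte, mw_protein, immobilized_ru, valency=1):
    """Maximum expected response for 1:1 binding, in resonance units (RU)."""
    return immobilized_ru * (mw_analyte / mw_protein) * valency

def classify_hit(r_target, r_reference, mw_fragment, mw_protein, immobilized_ru,
                 max_stoichiometry=1.5):
    """Reference-subtract, then flag binders whose apparent stoichiometry is too high."""
    r_corrected = r_target - r_reference  # remove bulk and nonspecific signal
    rmax = theoretical_rmax(mw_fragment, mw_protein, immobilized_ru)
    stoichiometry = r_corrected / rmax
    return stoichiometry, stoichiometry <= max_stoichiometry

# Invented example: 187 Da fragment binding ~29 kDa carbonic anhydrase II,
# assuming 5000 RU of protein immobilized on the chip
stoich, specific = classify_hit(r_target=40.0, r_reference=5.0,
                                mw_fragment=187.0, mw_protein=29000.0,
                                immobilized_ru=5000.0)
print(f"apparent stoichiometry ~ {stoich:.1f}, specific: {specific}")
```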
My one reservation with this paper is that carbonic anhydrase is a particularly easy test case, unlikely to fairly represent many of the targets that people screen. Indeed, the confirmed hits (all of which contain sulfonamides) have affinities from 0.13 to 14 micromolar – far better than in a typical fragment screen, and comparable to many HTS hits. Still, the tools and analyses described should apply to more challenging targets.
Finally, it is worth noting that if you want access to SPR technology but don’t have the resources or expertise to do it yourself, at least a couple companies (Beactica and Graffinity) specialize in applying SPR to FBDD.
Labels: Beactica, Biacore, carbonic anhydrase, false positive, FBDD, Graffinity, Ligand efficiency, SPR
06 February 2010
Molecular Medicine Tri-Conference 2010
The first event on our 2010 calendar, the Molecular Medicine Tri-Conference 2010, was held in San Francisco earlier this week. There were fragment talks and a roundtable, as well as a number of vendors selling fragment libraries – we’ve recently noted how rapidly this area has expanded.
Michael Hennig presented a nice overview of the history and development of fragment-screening at F. Hoffmann-La Roche (Basel). Work done there back in the late 1990s relied on NMR and crystallographic screening of a library of 300 fragments, described in the seminal “needle-screening” publication in J. Med. Chem. Today that library has grown to 6000 compounds that follow a relaxed rule-of-3 (allowing in particular more hydrogen-bond acceptors and higher lipophilicity) and contain at least one hydrogen-bond donor or acceptor and at least one ring. Also, the primary screening technique is now surface plasmon resonance (SPR), with crystallographic follow-up; the entire collection can be screened on a single Biacore instrument in four weeks.
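For those curious what such a filter looks like in code, here is a sketch of a relaxed rule-of-3-style check using RDKit. The cutoffs below are illustrative assumptions, not the actual Roche criteria.

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def passes_relaxed_ro3(smiles, mw_max=300, hbd_max=3, hba_max=6, clogp_max=3.5):
    """Relaxed rule-of-3-style fragment filter; thresholds are illustrative only."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    hbd = Lipinski.NumHDonors(mol)
    hba = Lipinski.NumHAcceptors(mol)
    return (Descriptors.MolWt(mol) <= mw_max
            and hbd <= hbd_max
            and hba <= hba_max
            and (hbd + hba) >= 1                   # at least one H-bond donor or acceptor
            and mol.GetRingInfo().NumRings() >= 1  # at least one ring
            and Crippen.MolLogP(mol) <= clogp_max)

print(passes_relaxed_ro3("c1ccc2[nH]ccc2c1"))  # indole: passes
print(passes_relaxed_ro3("CCCCCCCCCCCC"))      # dodecane: no ring, no donors/acceptors
```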
Hennig shared two case studies, one on BACE-1, the other on chymase. In the second case, a dozen fragments were successfully co-crystallized with the enzyme, and all but one of these bound in the S1 pocket, revealing the importance of this site for binding. In response to a question about how widely FBDD is used at Roche, Hennig said that it is applied to all targets that are technically feasible.
In another talk, James Madden described fragment-based discovery at Evotec. An increasingly stringent cascade of assays (from high-throughput, high-concentration functional assays, through SPR and/or ligand-detected NMR, to crystallography and/or protein-detected NMR) keeps the number of compounds manageable at each step. Madden also presented two case studies, BACE-1 (clearly a popular target for FBDD, perhaps because of its intractability to many other approaches) and PDE10a.
A fun talk with relevance beyond FBDD was “Examples of X-ray Bloopers”, by Edward Kesicki of the Infectious Disease Research Institute (IDRI) in Seattle, WA. He described several cautionary tales from his own experience. In one case, a chemist provided the structure of the wrong enantiomer to a crystallographer, who duly refined the data, resulting in weeks of confusion and time-consuming follow-up experiments. In two others, crystallographers inadvertently omitted methylene units in fitting electron density. We’ve previously commented on the dangers of taking crystallographic data at face value, and Kesicki also mentioned an effort by Stephen Warren of Gonzaga University to comb through and correct structures in the protein data bank. He has a lot of work to do: of the 1000 structures examined thus far, roughly 20% have problems with the ligands.
Finally, in a panel discussion on “medicinal chemistry drivers,” someone asked about the role of fragment-based drug discovery. Consistent with the idea that fragment approaches are becoming increasingly integrated with other lead-finding activities, Hing Sham of Elan said that he was neither pro-fragment nor anti-fragment – “it’s just another tool in the toolbox.”
Labels: 2010, artifact, Conferences, crystallography, Evotec, FBDD, IDRI, Roche