
21 October 2014

Benchmark Your Process

So, not everybody agrees with me on what a fragment is.  As was pointed out years ago, FBDD can be a FADD.  In this paper, from earlier this year, a group from AZ discusses how FBDD was implemented within their infectious disease group. Of course, because of the journal, it emphasizes how computational data is used, but you can skim over that and still enjoy the paper :-). 

Hot Spots: This is a subject of much work, particularly from the in silico side.  In short, a small number of target residues provide the majority of the energy for interaction with ligands.  Identifying these, especially for non-active-site targets (read: PPI), is highly enabling for both FBDD and SBDD. To this end, the authors discuss various in silico approaches to screening fragments.  They admit these are not as robust as would be desired (putting it kindly).  As I am wont to say, your computation is only as good as your experimental follow-up.  The authors indicate that the results of virtual screens must be experimentally tested.  YAY!  They also state that NMR is the preferred method, 1D NMR in particular being the AZ method of choice.  [This is something (NMR as the first choice for screening) that I think has become true only recently.  It's something I have been saying for more than a decade, but I guarantee my cheerleading is not why.] They do note that of the two main ligand-based experiments, STD is far less sensitive than WaterLOGSY.  There is no citation, so I'd like to put it out there: is this the general consensus of the community?  Has anyone presented data to this effect?  Specifically, they screen fragments 5-10 per pool with WaterLOGSY and relaxation-edited techniques.  2D screening is only done for small proteins (this is in Infection) and where a gram or more of protein is available.

Biophysics:  They have SPR, ITC, EPIC, MS, and X-ray.  They mention that SPR and MS require high protein concentrations to detect weak binders and are thus prone to artifacts.  They single out the EPIC instrument as having the highest throughput.  [As an aside, I have heard a lot of complaints about the EPIC and wonder if this machine is still the frontline machine at AZ.]  Immobilization was successful for 60% of the targets they tried.  They also use "inverse" SPR, putting the compounds down on the chip; this is the same technology NovAliX has in their Chemical Microarray SPR.  In their experience, 25% of these "Target Definition Compounds" still bind to their targets. 

They utilize a fragment-based crystallography proof of principle (fxPOP).  Substrate-like fragments (kinda like this?) are screened in the HTS, hits [not defined] are then soaked into the crystal system, and at least one structure of a fragment is solved.  This fragment is then used for in silico screening, pharmacophore models, and the like.  So, this would seem to indicate that crystals are required before FBDD starts.  They cite the Astex Pyramid where fragments of diverse shape are screened and the approach used at JnJ where they screen similar shaped fragments and use the electron density to design a second library to screen.

As I have always said, there are non-X-ray methods to obtain structural information.  AZ notes that SOS-NMR, INPHARMA, and iLOE are three ways.  These are three of the most resource intensive methods: SOS-NMR requires labeled protein (and not of the 15N kind), INPHARMA requires NOEs between weakly competitive ligands (and a boatload of computation), while iLOE requires NOEs of simultaneously binding ligands.  I think there are far better methods, read as requiring fewer resources, to give structural information more quickly (albeit at lower resolution).

The Library:  They describe in detail how they generated their fragment libraries.  They have a 20,000 fragment HCS library.  The only hard filter is to restrict HA < 18.  They also have a 1200 fragment NMR library, built on the same rules as the HCS library, with a bias towards Infection targets.  I can stand behind that.

The Process:   The authors list three ways to tie these methods together:
  1. Chemical Biology: Exploration of binding sites/development of pharmacophores.  I would add that this is also useful for target validation.  As shown by Hajduk et al. and Edfeldt et al., fragment binding is highly correlated with advancement of the project. 
  2. Complementary to HTS.  At the conference I am attending today, one speaker (from Pfizer) said that HTS was for selectivity and FBDD was for efficiency (oh Lord, here comes Pete with that one).  I really like that approach.
  3. Lastly, stand alone hit generation.  
I think this paper is a nice reference for those looking to see how one company put its FBDD process in place. Not every company will do it the same way, nor should they.  But there is an FBDD process for every company.

20 October 2014

Caveat emptor

Practical Fragments rarely has guest bloggers, but we do make exceptions in special cases. What follows is a (lightly edited) analysis from Darren Begley that appeared on the Emerald blog last year but, since the company's transformation to Beryllium, has become impossible to find. This post emphasizes how important it is to carefully analyze commercial compounds. (–DAE)

In a LinkedIn Discussion post, Ben Davis posed the following question:

Do any of the commercially available fragment libraries come with reference 1D NMR spectra acquired in aqueous solution?

Most commercial fragment vendors do not offer nuclear magnetic resonance (NMR) reference spectra with their compounds that are useful to fragment screeners; if anything, the experiment is conducted in 100% organic solvent, at room temperature, and at relatively low magnetic field strength (DAE: though see here for an exception). The NMR spectra of fragments and other small molecules are strongly affected by solvents and can vary from sample to sample. Different buffers, solvents, temperatures, and magnetic field strengths can generate large spectral differences for the exact same compound. As a result, NMR reference spectra acquired for fragments in organic solvent cannot be used to design fragment mixtures, a common approach in NMR screening. Furthermore, solubility in organic solvent is no measure of solubility in the mostly aqueous buffer conditions typically used in NMR-based fragment screening.

At Emerald [now Beryllium], we routinely acquire NMR reference spectra for all our commercially-sourced fragment screening compounds as part of our quality control (QC) procedures. This is necessary to ensure the identity, the purity and the solubility of each fragment we use for screening campaigns. These data are further used to design cocktails of 9-10 fragments with minimal peak overlap for efficient STD-NMR screening in-house. 
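Designing cocktails with minimal peak overlap is essentially a packing problem over peak lists. As a hypothetical sketch (not Emerald's actual procedure), a greedy grouping that keeps any two 1H peaks in a cocktail at least a tolerance apart might look like:

```python
def design_cocktails(peaks_by_fragment, max_size=10, tol=0.02):
    """Greedily group fragments into NMR cocktails with minimal peak overlap.

    peaks_by_fragment: {fragment_name: [chemical shift in ppm, ...]}
    Two peaks "clash" if they lie within `tol` ppm of each other.
    """
    cocktails = []  # each entry: (set of fragment names, list of occupied ppm)
    # Place fragments with the busiest spectra first; they are hardest to fit.
    for name, peaks in sorted(peaks_by_fragment.items(),
                              key=lambda kv: -len(kv[1])):
        placed = False
        for names, used in cocktails:
            if len(names) < max_size and all(
                    abs(p - q) > tol for p in peaks for q in used):
                names.add(name)
                used.extend(peaks)
                placed = True
                break
        if not placed:  # no compatible cocktail; start a new one
            cocktails.append(({name}, list(peaks)))
    return [names for names, _ in cocktails]
```

With real data the peak lists would come from the QC reference spectra themselves, and the tolerance would reflect linewidths at the screening field strength.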

Recently, we selected a random set of commercial fragment compounds and closely examined those that failed our QC analysis. The most common reason for QC failure was insolubility (47%), followed by degradation or impurities (39%), and then spectral mismatch (17%). (Since compounds can acquire multiple QC designations, total incidences exceed 100%.) Less than 4% of all compounds assayed failed because they lacked the requirements for NMR screening (that is, peaks sufficiently distinct from solvent, or the presence of non-exchangeable protons). Failure rates were as high as 33% for individual vendors, with an overall average of 16% (see Figure).
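Because one compound can carry several QC flags, per-category incidences are tallied against the total number of failures and can sum to more than 100%. A toy tally (invented data, not the figures above) shows the accounting:

```python
from collections import Counter

# Hypothetical QC results: each failed compound carries one or more flags.
failures = [
    {"insoluble"}, {"insoluble", "impure"}, {"impure"},
    {"mismatch"}, {"insoluble", "mismatch"}, {"impure"},
]

# Count every flag across all failed compounds.
counts = Counter(flag for flags in failures for flag in flags)

# Each category's incidence is relative to the number of failed compounds,
# so overlapping flags push the total past 100%.
percent = {flag: 100 * n / len(failures) for flag, n in counts.items()}
print(percent)
print(sum(percent.values()))  # > 100 because flags overlap
```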

These results highlight the importance of implementing tight quality control measures for preliminary vetting of commercially-sourced materials, as well as maintaining and curating a fragment screening library. They also suggest that 10-15% of compounds will fail quality control, regardless of vendor. Do these numbers make sense to you? How do they measure up with your fragment library?

Let us know what you think. (–DB)

18 August 2014

248th ACS National Meeting

The Fall ACS National Meeting was held in my beautiful city of San Francisco last week, and a number of topics of interest to Practical Fragments were on the agenda.

First up (literally – Sunday morning) was a session on pan-assay interference compounds (PAINS) organized by PAINS-master Mike Walters of the University of Minnesota. Mike developed his interest in PAINS like many – from painful experience. After screening 225,000 compounds against the anti-fungal target Rtt109, he and his group found several hits that they identified as PAINS, but not before spending considerable time and effort, including filing a patent application and preparing a manuscript that had to be pulled. One compound turned out to be a “triple threat”: it is electrophilic, a redox cycler, and unstable in solution.

Mike had some nice phrases that were echoed throughout the following talks, including “subversively reactive compounds” and SIR for “structure-interference relationships,” the evil twin of SAR. To try to break the “PAINS cycle” Mike recommended more carefully checking the literature around screening hits and close analogs (>90% similarity). Of course, it’s better if you don’t include PAINS in your library in the first place.

Jonathan Baell (Monash), who coined the term PAINS back in 2010, estimated that 7-15% of commercial compounds are PAINS, and warned that even though PAINS may be the most potent hits, they are rarely progressable, advice that is particularly needed in academia. For example, the majority of patent applications around the rhodanine moiety come from academia, whereas the majority of patent applications around a more reasonable pharmacophore come from industry. Jonathan also warned about apparent SAR being driven by solubility. Finally, he noted that while it is true that ~6.5% of drugs could be classified as PAINS, these tend to have unusual mechanisms, such as DNA intercalation.

As we discussed last week, anyone thinking about progressing a PAIN needs to make decisions based on sound data. R. Kip Guy (St. Jude) discussed an effort against T. brucei, the causative agent of sleeping sickness. One hit from a cellular screen contained a parafluoronitrophenyl group that presumably reacts covalently with a target in the trypanosome and was initially deemed unprogressable. However, a student picked it up and managed to advance it to a low nanomolar lead that could protect mice against a lethal challenge. It was also well tolerated and orally bioavailable. Kip noted that in this case chemical intuition was too conservative; in the end, empirical evidence is essential. On that note he also urged people to publish their experiences with PAINS, both positive and negative.

There were a scattering of nice fragment talks and posters. Doctoral student Jonathan Macdonald (Institute of Cancer Research) described how very subtle changes to the imidazo[4,5-b]pyridine core could give fragments with wildly different selectivities. I was particularly tickled by his opening statement that he didn’t need to introduce the concept of fragment-based lead discovery in a general session on medicinal chemistry – another indication that FBLD is now mainstream.

Chris Johnson (Astex) told the story of their dual cIAP/XIAP inhibitor, a compound in preclinical development for cancer. As we’ve mentioned previously, most IAP inhibitors are peptidomimetics and are orders of magnitude more potent against cIAP than XIAP. Astex was looking for a molecule with similar potency against both targets. A fragment screen gave several good alanine-based fragments, as found in the natural ligand and most published inhibitors, but these were considerably more potent against cIAP. They also found a non-alanine fragment that was very weak (less than 20% inhibition at 5 mM!) but gave a well-defined crystal structure. The researchers were able to improve the affinity of this by more than six orders of magnitude, ultimately identifying compounds with low or sub-nanomolar activity in cells and only a 10-fold bias towards cIAP. This is a beautiful story that illustrates how important it is to choose a good starting point and not be lured solely by the siren of potency.

Alba Macias (Vernalis) talked about their efforts against the anti-cancer targets tankyrases 1 and 2 (we’ve previously written about this target here). In contrast to most fragment programs at Vernalis, this one started with a crystallographic screen, resulting in 62 structures (of 1563 fragments screened). Various SPR techniques, including off-rate screening, were used to prioritize and further optimize fragments, ultimately leading to sub-nanomolar compounds.

The debate over metrics and properties continued with back-to-back talks by Michael Shultz (Novartis) and Rob Young (GlaxoSmithKline). Michael gave an entertaining talk reprising some of his views (previously discussed here). I was happy to see that he does agree with the recent paper by Murray et al. that ligand efficiency is in fact mathematically valid; his previous criticism was based on use of the word “normalize” rather than “average”. While this is a legitimate point, it does smack of exegesis. Rob discussed the importance of minimizing molecular obesity and aromatic ring count and maximizing solubility, focusing on experimental (as opposed to calculated) properties. However, it is important to do the right kinds of measurements: Rob noted that log D values of greater than 4 are essentially impossible to measure accurately.

Of course, this was just a tiny fraction of the thousands of talks; if you heard something interesting please leave a comment.

30 July 2014

Fragments in the Caribbean

Last week saw the inaugural Zing FBDD conference in Punta Cana, Dominican Republic. Zing has been around only since 2007, and seems to focus on small conferences in exotic locales. The benefit is that they are able to attract high-profile speakers, as illustrated by the group photo below. However, in an era of shrinking travel budgets, getting approval to attend a conference at a resort is becoming a bit more challenging. That said, participants enjoyed nearly 30 presentations and great discussion – think of a Gordon Conference without the dorms, and breaks on the beach.

My favorite “equation” from the conference comes from Mike Serrano-Wu of the Broad Institute:
Undruggable = Undone
This was supported by some nice work on the anti-cancer target MCL-1, which makes a protein-protein interaction that was widely considered undruggable just a few years ago. A 19F NMR fragment screen gave a hit rate of around 10%, leading eventually to low nanomolar leads. Fragment optimization was facilitated by a new crystal form of the protein that allowed the team to rapidly generate over a dozen protein-ligand co-crystal structures. Rumor has it that more details on this will be disclosed at FBLD 2014 in Basel in September (there are still a few openings available, but register soon).

MCL-1 also figured heavily in talks by Andrew Petros (AbbVie, see also here) and Steve Fesik (Vanderbilt, see also here), who described cell-permeable molecules with high picomolar activity in biochemical assays. Steve also discussed programs against Ras and RPA, both also using SAR by NMR. As Mike Shapiro (Pfizer) pointed out in his opening presentation, one of the breakthrough ideas of SAR by NMR was to screen a library more than once per target, the second time in the presence of a first ligand to identify another. It is nice to see this strategy continuing to deliver against difficult targets, though preliminary results of our current poll (right hand side of page) indicate that linking is not necessarily easy.

One of the payoffs of doing fragment screens for many years on dozens of targets is a rich internal dataset. Chris Murray (Astex) mentioned that company researchers have solved close to 7000 protein crystal structures, more than a third of them with fragment ligands. A cross-target analysis found that hits tended to be more planar (ie, less “three-dimensional”, with apologies to Pete Kenny) than non-hits. This was particularly true for kinases; for six protein-protein interactions (PPIs) there was no correlation between shape and hit rate. Although defining complexity is difficult, Chris provided evidence that 3D fragments tend to be both larger and more complex.

Rod Hubbard (University of York and Vernalis) mentioned that Vernalis has determined more than 4000 protein crystal structures. Since 2002, 2050 fragments have been screened against more than 30 targets. Based on “sphericality” – the distance from the rod-sphere principal component axis – hits against kinases are marginally less spherical, while PPI hits reflect the shape of the overall library. So, despite the current push for more three-dimensional fragments, it remains to be seen whether this will be useful.

Jonathan Mason (Heptares) described how successful fragment approaches can be against membrane proteins such as GPCRs. Anyone who has worked on these targets will know that the SAR can be razor sharp, and their surfeit of structures is helping to explain this. For example, although many of the protein-ligand interactions appear merely hydrophobic, some displace high-energy water molecules, which can be revealed by crystal structures of both the free and bound forms of the protein. Displacement of high energy water molecules also helps to explain some “magic methyl” effects.

Fragment-finding methods were not neglected. Jonathan mentioned that, for the A2A receptor, SPR identified only orthosteric ligands, while TINS identified only allosteric ligands – the orthosteric ligands were actually too potent to be detected by this technique. John Quinn (Takeda, formerly SensiQ) and Aaron Martin (SensiQ) also discussed SPR, and in particular how variable temperature SPR analyses could be used to rank ligands based on their enthalpic binding, though as Chris Murray warned, this information can be difficult to use prospectively.

I also learned that a selective BCL-2 inhibitor from Vernalis and Servier has just entered into Phase 1 clinical trials. This has been the result of a long-running collaboration that has required creativity on the part of the scientists and patience on the part of management.

There is much more to tell – for example Teddy's extended metaphor of the Silk Road (this one, not this one!) – but in the interest of space I’ll stop here. Feel free to comment if you were there (or even if you weren’t!)

16 July 2014

You Probably Already Knew This...

Academics can spend time and resources doing, and publishing, things that people in industry already "know".  This keeps the grants, the students, and the invitations to speak rolling in.  It also allows you to cite their work when proposing something.  This is key for the FBHG community.  There are many luminaries in the FBHG field, and we highlight their work here all the time. Sometimes, they work together as a supergroup.  Sometimes, Cream is the result.

Brian Shoichet and Gregg Siegal/ZoBio have combined forces.  In this work, they combine empirical screening (TINS and SPR) with in silico screening against AmpC (a well-studied target).  They ran a portion of the ZoBio 1281-fragment library against AmpC and got a 3.2% active rate: 41 fragments bound.  Six of these were competitive with a known inhibitor in the active site.  35 of the 41 actives were studied further by NMR; 19 could have Kds determined (0.4 to 5.8 mM), 13 fragments showed weak but uncharacterizable binding, and 3 were true non-binders.  That's a 90% confirmation rate.  34 of the 35 were then tested in a biochemical assay.  Nine fragments had Ki below 10 mM.  Of the 25 with Ki > 10 mM, one was found by X-ray to bind the target, but 25 Å from the active site.  They then did an in silico screen with 300,000 fragments and tested 18 of the top-ranked ones in a biochemical assay.  
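The quoted statistics are easy to sanity-check (numbers taken from the text above):

```python
# Primary screen: 41 actives out of 1281 fragments screened.
screened, actives = 1281, 41
hit_rate = 100 * actives / screened
assert round(hit_rate, 1) == 3.2  # the quoted 3.2% active rate

# Follow-up by NMR on 35 of the actives: 19 with measurable Kd,
# 13 weak-but-real binders, 3 true non-binders.
followed_up = 35
kd_determined, weak_binders = 19, 13
confirmation = 100 * (kd_determined + weak_binders) / followed_up
# ~91%, which rounds to the "90% confirmation rate" quoted in the post
print(round(confirmation, 1))
```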

So, what did they find? 
"The correspondence of the ZoBio inhibitor structures with the predicted docking poses was spotty. "  and "There was better correspondence between the crystal structures of the docking-derived fragments and their predicted poses."
So, this isn't shocking, but it is good to know.  This is also consistent with this comment.  So, the take-home from this paper is that in silico screening can help explore chemical space that the experimentally much smaller libraries miss.  To that end, the authors then do a virtual experiment to determine how big a fragment library you would need to cover the "biorelevant" fragment space [I'll save my ranting on this for some other forum].  Their answer is here [Link currently not working, so the answer is 32,000.]

09 June 2014

Fluorinated Fragments vs G-quadruplexes

Recently we highlighted an example of fragment-based ligand discovery against a riboswitch. Of course, RNA can form all kinds of interesting structures, and in a new paper in ACS Chem. Biol. Ramón Campos-Olivas (Spanish National Cancer Research Centre) and Carlos González (CSIC, Madrid) and their collaborators describe finding fragments that bind G-quadruplexes.

G-quadruplexes, as their name suggests, consist of groups of four guanine residues hydrogen bonding to one another in a planar arrangement. These individual tetrads then stack on top of one another. They can form in guanine-rich regions of RNA or DNA. Most famously, G-quadruplexes are found in telomeres at the ends of chromosomes. However, they are also found in telomeric repeat-containing RNA (TERRA), and are required for cancer cells to proliferate indefinitely.

The researchers used 19F-NMR screening to identify fragments that bound to an RNA containing 16 (UUAGGG) repeats (TERRA16). 19F-NMR is a technique about which Teddy waxes rhapsodic, and in this incarnation involves examining the NMR spectra of fragments in the presence or absence of TERRA16. Fragments that bind to the RNA show changes in 19F spin relaxation, resulting in broader, lower intensity signals. The library consisted of 355 compounds from a variety of sources, and although most of them were fragment-sized, a couple dozen had molecular weights above 350 Da.

The initial screen produced a fairly high hit rate (20 fragments), of which seven were studied in detail. Standard proton-based STD NMR confirmed the 19F-NMR results. The researchers then turned to a shorter RNA containing only two repeats (TERRA2); this RNA sequence dimerizes to form a G-quadruplex. All seven fragments stabilized this complex against thermal denaturation, consistent with binding. Six of the fragments also induced changes to the 1H NMR spectrum of TERRA2, though one also caused general line broadening that could indicate aggregation. For the well-behaved fragments, dissociation constants (KD) were determined by measuring changes in chemical shifts with increasing concentrations of ligand. KD values ranged from 120 to 1900 micromolar, with modest ligand efficiencies ranging from 0.17-0.28 kcal/mol/atom.
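Determining KD from a chemical-shift titration typically means fitting the observed shift change to a single-site isotherm, Δδ_obs = Δδ_max·[L]/(KD + [L]), valid when ligand is in large excess over the RNA. A minimal grid-search fit, shown here with synthetic data rather than the paper's (illustrative sketch only):

```python
def fit_kd(lig_conc_uM, dshift, kd_grid=None):
    """Least-squares fit of a one-site binding isotherm:
    dd_obs = dd_max * L / (Kd + L).

    For each trial Kd, the best dd_max has a closed form
    (one-parameter linear least squares), so only Kd is gridded.
    """
    kd_grid = kd_grid or [k * 10.0 for k in range(1, 1000)]  # 10-9990 uM
    best = None
    for kd in kd_grid:
        x = [L / (kd + L) for L in lig_conc_uM]  # fractional saturation
        dd_max = (sum(xi * yi for xi, yi in zip(x, dshift))
                  / sum(xi * xi for xi in x))
        sse = sum((yi - dd_max * xi) ** 2 for xi, yi in zip(x, dshift))
        if best is None or sse < best[0]:
            best = (sse, kd, dd_max)
    return best[1], best[2]  # (Kd in uM, dd_max in ppm)
```

In practice one would fit all shifting resonances simultaneously and account for the RNA concentration when ligand is not in large excess; this sketch only shows the shape of the calculation.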

Of course, selectivity against other nucleic acid structures is a major concern, so the researchers used 1H and 19F NMR to assess compound binding to a tRNA, a DNA duplex, and a DNA analog of TERRA2 also able to form a G-quadruplex. Aside from the putative aggregator, none of the seven compounds bound tRNA, and only two (including the aggregator) bound duplex DNA. However, all the compounds bound to the DNA G-quadruplex. Interestingly though, the DNA sequence used can form two types of G-quadruplexes in solution (parallel or antiparallel), whereas the equivalent RNA can only form a parallel dimer. In all cases the small molecules appeared to shift the equilibrium of the DNA to the parallel conformation, consistent with their initial identification as RNA binders.

Last year we highlighted another paper in which fragments were identified that may bind to a different DNA G-quadruplex. It would be interesting to functionally compare these two sets of hits. For example, do the hits identified initially against the DNA G-quadruplex also bind RNA G-quadruplexes? Of course, as with the riboswitch effort, there is a long way to go. It should be an interesting journey.

04 June 2014

Fragments vs riboswitches

Most of fragment-based lead discovery – indeed, most of lead discovery – is directed against proteins. However, RNA is also an essential biomolecule, and in a new paper in Chem. Biol. Adrian R. Ferré-D’Amaré and colleagues at the National Heart, Lung, and Blood Institute, along with collaborators at the University of Cambridge and the University of North Carolina Chapel Hill, demonstrate that fragments can potentially make an impact here as well. This is the first example I know of where crystallography has been used to assess fragment hits against RNA molecules.

The story begins several years ago, when Chris Abell and colleagues became interested in the TPP riboswitch thiM. This is a bacterial stretch of RNA that binds to the essential cofactor thiamine pyrophosphate (TPP). This binding causes a change in conformation that regulates protein translation; small molecules that interfere with this process could lead to new antibiotics. In 2010 the researchers described a fragment screen using equilibrium dialysis, in which the RNA was added to one chamber along with radiolabeled thiamine, which binds with low micromolar affinity. This chamber was separated from another chamber containing fragments by a dialysis membrane permeable to small molecules and fragments but not to (larger) RNA. Fragments were screened in pools of five, and pools that caused displacement of radioligand were then deconvoluted to identify the active fragments. A total of 20 fragment binders were identified out of roughly 1300 tested.
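The pool-then-deconvolute workflow described above can be sketched in a few lines (hypothetical illustration; `screen` stands in for the equilibrium-dialysis readout):

```python
def screen(pool, binders):
    """Stand-in for the dialysis assay: a pool reads positive if it
    contains at least one fragment that displaces the radioligand."""
    return any(f in binders for f in pool)

def pooled_screen(fragments, binders, pool_size=5):
    """Round 1: screen pools of `pool_size` fragments.
    Round 2: deconvolute positive pools by screening singles."""
    pools = [fragments[i:i + pool_size]
             for i in range(0, len(fragments), pool_size)]
    hits = []
    for pool in pools:
        if screen(pool, binders):
            hits += [f for f in pool if screen([f], binders)]
    return hits
```

The payoff is assay economy: with a low hit rate, most pools read negative and never need deconvolution, so far fewer dialysis experiments are run than with one-fragment-per-chamber screening.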

WaterLOGSY NMR was used to confirm the binding of these 20 fragments to the riboswitch, and all of them were then tested using isothermal titration calorimetry, which yielded dissociation constants for 17 of them ranging between 22 and 670 micromolar. When tested against a different riboswitch, 10 of them appeared to be selective for thiM. The chemical structures of all of these were reported in 2011, along with some speculation as to how they might bind.

Of course, speculation is just that, and in fact fragment hits have been identified against RNA and DNA before. In the new paper the researchers use X-ray crystallography to actually determine the structures of several fragments bound to the riboswitch. This provides several interesting observations.

First, despite the different chemical structures of the fragment hits, all four of those whose structures were determined bind in the same region where the pyrimidine moiety of the natural ligand TPP binds. In fact, fragment 1 (magenta), which is essentially a fragment of TPP (green), almost perfectly superimposes on the corresponding moiety of TPP.
More strikingly, the co-crystal structures of each of the fragments bound to the riboswitch reveal that one of the guanosine residues (magenta stick in figure above) rearranges to fill the pocket that would otherwise be occupied by the pyrophosphate moiety of TPP (orange and red above). This occurs with fragment 1 as well as other fragments that do not resemble the natural ligand.

The researchers also took the useful step of solving the crystal structure of thiamine (cyan) bound to thiM. Since thiamine is intermediate in size between TPP and fragment 1, you might expect the structure to resemble one or the other, but as it turns out it binds in yet a third mode in which the pyrimidine ring no longer superimposes with the other two structures, nor does the guanosine residue rearrange to fill the pyrophosphate-binding pocket. This provides an interesting example of fragmenting natural products (TPP to thiamine to fragment 1). Although all of the molecules bind with high ligand efficiencies, it is unlikely that their binding modes could have been accurately predicted.

As the researchers note, the conformational shifts observed with these fragments could lead to antibiotics that selectively target an inactive form of the riboswitch. Although they’ve got a long way to go, it is fun to see folks applying FBLD to non-traditional targets.

05 May 2014

Biofragments: extracting signal from noise, and the limits of three-dimensionality

What does this protein do? Now that any genome can be sequenced, this question gets raised quite often. In many cases it is possible to give a rough answer based on protein sequence: this protein is a serine protease, that one is a protein tyrosine kinase, but figuring out the specific substrates can be more of a challenge. In a recent paper in ChemBioChem, Chris Abell and collaborators at the University of Cambridge and the University of Manchester attempt to answer this question with fragments.

The bacterium Mycobacterium tuberculosis (Mtb), which causes tuberculosis, has 20 cytochrome P450 proteins (CYPs), heme-containing enzymes that usually oxidize small molecules. Although some are essential for the pathogen, it is not clear what many of them do. The researchers used an approach called “biofragments” to try to pin down the substrate of CYP126.

The biofragments approach starts by selecting a collection of fragments based on known substrates. Of course, the specific substrates are not known, so in this case the researchers started with a set of several dozen natural (ie, non-synthetic) substrates of various other CYPs, both bacterial and eukaryotic. They then computationally screened the ZINC database of commercial molecules for fragments most similar to these substrates and purchased 63 of them. Perhaps not surprisingly given their similarity to natural products, these turned out to be more “three-dimensional” than conventional fragment libraries, as assessed both by the fraction of sp3-hybridized carbons and by principal moments of inertia.

Next, the researchers screened their fragments against CYP126 using three different NMR techniques (CPMG, STD, and WaterLOGSY). Since they were primarily interested in hits that bind at the active site, they also used a displacement assay in which the synthetic heme-binding drug ketoconazole was competed against fragments. This exercise yielded 9 hits – a relatively high 14% hit rate.

Strikingly, all of the hits are aromatic, and 7 of them could reasonably be described as planar. In other words, even though the biofragment library was relatively 3-dimensional, the confirmed hits were some of the flattest in the library! The researchers interpreted this to mean that “CYP126 might preferentially recognize aromatic moieties within its catalytic site,” but there could be something more general going on – perhaps aromatics are simply less complex, and thus more promiscuous.

Examining the fragment hits more closely, the researchers found that one of them – a dichlorophenol – produced a spectrophotometric shift similar to that produced by substrates when bound to the enzyme. This led them to look for similar structures among proposed Mtb metabolites. Weirdly, pentachlorophenol came up as a possible hit, and a spectrophotometric shift assay reveals that this molecule does have relatively high affinity for CYP126. Whether this is a biologically relevant substrate for the enzyme remains to be seen.

This is an intriguing approach, but I do have reservations. First, in constructing fragment libraries based on natural products, it is essential to avoid anything too “funky”. The Abell lab is one of the top fragment groups out there, well aware of potential artifacts, and has a long history of studying CYPs, but researchers with less experience could easily populate a library with dubious compounds.

More fundamentally though, I wonder about the basic premise of biofragments. The whole point of fragments is that they have low molecular complexity and are thus likely to bind to many targets, so is it realistic to try to extract selectivity data from them? Indeed, as we’ve seen (here and here), fragment selectivity is not necessarily predictive of larger molecules.

That said, the approach is worth trying. Even if it doesn’t ultimately lead to new insights into proteins’ natural substrates, it could lead to new inhibitors.

29 April 2014

Ninth Annual Fragment-based Drug Discovery Meeting, Part 2

The first major fragment event of 2014 drew around 500 people to San Diego last week. This is part of CHI’s three-day Drug Discovery Chemistry conference, and although the official FBDD track was only one of six, it is a testimony to the vitality of the field that fragments made appearances in most of the other sessions. With 17 talks in the FBDD track alone, this post will not attempt to be comprehensive; Teddy has already shared some impressions here.

Jim Wells (UCSF) gave a magisterial keynote address that emphasized how useful fragments can be for tackling difficult targets such as protein-protein interactions (PPIs). In fact, many of the talks in the protein-protein interaction track relied on fragments. That’s not to say it’s easy. Rod Hubbard (University of York and Vernalis) emphasized that advancing fragments to leads against such targets can take a long time and often requires patience that strains the management of many organizations. Fragment hits against PPIs usually have lower ligand efficiencies (0.23-0.25 kcal/mol/HA if you’re lucky), and improving potency can be a bear. Rhian Holvey (University of Cambridge) presented a nice example of how she was able to find millimolar fragments that bind to the anti-mitotic target TPX2, potentially blocking its interaction with importin-alpha, but even structural information was not enough to get to potent inhibitors.

G-protein coupled receptors (GPCRs) were thought to be unsuitable for fragments until recently, but both Iwan de Esch (whose work has been profiled several times, including here and here) and Jan Steyaert (Vrije Universiteit Brussel) presented success stories. In fact, Jan has only been working with the Maybridge fragment library for a few months, but has found agonists, antagonists, and inverse agonists for several GPCRs.

Another example of a difficult target is lactate dehydrogenase A (LDHA). We’ve previously highlighted cases where fragment linking was used to get to nanomolar binders (here and here); Mark Elban (GlaxoSmithKline) presented an example of fragment growing and using information from a high-throughput screen (HTS) to get to nanomolar binders. Mark also discussed a particularly disturbing false positive: HTS had generated dozens of confirmed hits spanning 7 chemotypes, but upon closer inspection it turned out that all of them came from a single vendor, and that – unreported by the vendor – they were all oxalate salts. Oxalate is a low micromolar inhibitor of LDHA, and is invisible in proton NMR, so I’m sure this was not fun to track down.

Ben Davis (Vernalis) also presented great examples of false positives and false negatives, and how to avoid them. In particular, the WaterLOGSY NMR technique is great for weeding out aggregators when run in the absence of protein.

A common theme throughout the conference was the integration of fragments with other methods, such as HTS. Nick Skelton (Genentech) actually titled his presentation “Fragment vs. HTS hits: does it have to be a competition?” Kate Ashton (Amgen) discussed how using information from a fragment screen helped solve pharmacokinetic issues with an HTS-derived hit. And Steven Taylor (Boehringer Ingelheim) presented a similar example (also covered here) of using fragments to fix a more advanced lead. Steven noted that fragment-based methods are now fully integrated into the organization, which marks a significant change from Sandy Farmer’s presentation at this meeting four years ago.

The roundtables are great opportunities to swap ideas and get feedback; Teddy already mentioned the excellent roundtable he chaired, but I wanted to also give a shout-out to one organized by Derek Cole (Takeda) focused on "practical aspects of fragment screening." We recently discussed fragments that destabilize proteins in thermal shift assays, and it turns out that folks from both the Broad Institute and Takeda have also crystallographically characterized such fragments. There was a sense that both stabilizers and destabilizers should be considered hits, though the latter were less likely to lead to crystal structures than the former.

Finally, on the subject of library design, Damian Young (Baylor College of Medicine) described using diversity-oriented synthesis (DOS) to generate more “three-dimensional” fragments. He is planning to build a library of roughly 3000 fragments which he hopes to make widely available to the community; these should help answer the question of whether the third dimension is really an advantage.

The importance of library design was also emphasized by Valerio Berdini (Astex); they are currently on their seventh generation library, about 40% of which is non-commercial, and half of whose members have been solved in one or more of 6000+ crystal structures. Relevant to the rule of three, Astex is moving to ever smaller fragments, with an average of 12.6 non-hydrogen atoms, ClogP = 0.6, and MW = 179. Indeed, despite assertions that PPIs may require larger fragments, Rod noted that at Vernalis the average fragments hits against PPIs are only slightly larger (MW = 202 vs 189 against all targets) and more lipophilic (ClogP 1.2 vs 0.8).

CHI has already announced that next year’s meeting will be held in San Diego from April 21-23. As it will be the ten year anniversary, they’re planning something big, so put it on your calendar now!

24 March 2014

Fragments vs MCL-1, again and again

Last year we highlighted a paper from Stephen Fesik’s group at Vanderbilt in which he used SAR by NMR and fragment merging to identify nanomolar inhibitors of the protein MCL-1, an anti-cancer target that had previously been thought to be impervious to small molecules. In a recent paper in Bioorg. Med. Chem. Lett., Andrew Petros, Chaohong Sun, and other former colleagues of Fesik at AbbVie describe two additional series of inhibitors.

The researchers started with an NMR screen using MCL-1 in which the methyl groups of isoleucine, leucine, valine, and methionine were 13C-labeled. Screening this against a library of 17,000 fragments in pools of 30(!) gave dozens of hits, some of which inhibited in a biochemical assay (for aficionados, they assessed binding to the BH3 domain of Noxa using fluorescence polarization as a readout).

Fragment 1 turned out to be fairly potent, though it is super-sized and violates the rule of three. The researchers were unable to get co-crystal structures of any of their fragments bound to MCL-1, but they were able to use NOE-based NMR experiments to develop a model of how fragment 1 might bind. This led them to synthesize a number of analogs such as compound 17, for which they were able to obtain a co-crystal structure with the protein, ultimately leading to the mid-nanomolar compound 24.

Fragment 2 was much less potent than the other fragment but had a considerably higher ligand efficiency. In this case simple modeling suggested growing away from the acidic portion of the molecule, leading to compound 36 (which was characterized crystallographically bound to MCL-1) and the more potent compound 44.

Overlaying the co-crystal structures of compounds 17 (blue) and 36 (red) reveals that they both bind in the same region, where Fesik’s compound 53 (green) also binds. All three molecules place a carboxylic acid in a similar position, but the two more potent molecules thrust a hydrophobic moiety deep into a pocket of the protein. It is tempting to speculate that compound 44, the more potent analog of compound 36, may also take advantage of this pocket.

Andrew Petros presented some of this work at FBLD 2012, so it is nice to see it in print. Though the molecules are reasonably potent, it is worth keeping in mind that they are also quite lipophilic. Perhaps it is significant that, like the Fesik paper, no cell-based data are presented. Collectively, though, these papers establish that MCL-1 is ligandable. Whether it will be druggable remains an important – and as yet unanswered – question.

17 March 2014

This is another way to do it.

The key to doing something right is following the directions.  How closely you follow the directions, or don't, can be the difference between brilliance and a merely good performance, e.g. in cooking.  Sometimes directions are meant as guidelines, like the Pirate Code or the Voldemort Rule.  Late last year, and blogged about here, I published a paper in Current Protocols on how to prosecute an STD screen.  A recent paper in PLOS ONE shows how someone else runs their screens, with details on library construction, solubility testing, and more.  What makes this paper of interest is the level of detail it provides.

Library Design: They assembled a diverse fragment library with the following rules: 110 ≤ molecular weight ≤ 350, clogP ≤ 3, number of rotatable bonds ≤ 3, number of hydrogen bond donors ≤ 3, number of hydrogen bond acceptors ≤ 3, total polar surface area ≤ 110, and logSw (aqueous solubility) ≥ −4.5. 
I am a little confused by the figure and what the text says.  In the text, they seem to have relaxed the MW cutoff, but the figure shows that anything not Voldemort Rule compliant is tossed.  They also preferred compounds with at least one aromatic peak (for easier NMR detection).  They purchased 1008 fragments from Chembridge, solubilized them at 200 mM in DMSO-d6 (ease of NMR detection, again), and then tested solubility at 1 mM in water.  I would have added some salt here, 50 mM, but that is a quibble.  For purity, they claim a low level of impurity (< 15%)!!!  To me, that is a whole lot of impurity.  But, as has been noted here, purity levels vary from library to library.
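Out of curiosity, the cutoffs above are easy to wire into a filter. A minimal sketch, assuming the descriptors have already been computed elsewhere (e.g. with a cheminformatics toolkit such as RDKit); the example molecules and values below are made up for illustration:

```python
# The paper's library-design cutoffs, applied to precomputed descriptors.
# (None means no bound on that side.)
RULES = {
    "mw":    (110, 350),    # molecular weight
    "clogp": (None, 3),     # clogP <= 3
    "rotb":  (None, 3),     # rotatable bonds <= 3
    "hbd":   (None, 3),     # H-bond donors <= 3
    "hba":   (None, 3),     # H-bond acceptors <= 3
    "tpsa":  (None, 110),   # total polar surface area <= 110
    "logsw": (-4.5, None),  # predicted aqueous solubility >= -4.5
}

def passes(desc):
    """Return True if a descriptor dict satisfies every cutoff."""
    for key, (lo, hi) in RULES.items():
        v = desc[key]
        if lo is not None and v < lo:
            return False
        if hi is not None and v > hi:
            return False
    return True

# hypothetical fragments
frag = {"mw": 180.2, "clogp": 1.1, "rotb": 2, "hbd": 1,
        "hba": 2, "tpsa": 45.0, "logsw": -2.3}
too_big = dict(frag, mw=410.0)
print(passes(frag), passes(too_big))  # True False
```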
Solubility Testing:  They then made sure to experimentally test every fragment for solubility.  I could not agree more emphatically with this approach.  Bravo!  They go into great detail, which I will not attempt to replicate here, and thanks to open access they have included the scripts in the supplemental material.  Acceptable compounds had > 0.1 mM aqueous solubility.  For me, this is too low, but to each their own.  They ended up with 893 total fragments (89% passed).  The real data I would like to see is how many would fail if the cutoff were set at 0.5 mM or higher.  
Pooling: They then describe their pooling strategy.  I like open access articles for a lot of reasons, and tend to overlook small editorial problems (typos, grammar, etc.), but in this case, let me rant.  The authors state in the text that a random mixing of compounds would lead to severe overlap, exemplified in 3a.  To me, it does no such thing. 

Their approach is very similar to the Monte Carlo-based one that has previously been discussed on this blog.  Their final pools contain 10 fragments at 20 mM (I assume in 100 % DMSO-d6). 
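Their actual scripts are in the supplemental, but the flavor of this kind of pooling is easy to sketch: randomized assignment that rejects any pool in which two fragments' 1H peaks would overlap. The peak lists and the 0.05 ppm resolvability threshold below are my assumptions, not theirs:

```python
import random

MIN_SEP = 0.05  # ppm; an assumed minimum resolvable 1H shift separation

def clashes(pool, frags, cand):
    """True if any peak of the candidate falls within MIN_SEP of any
    peak of a fragment already in the pool."""
    return any(abs(p - q) < MIN_SEP
               for name in pool for p in frags[name] for q in frags[cand])

def make_pools(frags, pool_size, tries=200, seed=0):
    """Randomized first-fit with restarts: shuffle the fragments, place
    each into the first pool with room and no peak clash, and keep the
    assignment that uses the fewest pools."""
    rng = random.Random(seed)
    names, best = list(frags), None
    for _ in range(tries):
        rng.shuffle(names)
        pools = []
        for name in names:
            target = next((p for p in pools
                           if len(p) < pool_size and not clashes(p, frags, name)),
                          None)
            if target is not None:
                target.append(name)
            else:
                pools.append([name])
        if best is None or len(pools) < len(best):
            best = [list(p) for p in pools]
    return best

# hypothetical fragments with their 1H peak positions (ppm);
# A and B clash (7.20 vs 7.22) and must land in different pools
frags = {"A": [7.20, 2.10], "B": [7.22, 3.40], "C": [6.80, 1.30],
         "D": [8.10, 2.60], "E": [7.90, 0.95], "F": [6.50, 3.10]}
pools = make_pools(frags, pool_size=3)
```

A real library would of course also separate reactive pairs (acids from bases, nucleophiles from electrophiles), not just overlapping peaks.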
Screening: They also acquired the 1H, STD (saturation at −0.7 ppm, > 1 ppm from any methyl), and WaterLOGSY spectra of every pool for future reference.  This is a very clever approach, as the STD should give no signal while the WaterLOGSY should give inverted peaks for all compounds in the pool (when interacting with a target they will be "right-side up").  Again, the figure may show that (I think if you blow up the figure the WaterLOGSY spectrum does have peaks), but it is very difficult to see. 
Three of the 90 pools (3.3%) showed peaks in the aromatic region, most likely due to aggregation (they observed precipitation).  I would like to know whether the compounds that showed STD peaks also had methyl groups within 1 ppm of the saturation frequency.  I would also like to know whether they removed those compounds from the library, or just dealt with it.  For a paper with a great level of detail, it falls flat in this respect.  
Screening was performed at 10 µM target : 500 µM ligand with the following parameters: acquisition time of 1 s, 32 dummy scans, and a relaxation delay of 0.1 s, followed by a 2 s Gauss pulse train with the irradiation frequency alternating between −0.7 ppm (on-resonance) and −50 ppm (off-resonance). The total acquisition time was 15 minutes with 256 scans.
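As a sanity check, those parameters do account for the stated run time; a quick back-of-envelope, assuming negligible instrument overhead:

```python
# Per-scan time = acquisition + relaxation delay + saturation train.
acq, d1, sat = 1.0, 0.1, 2.0   # seconds, from the quoted parameters
scans, dummy = 256, 32
total_min = (scans + dummy) * (acq + d1 + sat) / 60
print(round(total_min, 1))  # 14.9, i.e. the ~15 minutes quoted
```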
Screen Analysis: One of the first things they noticed was that there were differences between the reference spectra (plain water) and the screening samples (protein buffer).  They decided they could not automate the entire process and instead just scripted the data processing and display.  They then confirmed each putative active as a singleton. 
What they are putting together is a "One Size Fits All" process.  I give them credit for trying, but I do not think a single NMR-based process can work for all targets.  In particular, I think they could have used more typical conditions for the reference spectra.  The paper then goes on to discuss applications to targets of interest; for me, that is irrelevant.  This paper is an excellent companion to the Current Protocols paper and, thanks to open access, most likely to get far more citations.

03 March 2014

FABS-ulous Screening Against Membranes

The blog is running on a 2 hour delay today thanks to the winter storm (we actually got a nothing burger here from it).
If you follow this blog, and actually read what I say, you will know I have a 19F fetish.  Thus, whenever another paper comes out, I gravitate towards it.  Claudio Dalvit is really one of the primary (if not THE) drivers of 19F NMR screening development.  He has been discussed on this blog often, most recently back in October, when he published an example of n-FABS against a membrane target, FAAH.   Now he is back with this paper: "Fluorine NMR-based Screening on Cell Membrane Extracts".  I was immediately transported back to my days at Lilly, where our group came up with the great idea of screening (using STD) against crude membrane preps.  I don't remember much but my lab mate being unsuccessful in the end, for any number of reasons.  Obviously, the development of a robust biophysical technique that can be applied to intact cells, cell lysates, or membrane preps would be a significant addition to the biophysical toolbox.  Currently, only biochemical assays, largely based upon fluorescence, can do this.  n-FABS, as described previously, relies on the substrate of the target (labeled with at least one 19F atom) being converted by target action into product, causing a chemical shift change in the 19F.  This is easily detected by NMR and voila, an assay is born. This work is an extension of the previous work on FAAH and very similar to this work by Brian Stockman.  The 19F chemical shifts of substrate and product are easily differentiated and roughly quantifiable:
The proper controls showed that this activity is solely due to the TOI.  What makes this assay so appealing is shown in the next figure:
This figure shows the 1H spectrum of the reaction at 2 hr (top) and 24 hr (bottom).  There is virtually no difference between the spectra, indicating that it is impossible to follow the substrate by 1H due to large signals from detergents and endogenous protonated species.  For me, this is the key point.  We all know membrane proteins are hard to work with, especially with fragments.  I have always wondered where 19F fits in the biophysical toolbox, especially in light of recent discussions where it was presumed that 19F could outperform 1H.  In those discussions, I have said that 19F runs circles around 1H when the ligands are highly aliphatic.  Well, this is the converse, and just as true: 19F also wins when the sample matrix is ugly with "other stuff", in this case the stuff that keeps the target in solution.  One major drawback is that this is NOT a binding approach, and thus it would be of limited utility against non-enzymatic membrane targets, which make up the majority of membrane targets.  For those, SPR may be the most robust approach.

17 February 2014

Druggable is as Druggable Does; Or a Million Ways to use NMR

As we all know, the closure of sites is a bad thing for those of us in Pharma.  One very small silver lining is that closures free up a lot of very nice work to be published.  The former BI site in Laval has been closed for a year and we are still seeing great papers coming out.  In this one, in J. Med. Chem., LaPlante and co-workers tell us about their fragment efforts against HCV helicase.

HCV has recently had drugs approved for its treatment, but as with any virus, different modes of treatment are important.  The ATP-dependent helicase activity is found in the C-terminal 2/3 of the NS3 protein. Helicase activity is straightforward to measure, and there has been some success in terms of non-viral-specific inhibitors.  The inhibitors found to date act through undesirable mechanisms, but with a wealth of structural information available there is no reason to think helicase is inherently undruggable.  With this information in hand, they decided to target site 3+4 (green sticks are DNA from the structure), near the most conserved residue, W501.  The ATP-binding site is 1+2, for reference. 
Their first approach was to screen the 1,000,000+ corporate compound collection.  As you would expect for a paper blogged about here, they failed to find anything interesting (all the inhibitors worked by undesirable modes).  So, on to the FBDD campaign, to save the day once more.  They used a "shotgun" approach for their fragment screen:

Compounds came from several sources: fragment-like molecules rejected from an earlier HTS for lack of potency, additional HCS screening of the in-house fragment collection, commercial fragments screened in an SPR assay, virtual screening, and NMR.  They had a stringent workflow aimed at producing quality compounds for X-ray.  [The in-house fragment collection was 1000 compounds.]  This, along with NMR, validated ligands that bound to site 3+4.  They note one particularly noteworthy problem: high false-positive rates due to the high ligand concentrations needed for the assays, which led to aggregation, solubility, and promiscuity issues.  This led them to implement specific assays designed to eliminate these compounds (two NMR papers published in 2013, ref 18). 

They then clustered the best hits into 9 chemotypes:

They used an "Analog by Catalog" approach and soaked or co-crystallized the best compounds into crystals.  S6, S7, and S9 were not found to bind to helicase in the crystallization trials and were deprioritized.  S5 was found at site 3+4, but also elsewhere.  S1-4 and S8 were found to bind solely to site 3+4 (12 examples shown overlain). The key feature is that the compounds are centered in a wide groove over W501.  The topology of the binding site (wide groove and small lipophilic pocket) meant that optimizing for potency could be challenging.
From this, they decided S2-S4 were the most promising.  In the end, they focused on the S2 indole series.  The S2 prototype 1
was found from an STD-NMR screen of 3 fragments per sample (300 uM fragment and 3.5 uM helicase).  Then, much to my heart's delight, they reached into the NMR cabinet for line broadening and competition experiments, confirming it binds in site 3+4.  X-ray confirmed the binding mode, but potency was not improved with chemistry.  So back into the NMR cabinet they went: a methyl resonance assay, 
 15N TROSY showing peaks shifting upon addition of a derivative of 1, and 19F NMR!  OMG, how awesome is this?  

In terms of the chemistry, removing the Br did not change the potency, but did change the orientation of the compound in the binding site.  Further elaboration led to compound 19 (3 uM and 0.23 LE):
It contains a nitro group; think what you may.  To confirm the binding affinity of the compound without immobilizing the protein, they used the methyl resonances for the titrations.  The two separate peaks they followed gave values of 32 and 28 uM (+/- 8).  Given the broadness of these peaks, I think this is a pretty decent assay, although it is an order of magnitude different from the biochemical Kd.  However, subsequent structural studies revealed significant differences in structural dynamics between pH 6.5 and 7.5.  ITC gave the same number (33 uM and enthalpy driven); however, the ITC had to be run at high compound concentration and a different pH.  They then went off the deep end and decided to use CD (I can't link to a previous post on using CD because we have never had a post where someone used it).  With a horrible assay (don't even get me started on near-UV CD as a readout of tertiary structure), they got reasonably close to the Kds determined by ITC and methyl-NMR.  
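Extracting a Kd by following a methyl peak through a titration is simple arithmetic: the observed shift is the maximal shift change scaled by the bound fraction, which the exact 1:1 binding equation gives directly. A minimal sketch with a grid-search fit (the concentrations and Kd below are synthetic, not theirs):

```python
import math

def bound_fraction(Ptot, Ltot, Kd):
    """Exact 1:1 binding: fraction of protein bound at the given totals
    (all concentrations in the same units)."""
    b = Ptot + Ltot + Kd
    PL = (b - math.sqrt(b * b - 4.0 * Ptot * Ltot)) / 2.0
    return PL / Ptot

def fit_kd(Ptot, ligand_series, shifts, kd_grid):
    """Least-squares grid search for Kd; for each trial Kd the optimal
    maximal shift change (dmax) follows by linear least squares."""
    best = None
    for Kd in kd_grid:
        fb = [bound_fraction(Ptot, L, Kd) for L in ligand_series]
        dmax = sum(f * s for f, s in zip(fb, shifts)) / sum(f * f for f in fb)
        sse = sum((s - dmax * f) ** 2 for f, s in zip(fb, shifts))
        if best is None or sse < best[0]:
            best = (sse, Kd, dmax)
    return best[1], best[2]

# synthetic titration: 50 uM protein, true Kd = 30 uM, dmax = 0.10 ppm
P, true_kd, dmax = 50.0, 30.0, 0.10
L_series = [10, 25, 50, 100, 200, 400]
obs = [dmax * bound_fraction(P, L, true_kd) for L in L_series]
kd_fit, dmax_fit = fit_kd(P, L_series, obs, kd_grid=range(1, 201))
print(kd_fit)  # 30
```

In practice one would fit both followed peaks independently, exactly as the authors report (32 and 28 uM), and the spread is a useful error estimate.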

This is a very nice example of not being afraid of a target and using all available tools to advance hits against it.  It also shows the WIDE range of NMR experiments that can be used and that are easy and practical.  In terms of full disclosure, Steven LaPlante is a FOT (Friend of Teddy) and I have been working with him. 

03 February 2014

How weak is too weak for PPIs?

Ben Perry brought up an interesting question in a comment to a recent post about fragments that bind at a protein-protein interface: “At what level of binding potency does one accept that there may not be any functional consequence?” I suspect the answer will vary in part based on the difficulty and importance of the target, and many protein-protein interactions (PPIs) rank high on both counts. In a recent (and open-access!) paper in ACS Med. Chem. Lett., Alessio Ciulli and collaborators at the University of Dundee, the University of Cambridge, and the University of Coimbra (Portugal) ask how far NMR can be pushed to find weak fragments.

The researchers started with a low micromolar inhibitor of the interaction between the von Hippel-Lindau protein and the alpha subunit of hypoxia-inducible factor 1 (pVHL:HIF-1α), an interaction important in cellular oxygen sensing. The team had previously deconstructed this molecule into component fragments, but they were unable to detect binding of the smallest fragments.

In the new study, the researchers again deconstructed the inhibitor into differently sized fragments and used three ligand-detected NMR techniques (STD, CPMG, and WaterLOGSY) to try to identify binders. As before, under standard conditions of 1 mM ligand and 10 µM protein, none of the smallest fragments were detected. However, by maintaining ligand concentration and increasing the protein concentration to 40 µM (to increase the fraction of bound ligand) or increasing concentrations of both protein (to 30 µM) and ligand (to 3 mM), the researchers were able to detect binding of fragments that adhere to the rule of three.
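It is easy to see why raising the protein concentration works: for a weak binder in large ligand excess, the fraction of ligand bound scales roughly with [P]/([L] + Kd). A quick calculation for a hypothetical 3 mM fragment under roughly the conditions described (the exact Kd is my assumption for illustration):

```python
import math

def ligand_bound_fraction(Ptot, Ltot, Kd):
    """Exact 1:1 binding: fraction of the ligand in the bound state.
    All concentrations in the same units (here uM)."""
    b = Ptot + Ltot + Kd
    PL = (b - math.sqrt(b * b - 4.0 * Ptot * Ltot)) / 2.0
    return PL / Ltot

# hypothetical Kd = 3 mM fragment under the three conditions discussed
for P, L in [(10, 1000), (40, 1000), (30, 3000)]:
    pct = 100 * ligand_bound_fraction(P, L, 3000)
    print(f"{P} uM protein, {L} uM ligand: {pct:.2f}% bound")
```

Quadrupling the protein roughly quadruples the bound-ligand fraction (0.25% to 1% here), which is exactly the lever the researchers pulled.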

Of course, at these high concentrations, the potential for artifacts also increases, but the researchers were able to verify binding by isothermal titration calorimetry (ITC) and competition with a high-affinity peptide. They were also able to use STD data to show which regions of fragments bind to the protein, suggesting that the fragments bind similarly on their own as they do in the parent molecule. (Note that this is in contrast to a deconstruction study on a different PPI.) Even more impressively for a large (42 kDa) protein, the researchers were able to use 2-dimensional NMR (1H-15N HSQC) to confirm the binding sites.

Last year we highlighted a study that deconstructed an inhibitor of the p53/MDM2 interaction. In that case, the researchers were only able to find super-sized fragments, and they argued that for PPIs the rule of three should be relaxed. The current paper is a nice illustration that very small, weak fragments can in fact be detected for PPIs, though you may need to push your biophysical techniques to the limit.

But back to the original question of how weak is too weak. With Kd values from 2.7-4.9 mM, these are truly feeble fragments. Nonetheless, they could in theory have been viable starting points had they been found prospectively. That assumes, though, that these fragments would have been recognized as useful and properly prioritized. The ligand efficiencies (LE) of all the fragments, while not great, are not beyond the pale for PPIs. Previous research had suggested that much of the overall binding affinity in compound 1 comes from the hydroxyproline fragment (compound 6, which was originally derived from the natural substrate). Not discussed in the paper, but perhaps more significantly, the LLE (LipE) and LLEAT values are best for compound 6, which despite having the lowest affinity is the only compound that could be crystallographically characterized bound to the protein. In the Great Debate over metrics, this suggests that LLE and LLEAT may be more useful than simple LE for comparing very weak fragments.
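For readers who want to play with the metrics themselves, the standard definitions are simple arithmetic. A sketch using the usual conventions (LE in kcal/mol per heavy atom at 298 K; the example fragment is hypothetical, and LLEAT is omitted for brevity):

```python
import math

RT_LN10 = 1.371  # 2.303*R*T in kcal/mol at 298 K

def p_affinity(Kd_M):
    """pKd from a dissociation constant in molar units."""
    return -math.log10(Kd_M)

def ligand_efficiency(Kd_M, hac):
    """LE = -RT*ln(Kd) / heavy atom count, in kcal/mol per heavy atom."""
    return RT_LN10 * p_affinity(Kd_M) / hac

def lle(Kd_M, clogp):
    """Lipophilic ligand efficiency: pKd - cLogP."""
    return p_affinity(Kd_M) - clogp

# hypothetical feeble fragment: Kd = 3 mM, 12 heavy atoms, cLogP = 0.5
print(round(ligand_efficiency(3e-3, 12), 2), round(lle(3e-3, 0.5), 2))
```

Note how a truly feeble 3 mM fragment can still post a respectable LE (~0.29) if it is small enough, which is exactly why such hits can be viable starting points.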

13 January 2014

There can be too much of a Good Thing

One nice thing about being a consultant is that I get paid to think about things for people (sometimes).  One of the things I have been thinking about lately (on the clock) is the optimal size of fragment pools.  I started wondering whether there can be too many fragments in a pool:
You know how your mother always said, don’t eat too much, it will make you sick?  I never believed her until my own child was allowed to eat as much Easter candy as possible and it actually made him sick.  [It was part of a great experiment to see how much "Mother Wisdom" was true, like Snopes.]  I have been working lately on library (re-)optimization, and one thing that keeps coming up is how many fragments should go in a pool.  As pointed out here and discussed here, there are ways to optimize pools for NMR (and I assume the same approach can be applied to MS).  So, we have always assumed that the more fragments in a pool the better off you are, and of course the more efficient.  
But is that true?  Is there data to back this up?  Probably not; I don’t know that anyone wants to run the control.  So, let’s do the gedankenexperiment. If you have 50 compounds in a pool (a nice round number, easy to do the math with, so it's my kind of number), you would expect a 3-5% hit rate for a “ligandable” target.  That means 1.5-2.5 fragments in that pool would be expected to hit.  Say two fragments hit: those two would then compete, and your readout signal would be reduced by 50%.  So, if you are already having trouble with signal, you are going to have more.  Also, can you be sure the negatives are real, or did they “miss” because of signal lowered by the competition?  What if one of the hits is very strong?  And how do you rank-order the hits?  Do you scale the signal by the number of hits in the pool?
I then reached out to the smart people I know who tend to be thinking about the same things I do, but in far greater depth.  I spoke to Chris at FBLD, who was putting together a large 19F library, aiming to get up to 40 or more 19F fragments in a pool.  Well, Chris Lepre at Vertex was already thinking about this exact problem. He shared his thoughts with me and agreed to let me share them here (edited slightly for clarity).  

To accurately calculate the likelihood of multiple hits in a pool, I [Ed:Chris] used the binomial distribution.  For your hypothetical pools of 50 and a 3% hit rate, 44% of the samples will have multiple hits (25.6% with 2 hits, 12.6% with 3, 4.6% with 4); at a 5% hit rate this increases to 71% (26.1% = 2 hits, 22% = 3, 13.6% = 4, 6.6% = 5, 2.6% = 6).  So, the problem of competition is very real.  It's not practical to deconvolute all mixtures containing hits to find the false negatives:  the total number of experiments needed to screen and deconvolute is a minimum when the mixture contains approximately 1/(hit rate)^0.5 (i.e., for a 5% hit rate, mixtures of 5 are optimal). [Ed:Emphasis mine!] 
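[Ed: Chris's binomial arithmetic is easy to check yourself; note that summing the whole tail puts the 5% figure at 72%:]

```python
from math import comb

def p_hits(n, k, p):
    """Binomial probability of exactly k hits in a pool of n fragments."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def p_multiple(n, p):
    """Probability a pool of n contains two or more hits."""
    return 1 - p_hits(n, 0, p) - p_hits(n, 1, p)

print(round(100 * p_multiple(50, 0.03)))    # 44 (pools of 50, 3% hit rate)
print(round(100 * p_hits(50, 2, 0.03), 1))  # 25.6 (exactly 2 hits)
print(round(100 * p_multiple(50, 0.05)))    # 72 (pools of 50, 5% hit rate)
```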

Then there's the problem of chemical reactions between components in the mixture.  Even after carefully separating acids from bases and nucleophiles from electrophiles in mixtures of 10, Mike Hann (JCICS 1999) found that 9% of them showed evidence of reactions after storage in DMSO. This implies a reaction probability of 5.2%, which, if extended to the 50-compound pool example, would lead one to expect reactions in 70% of those mixtures.  If this seems extreme, keep in mind that the number of possible pairwise interactions is npairs = n(n−1)/2 [Ed: fixed equation], where n = the number of compounds in the pool.  So, a mixture of 10 has 45 possible interactions, while a mixture of 50 has 1,225.  Even with mixtures of only five, I've seen a fair number of reacted and precipitated samples.  Kind of makes you wonder what's really going on when people screen mixtures of 100 (4,950 pairs!) by HSQC.  [Ed: I have also seen this, as I am sure other people have.  I think people tend to forget about the activity of water.  For those who hated that part of PChem, here is a review.  Some fragment pools could be 10% DMSO in the final pool, and are probably much higher in intermediate steps.]

Finally, there's the problem of chemical shift dispersion.  Even though F19 shifts span a very large range and there are typically only one or two resonances per compound, the regions of the spectrum corresponding to aromatic C-F and CF3 become quite crowded.  And since the F19 shifts are relatively sensitive to small differences in % DMSO, buffer conditions, etc., it's necessary to separate them by more than would be necessary for 1H NMR.  Add to that the need to avoid combining potentially reactive compounds (a la Hann), and the problem of designing non-overlapping mixtures becomes quite difficult.  [Ed: They found that Monte Carlo methods failed them.]

I've looked at pools as large as 50, but at this point it looks like I'll be using less than 20 per pool.  I'm willing to sacrifice efficiency in exchange for avoiding problems with competition and cross-reactions.  The way I see it, fragment libraries are so small that each false negative means potentially missing an entire lead series, and sorting out the crap formed by a cross-reaction is a huge time sink (in principle, the latter could yield new, potentially useful compounds, but in practice it never seems to work out that way).  The throughput of the F19 NMR method is so high and the number of good F-fragments so low that the screen will run quickly anyway.  Screening capacity is not a problem, so there's not really much benefit in being able to complete the screen within 24 hrs vs. a few days.
The most common pool size (from one of our polls) was 10 fragments/pool.  By Chris's 1/√(hit rate) criterion, a pool of 10 is only optimal if the expected hit rate is around 1% or less.  Either people are expecting particularly low hit rates, or they are putting too many fragments in a pool.  So, is there an optimal pool size?  I would think there is: somewhere between 10 and 20 fragments.  You are looking for a pool size that maximizes efficiency, but not one so large that you raise the likelihood of competition. 
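The 1/√(hit rate) optimum Chris quotes falls out of a simple cost model: N/m pool experiments plus roughly m singleton follow-ups for each of the ~N·p hit-containing pools. A sketch of that toy model (my reconstruction, not Chris's actual calculation):

```python
# Total experiments to screen N fragments in pools of m, assuming hits
# are sparse and each hit-containing pool costs ~m singleton follow-ups:
#   T(m) = N/m + N*p*m,  minimized at m = 1/sqrt(p)
N, p = 1000, 0.05
costs = {m: N / m + N * p * m for m in range(1, 21)}
best_m = min(costs, key=costs.get)
print(best_m, round(1 / p ** 0.5, 1))  # 4 4.5 (m=4 and m=5 tie at 450 experiments)
```

The model ignores competition and cross-reactions entirely, both of which push the practical optimum even lower, which is consistent with Chris's choice to sacrifice some efficiency.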

30 December 2013

Review of 2013 reviews

The year is coming to an end, and as we did last year, Practical Fragments is looking back at notable events as well as reviews that we haven’t previously highlighted.

The fragment calendar started in March in Oxfordshire, at the RSC Fragments 2013 conference, closely followed in April by CHI’s FBDD meeting in San Diego (here and here). Closing out the year for conferences that Teddy or I attended was the Novalix conference on Biophysics in Drug Discovery in Strasbourg (here, here, and here).

There weren’t any new books published (though the special issue of Aus. J. Chem. practically counts as one), but there were several notable reviews.

Stephen Fesik and colleagues at Vanderbilt University published “Fragment-based drug discovery using NMR spectroscopy” in J. Biomol. NMR. This is an excellent overview that covers library design, NMR screening methodologies, and compound optimization. The researchers make an interesting case for including multiple similar compounds and allowing for larger, more lipophilic fragments, while always being careful to avoid “bad actors”. They also do a good job of summarizing the various NMR techniques, including their strengths and limitations, in language accessible to a non-spectroscopist. Finally, the section on fragment linking discusses the theoretical gains in affinity, the practical challenges to achieving these, and strategies to overcome them.

Turning to the other high-resolution structural technique, Rocco Caliandro and colleagues at the CNR-Istituto di Cristallografia in Italy published “Protein crystallography and fragment-based drug design” in Future Med. Chem. This provides a fairly technical description of X-ray crystallography and its role in FBDD, along with a table summarizing around 30 examples, five of which are discussed in some detail.

Of course, it’s always best to use multiple techniques for finding fragments, so it’s well worth perusing “A three-stage biophysical screening cascade for fragment-based drug discovery,” published in Nature Protocols by Chris Abell and colleagues at the University of Cambridge. This expands on a gauntlet of biophysical assays (involving differential scanning fluorimetry (DSF), NMR, crystallography, and isothermal titration calorimetry (ITC)) that we discussed earlier this year. Nature Protocols are highly detailed, with lots of troubleshooting tips, so this is a great resource if you’re exploring any of these techniques.

Finally, Christopher Wilson and Michelle Arkin at the University of California San Francisco published “Probing structural adaptivity at PPI interfaces with small molecules” in Drug Discovery Today: Technologies. Protein-protein interactions are frequent targets for FBLD: see for example here, here, here, here, and here – and that’s just for 2013! The current review gives a nice overview of the technology called Tethering, focusing on the cytokine IL2 and an allosteric site on the kinase PDK1.

And with that, Practical Fragments thanks you for reading and says goodbye to 2013. May your 2014 be happy and fulfilling!