Over the past decade, fluorine
NMR has established itself as a powerful fragment-finding method due to the advantages
Teddy laid out in his classic “fluorine fetish” post. One feature of 19F
NMR is that the chemical shifts of organofluorine molecules span a very wide
range, in theory allowing large mixtures to be screened. However, existing NMR methods
do not work across such large spectral windows, thereby requiring multiple experiments
to screen an entire library. This limitation has now been overcome as described
in a paper just published in Angew. Chem. by Andreas Lingel, Andreas
Frank, and collaborators at Novartis and Karlsruhe Institute of Technology.
The researchers developed an experiment
based on “broadband universal rotation by optimized pulses” (BURBOP). I confess
that the details evade me (though they are all there in the supporting
information if you wish to try it at home), but the upshot is a type of CPMG experiment
in which fluorine-containing fragments bound to a protein show decreased peak
intensities. Crucially, a single experiment can cover the full frequency range
of pharmacologically relevant fluorine-containing molecules, spanning about 210 ppm.
Previously, this required two separate experiments.
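To put that 210 ppm window in perspective, here's a quick back-of-the-envelope calculation; the 600 MHz field strength is my assumption for illustration, not a detail from the paper:

```python
# How wide is a 210 ppm 19F window in frequency terms?
# Assumes a 600 MHz (1H) spectrometer; the 19F Larmor frequency
# is about 0.941 times that of 1H. Illustrative only.
h1_freq_mhz = 600.0
f19_freq_mhz = 0.941 * h1_freq_mhz      # ~564.6 MHz
window_ppm = 210.0
window_hz = window_ppm * f19_freq_mhz   # ppm * MHz = Hz
print(f"{window_hz / 1000:.0f} kHz")    # ~119 kHz
```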
Such increased throughput led the
researchers to revamp their library, increasing the size from 1600 to 4000
fragments in an augmented library dubbed LEF4000. The paper has a nice, broadly
applicable description of their curation process. Candidate members were brought
in from both commercial and in-house sources and chosen to complement existing
library members in terms of diversity. A modified rule of three was applied,
with trifluoromethyl-containing fragments allowed to go up to 350 Da.
An in-house analysis of 25,000
fragments revealed that only about half of those with a clogD7.4 greater
than 3 were soluble above 0.5 mM, so a clogD7.4 of 3 was applied as an upper limit. Fragment
solubilities were experimentally measured, and only compounds with solubilities
above 0.2 mM were kept. (Although fluorine NMR is often done at low concentrations,
complementary biophysical experiments are not.) Additional quality control
measures included NMR and LC-MS purity assessments and removal of compounds
that formed soluble aggregates as assessed by CPMG. Ultimately, 3969 of 5600
candidate molecules passed the gauntlet and were combined in 131 mixtures of
about 30 compounds each.
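For the curious, here's a minimal sketch of this sort of triage. The thresholds are those described above, but the Fragment fields, helper function, and candidates list are hypothetical stand-ins rather than the authors' actual pipeline (which also included diversity selection and the quality-control steps described above):

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    mw: float             # molecular weight (Da)
    clogd74: float        # calculated logD at pH 7.4 (the paper used Moka)
    solubility_mm: float  # experimentally measured solubility (mM)
    has_cf3: bool         # trifluoromethyl-containing?

def passes_triage(f: Fragment) -> bool:
    """Apply the cutoffs mentioned in the post (hypothetical helper)."""
    mw_limit = 350.0 if f.has_cf3 else 300.0  # modified rule of three
    return (f.mw <= mw_limit
            and f.clogd74 <= 3.0          # poor solubility above clogD 3
            and f.solubility_mm >= 0.2)   # measured-solubility floor

# Hypothetical usage: 'candidates' would hold the ~5600 curated molecules.
candidates = [Fragment(mw=215.1, clogd74=1.8, solubility_mm=1.2, has_cf3=True)]
library = [f for f in candidates if passes_triage(f)]
```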
Having built their library, the
researchers screened it against the antibacterial target CoaD, which is involved
in coenzyme A synthesis. The screen took just two days, and automated hit identification
took only a few hours on a standard laptop. The overall hit rate was ~6%, and
some of the hits were confirmed using two-dimensional protein-observed NMR
methods, revealing that they bind in the enzyme active site with affinities in
the mid micromolar to low millimolar range.
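To illustrate what automated hit identification might look like, here's a hedged sketch of CPMG-style hit calling, in which a fragment is flagged if its 19F peak intensity drops in the presence of protein. The 30% attenuation cutoff and the function itself are my own assumptions, not the authors' method:

```python
def call_hits(reference: dict[str, float],
              plus_protein: dict[str, float],
              max_ratio: float = 0.7) -> list[str]:
    """Flag fragments whose peak intensity drops with protein present.

    Both dicts map fragment IDs to integrated 19F peak intensities from
    CPMG spectra recorded without (reference) and with protein.
    """
    return [fid for fid, i0 in reference.items()
            if i0 > 0 and plus_protein.get(fid, 0.0) / i0 <= max_ratio]

# Hypothetical example: frag-2's peak loses half its intensity -> hit.
hits = call_hits({"frag-1": 100.0, "frag-2": 100.0},
                 {"frag-1": 95.0, "frag-2": 50.0})
print(hits)  # ['frag-2']
```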
Pushing the technique further,
the researchers built a “Supermixture” of 152 compounds, including five of the
hits spanning a wide range of chemical shifts, from -50 to -220 ppm. Even under
these conditions the binders were readily identifiable, and the paper states
that libraries exceeding 20,000 fragments could in principle be screened in a
few days.
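The throughput arithmetic behind that claim is easy to check; in this sketch the per-mixture acquisition time is an assumption for illustration, not a figure from the paper:

```python
import math

# Rough throughput arithmetic for Supermixture-scale screening.
library_size = 20_000
mix_size = 150                     # ~Supermixture scale (152 compounds)
n_mixtures = math.ceil(library_size / mix_size)   # 134 mixtures
minutes_per_mixture = 30           # assumed, including sample handling
total_hours = n_mixtures * minutes_per_mixture / 60
print(f"{n_mixtures} mixtures, ~{total_hours:.0f} h")  # 134 mixtures, ~67 h
```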
In 2009 I wondered why 19F
NMR was not used more widely. How things change! At Novartis the LEF4000
library has been screened against “a wide variety of disease-related targets”
and identified “tractable hits for each of the screened targets, among them
many considered undruggable by small molecules such as transcription factors, a
cytokine, a nuclear receptor, and a repeat RNA.” Practical Fragments
looks forward to seeing some of these appear in the growing list of FBDD-derived
clinical candidates.
Small typo: actually, as described in the paper, the screening previously required two, not four, separate experiments.
Thanks Anonymous - you are correct. Although the paper states, "four independent experiments covering spectral windows of ~50 ppm and recorded with different center frequencies are required..." it goes on to mention that this can be cut in half with a previously described single echo experiment.
I don't have access to this journal... so can someone tell me which method/software they used to calculate logP? Calculated values vary by surprisingly large amounts!
Hi Anonymous, the Supporting Information is open access, and they say they used the program Moka to calculate clogD7.4.