11 November 2024

Poll results: fragment finding methods and structural information needed for fragment-to-lead efforts

Our most recent poll asked about fragment finding methods. The poll ran from September 21 through November 8 and received 135 responses from 20 countries. Two-thirds of these came from the US, about 12% from the UK, 4% from Germany, 3% from the Netherlands, and 2% from Australia.
 
The first question asked how much structural information you need to begin optimizing a fragment. Compared with 2017, when we first asked this question, crystallography has increased significantly at the expense of the other choices.
 
 
I confess to being surprised, as I expected that by now people would be more comfortable beginning optimization in the absence of structural information. This approach has been quite successful, as discussed in a 2019 open-access Cell Chemical Biology review by Ben Davis, Wolfgang Jahnke, and me. Perhaps the increasing speed and accessibility of new methods has so lowered the bar to getting crystal structures that people have the luxury of waiting. Of course, with an online poll there is always the risk that many respondents from the same organization may skew the results.
 
The second question asked which methods you use to find and validate fragments. This is the fifth time we’ve run this poll, starting in 2011. As with our first question, X-ray crystallography came out on top, with nearly 80% of respondents choosing it. This was followed by SPR, at 67%, and thermal shift and ligand-detected NMR, each around 55%. 
 
 
Functional screening was used by nearly half of respondents, while computational methods, protein-detected NMR, and literature starting points were each used by around a third. Mass spectrometry and ITC were each used by slightly more than a quarter of respondents.
 
For the first time we asked about cryo-EM, and nearly 20% of respondents reported using this technique.
 
MST and affinity-based methods each came in at 13%, with just 4% of respondents using BLI, and 5 individual respondents using other methods. I’d be curious to know what these are.
 
The average respondent reported using just over 5 different techniques, which is down slightly from 6 in 2019 but up from 4 in 2016. Using multiple orthogonal methods is clearly well established as best practice, even if the precise number varies.
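As an aside, the two summary statistics reported here (the percentage of respondents selecting each method, and the average number of methods per respondent) are straightforward to compute from raw multi-select responses. Below is a minimal Python sketch; the entries are invented stand-ins, not the actual poll data.

# Minimal sketch: tabulating multi-select poll responses (Python).
# The responses below are invented for illustration, NOT the poll data.
from collections import Counter

responses = [
    {"X-ray crystallography", "SPR", "ligand-detected NMR"},
    {"X-ray crystallography", "thermal shift", "SPR", "cryo-EM"},
    {"functional screening", "computational methods", "X-ray crystallography"},
]

# Percentage of respondents selecting each method
counts = Counter(method for r in responses for method in r)
for method, n in counts.most_common():
    print(f"{method}: {100 * n / len(responses):.0f}%")

# Average number of techniques per respondent
average = sum(len(r) for r in responses) / len(responses)
print(f"Average techniques per respondent: {average:.1f}")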
 
How do these results compare with your own practices?