The Fragment-based Lead Discovery conference (FBLD 2009) just concluded in York, UK; it was the second in what will hopefully be a continuing series. With more than two dozen talks and as many posters spread over three days, most of them very high quality, it is impossible to summarize even the highlights (and I don’t want to scoop pending publications). Instead I’ll just jot down a few impressions.
On the broad topic of why FBLD is useful, an interesting shift in emphasis seems to have occurred. A few years ago a key argument in favor of fragments was getting compounds to the clinic faster, but there is now a greater focus on quality over speed. In summarizing over a decade of fragment work at Abbott, Phil Hajduk noted that FBLD hits consistently bind more efficiently than those from HTS. Similarly, Chris Murray of Astex noted that, among their five clinical candidates (four of which target kinases), the average ClogP was 1.7 (vs 4.1 for a set of 45 reported orally active kinase inhibitors), while the average molecular weight was 390 (vs 457).
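The standard way to quantify “binds more efficiently” is ligand efficiency: the binding free energy divided by the number of heavy atoms. A minimal sketch of the arithmetic (the function name and example affinities are illustrative, not figures from Hajduk’s talk):

```python
import math

def ligand_efficiency(kd_molar, n_heavy, temp_k=298.15):
    """Ligand efficiency in kcal/mol per heavy atom: LE = -dG / N_heavy,
    where dG = RT * ln(Kd)."""
    R = 0.0019872  # gas constant in kcal/(mol*K)
    dg = R * temp_k * math.log(kd_molar)  # negative for Kd < 1 M
    return -dg / n_heavy

# A weakly binding 12-atom fragment beats a potent 35-atom HTS hit
# on a per-atom basis:
print(ligand_efficiency(1e-4, 12))  # 100 uM fragment, ~0.45 kcal/mol/atom
print(ligand_efficiency(1e-6, 35))  # 1 uM HTS hit,    ~0.23 kcal/mol/atom
```

This is why a modest-affinity fragment can be a better starting point than a more potent but much larger screening hit.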
One theme that differentiated this meeting from others was a strong focus on modeling: an entire day was devoted to sessions on “fragments, scoring functions and docking” and “design from fragments.” This concluded with a lively round table discussion, chaired by Vernalis’ James Davidson, titled “Chemistry challenging modeling.” But challenges didn’t only come from chemists: one prominent modeler noted that there have been no fundamentally new approaches to modeling in the past two decades; another asked why, despite the number of interesting new chemistries out there, so many modelers restrict themselves to the same old standbys such as amide bonds.
Part of the problem with modeling, of course, is separating hits from noise: true hits often show up near – but not at – the top of a ranked list, so how does one decide what is worth pursuing? Phil Hajduk discussed the use of “Belief Theory”, in which the similarity of an unknown molecule to a known active is used to evaluate the unknown.
Another problem is the quality of primary data: as Hajduk noted, “no one takes experimental error into account” when predicting ligand binding, and a recent analysis suggests that over-fitting data is a substantial problem with many computational approaches. This is all the more problematic when the data are not just noisy but spurious; Practical Fragments has noted the problem of aggregation, and UCSF’s Brian Shoichet emphasized this point, noting that 85-95% of hits from a high-throughput screen could be artifacts, while 85-100% of what remains could also be bogus. He did note, though, that fragments are less problematic in this regard than larger molecules. And Genentech’s Tony Giannetti, Vernalis’ James Murray, and others illustrated how surface plasmon resonance is effective at weeding out bad actors.
Getting better data will clearly be essential to getting better models, but one key ingredient, the forces that govern protein–small-molecule interactions, is still poorly understood. Gerhard Klebe of the University of Marburg presented a detailed and elegant set of experiments exploring the effects of chemical structure on the enthalpy and entropy of binding to the protein thrombin. He emphasized that desolvation of fragments from water is critical, and only possible if compensated by strong interactions with the protein. This also implies that you want fragments that have low desolvation penalties as well as high solubilities – a tricky balancing act.
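The bookkeeping behind such measurements is the Gibbs relation, ΔG = ΔH − TΔS. A minimal sketch (the ligand numbers are invented to illustrate enthalpy–entropy compensation, not Klebe’s data):

```python
def gibbs_free_energy(dh_kcal, ds_kcal_per_k, temp_k=298.15):
    """Gibbs relation dG = dH - T*dS, with dH in kcal/mol and
    dS in kcal/(mol*K)."""
    return dh_kcal - temp_k * ds_kcal_per_k

# Two hypothetical ligands with nearly identical affinity
# (dG ~ -7 kcal/mol) but opposite thermodynamic signatures:
print(gibbs_free_energy(-10.0, -0.010))  # enthalpy-driven, pays an entropy penalty
print(gibbs_free_energy(-4.0, 0.010))    # entropy-driven, e.g. favorable desolvation
```

Because the two terms can trade off while ΔG barely moves, affinity alone says little about *how* a fragment binds – which is why the calorimetry dissections described above are so valuable.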
FBLD 2009 was held barely six months after Fragments 2009, and it is a testament to the vibrancy of the field that both conferences managed to be so successful and exciting while sharing very few speakers in common.
For the other two-hundred-plus attendees at the conference, what were some of your impressions?