11 April 2012

Library design, search methods, and applications of fragment-based drug design

This is the title of a new book, edited by Rachelle Bienstock, which comes out of two symposia she organized at recent ACS Meetings (one of which is summarized here).

The book starts with an overview of fragment-based drug design by Bienstock. This is a thorough summary of the talks in both symposia, including those that did not end up as full chapters. This chapter also includes a useful table of available software relevant for FBLD.

The longest section of the book is devoted to designing and searching fragment libraries. In chapter 2, Dimitar Hristozov and colleagues at Eli Lilly and the University of Sheffield describe an algorithm for the de novo design of new molecules based on known reactions. Chapter 3, by John Badger of DeltaG Technologies, addresses the question of how to design libraries for crystallographic screening, with some emphasis on software. Chapter 4, by Ammar Abdo and Naomie Salim at Universiti Teknologi Malaysia, describes a Bayesian inference network for virtual screening as an alternative to conventional similarity searching. And chapter 5, by François Moriaud and colleagues at MEDIT, describes the researchers’ mining of the Protein Data Bank (PDB) to understand the relationship between ligands and their protein pockets. This information is used to generate bioisosteric replacements and new compound libraries for specific targets.
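To make the reaction-driven enumeration idea of chapter 2 a bit more concrete, here is a minimal sketch of the general approach using RDKit, not the Lilly algorithm itself: a single "known reaction" (an amide coupling, written as my own illustrative reaction SMARTS) is applied combinatorially to small lists of placeholder acids and amines to enumerate virtual products.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# One "known reaction" (amide coupling) as a reaction SMARTS;
# any other validated reaction could be swapped in the same way.
amide_coupling = AllChem.ReactionFromSmarts(
    '[C:1](=[O:2])[OH].[N;H2,H1:3]>>[C:1](=[O:2])[N:3]')

# Illustrative reagents (placeholders, not from the book).
acids = [Chem.MolFromSmiles(s) for s in ('CC(=O)O', 'OC(=O)c1ccccc1')]
amines = [Chem.MolFromSmiles(s) for s in ('NCc1ccccc1', 'C1COCCN1')]

products = set()
for acid in acids:
    for amine in amines:
        for prods in amide_coupling.RunReactants((acid, amine)):
            product = prods[0]
            Chem.SanitizeMol(product)  # reaction products are not sanitized by default
            products.add(Chem.MolToSmiles(product))

for smi in sorted(products):
    print(smi)
```

In a real workflow the products would then be filtered (property rules, novelty, synthetic feasibility) before anyone considers making them.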

The next section of the book is focused on docking. In chapter 6, Zsolt Zsoldos of SimBioSys describes the company's high-speed eHiTS (electronic High Throughput Screening) engine for docking fragments. This chapter delves deeply into a statistical scoring function and should be of particular interest to the mathematically inclined. Chapter 7, by Peter Kolb at UCSF, discusses DAIM (Decomposition and Identification of Molecules), a program designed to break larger molecules into fragments, as well as computational methods for docking these derived fragments. He describes the use of this software to discover high-affinity ligands for the kinase EphB4 and other targets.
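The decomposition step in chapter 7 has a rough open-source analogue in RDKit's BRICS rules. The sketch below is my own illustration of that general idea, not DAIM itself: it breaks a kinase inhibitor (imatinib) into fragments at retrosynthetically sensible bonds.

```python
from rdkit import Chem
from rdkit.Chem import BRICS

# Imatinib as an example "large" molecule to decompose.
imatinib = Chem.MolFromSmiles(
    'Cc1ccc(NC(=O)c2ccc(CN3CCN(C)CC3)cc2)cc1Nc1nccc(-c2cccnc2)n1')

# BRICS cleaves bonds matching a fixed set of retrosynthetic rules and
# marks each cut point with a numbered dummy atom ([1*], [16*], ...).
for frag_smiles in sorted(BRICS.BRICSDecompose(imatinib)):
    print(frag_smiles)
```

The numbered dummy atoms record which kinds of bonds were cut, so the fragments can later be docked or recombined in chemically sensible ways.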

The last section is devoted to fragment growing and linking. Chapter 8, by Eugene Shakhnovich and colleagues at Harvard, discusses FOG (Fragment Optimized Growth), which grows molecules by adding fragments such that the resulting molecules resemble those in a training set. This allows one to focus on regions of chemical space that are believed to be particularly productive or drug-like. And finally, in chapter 9 Zenobia founder Vicki Nienaber describes how fragment-based approaches are ideally suited for discovering drugs targeting the central nervous system.
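Chapter 8's central idea, growing molecules only from pieces that a training set says are reasonable, can be approximated (very loosely; FOG itself uses its own statistical growth procedure) with RDKit's BRICS build tools: decompose a training set into fragments, then recombine those fragments so every product is assembled from training-set-derived pieces. The training molecules below are placeholders of my own choosing.

```python
import itertools
from rdkit import Chem
from rdkit.Chem import BRICS

# A toy "training set" (a real one would be much larger).
train = [Chem.MolFromSmiles(s) for s in (
    'CC(=O)Oc1ccccc1C(=O)O',                               # aspirin
    'CC(C)Cc1ccc(C(C)C(=O)O)cc1',                          # ibuprofen
    'Cc1ccc(-c2cc(C(F)(F)F)nn2-c2ccc(S(N)(=O)=O)cc2)cc1',  # celecoxib
)]

# Collect BRICS fragments from the training set...
fragments = set()
for mol in train:
    fragments.update(BRICS.BRICSDecompose(mol))
fragment_mols = [Chem.MolFromSmiles(f) for f in fragments]

# ...then recombine them; every product is built only from
# training-set-derived fragments.
for product in itertools.islice(BRICS.BRICSBuild(fragment_mols), 10):
    product.UpdatePropertyCache(strict=False)
    print(Chem.MolToSmiles(product))
```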

By my count this marks the fifth book completely dedicated to fragment-based lead discovery, but its focus on computational methods still sets it apart from the others. That’s the fun thing about fragments: there’s something for everyone.

1 comment:

Rima said...

Dan, I am glad to read about eHiTS in this book. I've used eHiTS and LASSO in the past for HepC-related CADD work and found eHiTS to provide the best correlation between the predicted binding affinity and actual IC50 values: almost a 10% improvement over other, more popular commercial programs like Glide and GOLD.
Also, at Pfizer, we used to use a method similar to what is described in chapter 7. We decomposed large molecules and rebuilt new ones from the fragments; eventually we had a huge combinatorial library that we could screen against targets. It is good to see these kinds of methods gain more prominence.