21 October 2014

Benchmark Your Process


So, not everybody agrees with me on what a fragment is.  As was pointed out years ago, FBDD can be a FADD.  In this paper, from earlier this year, a group from AZ discusses how FBDD was implemented within their infectious disease group. Of course, because of the journal, it emphasizes how computational data are used, but you can skim over that and still enjoy the paper :-). They break their process into several steps.
Hot Spots: This is a subject of much work, particularly on the in silico side.  In short, a small number of target residues provide the majority of the energy of interaction with ligands.  Identifying these, especially for non-active-site targets (read: PPI), is highly enabling for both FBDD and SBDD. To this end, the authors discuss various in silico approaches to screening fragments.  They admit these are not as robust as would be desired (putting it kindly).  As I am wont to say, your computation is only as good as your experimental follow-up.  The authors indicate that the results of virtual screens must be experimentally tested.  YAY!  They also state that NMR is the preferred method, with 1D NMR in particular being the AZ method of choice.  [This is something (NMR as the first choice for screening) that I think has become true only recently.  It's something I have been saying for more than a decade, but I guarantee my cheerleading is not why.] They do note that of the two main ligand-based experiments, STD is far less sensitive than WaterLOGSY.  There is no citation, so I would like to put it out there: is this the general consensus of the community?  Has anyone presented data to this effect?  Specifically, they screen fragments 5-10 per pool with WaterLOGSY and relaxation-edited techniques.  2D screening is only done for small proteins (this is in Infection) and where a gram or more of protein is available.
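The pooling scheme above is simple enough to sketch. This is a minimal illustration of chunking a library into pools for ligand-observed NMR screening, not AZ's actual workflow; the pool size of 8 (within their stated 5-10 range) and the fragment names are my own assumptions.

```python
def make_pools(fragments, pool_size=8):
    """Chunk a list of fragment IDs into screening pools of at most pool_size."""
    return [fragments[i:i + pool_size]
            for i in range(0, len(fragments), pool_size)]

# Hypothetical 1200-fragment NMR library (names invented for illustration).
library = [f"frag_{i:04d}" for i in range(1200)]
pools = make_pools(library, pool_size=8)
print(len(pools))  # 150 pools of 8 fragments each
```

In practice, of course, pools are not assigned blindly: fragments are grouped so their 1D resonances don't overlap, which is why pooling software (or a patient spectroscopist) earns its keep.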

Biophysics:  They have SPR, ITC, EPIC, MS, and X-ray.  They mention that SPR and MS require high protein concentrations to detect weak binders and are thus prone to artifacts.  They single out the EPIC instrument as having the highest throughput.  [As an aside, I have heard a lot of complaints about the EPIC and wonder if this machine is still the frontline machine at AZ.]  They were successful in immobilizing 60% of the targets they tried.  They also use "inverse" SPR, putting the compounds down on the chip; this is the same technology NovAliX has in their Chemical Microarray SPR.  In their experience, 25% of these "Target Definition Compounds" still bind to their targets.

They utilize a fragment-based crystallography proof of principle (fxPOP).  Substrate-like fragments (kinda like this?) are screened in the HTS, hits [not defined] are then soaked into the crystal system, and at least one structure of a fragment is solved.  This fragment is then used for in silico screening, pharmacophore models, and the like.  So, this would seem to indicate that crystals are required before FBDD starts.  They cite the Astex Pyramid, where fragments of diverse shape are screened, and the approach used at JnJ, where they screen similarly shaped fragments and use the electron density to design a second library to screen.

As I have always said, there are non-X-ray methods to obtain structural information.  AZ notes that SOS-NMR, INPHARMA, and iLOE are three ways.  These are three of the most resource-intensive methods: SOS-NMR requires labeled protein (and not of the 15N kind), INPHARMA requires NOEs between weakly competitive ligands (and a boatload of computation), while iLOE requires NOEs between simultaneously binding ligands.  I think there are far better methods (read: requiring fewer resources) to give structural information more quickly, albeit at lower resolution.

The Library:  They describe in detail how they generated their fragment libraries.  They have a 20,000-fragment HCS library.  The only hard filter restricts heavy atoms (HA) to fewer than 18.  I fully support that.  They also generated a 1200-fragment NMR library biased towards infection targets.
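That hard filter is easy to picture in code. Here is a minimal sketch of an HA < 18 cutoff; the fragment names and counts are invented for illustration, and in a real pipeline the heavy-atom counts would come from a cheminformatics toolkit such as RDKit (e.g., mol.GetNumHeavyAtoms()) rather than being typed in by hand.

```python
def passes_ha_filter(heavy_atoms, max_ha=18):
    """True if the fragment falls under the heavy-atom (HA) cutoff."""
    return heavy_atoms < max_ha

# Hypothetical fragments: name -> heavy-atom count.
candidates = {
    "indole": 9,
    "biphenyl": 12,
    "large_scaffold": 22,
}
library = [name for name, ha in candidates.items() if passes_ha_filter(ha)]
print(library)  # ['indole', 'biphenyl']
```

Note the cutoff is strict (fewer than 18), matching the "restrict HA less than 18" wording; an 18-HA fragment would be rejected.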

The Process:   The authors list three ways to tie these methods together:
  1. Chemical Biology: exploration of binding sites and development of pharmacophores.  I would add that this is also useful for target validation.  As shown by Hajduk et al. and Edfeldt et al., fragment binding is highly correlated with advancement of the project.
  2. Complementary to HTS.  At the conference I am at today, one speaker (from Pfizer) said that HTS was for selectivity and FBDD was for efficiency (oh Lord, here comes Pete with that one).  I really like that framing.
  3. Lastly, stand-alone hit generation.
I think this paper is a nice reference for those looking to see how one company put their FBDD process in place. Not every company will do it the same way, nor should they.  But there is an FBDD process for every company.

10 comments:

Pete said...

So HTS for selectivity, FBDD for efficiency and Pfizer for arm-waving? On a more serious note, the contribution of an intermolecular contact (or group of contacts) to affinity is not, in general, an experimental observable.

Dr. Teddy Z said...

My post can't even be called "Pete-bait". Should I post your thoughts in the post, to save you time?

Pete said...

No need to. Instead, think about whether or not you agree with the assertion (also made in our LEM Perspective) that the contribution of an intermolecular contact to affinity is not, in general, an experimental observable. Sometimes we hear that affinity comes from hydrophobic contact and selectivity comes from hydrogen bonds. Penetrating insight or voodoo thermodynamics?

Dr. Teddy Z said...

Pete, I agree. That's what the Safran Zunft challenge is trying to get at. Since you can't get at individual contributions, what are they as a whole? Are bad contacts as potent as good ones?

Dr. Teddy Z said...

As to the WaterLOGSY vs. STD question: it was pointed out to me that a recent paper says WaterLOGSY is more sensitive than STD. I call monkeyshines: http://www.quantumtessera.com/waterlogsy-vs-std/

Fred said...

What complaints have you heard about the EPIC instrument? I know folks who are considering purchasing one ...

Dr. Teddy Z said...

@Fred. It's great for identifying aggregators and false positives. For everything else, it isn't so grand.

Fred said...

Double yikes.

Fred said...

So these "examples" must be "representative" data?

http://catalog2.corning.com/LifeSciences/en-US/TDL/techInfo.aspx?categoryname=assay|label-free%20dectection|Biochemical

Dr. Teddy Z said...

I only report the news; I don't make it. I saw a talk recently where easily half of the data were false positives, yet the presenter said how useful the EPIC was. I heard differently from his colleague. "Useful" is a highly fungible term.