29 January 2014

Kill Them Bugs!

Bugs are bad.  I hate bugs.  Bugs of all kinds.  In our part of the world we have a particularly noxious, invasive bug called the stink bug.  Ewwww.  And they are everywhere.  And in the winter they are particularly prevalent because they get into your attic, soffits, etc. and then creep into your house.  I would love to be part of a global effort to eradicate these horrible creatures.  I may lose my green bona fides by advocating the genocide of an entire species, but so be it.  It is also not very practical, so not really germane to this blog.

However, targeting bacteria is practical, and important.  Antibiotic resistance is on the rise globally, and only two antibiotics with novel modes of action have been approved in this century.  Dire consequences meet pressing need.  Many improved antibiotics owe their efficacy to higher potency or resistance to degradation, but this avenue has a limited lifespan and novel targets are needed.  Into this breach steps AstraZeneca, with this paper.  The topoisomerases DNA gyrase and topoisomerase IV (Topo IV) are clinically validated targets.  The A subunits contain the DNA cleavage domain, while the B subunits contain the ATP binding and hydrolysis domain.  DNA gyrase inhibitors also typically inhibit Topo IV.  Fluoroquinolones (which target the DNA complex) and aminocoumarins (which target the ATP site) hit these enzymes.  Aminocoumarins have not received much attention due to PK and safety issues.  There are a wide variety of ATP-targeting compounds.

Cpds 1 and 2 have been shown by X-ray crystallography to bind in the ATP site and extend outside that site to pick up additional interactions with R144.  The team's design goal was a new scaffold that would merge the attributes of these two compounds.  They chose 2-pyridylureas, which had not previously been explored.  Modeling showed that 5-substitution reaches towards R144 with a carboxylate, 4-substitution allows exploration into more open space, and 6-substitution abuts a hydrophobic region and should not be messed with.

Cpds 3-13 were synthesized (or were commercially available) to test these hypotheses, with Cpd 6 clearly the best.  They then explored the 4- and 5-substitutions (see the actual paper for Tables 1-3).  The chemistry and isozyme exploration they performed was very detailed.  The two best compounds ended up being 31 and 35.

Then the paper gets into the details (it's 24 pages long and the results/discussion are pp 5-13).  I highly recommend reading that part on your own.  I am really impressed by the work.  As they discuss, the Xtal structures support many of the design hypotheses.  This cannot be overstated.  Fragment-based drug design (and in this case it really is DESIGN) was effective and robust.  In the end, their compounds achieved potent inhibition of four topoisomerases across three bacterial species.  Importantly, inhibition of bacterial growth was realized through inhibition of both the gyrase and Topo IV, which is the key criterion for continued optimization.  Efficacy in a mouse model was demonstrated with 35.

22 January 2014

Fragments vs procaspase-6: preventing activation

The caspases are involved in a plethora of cellular processes and have been targeted by many groups using a variety of methods. They are cysteine proteases that cleave substrates immediately after an aspartic acid residue – and they pose several challenges for drug hunters. Because of their active site cysteine, they are particularly susceptible to non-specific inhibitors such as PAINS. Another difficulty is that their predilection for negatively charged substrates (such as aspartate) complicates efforts to identify cell-permeable inhibitors. To address both these issues, Jeremy Murray and colleagues at Genentech collaborated with Adam Renslo and colleagues at UCSF to avoid the active site altogether. They’ve recently published this work in ChemMedChem.

Caspases are initially translated as inactive zymogens that form dimers. These are cleaved to form a dimer of dimers, and this complex works as the active protease. The researchers wanted to see if they could find molecules that bind to the zymogen dimer interface and so block activation.

They started by performing a surface plasmon resonance (SPR) screen using 2300 fragments against both the active and inactive (procaspase) forms of the protein. Initial screening was done at a relatively low concentration of fragment (50 µM), with full dose-response assays being run on hits from either assay. This resulted in 84 hits against procaspase-6, with dissociation constants from 3 µM to 2300 µM, most of which were selective for the inactive zymogen.

Crystallography revealed that several of the fragments were in fact binding to a pocket at the dimer interface. In particular, fragments 1 and 6 bound (separately) at partially overlapping sites, suggesting the possibility of merging. This led to compound 8, with an improvement in affinity, albeit at a cost to ligand efficiency. However, modeling suggested that the bound conformation of this molecule would be strained, a situation which could be rectified by moving the position of the nitrogen atoms in the pyrimidine ring. Gratifyingly, this led to nearly a 10-fold boost in potency (compound 11), and further tweaking improved the dissociation constant to sub-micromolar (compound 12).


At the same time, the researchers noticed that the dimer interface is symmetrical, and that compound 6 binds near the center of the symmetry axis. This led them to design symmetrical compound 14, which displayed a modest improvement in potency but at a cost to ligand efficiency. Again modeling came to the rescue, this time suggesting a larger central element to yield sub-micromolar binders such as compound 16.

Whether or not targeting procaspase-6 will be useful therapeutically, this paper is a nice example of fragment merging against an unconventional binding pocket. It is also an excellent example of cooperation on multiple levels: first between biologists, biophysicists, chemists, and modelers, and second between academia and industry.

15 January 2014

Poll: how do you store your fragment libraries?

Following up on the recent post on pool size, we thought we’d poll readers on the very practical question of how they store their libraries. I’ve heard some vigorous debates, so it will be interesting to see the results. Please vote on the right side of the page. Also, please note that this question refers to working libraries as opposed to master stocks.

13 January 2014

There can be too much of a Good Thing

One nice thing about being a consultant is that I get paid to think about things for people (sometimes).  One of the things I have been thinking about lately (on the clock) is the optimal size of fragment pools.  I started wondering if there can be too many fragments in a pool:
You know how your mother always said, don't eat too much or it will make you sick?  I never believed her until my own child was allowed to eat as much Easter candy as possible and it actually made him sick.  [It was part of a great experiment to see how much "Mother Wisdom" was true, like Snopes.]  I have been working lately on library (re)-optimization, and one thing that keeps coming up is how many fragments should go in a pool.  As pointed out here and discussed here, there are ways to optimize pools for NMR (and I assume the same approach can be applied to MS).  So, we have always assumed that the more fragments in a pool the better off you are, and of course the more efficient.
But is that true?  Is there data to back this up?  Probably not; I don’t know if anyone wants to run the control.  So, let’s do the gedanken experiment.  If you have 50 compounds in a pool (a nice round number that makes the math easy, so it's my kind of number), you would expect a 3-5% hit rate for a “ligandable” target.  That means out of that pool you would expect 1.5-2.5 fragments to hit.  Say two fragments hit: those two would then compete, and your readout signal would be reduced by 50%.  So, if you are already having trouble with signal, you are going to have more.  Also, can you be sure the negatives are real, or did they “miss” because of lowered signal due to the competition?  And what if one of the hits is very strong?  Also, how do you rank order the hits?  Do you scale the signal by the number of hits in the pool?
I then reached out to the smart people I know who tend to be thinking about the same things I do, but in far greater depth.  I spoke to Chris at FBLD, where he was putting together a large 19F library, aiming to get up to 40 or more 19F fragments in a pool.  Well, Chris Lepre at Vertex was already thinking about this exact problem.  He shared his thoughts with me and agreed to let me share them here (edited slightly for clarity).

To accurately calculate the likelihood of multiple hits in a pool, I [Ed: Chris] used the binomial distribution.  For your hypothetical pools of 50 and a 3% hit rate, 44% of the samples will have multiple hits (25.6% with 2 hits, 12.6% with 3, 4.6% with 4); at a 5% hit rate this increases to 71% (26.1% with 2, 22% with 3, 13.6% with 4, 6.6% with 5, 2.6% with 6).  So, the problem of competition is very real.  It's not practical to deconvolute all mixtures containing hits to find the false negatives: the total number of experiments needed to screen and deconvolute is at a minimum when the mixture size is approximately 1/(hit rate)^0.5 (i.e., for a 5% hit rate, mixtures of 5 are optimal). [Ed: Emphasis mine!]
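[Ed: For anyone who wants to check these numbers, here is a minimal Python sketch of my own (not Chris's code) that reproduces the binomial estimates and the 1/(hit rate)^0.5 rule of thumb:]

```python
from math import comb, sqrt

def prob_k_hits(pool_size, hit_rate, k):
    """Binomial probability of exactly k hits in one pool."""
    return comb(pool_size, k) * hit_rate**k * (1 - hit_rate)**(pool_size - k)

def prob_multiple_hits(pool_size, hit_rate):
    """Probability that a pool contains two or more hits (i.e., possible competition)."""
    return 1 - prob_k_hits(pool_size, hit_rate, 0) - prob_k_hits(pool_size, hit_rate, 1)

for hit_rate in (0.03, 0.05):
    p_multi = prob_multiple_hits(50, hit_rate)
    optimal = 1 / sqrt(hit_rate)  # mixture size minimizing total screen + deconvolution work
    print(f"hit rate {hit_rate:.0%}: P(2+ hits in a pool of 50) = {p_multi:.0%}, "
          f"optimal pool size ~ {optimal:.1f}")

# Approximate output:
# hit rate 3%: P(2+ hits in a pool of 50) = 44%, optimal pool size ~ 5.8
# hit rate 5%: P(2+ hits in a pool of 50) = 72%, optimal pool size ~ 4.5
# (the 71% quoted above presumably sums only the terms through 6 hits)
```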

Then there's the problem of chemical reactions between components in the mixture.  Even after carefully separating acids from bases and nucleophiles from electrophiles in mixtures of 10, Mike Hann (JCICS 1999) found that 9% of them showed evidence of reactions after storage in DMSO.  This implies a reaction probability of 5.2%, which, if extended to the 50-compound pool example, would lead one to expect reactions in 70% of those mixtures.  If this seems extreme, keep in mind that the number of possible pairwise interactions is n_pairs = n(n-1)/2 [Ed: fixed equation], where n is the number of compounds in the pool.  So, a mixture of 10 has 45 possible interactions, while a mixture of 50 has 1225.  Even with mixtures of only five, I've seen a fair number of reacted and precipitated samples.  Kind of makes you wonder what's really going on when people screen mixtures of 100 (4950 pairs!) by HSQC.  [Ed: I have also seen this, as I am sure other people have.  I think people tend to forget about the activity of water.  For those who hated that part of PChem, here is a review.  Some fragment pools could be 10% DMSO in the final sample, and the DMSO content is probably much higher in intermediate steps.]
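[Ed: The quadratic growth of pairwise combinations is easy to check (again, my own quick sketch); I have left out the reaction-risk extrapolation, since that depends on assumptions about which pairs can actually react:]

```python
def n_pairs(n):
    """Number of distinct pairs among n compounds in one mixture: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (5, 10, 50, 100):
    print(f"{n:>3} compounds per pool -> {n_pairs(n):>4} possible pairwise interactions")

# Output: 5 -> 10, 10 -> 45, 50 -> 1225, 100 -> 4950
```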

Finally, there's the problem of chemical shift dispersion.  Even though 19F shifts span a very large range and there are typically only one or two resonances per compound, the regions of the spectrum corresponding to aromatic C-F and CF3 become quite crowded.  And since 19F shifts are relatively sensitive to small differences in % DMSO, buffer conditions, etc., it's necessary to separate them by more than would be necessary for 1H NMR.  Add to that the need to avoid combining potentially reactive compounds (a la Hann) and the problem of designing non-overlapping mixtures becomes quite difficult.  [Ed: They found that Monte Carlo methods failed them.]
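[Ed: I don't know how the Vertex team ultimately designed their mixtures, but to illustrate why this is a constraint-satisfaction problem, here is a simple greedy sketch that assigns fragments to pools while avoiding both overlapping 19F shifts and reactive pairings. The shift tolerance and reactivity classes below are placeholders of my own, not values from Chris or from the Hann paper:]

```python
# Illustrative greedy pool assignment (assumed data model: each fragment is a dict
# with "name", "shift" in ppm, and a crude reactivity "class").
SHIFT_TOL = 1.0  # ppm; placeholder minimum 19F shift separation within a pool
INCOMPATIBLE = {("nucleophile", "electrophile"), ("electrophile", "nucleophile"),
                ("acid", "base"), ("base", "acid")}

def compatible(pool, fragment, max_size):
    """Can this fragment join the pool without shift overlap or a reactive pairing?"""
    if len(pool) >= max_size:
        return False
    for other in pool:
        if abs(other["shift"] - fragment["shift"]) < SHIFT_TOL:
            return False  # 19F resonances too close to resolve
        if (other["class"], fragment["class"]) in INCOMPATIBLE:
            return False  # risk of a chemical reaction in the DMSO stock
    return True

def build_pools(fragments, max_size=20):
    """Greedily place each fragment into the first existing pool that will take it."""
    pools = []
    for frag in sorted(fragments, key=lambda f: f["shift"]):
        for pool in pools:
            if compatible(pool, frag, max_size):
                pool.append(frag)
                break
        else:
            pools.append([frag])  # no existing pool works; start a new one
    return pools

# Example usage (hypothetical fragment records):
# pools = build_pools([{"name": "F001", "shift": -62.3, "class": "neutral"}, ...])
```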

I've looked at pools as large as 50, but at this point it looks like I'll be using fewer than 20 per pool.  I'm willing to sacrifice efficiency in exchange for avoiding problems with competition and cross-reactions.  The way I see it, fragment libraries are so small that each false negative means potentially missing an entire lead series, and sorting out the crap formed by a cross-reaction is a huge time sink (in principle, the latter could yield new, potentially useful compounds, but in practice it never seems to work out that way).  The throughput of the 19F NMR method is so high and the number of good F-fragments so low that the screen will run quickly anyway.  Screening capacity is not a problem, so there's not really much benefit in being able to complete the screen within 24 hrs vs. a few days.
The most common pool size (from one of our polls) was 10 fragments/pool.  By Chris's rule of thumb (pool size ~ 1/(hit rate)^0.5), that is optimal only if the expected hit rate is about 1% or less.  Either people are expecting particularly low hit rates, or they are probably putting too many fragments in a pool.  So, is there an optimal pool size?  I would think that there is: somewhere between 10 and 20 fragments.  You are looking for a pool size that maximizes efficiency without raising the possibility of competition too much.

09 January 2014

Poll results: affiliation, fragment-finding methods, and library size

Here’s a summary of the latest poll.

Readership demographics have not changed significantly since 2010, aside from a slight shift towards industry (~58% today vs 51% in 2010).


The next question asked about screening methods, and here things get more interesting.


The first thing to notice is that, with one minor exception, nearly every fragment-finding technique is being used more than before, with SPR and ligand-detected NMR now being used by more than half of all respondents. As a consequence, the average number of techniques used per respondent jumped from 2.4 in 2011 to 3.6 in 2013.

The overall order of popularity doesn’t seem to have changed much, the major exception being X-ray crystallography, which is way up. However, this may be an artifact; a comment on the 2011 poll suggested that some voters interpreted the question to be about primary screening methods, as opposed to all methods.

(Technical disclosure: feel free to skip this paragraph unless you’re a data geek. Due to issues with polling in Blogger, this year’s poll was run in Polldaddy, the free version of which gave total votes for this question but not the number of individual respondents. However, since the two other questions in the poll allowed only single answers, the number of responses was equal to the number of respondents: 95 for the demographic question and 97 for the library question below. I thus assumed 97 respondents for this question, which is coincidentally identical to the number in 2011. Note also that the categories BLI and MST were new for 2013.)

Finally, the question on fragment library size shows that most folks are using libraries of 1000-2000 fragments, with only ~10% of respondents using very small (≤500) or very large (≥5000) collections of fragments.

This result is strikingly similar to the median of 1300 fragments that Jamie Simpson, Martin Scanlon, and colleagues found in an analysis of 22 published libraries. Teddy’s notes from FBLD 2012 put the median slightly higher: around 2500 for 17 libraries. Perhaps people with larger libraries tend to broadcast their size? Of course, in the end, it’s not the size of your library that matters; it’s what’s in it, and what you do with it.

Thanks again for participating, and if you have ideas for new polls, please let us know.

06 January 2014

That's One Way How to Do It

There is no right way to do science; that's what makes science awesome.  However, when you are entering a new field or trying something new, the first thing you do is find a current review.  I remember in grad school, whenever it was my turn to present literature at group meeting, I would search the topic in TIBS (Trends in Biochemical Sciences).  That was always the best starting point.  However, when you want to actually do something, as in practical applications, you looked for a Methods in Enzymology paper or Current Protocols.  These always give you a way to do something, with in-depth technical hints and tricks of the trade.  On this blog, we discuss a lot of different techniques, and oftentimes they are outside the user's area of expertise.  We try to make them understandable, and I think we are largely successful.

In this paper, yours truly, Darren Begley, and colleagues from Emerald put forth one way to run and analyze Saturation Transfer Difference (STD) NMR for fragment campaigns.  STD is the subject of MANY posts here.  One of the things I want to point out is that opinions are like belly buttons: everyone has one.  So, in this Protocol we put forth a way to perform STD; it is not the only way to do it, but I think it is a rather robust method.  Not everything will translate to every company.  For example, most companies don't have extremely small, highly soluble fragments like the Fragments of Life, and thus 500 mM stocks will not be achievable; I believe 100 mM is a better generic concentration.  However, I would love to hear in the comments what other people think.  Additionally, there are computational approaches that make the manual creation of pools unnecessary.  In terms of analysis, there are a million different approaches.  Most companies that make NMR software have some sort of automation; I really like the implementation from Mnova.  However, it is important to keep in mind that your results are really only as good as your understanding of the experiment.

What this all really comes down to is that NMR, and STD in particular, is not a black box.  You still need an expert user running the experiments and analyzing the data.  My goal with this paper, though, is to enable a better understanding of the STD experiment for the lay user.  Hopefully, this leads to greater use of the experiment and a concomitant increase in screening success.  I would really like to hear comments about what people do differently and why.

02 January 2014

Fragment events in 2014 and 2015

Lots of great events coming up this year and next, so start making your travel plans soon!

2014

February 18-19: Select Biosciences Discovery Chemistry Congress will be held in Barcelona, Spain, and includes a number of fragment-based presentations as well as a short course taught by Ben Davis on February 17.

April 24-25: CHI’s Ninth Annual Fragment-Based Drug Discovery will be held in San Diego. You can read impressions of last year's meeting here and here, the 2012 meeting here, the 2011 meeting here, and 2010 here. Also, Teddy and I will be teaching a short course on the topic over dinner on April 24.

May 21-22: Cambridge Healthtech Institute’s Fourteenth Annual Structure-Based Drug Design will be held in Boston, with several talks on FBLD. 

June 1-4: Newly added! Developments in Protein Interaction Analysis (DiPIA 2014) will be held in La Jolla, CA. This event is organized by GE Healthcare Life Sciences so it should be a great opportunity to learn about recent developments in fragment screening by SPR, ITC, and DSC.

July 19-22: Zing Conferences is holding its first-ever Fragment Based Drug Discovery and Structure Based Drug Design conference in Punta Cana, Dominican Republic. In addition to the amazing location there are some great speakers, so definitely check this one out.

September 21-24: Finally, FBLD 2014 will be held in Basel, Switzerland. This marks the fifth in an illustrious series of conferences organized by scientists for scientists, the last of which was in San Francisco in 2012.  I believe this will also be the first major dedicated fragment conference in continental Europe. You can read impressions of FBLD 2010 and FBLD 2009.

2015

June - TBA: NovAliX will hold its second conference on Biophysics in Drug Discovery in Strasbourg, France. Though not exclusively devoted to FBLD, there is lots of overlap; see here, here, and here for discussions of last year's event.

December 15-20: Finally, the first ever Pacifichem Symposium devoted to fragments will be held in Honolulu, Hawaii. The Pacifichem conferences are held every 5 years and are designed to bring together scientists from Pacific Rim countries including Australia, Canada, China, Japan, Korea, New Zealand, and the US. There is lots of activity in these countries, and since travel to mainland US and Europe is onerous this should be a great opportunity to meet many new folks - in Hawaii no less!

Know of anything else? Add it to the comments or let us know!