17 March 2014

This is another way to do it.

The key to doing something right is following the directions.  How closely you follow the directions, or don't, can be the difference between brilliance and a merely good performance (think of cooking).  Sometimes directions are meant as guidelines, like the Pirate Code or the Voldemort Rule.  Late last year, and blogged about here, I published a paper in Current Protocols on how to prosecute an STD screen.  A recent paper in PLOS ONE shows how another group runs its screens, with details on library construction, solubility testing, and more.  What makes this paper interesting is the level of detail they provide.

Library Design: They assembled a diverse fragment library with the following rules: 110 ≤ molecular weight ≤ 350, clogP ≤ 3, number of rotatable bonds ≤ 3, number of hydrogen bond donors ≤ 3, number of hydrogen bond acceptors ≤ 3, total polar surface area ≤ 110, and logSw (aqueous solubility) ≥ −4.5. 
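To make those rules concrete, here is a minimal sketch of such a property filter in Python using RDKit.  This is my own illustration, not the authors' code (they do not say what software they used), and the logSw term is omitted because RDKit has no built-in aqueous-solubility descriptor.

```python
# Hedged sketch of the library-design filter described above (my illustration,
# not the authors' workflow).  logSw >= -4.5 would need a separate solubility model.
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen

def passes_fragment_filter(smiles: str) -> bool:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (
        110 <= Descriptors.MolWt(mol) <= 350
        and Crippen.MolLogP(mol) <= 3            # clogP <= 3
        and Descriptors.NumRotatableBonds(mol) <= 3
        and Descriptors.NumHDonors(mol) <= 3
        and Descriptors.NumHAcceptors(mol) <= 3
        and Descriptors.TPSA(mol) <= 110         # total polar surface area
        and any(a.GetIsAromatic() and a.GetTotalNumHs() > 0
                for a in mol.GetAtoms())         # >= 1 aromatic proton for NMR detection
    )

print(passes_fragment_filter("c1ccccc1O"))  # phenol, a toy example
```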
I am a little confused by the figure versus the text.  In the text, they seem to have relaxed the MW cutoff, but the figure shows that anything not Voldemort Rule compliant is tossed.  They also preferred compounds with at least one aromatic proton (for easier NMR detection).  They purchased 1008 fragments from ChemBridge, solubilized them at 200 mM in DMSO-d6 (again, for ease of NMR detection), and then tested solubility at 1 mM in water.  I would have added some salt here, say 50 mM, but that is a quibble.  For purity, they claim a low level of impurity (< 15%)!  To me, this is a whole lot of impurity.  But, as has been noted here, purity levels vary from library to library.
Solubility Testing:  They then made sure to experimentally test every fragment for solubility.  I could not agree more emphatically with this approach.  Bravo!  They go into great detail, which I will not attempt to replicate here, but thanks to open access they have included the scripts in the supplemental material.  Acceptable compounds had > 0.1 mM aqueous solubility.  For me, this is too low, but to each their own.  They ended up with 893 fragments (89% passed).  The real data I would like to see is how many would fail if the cutoff were set at 0.5 mM or higher.  
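For what it is worth, that cutoff analysis is easy to run once you have the measured solubilities.  A toy sketch (the solubility values below are made up for illustration, not the authors' data):

```python
# Pass rate at several solubility cutoffs; fake lognormal data stand in for
# the 1008 measured values, which only the authors have.
import numpy as np

rng = np.random.default_rng(0)
solubility_mM = rng.lognormal(mean=0.0, sigma=1.0, size=1008)  # placeholder data

for cutoff in (0.1, 0.25, 0.5, 1.0):
    passed = int((solubility_mM > cutoff).sum())
    print(f"cutoff {cutoff:>4} mM: {passed} pass ({passed / solubility_mM.size:.0%})")
```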
Pooling: They then describe their pooling strategy.  I like open access articles for a lot of reasons, and I tend to overlook small editorial problems (typos, grammar, etc.), but in this case, let me rant.  The authors state in the text that a random mixing of compounds would lead to severe overlap, exemplified in Figure 3a.  To me, it shows no such thing. 

Their approach is very similar to the Monte Carlo-based one that has previously been discussed on this blog.  Their final pools contain 10 fragments each at 20 mM (I assume in 100% DMSO-d6). 
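For readers unfamiliar with that idea, here is a toy sketch of Monte Carlo pool assignment: shuffle fragments into pools, then keep swapping fragments between pools whenever the swap reduces chemical-shift clashes.  The peak lists below are random placeholders; this is the general technique, not the authors' actual script.

```python
# Toy Monte Carlo pooling: minimize peak overlap within each pool.
import random

random.seed(1)
N_FRAG, POOL_SIZE, TOL = 100, 10, 0.02   # 10 pools of 10; 0.02 ppm clash window

# Placeholder peak lists; in practice, use each fragment's observed 1H shifts.
peaks = [sorted(random.uniform(6.0, 9.0) for _ in range(4)) for _ in range(N_FRAG)]

def pool_clashes(pool):
    """Count peak pairs from different fragments closer than TOL ppm."""
    clashes = 0
    for i in range(len(pool)):
        for j in range(i + 1, len(pool)):
            clashes += sum(1 for a in peaks[pool[i]] for b in peaks[pool[j]]
                           if abs(a - b) < TOL)
    return clashes

order = list(range(N_FRAG))
random.shuffle(order)
pools = [order[k:k + POOL_SIZE] for k in range(0, N_FRAG, POOL_SIZE)]

score = sum(pool_clashes(p) for p in pools)
for _ in range(20000):                       # propose random swaps, keep improvements
    p1, p2 = random.sample(range(len(pools)), 2)
    i, j = random.randrange(POOL_SIZE), random.randrange(POOL_SIZE)
    old = pool_clashes(pools[p1]) + pool_clashes(pools[p2])
    pools[p1][i], pools[p2][j] = pools[p2][j], pools[p1][i]
    new = pool_clashes(pools[p1]) + pool_clashes(pools[p2])
    if new > old:                            # revert swaps that add overlap
        pools[p1][i], pools[p2][j] = pools[p2][j], pools[p1][i]
    else:
        score += new - old

print("residual clashes:", score)
```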
Screening: They also acquired the 1H, STD (saturation at −0.7 ppm, > 1 ppm from any methyl), and WaterLOGSY spectra of every pool for future reference.  This is a very clever approach: with no target present, the STD should give no signal, while the WaterLOGSY should give inverted peaks for all compounds in the pool (when interacting with a target they will be "right-side up").  Again, the figure may show that (I think if you blow up the figure the WaterLOGSY spectrum does have peaks), but it is very difficult to see. 
Three of the 90 pools (3.3%) showed peaks in the aromatic region, most likely due to aggregation (they observed precipitation).  I would like to know whether the compounds that showed STD peaks also had methyl groups within 1 ppm of the saturation frequency.  I would also like to know whether they removed those compounds from the library, or just dealt with it.  For a paper with this level of detail, it falls flat in this respect.  
Screening was performed at 10 µM target : 500 µM ligand with the following parameters: acquisition time of 1 s, 32 dummy scans, and a relaxation delay of 0.1 s, followed by a 2 s Gaussian pulse train with the irradiation frequency alternating between −0.7 ppm and −50 ppm. The total acquisition time was 15 minutes with 256 scans.
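Those numbers hang together.  A quick back-of-the-envelope check (my arithmetic, assuming dummy scans take the same time as real ones):

```python
# Sanity check of the stated 15-minute experiment time.
scan_s = 0.1 + 2.0 + 1.0          # relaxation delay + saturation train + acquisition
total_s = (256 + 32) * scan_s     # real scans plus dummy scans
print(f"{total_s:.0f} s  (~{total_s / 60:.1f} min)")   # ~893 s, ~14.9 min
```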
Screen Analysis: One of the first things they noticed was that there were differences between the reference spectra (plain water) and the screening samples (protein buffer).  They decided they could not automate the entire process and instead just scripted the data processing and display.  They then confirmed each putative active as a singleton. 
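Their actual scripts are in the supplemental; purely as an illustration of the kind of processing involved, here is a hedged sketch that computes the STD difference (off-resonance minus on-resonance) and flags aromatic-region points that rise above the noise.  The inputs, thresholds, and noise window are my assumptions, not theirs.

```python
# Sketch of STD hit-flagging on 1D spectra (illustrative only, not the authors' script).
import numpy as np

def std_hits(ppm, on_res, off_res, noise_region=(10.5, 11.5), snr=5.0,
             aromatic=(6.0, 9.0)):
    """Return ppm positions in the aromatic window where the STD difference
    exceeds snr times the noise estimated from an empty spectral region."""
    diff = off_res - on_res
    noise = diff[(ppm >= noise_region[0]) & (ppm <= noise_region[1])].std()
    window = (ppm >= aromatic[0]) & (ppm <= aromatic[1])
    return ppm[window & (diff > snr * noise)]

# toy usage with synthetic data
ppm = np.linspace(12, -1, 4096)
off = np.random.normal(0, 1, ppm.size)
off[(ppm > 7.1) & (ppm < 7.2)] += 20     # pretend STD signal near 7.15 ppm
on = np.random.normal(0, 1, ppm.size)
print(std_hits(ppm, on, off))
```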
What they are putting together is a "One Size Fits All" process.  I give them credit for doing this, but I do not think you can find a single NMR-based process that works for all targets.  In particular, I think they could have used more typical conditions for the reference spectra.  The paper then goes on to discuss their application to targets of interest; for me, that is irrelevant.  This paper is an excellent companion to the Current Protocols paper, and thanks to open access, it is likely to get far more citations.
