Showing posts with label in silico.

24 February 2025

Fragments beat lead-like compounds in a screen against OGG1

The twin rise of make-on-demand libraries and speedy in silico docking has supercharged fragment screening and optimization: we’ve written previously about V-SYNTHES, Crystal Structure First and a related method. Another advance is described by Jens Carlsson (Uppsala University) and a large group of multinational collaborators in an (open access) Nat. Commun. paper.
 
The researchers were interested in 8-oxoguanine DNA glycosylase (OGG1), a DNA-repair enzyme and potential anti-inflammatory and anticancer target. They started with a crystal structure into which they docked 14 million fragments (MW < 250 Da) or 235 million lead-like molecules (250-350 Da) from ZINC15. Multiple conformations and thousands of orientations were sampled for each molecule. In all, 13 trillion fragment complexes and 149 trillion lead-like complexes were evaluated using DOCK3.7, a process that took just 2 and 11 hours, respectively, on a 3500-core cluster.
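For a sense of scale, here is a quick back-of-the-envelope calculation of the throughput implied by those numbers (my arithmetic, not the authors'):

```python
# Back-of-the-envelope throughput implied by the numbers above (my arithmetic,
# not from the paper): scored complexes per core-second for each screen.
fragment_complexes = 13e12     # 13 trillion fragment complexes, ~2 hour run
leadlike_complexes = 149e12    # 149 trillion lead-like complexes, ~11 hour run
cores = 3500

print(f"fragments: ~{fragment_complexes / (cores * 2 * 3600):,.0f} complexes per core-second")
print(f"lead-like: ~{leadlike_complexes / (cores * 11 * 3600):,.0f} complexes per core-second")
```

That works out to roughly half a million to a million scored complexes per core-second, which gives a feel for how lightweight the scoring has to be to make campaigns of this size tractable.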
 
After removing PAINS and molecules similar to previously reported OGG1 inhibitors, the top-scoring 0.05-0.07% of molecules from each screen were clustered and, after manual evaluation, 29 fragments and 36 lead-like compounds were purchased from make-on-demand catalogs. These were tested at 495 µM (for fragments) or 99 µM (for larger molecules) in a DSF screen. None of the lead-like compounds significantly stabilized the protein, while several fragments did. Four of the fragments were successfully crystallized with OGG1, and in all cases the key interactions predicted in the computational screens were confirmed in the actual crystal structures.
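The paper does not say which software was used for the PAINS filter, but for anyone wanting to reproduce this kind of triage, RDKit's built-in PAINS catalog is one option; a minimal sketch, with made-up example SMILES:

```python
# Minimal PAINS-triage sketch using RDKit's built-in filter catalog; the
# example SMILES are made up for illustration.
from rdkit import Chem
from rdkit.Chem import FilterCatalog

params = FilterCatalog.FilterCatalogParams()
params.AddCatalog(FilterCatalog.FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog.FilterCatalog(params)

for smi in ["O=C(Nc1ccccc1)c1ccc(O)c(O)c1",   # contains a catechol, a classic PAINS motif
            "c1ccc2[nH]ccc2c1"]:              # indole, expected to pass
    mol = Chem.MolFromSmiles(smi)
    print(smi, "->", "PAINS match" if catalog.HasMatch(mol) else "clean")
```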
 
Compound 1 showed the greatest stabilization of OGG1 (2.8 ºC) and some inhibition in an enzymatic assay, but not enough to calculate an IC50. Searching for analogs that contained compound 1 as a substructure in the Enamine REAL database of 11 billion compounds produced few hits, but, as before, thinking in fragments proved fruitful. Searching for molecules containing just the core heterocycle and amide (colored blue below) yielded nearly 43,000 possibilities. Docking these and making and testing a few dozen led to compound 5, with mid-micromolar inhibition. Further iterations led to low micromolar compound 7.
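However the researchers actually ran these searches, the basic substructure matching is easy to reproduce with RDKit on any SMILES export; in the sketch below, both the core SMARTS and the file name are placeholders rather than anything taken from the paper.

```python
# Sketch of a fragment-core substructure search over a SMILES catalog. The core
# SMARTS and file name are placeholders, not the actual core or catalog used
# in the paper.
from rdkit import Chem

core = Chem.MolFromSmarts("c1ccncc1C(=O)N")   # hypothetical heteroaryl-amide core

hits = []
with open("catalog.smi") as fh:
    for line in fh:
        fields = line.split()
        if not fields:
            continue
        mol = Chem.MolFromSmiles(fields[0])
        if mol is not None and mol.HasSubstructMatch(core):
            hits.append(fields[0])

print(f"{len(hits)} catalog molecules contain the core")
```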


At this point the researchers turned from make-on-demand libraries to synthetically accessible virtual libraries to fine-tune the molecule. After docking 6720 virtual molecules, they synthesized and tested 16, of which 12 were more potent than compound 7, with five of them being submicromolar. Compound 23 showed low micromolar activity in two different cell assays and was selective against four other DNA repair enzymes.
 
The same high-throughput docking approach was applied to three other protein targets: SMYD3, NUDT5, and PHIP. In each case crystal structures of bound fragments were available to use as starting points. Multiple compounds with improved docking scores compared to the initial fragments were identified, though no compounds were actually synthesized and tested.
 
The success in finding compound 1 demonstrates experimentally the advantage fragments have in efficiently searching chemical space. The researchers note that 97% of the >30 billion currently available make-on-demand compounds have molecular weights >350 Da, while only 50 million are < 250 Da. Screening all of these fragments in silico is possible; screening everything, less so. Although the fragment hits for OGG1 were weak, this isn’t always the case, as noted here. The fact that fragment 1 could be advanced to a sub-micromolar inhibitor after synthesizing just a few dozen molecules also testifies to the efficiency of in silico approaches.
 
The paper contains lots of useful details and suggestions for streamlining the process and is well worth perusing if you are trying to find hits against a structurally-enabled protein.

28 October 2015

Hidden gem of a finding or not?

Today's paper is from a group in Korea.  It's a typical "we did some in silico screening, limited biochemical testing, made a compound or two, and voila!" paper.  In this case, the target is Tyk2 (the target of Xeljanz).
Figure 1.  Xeljanz (tofacitinib)
2000 diverse fragments were selected from the Otava library and docked against Tyk2.  The top 64 ranked fragments were selected, and 9 of these showed inhibition over 50% at 100 µM, with the best compound (1) having 60% inhibition at 3 µM.

Figure 2.  Cpd 1 docked to Tyk2. 
What I don't like here is that they didn't do full dose-response curves.  That seems lazy.  Also, the only structures they show are the docked structures.  Maybe it's just me, but show me some line drawings.  They then did some limited SAR (3 cpds) based on 1 as the scaffold.  Cpd 12 was the best compound 
Figure 3.  Cpd 12
(10 nM IC50).  In the end, 12 was equipotent with (or superior to) tofacitinib in terms of shutting down Tyk2/Stat3 signalling.  However, they could not rule out that this is due to non-specific inhibition of other JAK proteins.  So, is this a great result?  If so, why BOMCL (not to be snobby)?

01 September 2015

Polypharmacology for Kinases

We've been on a roll with epigenetics and PPIs lately.  So, it's a nice break when a kinase paper comes out.  But, in keeping with the theme of hard targets, today's paper is about a tyrosine kinase.  We've started to see more and more FBDD on TKs.  One problem is that TKs can acquire resistance to drugs, quickly eliminating their therapeutic usefulness.  One way around this is to use polypharmacology: "optimized inhibitory profiles for critical disease-promoting kinases, including crucial mutant targets."  In this work, they are aiming for a dual RET/VEGFR2 inhibitor using an in silico/fragment approach.  

Compound design was largely based upon homology modeling of the "DFG-out" RET structure, using the VEGFR2 structure as a template.  Their Kinase Directed Fragments (KDFs) are shown in Figure 1.
Figure 1.
Their fragment design rationale includes some interesting claims.  They state that a "hinge binding" fragment alone can aggregate at the high concentrations needed to achieve activity in a biochemical screen.  So, their fragments have an additional moiety that interacts with the lipophilic or ribose pocket.  
Accordingly, KDFs have larger molecular weights and are generally more active than the fragments contained in traditional libraries, permitting screening in the micromolar range.
I would say the first statement is conjecture and the second untrue.  17 heavy atoms is squarely in the regime of what people consider "fragment" sized.  I think instead the authors are using the wrong tool for the job.  Using a biochemical screen to find fragment actives is akin to hammering a nail with a screwdriver.  Sure, you can do it, but why would you?  

Rather expectedly, they identified compound 1 as a promising starting platform.  Of course, the criteria for selecting this compound are kept highly secret.  It did "effectively" inhibit RET at 100 µM (63%) and 20 µM (28%) in the presence of 190 µM ATP [Km for RET is 12 µM].  It had VEGFR2 activity of 59% at 100 µM. 
Modeling allowed them to generate the compounds shown in Figure 2.
Figure 2. 
Pz-1 had activity below 1 nM against RET, RET(V804M/L) [a gatekeeper mutant], and VEGFR2.  This equipotency was also demonstrated in cell-based assays.  Against a panel of 91 other kinases at 50 nM, Pz-1 had significant activity against 7 others (TRKB, TRKC, GKA, FYN, SRC, TAK1, MUSK).  So, in the end, using primarily modeling and a biochemical assay, they were able to generate a polypharmacological TK inhibitor.  I leave it to those better versed in the biology to judge whether those 7 other kinases pose a potential problem.  I would argue, however, that they generated an agent with polypharmacology against 9 kinases, not 2. 

19 August 2015

Caveat Emptor...or marketing does not always tell you what's really in the package.

In case you missed it, I spoke at the ACS on Sunday.  It was in a computational session looking at designing libraries, and I am pretty sure I was the only non-compchemist.  My talk was about all the problems compchemists have caused in library design, and it was even live-tweeted by Ash (@curiouswavefn) and well received.  So, looking at the next paper in my queue, it's a computationally focused paper.  After spending several hours on a Sunday listening to compchemists, have I softened?  

This paper is the subject of today's post.  It is an extension of this paper, which describes their virtual screen.  From a 2 million compound virtual screen, they tested 17 compounds in vitro, leading to 2 micromolar compounds.  This paper is the story of the more potent of the two.  The target is CREBBP, which is another in the long line of epigenetic targets.  Compound A was one of the original in silico actives that was tested.  Three analogs were obtained and tested (B-D).
Figure 1.  Original active, A.  Analogs B-D.  Common structural motif is shown in blue.
Compound B was the most potent and became the focus of their optimization efforts.  Of course, my eyes are drawn to that potential Michael acceptor, but the authors dismiss it based upon their docking results: the only alkylatable residue in the area of its putative binding is well buried.  It is a moot point anyway, because they were able to replace it with an isophthalate group and increase potency by 5x, to 0.9 µM (Compound 6).  Interestingly, the potency of 6 is different depending on the assay used: 0.8 µM in a competition binding assay and 8.7 µM in a TR-FRET assay.
Figure 2.  Compound 6
This compound was crystallized, and the structure showed that the predicted binding mode was correct. 

They then performed some gobbledy-gook MD calculations (finite-difference Poisson, warning PDF) in order to evaluate the electrostatic (polar) contribution to the binding of 6 and 7.  Compound 6 had more favorable electrostatic interactions (0.8 kcal/mol) than 7, which had more favorable van der Waals interactions (1.4 kcal/mol).  With this crucial information AND the crystal structure in hand, they then explored additional chemical space.  

Despite the authors' claim, I don't think they actually improved the potency significantly.  Compound 6 is 8 µM in the TR-FRET assay, and the best compounds they claim are 1 or 2 µM.  I really have to call monkeyshines here.  They use the different assays interchangeably, yet never explain which one is used for what purpose.  It's cherry-picking values.  When talking about selectivity, they switch to using thermal shift values.  And we all know the value of that.  So, I find it hard to believe their "most potent" this or "selectivity" that.  The title of the paper includes "nanomolar", but that is only in one assay.  That's like saying I can run a 6-minute mile because I did it once under optimum conditions.  Honestly, my typical times (WAY back when) were more like 8:30 miles.  That's honesty in data reporting.  Since they obviously had access to different assays, why weren't all compounds run in one, or optimally both?  I don't see that the MD calculations had any positive impact.  Maybe it's the heat, but this paper is not a sham; it is, however, definitely full of deceptive advertising.

08 April 2015

Fragment-based methods in drug discovery

FBLD generates a plethora of reviews, as evidenced by Practical Fragments’ annual round-ups (see for example 2014, 2013, and 2012). However, for the past three years there have been no new books. The drought has now ended, starting with the publication of Methods in Molecular Biology Volume 1289, edited by Anthony E. Klon of Pennsylvania Drug Discovery Institute. Computational chemistry is probably one of the most rapidly changing disciplines within FBLD, and thus it is appropriate that this is the primary focus.

The book is part of the Springer Protocols series, which offers highly specific step-by-step instructions. Many of the chapters have a common organization: Introduction, Materials, Methods, and Notes. While this can work well for established molecular biology techniques such as cloning, it can be trickier to apply to computational approaches. Some of the chapters are quite brief and assume extensive specialized knowledge, while others are extremely detailed. Of course, it is impossible to satisfy everyone; hopefully the following summary will help you find what is most useful for you.

Part I (Preparation) consists of five short chapters. The first is by Rachelle Bienstock, editor of the most recent (and also computationally intensive) book. As we’ve noted, water plays a pivotal role in protein-ligand interactions, and Rachelle concisely but thoroughly summarizes available computational methods. Chapter 2, by Yu Zhou and Niu Huang at the National Institute of Biological Sciences in Beijing, outlines how to use DOCK to assess binding site druggability. In chapter 3, Raed Khashan (King Faisal University, Saudi Arabia) describes a free software tool called FragVLib for generating virtual fragment libraries to compare different ligand binding pockets. Chapter 4, by Jennifer Ludington (formerly of Locus Pharmaceuticals), discusses practical issues in preparing a virtual fragment library, such as conformer and partial charge assignment. Finally, in chapter 5 Peter Kutchukian discusses how he and his Merck colleagues enlisted medicinal chemists to help fill the gaps in their fragment collection.

The second section is titled Simulation. In chapter 6, Kevin Teuscher and Haitao Ji (University of Utah) summarize “fragment hopping,” including an extensive table of available software tools. Chapter 7, by Olgun Guvench (University of New England), Alexander MacKerell (University of Maryland), and coworkers describes SILCS: site identification by ligand competitive saturation. This program, developed by SilcsBio LLC, soaks proteins in virtual solutions containing very tiny fragments (think propane and methanol) to look for binding sites. Molecular dynamics simulations include methods to prevent aggregation of the ligands or denaturation of the protein.

Chapter 8, by Álvaro Cortés-Cabrera, Federico Gago (Universidad de Alcalá, Madrid) and Antonio Morreale (Repsol Technology Center, Madrid), describes how ligand efficiency indices can be used to guide fragment growing. Of course, metric skeptics will still ask, “sure it works in practice, but does it work in theory?” And in chapter 9, Jui-Chih Wang and Jung-Hsin Lin (Academia Sinica, Taipei) introduce a new scoring function for fragment docking, including several pages of detailed instructions for implementing it in AutoDock. As we’ve noted, calculating binding affinities for fragments can be difficult, and the new function seems to be accurate to about ±2.1 kcal/mol for a series of compounds tested.
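Since ligand efficiency comes up in chapter 8, a brief aside for readers new to the metric: it is simply the binding free energy normalized by heavy atom count. A quick illustrative calculation (mine, not the book's):

```python
# Illustrative ligand efficiency calculation (my example, not from the book):
# LE = -dG_binding / heavy atoms, roughly 1.37 * pKd / HA in kcal/mol per
# heavy atom at ~298 K.
import math

def ligand_efficiency(kd_molar: float, heavy_atoms: int) -> float:
    return 1.37 * (-math.log10(kd_molar)) / heavy_atoms

# A hypothetical 300 µM fragment with 14 heavy atoms:
print(round(ligand_efficiency(300e-6, 14), 2))   # ~0.34 kcal/mol per heavy atom
```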

Part III, Design, begins with another chapter by Rachelle Bienstock in which she outlines the process of fragment-based ligand design, highlighting various software tools available at each stage. This includes library design, growing, linking, and downstream considerations such as ADME. Chapter 11, by Zenon Konteatis of Agios, is a brief primer of the process, including an example for the kinase TGF-beta. The last chapter in this section, by Jennifer Ludington, focuses on binding site analysis to assess whether a protein site is druggable (or at least ligandable). She focuses on the procedure used at Locus Pharmaceuticals, which involved soaking a virtual protein in a solution containing fragments and then lowering the chemical potential of the system until only the tightest fragments remain bound. Clusters of probe fragments indicate possible hot spots.

Finally, Part IV consists of Case Studies, starting with a chapter on kinase inhibitors by Jon Erickson (Lilly). More than a third of FBLD-derived clinical candidates target kinases, so it is always good to have an updated overview, though there is at least one structural error.

The last two chapters are both by Frank Guarnieri, founder of Locus Pharmaceuticals and currently at Virginia Commonwealth University School of Medicine. These are highly opinionated (with lots of first-person singular pronouns) and fun to read. They both describe the simulated annealing of chemical potential (SACP) approach that formed the basis of Locus (and is also discussed by Jennifer Ludington above). Chapter 14 describes a small molecule erythropoietin (EPO) mimetic program. The protein EPO binds to and activates a dimeric receptor, and a small molecule functional mimetic would indeed be an exciting breakthrough. Unfortunately, the primary data presented are not compelling, and I remain unpersuaded, though perhaps readers are aware of more convincing evidence.

Chapter 15 describes the Locus program to develop a highly selective orally available p38 inhibitor. The discussion offers a rare window into life at a small biotech, including disagreements over strategies and interpretation of data. It now appears that p38 is probably not a good target for inflammation, which had unfortunate repercussions:

The business decision at Locus to put so many resources into this program along with other questionable business decisions resulted in the company going bankrupt after about 10 years in existence.

Some of the most important lessons are negative, and it’s nice to see these appear in print. Success stories are inspirational, but this chapter is a healthy reminder of the very many things that must succeed for fragment-based approaches to yield new drugs.

04 February 2015

Structure based Design on Membrane Proteins

GPCRs are a big target class that has historically been unamenable to FBDD/SBDD.  However, recent work has changed this thinking.  Membrane proteins are being viewed as increasingly ligandable and amenable to FBDD.  In this paper, Vass and colleagues show their computational approach to identifying multiple fragment binding sites amenable to linking.  

Recent clinical evidence supports the effectiveness of dual dopamine D2 and D3 antagonists or partial agonists in schizophrenia, depression, and bipolar mania. D2 antagonism is required for the antipsychotic effect, and D3 antagonism contributes to cognitive enhancement and reduced catalepsy.  Dual-acting compounds should show higher activity against D3 than D2 (due to differential expression levels).  To this end, they apply their sequential docking protocol to identify potential points for fragment linking on the D3 crystal structure and a D2 homology model.  These two targets have almost identical primary binding sites, but selectivity can be modulated through the secondary site.

In short, their in-house fragment library consisted of 196 amine-containing fragments for the primary site, plus a second library of 266 cyclohexyl- or piperidine-containing fragments.  The first library was docked to the apo receptor structures, the docking poses were merged with the receptor, new grids were constructed including the merged ligands, and the second fragment library was docked to the partially occupied binding sites.  
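Their sequential protocol is simple enough to sketch.  Here is my reading of it as pseudocode; the dock, merge_pose, and build_grid callables are stand-ins for whatever docking engine they actually used, so treat this as an outline rather than their implementation:

```python
# My reading of the sequential docking protocol, in outline form. The dock,
# merge_pose, and build_grid callables are hypothetical stand-ins for the
# actual docking engine, not the authors' code.
def sequential_docking(apo_receptor, primary_frags, secondary_frags,
                       dock, merge_pose, build_grid, keep=10):
    grid = build_grid(apo_receptor)
    primary_poses = dock(primary_frags, grid)        # amine fragments into the primary site
    results = []
    for pose in sorted(primary_poses, key=lambda p: p.score)[:keep]:
        holo = merge_pose(apo_receptor, pose)        # fix the first fragment in place
        grid2 = build_grid(holo)                     # new grid includes the merged ligand
        second = dock(secondary_frags, grid2)        # dock library 2 into the partially
        results.append((pose, second))               # occupied binding site
    return results
```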
Table 1.
As shown in Table 1, they synthesized three of their compounds and did generate potent and selective D3/D2 antagonists.  Linking is hard.   It still comes down to the right linker and all that entails.  Finding that right linker is made much easier by having structural data, as shown here.  This is a nice example of experimentally verifying in silico predictions. 

14 January 2015

A Great New Tool....for what?

As has been noted here frequently, in silico design of fragments is very hard, fraught with problems, and often leads to crap.  As was pointed out elsewhere recently, computational tools are getting more powerful, but they still don't have chemical intuition, leading to suspect structures.  I am assuming that computational scientists have heard the critiques, because we are seeing better and better work, with more experimental verification.  Now, what about better structures?  In this paper from Kaken Pharmaceutical and Toyohashi University of Technology, they propose a way to do this.  

In silico tools can be divided into two classes, structure-based and ligand-based design (TOPAS and Flux are two examples of the latter).  These methods are based upon biological evolution: reproduction, mutation, cross-over, and selection.  Mutation and cross-over are vital for creating new chemical structures.  Mutation can be atom- or fragment-based.  In a previous study by these authors, the atom-based method was used for the mutation, in which an atom is modified into another atom to explore the chemical space.  That method often resulted in a lot of unfavorable structures that contained invalid hetero-hetero atom bonds such as O-O and N-F.  The fragment mutation approach avoids this problem, especially when the fragments are from known molecules (this assumes they were synthesized and thus could be again).  This is one key to their approach: chemical feasibility is considered.
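Flagging those invalid hetero-hetero bonds is easy enough to do yourself; here is a minimal RDKit sketch of the idea (mine, not the authors' code):

```python
# Minimal sketch (mine, not the authors') for flagging generated structures
# that contain the problematic hetero-hetero bonds mentioned above, e.g. O-O
# or N-F.
from rdkit import Chem

BAD_BONDS = [Chem.MolFromSmarts(s) for s in ("[#8]-[#8]", "[#7]-[#9]", "[#8]-[#9]")]

def has_bad_bond(smiles: str) -> bool:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return True   # treat unparseable structures as failures too
    return any(mol.HasSubstructMatch(patt) for patt in BAD_BONDS)

print(has_bad_bond("COOC"))   # dimethyl peroxide: True (O-O bond)
print(has_bad_bond("CCO"))    # ethanol: False
```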

Figure 1.
The method (Figure 1) uses a known molecule to "navigate a chemical space to be explored." [I love this phrase, but immediately I think of this.]  The reference molecule is also used to generate the seed fragments (Figure 2), which can be rings, linkers, or side chains.  
Figure 2
 With a good set of seeds, connection rules, and so forth, the key is the mutation and cross-over events.  A parent molecule is randomly selected and then one of three operations occurs: 1. add a fragment, 2. remove a fragment, or 3. replace a fragment.  For "Add Fragment", if the base fragment is a ring, then a new linker, side chain, or ring is chosen.  If the base fragment is a linker or side chain, then a ring is added.  "Remove Fragment" removes a terminal fragment.  "Replace Fragment" is a fragment-for-fragment swap (Figure 3).  The cross-over function is also shown in Figure 3. 
Figure 3
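To make the bookkeeping concrete, here is a toy version of that mutation step as I read it; the parent and fragment objects, and their attach/remove/swap methods, are hypothetical rather than anything from the paper:

```python
# Toy version of the mutation step as I read it. The parent/fragment objects
# and their attach/remove/swap methods are hypothetical, not the authors' code.
import random

def mutate(parent, seed_fragments):
    op = random.choice(["add", "remove", "replace"])
    if op == "add":
        base = random.choice(parent.fragments)
        if base.kind == "ring":
            # a ring can grow via a new linker, side chain, or ring
            choices = [f for f in seed_fragments
                       if f.kind in ("ring", "linker", "side_chain")]
        else:
            # linkers and side chains can only be extended with a ring
            choices = [f for f in seed_fragments if f.kind == "ring"]
        return parent.attach(base, random.choice(choices))
    if op == "remove":
        return parent.remove(random.choice(parent.terminal_fragments()))
    # "replace": swap one fragment for another of the same kind
    old = random.choice(parent.fragments)
    new = random.choice([f for f in seed_fragments if f.kind == old.kind])
    return parent.swap(old, new)
```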
Then they used this protocol to design ligands against two GPCRs (AA2A and 5HT1A). 
Figure 4.
Figure 4 shows some of the results against AA2A.  They were able to generate a molecule that is very similar to a known active, and because the fragments come from known molecules, these are all presumed to be chemically feasible.  
 
So, my first complaint here is where's the experimental verification?  OK, this is not a medchem journal, but still...  I am not nearly as savvy as some of our regular readers, but I am completely missing the forest for the trees here.  This paper first struck me as pretty neat, but then the "neat-o" factor fell away and I was left asking "what is it for?"  To me, this would seem to be a patent-busting tool.  We need to generate a structure that is very similar to billion dollar compound A, but it cannot contain fragments X, Y, and Z.  Is this better than locking your favorite medchemists in a room with a few pads of paper?  I am not being flippant here.  If I am missing something, please let me know in the comments.

07 January 2015

Spinach affects the Water

People often ask what a fragment is.  I like to paraphrase Justice Potter Stewart and say that it is like pornography: it is in the eye of the beholder.  I am not one for hard and fast rules as to what a fragment should be.  But I also have a definite opinion about what a fragment is NOT.  To me, what a fragment should be is easily described: relatively unadorned molecules.  I have a whole set of rules as to what the substituents should look like (coined the Zartler Optical Filter, or ZOF, by a cheeky comp chem friend).  In this paper, a group from Merck Serono decides to probe exactly what role the spinach on fragments plays.  

Specifically, they deconstructed a TIE2 inhibitor (Figure 1) into its core hinge binding motif (Figure 2). 
Figure 1.  Crystal Structure of the Intact Inhibitor
This hinge binding motif has the advantage that "decoration" can be introduced at the 4- or 8-position (Figure 2), as well as providing three donor/acceptor moieties. 
Figure 2.  4-Amino-8H-pyrido[2,3-d]pyrimidin-5-one (compound 1) as the core hinge binding motif.
They determined crystal structures for this molecule and four related fragments (Figure 3)
Figure 3.  Fragments for this study.
and then went to town on them with in silico methods to study the roles of water.  In one of those "gotta love it" moments, they classified the waters as "happy" or "unhappy", depending on whether they have negative or positive free energy, respectively.

So, what do we learn?  First, changes in the decoration lead to different binding modes.  In this case, they conclude that replacement of different water molecules leads to differences in binding modes.  Well, not surprising.  But I think this is part of a trend: studying water and how fragments affect it, and vice versa.  In fact, the authors suggest that using WaterMap could help rationalize the roles of the waters.  So, are we entering a brave new world of experimental verification of in silico predictions?

21 October 2014

Benchmark Your Process


So, not everybody agrees with me on what a fragment is.  As was pointed out years ago, FBDD can be a FADD.  In this paper, from earlier this year, a group from AZ discusses how FBDD was implemented within the infectious disease group.  Of course, because of the journal, it emphasizes how computational data is used, but you can skim over that and still enjoy the paper :-).  They break their process into several steps.
Hot Spots: This is a subject of much work, particularly from the in silico side.  In short, a small number of target residues provide the majority of the energy for interaction with ligands.  Identifying these, especially for non-active site targets (read PPI), is highly enabling, for both FBDD and SBDD.  To this end, the authors discuss various in silico approaches to screening fragments.  They admit they are not as robust as would be desired (putting it kindly).  As I am wont to say, your computation is only as good as your experimental follow up.  The authors indicate that the results of virtual screens must be experimentally tested.  YAY!  They also state that NMR is the preferred method, 1D NMR in particular being the AZ preferred method.  [This is something (NMR as the first choice for screening) that I think has become true only recently.  It's something I have been saying for more than a decade, but I guarantee my cheerleading is not why.]  They do note that of the two main ligand-based experiments, STD is far less sensitive than WaterLOGSY.  There is no citation, so I would like to put it out there: is this the general consensus of the community?  Has anyone presented data to this effect?  Specifically, they screen fragments 5-10 per pool with WaterLOGSY and relaxation-edited techniques.  2D screening is only done for small proteins (this is in Infection) and where a gram or more of protein is available.

Biophysics:  They have SPR, ITC, EPIC, MS, and X-ray.  They mention that SPR and MS require high protein concentrations to detect weak binders and thus are prone to artifacts.  They single out the EPIC instrument as being the highest throughput.  [As an aside, I have heard a lot of complaints about the EPIC and wonder if this machine is still the frontline machine at AZ.]  60% of targets they tried to immobilize were successful.  They also use "Inverse" SPR, putting the compounds down; the same technology NovAliX has in their Chemical Microarray SPR.  In their experience, 25% of these "Target Definition Compounds" still bind to their targets. 

They utilize a fragment-based crystallography proof of principle (fxPOP).  Substrate-like fragments (kinda like this?) are screened in the HTS, hits [not defined] are then soaked into the crystal system, and at least one structure of a fragment is solved.  This fragment is then used for in silico screening, pharmacophore models, and the like.  So, this would seem to indicate that crystals are required before FBDD starts.  They cite the Astex Pyramid, where fragments of diverse shape are screened, and the approach used at JnJ, where they screen similarly shaped fragments and use the electron density to design a second library to screen.

As I have always said, there are non-X-ray methods to obtain structural information.  AZ notes that SOS-NMR, INPHARMA, and iLOE are three ways.  These are three of the most resource intensive methods: SOS-NMR requires labeled protein (and not of the 15N kind), INPHARMA requires NOEs between weakly competitive ligands (and a boatload of computation), while iLOE requires NOEs of simultaneously binding ligands.  I think there are far better methods, read as requiring fewer resources, to give structural information more quickly (albeit at lower resolution).

The Library:  They describe in detail how they generated their fragment libraries.  They have a 20,000 fragment HCS library.  The only hard filter is to restrict HA to less than 18.  I fully support that.  They also generated a 1200 fragment NMR library biased towards infection targets.

The Process:   The authors list three ways to tie these methods together:
  1. Chemical Biology: Exploration of binding sites/development of pharmacophores.  I would add that this is also for target validation.  As shown by Hajduk et al. and Edfeldt et al., fragment binding is highly correlated with advancement of the project. 
  2. Complementary to HTS.  At the conference I am at today, one speaker (from Pfizer) said that HTS was for selectivity, FBDD was for efficiency (oh Lord, here comes Pete with that one).  I really like that approach.
  3. Lastly, stand-alone hit generation.  
I think this paper is a nice reference for those looking to see how one company put their FBDD process in place.  Not every company will do it the same way, nor should they.  But there is an FBDD process for every company.

15 October 2014

When a Fragment is DEFINITELY not a Fragment

There are lots of papers that use "fragments" or "fragment approaches".  I find a lot of computational papers do this; is it because FBDD has won the field, or because it's sexy?  Well, in this paper the authors take an interesting spin on the term fragment.  For many targets (particularly PPIs), peptides are the only tool to assess binding, or the best binders.  However, despite a small vocal minority, I think most people don't consider peptides to be drugs, but instead good starting points.  The REPLACE (Replacement with Partial Ligand Alternatives through Computational Enrichment) method is used on the CDK2A system to identify fragment alternatives to the N-terminal portions of the peptide, and especially the crucial arginine residue.  As I say, repeatedly, Your Computation is only as good as your Experimental Follow up.

This group took a very cautious approach to the initial modeling, understanding that PPIs are difficult to study via computational methods.  They used crystal structures of FLIPs (Fragment Ligated Inhibitory Peptides) and modeled in the compounds against subunits B and D.  Subunit B gave better results, and so that was used for further modeling.  [I hate this kind of stuff; it strikes me as wrong.]  After further work, they concluded that the modeling was validated and would be predictive for new compounds.  They then designed a library based on a pharmacophore model using phenylacetate, five-membered heterocycle, and picolinate scaffolds.  
Modeled compounds.  Cyclin residues have three-letter codes, peptide residues one-letter codes.  The solid lines show interactions between acidic cyclin D1 residues and the piperazinylmethyl group of the inhibitor.
They then, bless their hearts, made some compounds. 
In the end, they showed that it is possible to turn peptides into small molecule-ish compounds.  Please note these activities are in the millimolar range!  So, even with the current debate as to what PPI fragments should look like, I find it very hard to believe that these molecules are in any way fragments.  Grafting a fragment-looking something onto a big something is not "Fragment-based Discovery". 

16 July 2014

You Probably Already Knew This...

Academics can spend time and resources doing, and publishing, things that people in the industry already "know".  This keeps the grants, the students, the invitations to speak rolling in.  It also allows you to cite their work when proposing something.  This is key for the FBHG community.  There are many luminaries in the FBHG field, and we highlight their work here all the time. Sometimes, they work together as a supergroup.  Sometimes, Cream is the result.

Brian Shoichet and Gregg Siegal/ZoBio have teamed up.  In this work, they propose to combine empirical screening (TINS and SPR) with in silico screening against AmpC (a well studied target).  They ran a 1281-fragment portion of the ZoBio library against AmpC and got a 3.2% active rate: 41 fragments bound.  6 of these were competitive with a known inhibitor in the active site.  35 of the 41 actives were followed up by NMR; 19 could have Kds determined (0.4 to 5.8 mM), 13 fragments had weak but uncharacterizable binding, and 3 were true non-binders.  That's a roughly 90% confirmation rate (32 of 35).  34 of the 35 were then tested in a biochemical assay, and 9 fragments had Ki below 10 mM.  Of the 25 with Ki > 10 mM, one was found to bind to the target by X-ray, but 25 Å from the active site.  They then did an in silico screen with 300,000 fragments and tested 18 of the top-ranked ones in a biochemical assay.  

So, what did they find? 
"The correspondence of the ZoBio inhibitor structures with the predicted docking poses was spotty. "  and "There was better correspondence between the crystal structures of the docking-derived fragments and their predicted poses."
So, this isn't shocking, but it is good to know.  This is also consistent with this comment.  So, the take-home from this paper is that in silico screening can help explore chemical space that the much smaller experimental libraries miss.  To that end, the authors then do a virtual experiment to determine how big a fragment library you would need to cover the "biorelevant" fragment space [I'll save my ranting on this for some other forum].  Their answer is here.  [Link currently not working, so the answer is 32,000.]


12 December 2013

Upon Request

Dan and I blog here because we love it; we don't get paid, it takes a lot of time, and has very little reward.  I love it when I meet someone new and they say, "Oh, I read your blog."  However, this allows us to have freedom to review what we want, when we want, and how we want.  We don't sell advertising, we don't generate revenue, and so on.  Sometimes people agree with us, sometimes they don't.  These posts are our opinions and like bellybuttons, everyone has one.  Sometimes, we get pinged by somebody who just published a paper and would really like to see us blog about it.  Sometimes we do, sometimes we already have and they missed it, and sometimes we don't.

I received a polite email recently pointing out this paper.  It was already on my radar to blog about, so I bumped it up in the queue.  This paper caught my attention because it is a fragment screen against a DNA target, specifically the G-quadruplex from c-MYC.  G-quadruplexes are found in the promoters of many oncogenes, and the supposition is that by stabilizing them you can reduce their transcription.  It is an intriguing idea which has already been investigated with a number of compounds to date.  These authors decided to use fragments against the G-quadruplex without knowing if fragments would bind to a nucleic acid target with sufficient affinity and selectivity.  Their primary screen was an Intercalator Displacement Assay (IDA), which has been used previously to find G-quadruplex binding ligands.  A 1377-fragment library (screened at 5 mM, previously used against riboswitches) was used; it obeyed the Voldemort Rule, had >95% purity, and had 1 mM aqueous solubility.  The top 10 hits from this screen could be placed in three groups.
Then, in order to confirm their biochemical assay results, they decided to dock these top 10 fragments.  WHHAAAAT, you say?  That was my initial reaction.  Why oh why doth they vex me so?  They then go into EXCRUCIATING detail about the docking results, even concluding some SAR hypotheses from them.  I kid you not.  They also evaluated these top 10 fragments in a cellular assay (125 µM and 250 µM) using a Western blot readout.  These concentrations were chosen so as not to show short- or long-term toxicity, but mirabile dictu, Data Not Shown.  All fragments, except two (7A3 and 2G5), showed significant changes in c-Myc expression levels.  Interestingly, "no significant changes" still gives a 20% reduction in c-Myc levels. 
Four fragments were able to reproduce this effect, of which 11D6 was the best.  The four best were then run pair-wise, and every combination induced a significant reduction of c-Myc.  

So what does this tell us?  Well, I think they have found fragments which bind to the c-MYC promoter G-quadruplex.  They may be exhibiting this binding in the cells.  There are a few experiments that I would like to see (and would have asked for if I had reviewed this paper), a binding assay (SPR, ITC, NMR, whatevs) being the primary one.  We also continue to see that docking really does not add anything to the discovery process. 


20 November 2013

Fragments against PPI Hot Spots

Protein-protein interactions are important to so many physiological processes.  There are mounting literature examples of the use of fragments to block PPIs.  In this paper, Rouhana et al. show how they approached the PPI of Arno and ARF1, an ADP-ribosylation factor (part of the RAS superfamily).  Arno is one of the brefeldin A-resistant GEFs, which share a 200 amino acid domain called SEC7.  SEC7 interacts with ARF through insertion of the ARF switch regions into hydrophobic regions of SEC7.  This interaction is interesting from a ligand design standpoint because it does not involve an alpha-helix inserting into the partner's hydrophobic groove.  Rather, SEC7 has a rather large interface marked by "hot spots". 


The figure shows their "innovative" FBDD strategy.  First, a Voldemort Rule-compliant library was screened in silico.  Since in silico screening is not typically used for fragment screening (though it is becoming more common), they imposed some initial rules: the docking site is small (1-2 residues!), hot spots are defined by interaction energy (>1 kcal/mol from an alanine scan), and the selection criteria are very strict.  3000 fragments from the ChemBridge library were screened.  33 molecules were selected, and 40 random fragments were chosen as negative controls. 

This was followed by a fluorescence assay (2 mM fragments) to test their computational results, just as I say you should do.  Promiscuous binders were removed, not by using detergent, but by using protein polarization to directly detect interaction with the target.  This seems like over-complication of an assay, but without knowing the details of the system there may be a very good reason for this approach. 
Compounds 1-4 were identified as inhibitors (35%, 16%, 38%, and 23% inhibition at 2 mM, respectively) from each of the "hot spots".  I think it is interesting that these compounds were predicted from the docking to have affinities of 10 µM or better.  To me, that just illustrates that predicted affinities are ridiculous.  Why do people even report them?  Compound 1 had a Ki,app of 3.7 mM, which is a LEAN of 0.12!  These were then compared to the PAINS list, and 3 is "ambiguous".  Compounds 5 and 6 were chosen as negative controls.  SPR confirmed the binding of 1, 2, and 4, but at less than stoichiometric binding levels (the assay was run at 250 µM).  3 could not be confirmed as a binder.  Does this mean anything for ambiguous PAINS? 
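For the record, LEAN is just pKi divided by the heavy atom count, so that 0.12 is easy to check; note that the heavy atom count below is back-calculated from the reported numbers, not taken from the paper.

```python
# Sanity check of the quoted LEAN value. LEAN = pKi / heavy atom count; the
# ~20 heavy atoms is back-calculated from the reported numbers, not taken
# from the paper.
import math

def lean(ki_molar: float, heavy_atoms: int) -> float:
    return -math.log10(ki_molar) / heavy_atoms

print(round(lean(3.7e-3, 20), 2))   # compound 1 (Ki,app = 3.7 mM): ~0.12
```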
STD NMR was then used to confirm binding.  In a nice departure, they actually talk about the conditions they used: 10 and 30 µM ARNO with 0.1 mM and 1 mM compounds at 32 and 12 °C.  30 µM ARNO with 1 mM fragments at 12 °C was what worked (a 33-fold excess of fragments).  Confirming the SPR, compounds 1, 2, and 4 were shown to bind, while "ambiguous" 3 had some binding.  Finally, crystals were soaked with fragments 1, 2, and 4.  This led to crystal structures which could then be used for more model building, compound design, etc.  This led to the following compound (1.61 mM Ki,app, LEAN = 0.13; the methoxy derivative of 1) for further analysis:
By and large, this is a well done, thoughtful work.  They really understand how to set up and interpret STD NMR.  However, these compounds are really atom-inefficient.  Is that a consequence of the type of interaction they are inhibiting?  As a fragment, there is nothing wrong with it. 

[Quibble: The authors claim that this is an innovative approach, but I am not seeing it.  They claim that running their in silico screen first and then following up with biophysical techniques is the innovation.] 
Supplemental Information here.