18 August 2014

248th ACS National Meeting

The Fall ACS National Meeting was held in my beautiful city of San Francisco last week, and a number of topics of interest to Practical Fragments were on the agenda.

First up (literally – Sunday morning) was a session on pan-assay interference compounds (PAINS) organized by PAINS-master Mike Walters of the University of Minnesota. Mike developed his interest in PAINS like many – from painful experience. After screening 225,000 compounds against the anti-fungal target Rtt109, he and his group found several hits that they identified as PAINS, but not before spending considerable time and effort, including filing a patent application and preparing a manuscript that had to be pulled. One compound turned out to be a “triple threat”: it is electrophilic, a redox cycler, and unstable in solution.

Mike had some nice phrases that were echoed throughout the following talks, including “subversively reactive compounds” and SIR for “structure-interference relationships,” the evil twin of SAR. To try to break the “PAINS cycle” Mike recommended more carefully checking the literature around screening hits and close analogs (>90% similarity). Of course, it’s better if you don’t include PAINS in your library in the first place.
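In practice, keeping PAINS out of a library means screening candidate compounds against the published PAINS substructure filters. Here is a minimal sketch using the open-source RDKit toolkit, which ships these filters as a built-in catalog; the SMILES is an illustrative benzylidene rhodanine, not a compound from Mike's screen.

    from rdkit import Chem
    from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

    # Build a catalog containing the published PAINS substructure filters
    params = FilterCatalogParams()
    params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
    catalog = FilterCatalog(params)

    # An illustrative benzylidene rhodanine, a classic PAINS motif
    mol = Chem.MolFromSmiles("O=C1NC(=S)SC1=Cc1ccccc1")

    match = catalog.GetFirstMatch(mol)
    if match is not None:
        print("PAINS alert:", match.GetDescription())
    else:
        print("no PAINS alert")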

Jonathan Baell (Monash), who coined the term PAINS back in 2010, estimated that 7-15% of commercial compounds are PAINS, and warned that even though PAINS may be the most potent hits, they are rarely progressable, advice that is particularly needed in academia. For example, the majority of patent applications around the rhodanine moiety come from academia, whereas the majority of patent applications around a more reasonable pharmacophore come from industry. Jonathan also warned about apparent SAR being driven by solubility. Finally, he noted that while it is true that ~6.5% of drugs could be classified as PAINS, these tend to have unusual mechanisms, such as DNA intercalation.

As we discussed last week, anyone thinking about progressing a PAIN needs to make decisions based on sound data. R. Kip Guy (St. Jude) discussed an effort against T. brucei, the causative agent of sleeping sickness. One hit from a cellular screen contained a parafluoronitrophenyl group that presumably reacts covalently with a target in the trypanosome and was initially deemed unprogressable. However, a student picked it up and managed to advance it to a low nanomolar lead that could protect mice against a lethal challenge. It was also well tolerated and orally bioavailable. Kip noted that in this case chemical intuition was too conservative; in the end, empirical evidence is essential. On that note he also urged people to publish their experiences with PAINS, both positive and negative.

There was a scattering of nice fragment talks and posters. Doctoral student Jonathan Macdonald (Institute of Cancer Research) described how very subtle changes to the imidazo[4,5-b]pyridine core could give fragments with wildly different selectivities. I was particularly tickled by his opening statement that he didn’t need to introduce the concept of fragment-based lead discovery in a general session on medicinal chemistry – another indication that FBLD is now mainstream.

Chris Johnson (Astex) told the story of their dual cIAP/XIAP inhibitor, a compound in preclinical development for cancer. As we’ve mentioned previously, most IAP inhibitors are peptidomimetics and are orders of magnitude more potent against cIAP than XIAP. Astex was looking for a molecule with similar potency against both targets. A fragment screen gave several good alanine-based fragments, as found in the natural ligand and most published inhibitors, but these were considerably more potent against cIAP. They also found a non-alanine fragment that was very weak (less than 20% inhibition at 5 mM!) but gave a well-defined crystal structure. The researchers were able to improve the affinity of this by more than six orders of magnitude, ultimately identifying compounds with low or sub-nanomolar activity in cells and only a 10-fold bias towards cIAP. This is a beautiful story that illustrates how important it is to choose a good starting point and not be lured solely by the siren of potency.

Alba Macias (Vernalis) talked about their efforts against the anti-cancer targets tankyrases 1 and 2 (we’ve previously written about this target here). In contrast to most fragment programs at Vernalis, this one started with a crystallographic screen, resulting in 62 structures (of 1563 fragments screened). Various SPR techniques, including off-rate screening, were used to prioritize and further optimize fragments, ultimately leading to sub-nanomolar compounds.

The debate over metrics and properties continued with back-to-back talks by Michael Shultz (Novartis) and Rob Young (GlaxoSmithKline). Michael gave an entertaining talk reprising some of his views (previously discussed here). I was happy to see that he does agree with the recent paper by Murray et al. that ligand efficiency is in fact mathematically valid; his previous criticism was based on the use of the word “normalize” rather than “average”. While this is a legitimate point, it does smack of exegesis. Rob discussed the importance of minimizing molecular obesity and aromatic ring count and maximizing solubility, focusing on experimental (as opposed to calculated) properties. However, it is important to do the right kinds of measurements: Rob noted that log D values greater than 4 are essentially impossible to measure accurately.
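For readers keeping score at home, the two metrics at the heart of this debate are easy to state. Here is a minimal sketch in Python with the definitions as usually given (the example values are hypothetical, not from the talks):

    # Ligand efficiency: binding free energy per heavy atom, assuming a
    # 1 M standard concentration at 298 K (2.303*R*T ~ 1.37 kcal/mol)
    def ligand_efficiency(pic50, heavy_atoms):
        return 1.37 * pic50 / heavy_atoms

    # Lipophilic ligand efficiency (LLE, also called LipE)
    def lipophilic_ligand_efficiency(pic50, clogp):
        return pic50 - clogp

    print(ligand_efficiency(6.0, 15))              # a decent fragment: ~0.55
    print(lipophilic_ligand_efficiency(8.0, 3.0))  # a respectable lead: 5.0

Note that both definitions embed arbitrary choices (the 1 M standard state, the coefficient of 1 on ClogP), which is exactly the point of contention picked up in the comments below.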

Of course, this was just a tiny fraction of the thousands of talks; if you heard something interesting please leave a comment.

7 comments:

  1. Of course Mike Shultz could have pointed out that the formula for LE in the Murray et al. critique of his study was itself mathematically invalid. Correct me if I'm wrong, but didn't Molecular Obesity reproduce the plot of promiscuity against median(ClogP) that had a starring role in Correlation Inflation (see http://dx.doi.org/10.1007/s10822-012-9631-5)?

  2. When people start to take note of data based on MEASURED hydrophobicity values - realising that for insoluble compounds the octanol/water system is a random number generator - then people will understand the patterns emerging. Mike Shultz's bottom line that LipE, LLE... (i.e. potency minus lipophilicity in some form) is what really counts - is exactly in line with the anti-fat movement! (Of course we should add flat to that statement too). What does stand the test of time is the quality of Hansch's work and how well ClogP performs (but is ignored - or not corroborated by flawed OW measurements). Note too a forgotten statement of the great man: “Without convincing evidence to the contrary, drugs should be made as hydrophilic as possible without loss of efficacy”
    Hansch C., Bjorkroth J., Leo A., J. Pharm. Sci., 76, 663 (1987). Ignore correlations - look at the emerging patterns and probabilistic implications of the data (especially PFI!). And LE DOES have value - when used with due care!

  3. Part 1/2

    Hello Anonymous,
    I disagree with your recommendation, “Ignore correlations - look at the emerging patterns and probabilistic implications of the data (especially PFI!)” and I remain unfamiliar, even after googling, with the term PFI. If you want people to use patterns, emerging, emerged or otherwise, to make decisions then you need to stop waving your arms and be honest with those people about how strong the associated trends are. The large data sets available to Big Pharma researchers often lead to eye-wateringly good P-values for trends that have no real predictive value. In the prediction business, effect size trumps significance. Knowing with confidence that a coin comes up heads 51% of the time is of no real value to the person charged with calling the result of the next throw.
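    To put numbers on that coin-flip point (the numbers are mine, chosen purely for illustration): with a million flips, a 51% coin yields a P-value that is essentially zero, yet calling the next flip remains barely better than guessing.

        import math

        n, heads = 1_000_000, 510_000  # hypothetical: a 51% coin flipped a million times

        # One-sided P-value against the null of a fair coin (normal approximation)
        z = (heads - 0.5 * n) / math.sqrt(n * 0.25)
        p_value = 0.5 * math.erfc(z / math.sqrt(2.0))

        print(f"z = {z:.0f}, P ~ {p_value:.1e}")  # z = 20, P ~ 2.8e-89
        print("best achievable accuracy calling the next flip: 51%")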

    Some (most?) of the drug-likeness data analysis in the literature appears to be more about giving weight to pre-existing opinions than answering questions objectively. While making trends appear stronger than they are does add weight to one’s opinions, there is a credibility risk should one get caught. In this regard, it may be instructive to remember the fable about the boy who cried wolf and the saying about what opinions have in common with hemorrhoids. It’s also worth thinking about what would happen to a professional sportsman if he were caught doing the equivalent of inflating a correlation.

    Since you’ve highlighted the importance of “MEASURED hydrophobicity” values, you’ll know that it is logD that is measured and not logP (although the two quantities are the same when the compound is ionized to a negligible extent at the pH of the measurement). It can be argued that logP is the more appropriate measure of lipophilicity for modelling some phenomena while logD may be more appropriate in other situations. However, I would usually try to separate the ionization and logP components of logD prior to any modelling. I certainly agree that lipophilicity is a physicochemical property that has to be managed, and my challenge to wannabe key opinion leaders is all about how it should be managed and not at all about whether or not it needs to be managed. For example, why is pIC50 – ClogP a better metric than pIC50 – 0.5 x ClogP? An analogous challenge was presented to the solubility forecast index. Put another way, if you want people to use your metrics, indices and rules of thumb in decision making then these metrics, indices and rules of thumb need to be based on relevant and honest analysis of the available data.
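    To make the coefficient question concrete, here is a minimal sketch (two hypothetical compounds; the numbers are invented) showing that the choice of coefficient can reorder compounds:

        # Two hypothetical compounds: (name, pIC50, ClogP)
        compounds = [("A", 9.0, 5.0), ("B", 7.0, 2.0)]

        for coeff in (1.0, 0.5):
            ranked = sorted(compounds, key=lambda c: c[1] - coeff * c[2], reverse=True)
            print(coeff, [(name, pic50 - coeff * clogp) for name, pic50, clogp in ranked])

        # coeff 1.0: B (5.0) outranks A (4.0); coeff 0.5: A (6.5) outranks B (6.0)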

  4. Part 2/2

    The assertion, “Without convincing evidence to the contrary, drugs should be made as hydrophilic as possible without loss of efficacy” has some value as an aspirational goal but does not provide much guidance as to how lipophilicity should be managed in a non-ideal world where hydrophobic interactions need to be exploited to achieve the affinity required for activity. It is probably a good idea in a discussion like this to skip the hagiography and instead take a close look at how we ended up with octanol/water as the preferred partitioning system.

    I should respond to, “Mike Shultz's bottom line that LipE, LLE…(i.e. potency minus lipophilicity in some form) is what really counts - is exactly in line with the anti-fat movement! (Of course we should add flat to that statement too)”. As mentioned earlier, you need to say why we should use potency minus lipophilicity (as opposed to potency minus 0.5 x lipophilicity). I do have an issue with saying that “we should add flat to that statement”, and the “of course” cannot, of course, hide the weakness of some of the data analysis in that area. For example, the analysis of the relationship between solubility and Fsp3 greatly exaggerates the trend in the data and represents a particularly tortured way to explore the relationship between two continuous quantities. As an aside, you may find it instructive to take a look at the structures of the compounds in that data set (which is publicly available). Some of the other work in this area fails to acknowledge that the number of aromatic rings may simply be a surrogate for molecular size (which is widely assumed to be a risk factor in drug discovery). I was puzzled by one study in which continuous variables were binned to generate graphical data representations which were then used to assert that one trend was stronger than another (a minimal numerical illustration of this binning effect follows the quotes below). The following quotes should give you an idea of what I’m referring to:

    “The clearer stepped differentiation within the bands is apparent when log D(pH 7.4) rather than log P is used, which reflects the considerable contribution of ionization to solubility”.

    “This graded bar graph (Figure 9) can be compared with that shown in Figure 6b to show an increase in resolution when considering binned SFI versus binned clog D(pH 7.4) alone.”
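    As promised above, here is a minimal numerical sketch (synthetic data, numpy only) of how binning a continuous variable can make a weak trend look strong:

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, 5.0, 1000)           # a lipophilicity-like variable
        y = 0.3 * x + rng.normal(0.0, 1.0, 1000)  # weak trend buried in noise

        r_raw = np.corrcoef(x, y)[0, 1]

        # Bin x into five bands, then correlate the binned means of x and y
        labels = np.digitize(x, np.linspace(1.0, 4.0, 4))
        x_means = [x[labels == b].mean() for b in range(5)]
        y_means = [y[labels == b].mean() for b in range(5)]
        r_binned = np.corrcoef(x_means, y_means)[0, 1]

        print(f"raw r = {r_raw:.2f}, binned r = {r_binned:.2f}")  # ~0.4 vs ~1.0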

  5. A key statement Pete makes is:

    "I certainly agree that lipophilicity is a physicochemical property that has to be managed and my challenge to wannabe key opinion leaders is all about how it should be managed and not at all about whether or not it needs to be managed."

    Given that we all agree that lipophilicity needs to be managed, does it really matter all that much which metrics we use? Sure, LLE, LLEAT, LELP, etc. are all somewhat arbitrary, but that doesn't mean they can't be useful for managing lipophilicity and molecular weight.

  6. Hi Dan, if we’re going to invoke usefulness in defense of LEMs, we do need to articulate clearly exactly how they are being used. It is often stated that the function of LEMs is to normalize activity with respect to risk factors such as molecular size and lipophilicity. Unfortunately, the people making these statements rarely (if ever) state exactly what they mean by ‘normalize’, which makes it difficult for users to know whether or not the proposed LEMs are fit for purpose. In our Perspective, we argued that arbitrary assumptions (e.g. standard/reference concentration for LE; coefficient of lipophilicity in LipE) are made when defining LEMs. We also showed that making equally plausible assumptions (e.g. changing standard/reference concentration) can change the ranking of compounds. When we assert that LEMs are useful because people are using them, we are actually making a statement about the users rather than the science. Personally, I would regard a view of a system as invalid if the view changed when we changed the units of the quantities that define the system. Pauli might have noted that such a view was “not even wrong”.
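    To illustrate the standard-concentration point with (hypothetical) numbers: LE divides the standard binding free energy by heavy atom count, and that free energy depends on the standard concentration C°. Moving C° from 1 M to 1 mM shifts every affinity by three log units and can invert a ranking:

        import math

        RT = 0.593  # kcal/mol at 298 K

        def ligand_efficiency(kd_molar, heavy_atoms, c_std=1.0):
            """LE = -deltaG/HA with deltaG = RT*ln(Kd/C_std); C_std in mol/L."""
            return -RT * math.log(kd_molar / c_std) / heavy_atoms

        fragment = (1e-6, 20)  # hypothetical: 1 uM Kd, 20 heavy atoms
        lead = (1e-9, 35)      # hypothetical: 1 nM Kd, 35 heavy atoms

        for c_std in (1.0, 1e-3):  # 1 M versus 1 mM standard concentration
            print(c_std, ligand_efficiency(*fragment, c_std),
                  ligand_efficiency(*lead, c_std))
        # At 1 M the fragment has the higher LE; at 1 mM the lead does.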

    In the Perspective we also argued a case for modelling the response of activity to risk factor as an alternative to using LEMs. We also made the point that using LEMs distorts analysis. Although we suggest using residuals to quantify ‘bang for buck’ of compounds in drug discovery projects, you could easily use the intercept (pIC50@0) from the linear fit of pIC50 to HA to redefine LE as (pIC50-pIC50@0)/HA. It would also be feasible to do something similar for LipE (e.g. we defined a generalized lipophilic efficiency in the Perspective). One advantage of modelling the activity data and using residuals is that you’re not restricted to linear responses of activity to risk factor. I see efficiency as defined by the response of activity to risk factor(s) and my main criticism of LEMs is that they distort our perception of that response. The key question (which we posed in the Perspective) is whether or not you would regard pIC50 values lying on a straight line when plotted against risk factor as representing equal efficiency with respect to that risk factor.
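    Here is a minimal sketch of that residual approach (numpy, with invented project data), fitting pIC50 against heavy atom count and taking the residual as ‘bang for buck’, with the intercept-corrected LE alongside:

        import numpy as np

        # Hypothetical project data: heavy atom counts and measured pIC50 values
        ha = np.array([12, 15, 18, 22, 26, 30, 35])
        pic50 = np.array([4.1, 4.8, 5.0, 6.2, 6.5, 7.4, 8.0])

        slope, intercept = np.polyfit(ha, pic50, 1)   # linear fit of pIC50 to HA
        residuals = pic50 - (slope * ha + intercept)  # efficiency relative to the trend

        # Intercept-corrected ligand efficiency: (pIC50 - pIC50@0)/HA
        le_corrected = (pic50 - intercept) / ha

        print("residuals:", np.round(residuals, 2))
        print("corrected LE:", np.round(le_corrected, 3))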

    The Perspective will hopefully at least have got some members of the FBDD community asking themselves why they use LEMs and whether they consider it valid to mix results from different assays in LEM-based analyses (e.g. mapping chemico-biological space; size-dependency of LE). One point that I make when presenting ‘Data-analytic sins in property-based molecular design’ ( ) is that if we do bad data analysis (or bad science) then we open ourselves to suggestions that some of the difficulties in drug discovery may be of our own making. The anti-fat/anti-flat brigade and self-appointed arbiters of compound quality may also find it prudent to consider this point carefully.

  7. Forgot to include link for 'Data-analytic sins in property-based molecular design' in previous comment. Apologies for omission and here's the missing link: http://www.slideshare.net/pwkenny/data-analytic
