30 October 2023

NMR for SAR: All about the ligand

In last week’s post we described a free online tool for predicting bad behavior of compounds in various assays. But as we noted, you often get what you pay for, and computational methods can’t (yet) take the place of experimentation. In a new (open-access) J. Med. Chem. paper, Steven LaPlante and collaborators at NMX and INRS describe a roadmap for discovering, validating, and advancing weak fragments. They call it NMR for SAR.
 
Unlike SAR by NMR, the grand-daddy of fragment-finding techniques, which relies on protein-detected NMR, NMR for SAR focuses heavily on the ligand. The researchers illustrate the process by finding ligands for the protein HRAS, for which drug discovery has lagged in comparison to its sibling KRAS.
 
The researchers started by screening the G12V mutant form of HRAS in its inactive (GDP-bound) state. They screened their internal library of 461 fluorinated fragments in pools of 11-15 compounds (each at ~0.24 mM) using 19F NMR. An initial screen at 15 µM protein produced a very low hit rate, so the protein concentration was increased to 50 µM. After deconvolution, two hits confirmed, one of which was NMX-10001.
 
The affinity of the compound was found to be so low that 1H NMR experiments could not detect binding. Thus, the researchers kept to fluorine NMR to screen for commercial analogs. They used 19F-detected versions of differential line width (DLW) and CPMG experiments to rank affinities, and the latter technique was also used to test for compound aggregation using methodology we highlighted in 2019. Indeed, the researchers have developed multiple tools for detecting aggregators, such as those we wrote about in 2022.
 
Ligand concentrations were measured by NMR, and these sometimes differed from the nominal concentrations. As the researchers note, such differences, which are normally not measured experimentally, can lead to errors in ranking the affinities of compounds. The researchers also examined the 1D spectra of the proteins to assess whether compounds caused dramatic changes via pathological mechanisms, such as precipitation.
 
The researchers turned to protein-detected 2D NMR for orthogonal validation and to determine the binding sites of their ligands. These experiments revealed that the compounds bind in a shallow pocket that has previously been targeted by several groups (see here for example). Optimization of their initial hit ultimately led to NMX-10095, which binds to the protein with low double digit micromolar affinity. This compound also blocked SOS-mediated nucleotide exchange and was cytotoxic, albeit at high concentrations.

I do wish the researchers had measured the affinity of their molecules towards other RAS isoforms, as this binding pocket is conserved and inhibiting all RAS activity in cells is generally toxic. Moreover, the best compound is reminiscent of a series reported by Steve Fesik back in 2012.
 
But this specific example is less important than the clear description of an NMR-heavy assay cascade that weeds out artifacts in the quest for true binders. The strategy is reminiscent of the “validation cross” we mentioned back in 2016. Perhaps someday computational methods will advance to the point where “wet” experiments become an afterthought. But in the meantime, this paper provides a nice set of tools to find and rigorously validate even weak binders.

23 October 2023

A Liability Predictor for avoiding artifacts?

False positives and artifacts are a constant source of irritation – and worse – in compound screening. We’ve written frequently about small molecule aggregation as well as generically reactive molecules that repeatedly come up as screening hits. It is possible to weed these out experimentally, but this can entail considerable effort, and for particularly difficult targets, false positives may dominate. Indeed, there may be no true hits at all, as we noted in this account of a five-year and ultimately fruitless hunt for prion protein binders.
 
A computational screen to rapidly assess small molecule hits as possible artifacts would be nice, and in fact several have been developed. Among the most popular are computational filters for pan-assay interference compounds, or PAINS. However, as Pete Kenny and others have pointed out, these were developed using data from a limited number of screens in one particular assay format. Now Alexander Tropsha and collaborators at the University of North Carolina at Chapel Hill and the National Center for Advancing Translational Sciences (NCATS) at the NIH have provided a broader resource in a new J. Med. Chem. paper.
 
The researchers experimentally screened around 5000 compounds, taken from the NCATS Pharmacologically Active Chemical Toolbox, in four different assays: a fluorescence-based thiol reactivity assay, an assay for redox activity, a firefly luciferase (FLuc) assay, and a nanoluciferase (NLuc) assay. The latter two assays are commonly used in cell-based screens to measure gene transcription. The thiol reactivity assay yielded around 1000 interfering compounds, while the other three each yielded between 97 and 142. Interestingly, there was little overlap among the problematic compounds.
 
These data were used to develop quantitative structure-interference relationship (QSIR) models. The NCATS library of nearly 64,000 compounds was virtually screened, and around 200 compounds were tested experimentally for interference in the four assays, with around half predicted to interfere and the other half predicted not to interfere. The researchers had also previously built a computational model to predict aggregation, and this – along with the four models discussed here – has been combined into a free web-based “Liability Predictor.”
 
So how well does it work? The researchers calculated the sensitivity, specificity, and balanced accuracy for each of the models and state that “they can detect around 55%-80% of interfering compounds.”
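
For readers who want a refresher on what these numbers mean, below is a minimal sketch (mine, not the paper’s) showing how sensitivity, specificity, and balanced accuracy are computed from a hypothetical set of true and predicted liability labels; the quoted 55%-80% detection rate corresponds to sensitivity.

```python
# Minimal sketch of the metrics mentioned above, on made-up labels.
# 1 = interfering compound, 0 = clean compound.

def classification_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)          # flagged interferers
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)  # passed clean compounds
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)      # false alarms
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)      # missed interferers
    sensitivity = tp / (tp + fn)              # fraction of interferers detected
    specificity = tn / (tn + fp)              # fraction of clean compounds passed
    balanced_accuracy = (sensitivity + specificity) / 2
    return sensitivity, specificity, balanced_accuracy

# Hypothetical example: 4 interfering and 6 clean compounds
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
print(classification_metrics(y_true, y_pred))  # (0.75, 0.833..., 0.79...)
```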
 
This sounded encouraging, so naturally I took it for a spin. Unfortunately, my mileage varied. Or, to pile on the metaphors, lots of wolves successfully passed themselves off as sheep. Iniparib was correctly recognized as a possible thiol-interference compound. On the other hand, the known redox cycler toxoflavin was predicted not to be a redox cycler – with 97.12% confidence. Similarly, curcumin, which can form adducts with thiols as well as aggregate and redox cycle, was pronounced innocent. Quercetin was recognized as possibly thiol-reactive, but its known propensity to aggregate was not flagged. Weirdly, Walrycin B, which the researchers note interferes with all the assays, got a clean bill of health. Perhaps the online tool is still being optimized.
 
At this point, perhaps the Liability Predictor is best treated as a cautionary tool: molecules that come up with a warning should be singled out for particular interrogation, but passing does not mean the molecule is innocent. Laudably, the researchers have made all the underlying data and models publicly available for others to build on, and I hope this happens. But for now, it seems that no computational tool can substitute for experimental (in)validation of hits.

16 October 2023

Spacial Scores: new metrics for measuring molecular complexity

Molecular complexity is one of the theoretical underpinnings of fragment-based drug discovery. Mike Hann and colleagues proposed two decades ago that very simple molecules may not have enough features to bind tightly to any protein, whereas highly functionalized molecules may carry extraneous spinach that gets in the way of binding. Fragments, being small and thus less complex, are in a sweet spot: just complex enough.
 
But what does it mean for one molecule to be more complex than another? Most chemists would agree that pyridine is more complex than methane, but is it more complex than benzene? To decide, you need a numerical metric, and there are plenty to choose from. The problem, as we discussed in 2017, is that they don’t correlate with one another, so it is not clear which one(s) to choose. In a new (open access) J. Med. Chem. paper, Adrian Krzyzanowski, Herbert Waldmann and colleagues at the Max Planck Institute Dortmund have provided another. (Derek Lowe also recently covered this paper.)
 
The researchers propose the Spacial Score, or SPS. This is calculated from four molecular parameters for each atom in a given molecule. The term h depends on atom hybridization: 1 for sp-, 2 for sp2-, and 3 for sp3-hybridized atoms, and 4 for all others. Stereogenic centers are assigned an s value of 2, while all other atoms are assigned a value of 1. Atoms that are part of non-aromatic rings are assigned an r value of 2; those that are part of an aromatic ring or a linear chain are set to 1. Finally, the n score is set to the number of heavy-atom neighbors.
 
For each atom in a molecule, h is multiplied by s, r, and n². The SPS is calculated by summing the individual scores for all the atoms in a molecule. Because there is no upper limit, and because it is nice to be able to compare molecules of the same size, the researchers also define the nSPS, or normalized SPS, which is simply the SPS divided by the number of non-hydrogen atoms in the molecule. Although SPS can be calculated manually, the process is tedious, and the researchers have kindly provided code to automate it. Having defined SPS, the researchers compare it to other molecular complexity metrics, including the simple fraction of sp3 carbons in a molecule, Fsp3, which we wrote about in 2009.
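
To make the arithmetic concrete, here is a minimal Python (RDKit) sketch of the scoring scheme as described above. This is my own illustrative approximation, not the authors’ published code, and it may treat edge cases (explicit hydrogens, stereogenic double bonds, and so on) differently.

```python
# Illustrative sketch of SPS/nSPS as described above (not the authors' code).
from rdkit import Chem

def spacial_score(smiles, normalize=False):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    # Stereogenic centers (including unassigned ones) get s = 2
    stereocenters = {idx for idx, _ in
                     Chem.FindMolChiralCenters(mol, includeUnassigned=True)}
    total = 0
    for atom in mol.GetAtoms():
        h = {Chem.HybridizationType.SP: 1,
             Chem.HybridizationType.SP2: 2,
             Chem.HybridizationType.SP3: 3}.get(atom.GetHybridization(), 4)
        s = 2 if atom.GetIdx() in stereocenters else 1
        r = 2 if (atom.IsInRing() and not atom.GetIsAromatic()) else 1
        n = atom.GetDegree()  # heavy-atom neighbors (implicit hydrogens assumed)
        total += h * s * r * n ** 2
    return total / mol.GetNumHeavyAtoms() if normalize else total

# Element identity plays no role, so benzene and pyridine score identically:
# each aromatic ring atom contributes 2 * 1 * 1 * 2**2 = 8, giving nSPS = 8.
print(spacial_score("c1ccccc1", normalize=True))  # benzene: 8.0
print(spacial_score("c1ccncc1", normalize=True))  # pyridine: 8.0
```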
 
The researchers next calculated nSPS for four sets of molecules: drugs, a screening library from Enamine, natural products, and so-called “dark chemical matter,” library compounds that have not hit in numerous screens. The results are equivocal. For example, the nSPS for dark chemical matter is very similar to that for drugs. On the other hand, natural products tend to have higher nSPS scores than drugs, as expected. Interestingly, the average nSPS score for compounds in the GDB-17 database, which consists of theoretical molecules having up to 17 heavy atoms, is also quite high.
 
The researchers assessed whether nSPS correlated with biological properties, and found that compounds with lower nSPS tended to have lower potencies against fewer proteins, as predicted by theory. That said, this analysis was based on binning compounds into a small number of categories, and as Pete Kenny has repeatedly warned, this can lead to spurious trends.
 
The same issue of J. Med. Chem. carries an analysis of the paper by Tudor Oprea and Cristian Bologa, both at the University of New Mexico. This contextualizes the work and confirms that drugs do not seem to be getting more complex over time, as measured by nSPS. This may seem odd, though Oprea and Bologa note that by “normalizing” for size, nSPS misses the increasing molecular weight of drugs.
 
This observation points to other questions, such as the fact that SPS explicitly ignores element identity. Coming back to benzene and pyridine, both have identical SPS and nSPS values, which does not seem chemically intuitive. One could quibble more: why square the value of n in the calculation of SPS? Why allow s to take only the values 1 and 2, as opposed to, say, 1 and 5?
 
In the end I did enjoy reading this paper, and I do think having some metric of molecular complexity might be valuable. I’m just not sure where SPS will fit in with all the existing and conflicting metrics, and how such metrics can lead to practical applications.

09 October 2023

Fragments finger the BPTF PHD Finger

Plant homeodomain (PHD) fingers, despite their name, are found in nearly 300 human proteins. They are small (50-80 amino acid) domains that typically recognize post-translational modifications such as trimethylated lysine residues in histones. The PHD finger in BPTF is implicated in certain types of acute myeloid leukemia. However, because of the large number of PHD fingers as well as their small binding sites, few attempts have been made to develop corresponding chemical probes. (Indeed, the only mention of them on Practical Fragments was in 2014.) In a just-published ACS Med. Chem. Lett. paper, William Pomerantz and collaborators at University of Minnesota and St. Jude Children’s Research Hospital report the first steps.
 
The researchers started by screening a library of 1056 fragments (from Life Chemicals) against the BPTF PHD finger using ligand-observed (1H CPMG) NMR. Fragments were at 100 µM in pools of up to five. This gave a preliminary hit rate of 5.7%, but only ten compounds (<1%) reproduced when compounds were repurchased and retested individually.
 
These ten fragments were next tested by SPR (at 400 µM), which confirmed six of them. Also, all ten CPMG hits were tested in an AlphaScreen assay in which they competed with a known peptide binder. This confirmed nine, including the six that confirmed by SPR.
 
Interestingly, the most potent fragment in the AlphaScreen assay was the starting point for the KRAS inhibitor we highlighted last year. However, this fragment did not show binding to the BPTF PHD finger by SPR, and the researchers had previously identified the 2-aminothiophene substructure as a hit against an unrelated protein. Whether this fragment is privileged or pathological may be context dependent.
 
This and the top three fragments that confirmed in all assays were used as starting points for SAR by catalog, and a handful of analogs were purchased. The researchers also resynthesized two of the compounds. Oddly, resynthesized F2 turned out to be three-fold more active in the AlphaScreen assay than the commercial material. One analog, compound F2.7, showed mid-micromolar activity.

Docking and two-dimensional protein-observed (1H,15N HSQC) NMR experiments suggest that most of the fragments bind in the “aromatic cage” which normally recognizes methylated lysine residues, but F2 may bind in an adjacent region. Both subpockets were also identified as being ligandable using the program FTMap.

This paper is a nice example of using orthogonal methods to find and carefully validate fragments against an underexplored class of targets. The researchers conclude by stating that “these hits are suitable for further SAR optimization and development into future methyl lysine reader chemical probes.” I look forward to seeing more publications.

02 October 2023

Discovery on Target 2023

Last week CHI’s Discovery on Target was held in Boston. This was the Twentieth Anniversary edition, though oddly last year also claimed to be the twentieth. Regardless, attendance surpassed pre-pandemic levels, with some 1200 attendees, 90% of them in person. Eight or nine concurrent tracks competed with one another over the course of three days, while a couple of pre-conference symposia and a handful of short courses were held before the main event. Outside obligations kept me from seeing many talks, including plenary keynotes by Jay Bradner (Novartis), Anne Carpenter, and Shantanu Singh (both at the Broad Institute), but most of these were recorded and will be made available for a year, and I look forward to watching them. Here I’ll just touch on a few of the fragment-relevant talks I was able to attend.
 
“Protein Degraders and Molecular Glues” was a popular track during all three days of the main conference, and in a featured presentation Steve Fesik (Vanderbilt) described how he is using NMR-based FBLD to identify tissue-specific E3 ligases and β-catenin degraders. In the case of β-catenin, a difficult oncology target, a fragment screen identified a 500 µM hit that was optimized to 10-20 nM. This has no functional activity on its own, but combining it with a ligand for an E3 ligase to generate a bivalent PROTAC causes degradation of the protein. Steve is currently optimizing the pharmaceutical properties of these molecules.
 
One exciting application for PROTACs is tissue-specific targeted protein degradation, which could avoid systemic toxicity for proteins such as Bcl-xL. Steve said that for the past five years he has been pursuing ligands against E3 ligases preferentially expressed in certain tissues, and he presented brief vignettes for three of them. These came from an initial list of 20 E3 targets, but many of them turned out to be too difficult to express.
 
Steve typically screens a library of nearly 14,000 fragments, large according to our recent poll, but this has proven fruitful, as only about 10% of the proteins he has screened have turned out to be “teflon.” He also noted that the odd little fragment hit that proved so impactful to the KRAS program we highlighted last year might well have been excluded from a smaller library.
 
We wrote last week about ligands for the E3 ligase DCAF1, and Rima Al-Awar (Ontario Institute for Cancer Research) described another series. She also described ligands against the oncology target WDR5, a target Steve Fesik has pursued as well.
 
Continuing the theme of targeted protein degradation, Jing Liu described Cullgen’s discovery of fragment-sized ligands for a broadly expressed E3 ligase, which could offer an alternative to CRBN-targeting ligands when resistance (inevitably) arises. Although he did not specify the E3 ligase, Cullgen has filed a patent application for ligands targeting DCAF1.
 
Rounding out targeted protein degradation, Kevin Webster, my colleague at Frontier Medicines, described the discovery of covalent ligands for the E3 ligase DCAF2 (or DTL) using chemoproteomics and a variety of other techniques including cryo-electron microscopy. Consistent with Steve’s comments, considerable effort went into successfully obtaining a soluble, well-behaved protein.
 
The late Nobel laureate Sydney Brenner said that “progress in science depends on new techniques, new discoveries, and new ideas, probably in that order.” Harvard’s Steve Gygi, one of Frontier’s Scientific Advisory Board members, described multiple new techniques in a featured presentation focused on cysteine-based profiling. These included multiplexed methods to more rapidly find covalent ligands for targets across the proteome. A just-released mass spectrometry instrument made by Thermo Fisher called the Astral further accelerates the process with order-of-magnitude improvements in both speed and sensitivity compared to existing machines.
 
The cell-based covalent screening described by Steve Gygi is very powerful, but so is investigating a single protein, as demonstrated by the discovery of sotorasib. AstraZeneca did early work on covalent screening (which Teddy noted in 2015), and they have continued to build their platform, as described by Simon Lucas. The company has around 12,000 covalent fragments, some beyond the rule of three, with molecular weights between 200 and 400 Da and logP between 0 and 4. More than 90% are acrylamides, a clinically validated warhead, and the researchers are careful to avoid particularly reactive molecules that would be non-specific.
 
In contrast to the electrophilic fragments that comprise most covalent libraries, Megan Matthews (University of Pennsylvania) is exploring nucleophilic fragments for “reverse polarity activity-based protein profiling,” as we highlighted last year. This has led to the discovery of unusual post-translational modifications. For example, the sequence of the protein SCRN3 suggests that it should be a cysteine hydrolase, but the purified protein has no cysteine hydrolase activity, and in cells the N-terminal cysteine is processed to form a glyoxylyl moiety.
 
Finally, Alex Shaginian provided an overview of DNA-encoded library (DEL) screening at HitGen. The company currently has 1.2 trillion compounds spread across more than 1500 libraries, and an obvious question is whether this is overkill. Alex noted that one protein has been screened three times over the course of several years. In the original screen, a modest (30 µM) hit was found from 4.2 billion compounds screened. A later screen of 130 billion compounds produced nothing new, but a more recent screen of 1 trillion compounds led to four mid-nanomolar series. As Steve Fesik noted, screening larger libraries, whether experimentally or computationally, really can be helpful, especially for the hardest targets.
 
Despite only attending half the conference this post is getting long, but for those of you who were there, which talks would you recommend watching?