The first major fragment event of
2016, CHI’s Drug Discovery Chemistry, was held last week in San Diego. FBDD was the main focus of one track, and fragments played
starring roles in several of the others as well, including inflammation, protein-protein
interactions, and epigenetics. Also, for the first time the event included a one-day symposium on biophysical approaches, which itself featured plenty of fragments.
In agreement with our polls, surface plasmon resonance (SPR)
received at least a mention in most of the talks. John Quinn (Genentech) gave
an excellent overview of the technique, packed with lots of practical advice.
At Genentech fragments are screened at 0.5 mM in 1% DMSO at 10°C using
gradient injection, which permits calculation of affinities and ligand efficiencies directly from the primary screen. Confirmation of SPR hits in NMR
is an impressive 80%. A key source of potential error in calculating affinities
is rebinding, in which a fragment dissociates from one receptor and rebinds to
another. That problem can be reduced by increasing the flow rate and minimizing the amount of protein immobilized on the surface. Doing so also lowers the signal and demands greater sensitivity, but happily baseline noise has decreased 10-fold in the past decade.
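For readers who have not worked through the arithmetic, here is a minimal sketch in Python (with invented numbers; this is not Genentech's actual pipeline) of how affinity and ligand efficiency fall out of a steady-state SPR titration: fit the equilibrium responses to a 1:1 isotherm, then convert the fitted Kd to ligand efficiency.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 1.987e-3  # gas constant, kcal/(mol*K)

def steady_state(conc, kd, rmax):
    """1:1 Langmuir isotherm: equilibrium SPR response vs analyte concentration."""
    return rmax * conc / (kd + conc)

# Invented equilibrium responses (RU) for a fragment titration (concentrations in M)
conc = np.array([7.8e-6, 1.56e-5, 3.13e-5, 6.25e-5, 1.25e-4, 2.5e-4, 5.0e-4])
resp = np.array([2.1, 4.0, 7.2, 11.8, 17.0, 21.5, 24.3])

(kd, rmax), _ = curve_fit(steady_state, conc, resp, p0=(1e-4, 30.0))

def ligand_efficiency(kd, heavy_atoms, temp=283.15):
    """LE = -RT*ln(Kd) / heavy atom count; 283.15 K matches the 10 C screen above."""
    return -R * temp * np.log(kd) / heavy_atoms

print(f"Kd = {kd * 1e6:.0f} uM, LE = {ligand_efficiency(kd, heavy_atoms=13):.2f} kcal/mol per heavy atom")
```

The caveat John raised applies here too: if rebinding distorts the sensorgrams, the equilibrium responses, and hence the fitted Kd, will be off.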
Some talks focused on using SPR
for less conventional applications. Paul Belcher (GE) described using the
Biacore S200 to measure fragments binding to wild-type GPCRs. In some cases
this provided different hits than those detected against thermally stabilized
GPCRs. And Phillip Schwartz (Takeda) described using SPR to characterize
extremely potent covalent inhibitors for which standard enzymatic assays can
produce misleading results. These screens require exotic conditions to
regenerate the chip, so it helps that the SensiQ instrument has particularly durable
plumbing.
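Some context on why standard progress-curve analysis can mislead here: a two-step covalent inhibitor first binds reversibly and then forms the bond, so the observed inactivation rate saturates with concentration. A toy sketch of the standard kobs expression (parameter values invented, not Takeda's data) shows how two compounds with identical kinact/KI can still behave very differently:

```python
def k_obs(i_conc, k_inact, k_i):
    """Observed inactivation rate for E + I <-> E.I -> E-I; saturates at k_inact."""
    return k_inact * i_conc / (k_i + i_conc)

# Two hypothetical inhibitors with the same second-order rate constant
# kinact/KI (1e4 per M per s) but very different microscopic kinetics.
for name, k_inact, k_i in [("fast-on", 1e-2, 1e-6), ("slow-on", 1e-4, 1e-8)]:
    print(f"{name}: kinact/KI = {k_inact / k_i:.1e} M^-1 s^-1, "
          f"kobs at 1 uM = {k_obs(1e-6, k_inact, k_i):.1e} s^-1")
```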
In theory, SPR can be used to
measure the thermodynamics of
binding by running samples at different temperatures, but John Quinn pointed
out that enthalpic interactions dominate for most fragments, so the extra
effort may not be worthwhile. Several years ago many researchers felt that enthalpically driven binders might be more selective or generally superior. Today more people are realizing that thermodynamics is not quite so simple, and Ben Davis (Vernalis) may have driven the final nail into the coffin by showing that, for a set of 22 compounds, the enthalpy and entropy of binding could vary wildly simply by changing the buffer from HEPES to PBS! (The free energy of binding remained the same in either buffer.)
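For concreteness, here is what the temperature-dependence approach John mentioned would look like: measure Kd at several temperatures and fit ln(Kd) against 1/T (a van't Hoff plot); the slope gives the binding enthalpy, and the difference from deltaG gives the entropic term. The numbers below are invented purely for illustration.

```python
import numpy as np

R = 1.987e-3  # gas constant, kcal/(mol*K)

# Invented Kd values (M) measured at several temperatures (K)
temps = np.array([283.15, 288.15, 293.15, 298.15, 303.15])
kds = np.array([1.2e-4, 1.5e-4, 1.9e-4, 2.4e-4, 3.0e-4])

# van't Hoff: ln(Kd) = deltaH/(R*T) - deltaS/R, so the slope of
# ln(Kd) vs 1/T is deltaH/R for the binding reaction
slope, _ = np.polyfit(1.0 / temps, np.log(kds), 1)
dH = slope * R                    # binding enthalpy, kcal/mol
dG = R * 298.15 * np.log(kds[3])  # free energy of binding at 25 C, kcal/mol
print(f"deltaH ~ {dH:.1f}, deltaG ~ {dG:.1f}, -TdeltaS ~ {dG - dH:.1f} (kcal/mol)")
```

Ben's buffer result is exactly the kind of thing that makes the deltaH and -TdeltaS terms slippery while leaving deltaG intact.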
Thermal shift assays (TSA, also known as DSF) continued to be controversial: Ben found a lack of agreement between the magnitude of the shift and affinity, though there was a correlation with success in crystal trials. In
contrast, Mary Harner (BMS) reported good agreement between thermal shift and
affinity. She also found that it seemed to work better when the fragments bound
in deep pockets than when they bound closer to the surface. However, Rumin
Zhang (Merck), who has tested more than 200 proteins using TSA, mentioned that some
HCV protease inhibitors could be detected despite the shallow active site.
Rumin also pointed out that a low response could indicate poor quality protein –
if most of the protein is unfolded it might be fine for biochemical assays but
not for TSA. Negative thermal shifts are common and, according to Rumin,
sometimes lead to structures, though others have found this to be less common.
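For anyone new to the technique, the melting temperature behind a "thermal shift" is usually extracted by fitting the melt curve to a sigmoid; the shift is simply Tm with ligand minus Tm of the apo protein. A minimal sketch with synthetic data (not any speaker's protocol):

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, bottom, top, tm, slope):
    """Sigmoidal DSF melt curve; tm is the inflection (melting) temperature."""
    return bottom + (top - bottom) / (1.0 + np.exp((tm - t) / slope))

t = np.linspace(30, 70, 81)  # degrees C
rng = np.random.default_rng(0)

def fit_tm(true_tm):
    """Generate a noisy synthetic melt curve and recover its Tm by fitting."""
    signal = boltzmann(t, 0.0, 1.0, true_tm, 1.5) + rng.normal(0, 0.01, t.size)
    popt, _ = curve_fit(boltzmann, t, signal, p0=(0.0, 1.0, 50.0, 2.0))
    return popt[2]

delta_tm = fit_tm(51.5) - fit_tm(48.0)  # ligand-bound minus apo
print(f"deltaTm = {delta_tm:+.1f} C")   # positive shift = apparent stabilization
```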
What to do when assays don’t agree was the subject of
lively discussion. Mary Harner noted that out of 19 targets screened in the
past two years at BMS using NMR, SPR, and TSA, 45% of the BMS library hit in at least one assay. However, 68% of hits showed up in only a single assay. Retesting did improve agreement, but even many hits that failed to confirm in other assays ultimately led to leads. All techniques are subject to false
negatives and false positives, so lack of agreement shouldn’t necessarily be
cause for alarm. Indeed, Ben noted that multiple different soaking conditions
often need to be attempted to obtain crystal structures of bound fragments.
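To make Mary's overlap numbers concrete, the bookkeeping behind them is just set arithmetic over the three hit lists; here is a toy example (fragment IDs invented):

```python
# Invented hit lists from three orthogonal screens of the same library
nmr = {"F001", "F002", "F003", "F004", "F007"}
spr = {"F002", "F003", "F005", "F008"}
tsa = {"F003", "F006", "F008", "F009"}

all_hits = nmr | spr | tsa
singletons = {f for f in all_hits if sum(f in a for a in (nmr, spr, tsa)) == 1}

print(f"{len(all_hits)} hits total; "
      f"{100 * len(singletons) / len(all_hits):.0f}% appear in only one assay")
```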
Crystallography in general is benefiting from dramatic advances in
automation. Jose Marquez described the fully
automated system at the EMBL Grenoble Outstation, which is open to academic collaborators. And Radek Nowak
(Structural Genomics Consortium, Oxford) discussed the automated crystal
harvesting at the Diamond Light Source, which is capable of handling 200
crystals per hour. He also revealed a program called PanDDA (to be released soon) that speeds up the
analysis of crystallographic data.
Crystallography was used as a
primary screen against KEAP1, as discussed by Tom Davies (Astex). A subset of
330 of the most soluble fragments was tested in pools of four, which revealed
several hot spots on the protein. Interestingly, an in-house computational
screen had not identified all of these hot spots, though Adrian Whitty
(Boston University) noted that they could be detected with FTMap. The fragments
themselves bound exceptionally weakly, but intensive optimization led to a low
nanomolar inhibitor.
Another case in which extremely weak fragments turned out to be useful was described by Matthias Frech (EMD Serono). A full HTS failed to find any confirmed hits against cyclophilin D, but screening by SPR produced 168 fragments, of which six were characterized crystallographically. Although these all had millimolar affinities and unimpressive ligand efficiencies, they could be linked or merged with known ligands to produce multiple leads – a process that took roughly one year from the beginning of the screen. Matthias noted that sometimes fragment efforts are started too late to make a difference, and that it is essential not to be dogmatic.
Huifen Chen discussed Genentech's MAP4K4 program. Of 2361 fragments screened by SPR,
225 had affinities better than 2 mM. Crystallography was tough, so docking was
used instead, with 17 fragments pursued intensively for six months, ultimately leading
to two lead series (see here and here), though one required bold changes to the core. This program is a nice reminder of why having multiple fragment
hits can be useful, as the other 15 fragments didn’t pan out.
Finally, George Doherty (AbbVie) gave a good overview of the program behind the recently approved venetoclax, which involved hundreds of scientists over two decades. He also described the intensive medicinal chemistry that led to a second-generation compound, ABT-731, with improved solubility and oral bioavailability.
We missed Teddy at this meeting,
and there is plenty more to discuss, so please
add your comments. If you did not attend, several excellent events are still coming
up this year. And mark your calendar for 2017, when CHI returns to San Diego
April 24-26.
6 comments:
Hi Dan, it’s worth pointing out that Kd is just as thermodynamic a quantity as the changes in enthalpy associated with binding, and for an even more complete description of binding thermodynamics one can also measure volume changes. It is my understanding that the variation of deltaH with buffer is typically used to characterize protonation changes associated with binding, and it’d be a good idea for you or Ben to clarify whether or not binding of the compounds he talked about is associated with uptake of protons from, or release of protons to, the buffer.
Hi Pete, these are good points, though Kd is related to the free energy of binding, which didn't change much in this particular case. The main point I was trying to convey is that optimizing for enthalpy may not even be possible, regardless of whether or not it's a good idea in theory.
Pete hits the nail on the head here. Many ITC neophytes don't appreciate the power of the instruments: they don't just measure the ligand-binding interaction but all linked reactions. To say you have 'measured the enthalpy' often only means you know it for one buffering system. The 'intrinsic' enthalpy is a bit more difficult to determine. That doesn't mean ITC isn't good for ranking compounds, though, and SAR by thermodynamics can be enlightening in the absence of crystals.
For a good demonstration of why enthalpy may change but KD remains constant in different buffers see:
http://bmcbiophys.biomedcentral.com/articles/10.1186/2046-1682-5-12
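To spell out the correction being discussed here: if binding is coupled to proton transfer, the observed enthalpy is deltaH(obs) = deltaH(int) + nH * deltaH(ion), so measuring in two buffers with different ionization enthalpies lets you solve for both the intrinsic enthalpy and the number of protons exchanged. A sketch with invented observed enthalpies (the buffer ionization enthalpies are approximate literature values, and the whole example is illustrative):

```python
import numpy as np

# Approximate buffer ionization enthalpies (kcal/mol); treat as illustrative
dh_ion = {"HEPES": 4.9, "phosphate": 1.2}

# Invented observed ITC binding enthalpies (kcal/mol) for one compound
dh_obs = {"HEPES": -2.0, "phosphate": -5.1}

# dH_obs = dH_int + n_H * dH_ion in each buffer: a 2x2 linear system
a = np.array([[1.0, dh_ion["HEPES"]], [1.0, dh_ion["phosphate"]]])
b = np.array([dh_obs["HEPES"], dh_obs["phosphate"]])
dh_int, n_h = np.linalg.solve(a, b)

print(f"intrinsic deltaH ~ {dh_int:.1f} kcal/mol; "
      f"{n_h:+.2f} protons taken up on binding")
```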
Dan, excellent write-up as usual!
A nuance that I didn't dig into during my lecture, but is apparent from the commitment expression (Cc), is that seemingly impotent compounds may still yield misleading results by enzymatic progress curve analysis. This occurs when you have full forward partitioning (full commitment) from the reversible target-inhibitor complex, but a very slow association rate constant leading to a low biochemical potency value.
For the conference I presented three two-step covalent enzyme inhibitors. I actually measured four, and the fourth surprisingly fit into the above-mentioned category. It's all about the partitioning!
Thanks for the write-up, Dan - nice summary of a very good conference. Also thanks for the heads-up about this discussion.
@Peter and P.San - to clarify, the point of the slide in question was not to address the enthalpy vs entropy discussion but to point out that you have to consider the whole system (i.e. the protein, ligand _and_ the buffer) when looking at interactions, be it by NMR, ITC or any other method. Without a great deal of careful analysis, deltaH and TdeltaS are too sensitive to conditions to use reliably, whereas deltaG is pretty robust and extremely useful for ranking compounds.
Whether, with careful analysis, deltaH and TdeltaS are useful guides was not the point of the slide in question, and would probably require a whole different talk.
Hi Ben, thanks for your response. When using thermodynamics, it is important to focus on the thermodynamic functions that are relevant to the phenomena of interest. Even if deltaG measurement were less robust than deltaH measurement, relevance would still dictate a focus on deltaG if interested in binding of ligands to proteins at constant pressure (just as you would use Helmholtz free energy for binding at constant volume). It’s worth noting that when simulations are used for free energy calculations, the relevant changes in G are computed directly from the simulation (rather than using simulation to first calculate changes in H and S). Your point about needing to consider the whole system is well taken, and it’s also worth noting that the contribution of an intermolecular contact to affinity (or any changes in other thermodynamic functions associated with binding) is not, in general, an experimental observable.
One problem that biophysics practitioners face is that there is a certain amount of flaky interpretation of thermodynamic data that has the potential to taint the field. I believe that serious biophysics practitioners need to explicitly distance themselves from what might be termed ‘voodoo thermodynamics’. This post should alert you to the problem as may the following quote from an article in Pravda Med Chem.
“It has been demonstrated that enthalpic compounds have typically better profile of physicochemical parameters than that of the high-entropy compounds”