20 November 2008

More Fragment Events for 2009

In addition to FBLD 2009 from September 21-23 of next year, there are at least three other interesting conferences on our calendar.

First up are a few fragment events at the CHI Molecular Medicine Tri-Conference extravaganza next February in San Francisco, CA. A pre-conference course on fragment-inspired medicinal chemistry will be held on February 24, followed by several fragment-based talks on February 25 and 26 (full disclosure: Teddy and I will both be presenting at this one).

Fragments 2009, held on March 4 and 5 in Alderley Park, UK (near Manchester), is organized by the Biological and Medicinal Chemistry Sector of the Royal Society of Chemistry. There is still space for posters, with abstracts due January 5.

And on April 7 and 8, Cambridge Healthtech Institute’s Fragment-Based Techniques will be held in sunny San Diego.

Know of anything else? Let us know and we’ll get the word out!

9 comments:

  1. As a student, I am curious how widely used this method is in the industry (pharma or biotech). Thanks!

  2. Mekie -

    The short answer is that these techniques are used throughout industry. For one of the best and most recent analyses, see the C&EN article published last summer (and highlighted and linked on this blog on 26 July).

    The more nuanced answer is that the current economic crisis is taking a severe toll on the drug industry, with the “creative destruction” of capitalism tipping more towards destruction than creation. This was already starting to happen when the C&EN article was written, and has only accelerated. As companies cut functions to save money, they generally retain activities that are closer to market, and since fragment-based activities generally happen near the start of a program they are being hit hard.

  3. Thank you, Dan. The C&EN article is interesting. I am trying to understand the market for fragment based screening for a class project. So, your answer was very useful.

    I have 2 other questions. Is the cost of running fragment-based analysis (using X-ray/NMR/SPR) also one of the reasons for the reduction? Would making a cheaper analysis technique revive this activity?

    Thank you!

  4. Mekie,
    You ask a multivariate question; I will try to give a univariate answer. It isn't the per-sample cost of actually running X-ray or NMR that is the problem; rather, it is the long-term expense. NMR is an expensive group to have and maintain (a cryoprobe-equipped 600 MHz instrument costs a million dollars, and then depreciates over a timeframe of many years). X-ray is much less so, but still expensive to some extent. I could go on at length about how technologies are misused, but that is really another topic. The real problem with widespread acceptance and use of FBDD is much more human in its origins (in my humble opinion).

    I think the main reason that FBDD is not more widely used is hubris.
    That won't make me friends, but let me explain. Chemists are the main drivers of hit- and lead-generation efforts. Chemists get recognized for making compounds, and for making the right decisions about which compounds to make. Thus it is against their financial interest to be guided by other technologies; they would rather rely on "new technologies" to confirm, rather than guide, their decisions. Additionally, I can't tell you how many times I have heard, "Why do I need fragments? I have gotten molecules into the clinic before."

    This is of course a generalization, and there are many chemists who aren't limited by such things. However, as our industry finally comes to understand the need to innovate, I think FBDD will be more widely accepted. The companies that embraced it years ago will be the ones to innovate; everyone else will follow.

    This is an interesting conversation, I hope more people weigh in.

  5. Teddy brings up some interesting points with which I agree in part. However, as a chemist, I think that pragmatism is more often the issue than hubris. There is a long, dark history of weak screening hits that defy any attempts at optimization. The reasons for these artifacts are becoming understood, due in large part to the work of UCSF’s Brian Shoichet and colleagues, but the hard-earned lesson in industry among chemists for many years was that a weak hit is at best a long way from the clinic, and at worst a dangerous diversion. These attitudes take time, and data, to change.

    To return to your question, “would making a cheaper analysis technique revive [FBLD],” the answer is that it would help in the sense that bringing down the activation energy will encourage more people to try a new technique. However, in the current economic environment it is not just FBLD that is facing challenges, but all early stage drug discovery.

  6. Thank you, Dr. Teddy and Dan, for so patiently explaining these different points. I will look up Shoichet's work and will keep track of the happenings on this blog. Thanks again!

  7. This is a great thread, and I thought I'd toss in my 2 cents. Weak screening hits were usually the realm of high-concentration enzyme screening, which does lead to a lot of artifactual hits that can't be optimized. The arrival of additional biophysical methods for characterizing the binding of fragments or HTS hits to targets has helped us clean up early screening lists and give better molecules to chemists.

    NMR and X-ray screens are expensive to run, but SPR assays, which are becoming more widely recognized as a way of identifying fragment hits, are generally quite inexpensive on a 'per well' basis, and the instrumentation, while costly, is much less expensive than NMR or X-ray equipment.

    However, as pointed out, the cost of developing a compound is much higher than that of almost any screen. What chemists really need is reliable data, and biophysical techniques are starting to fill that need.

    Why would a company want to do this? Where I've worked and implemented fragment techniques, we adopted the attitude that fragments are a complementary approach to all the methods currently employed to generate early lead compounds. They are not intrinsically better or worse than HTS, patent busting, combing the literature, etc. Each technique has something to offer, and what you implement depends on how you weigh the various pros and cons of each.

    The advantages of fragments are that you tend to get highly efficient interactions. For such small compounds to bind detectably, H-bonds, van der Waals contacts, etc., need to be nearly optimally aligned, which shows you how to take best advantage of the binding opportunities in a given pocket. Fragment libraries can, in principle, sample a far larger area of medicinally relevant chemical space. The result is that you can get novel intellectual property, unique binding modes, and new ideas for how to approach your target. Also, since many fragments represent fundamental units of molecular recognition (usually a greasy ring and some hydrophilic bits), you can sometimes identify new pockets on your target.
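
    To put a rough number on "efficient": ligand efficiency is commonly defined as the binding free energy divided by the count of heavy (non-hydrogen) atoms, with roughly 0.3 kcal/mol per atom or better often used as a benchmark. Here is a minimal sketch in Python, assuming that standard definition; the affinities and atom counts are purely illustrative:

        import math

        R = 1.987e-3  # gas constant, kcal/(mol*K)
        T = 298.0     # temperature, K

        def ligand_efficiency(kd_molar, heavy_atoms):
            """Binding free energy per heavy atom: LE = -RT*ln(Kd)/HAC, in kcal/mol."""
            delta_g = R * T * math.log(kd_molar)  # negative whenever Kd < 1 M
            return -delta_g / heavy_atoms

        # A 1 mM fragment with 13 heavy atoms binds about as efficiently
        # as a 10 nM lead with 38 heavy atoms.
        print(round(ligand_efficiency(1e-3, 13), 2))  # ~0.31
        print(round(ligand_efficiency(1e-8, 38), 2))  # ~0.29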

    It's also been shown again and again that, despite being weak, fragments can be evolved for potency quite rapidly. Through hit expansion (substructure searching a hit against your corporate library to find more elaborated compounds) you can make big jumps quickly. I've been on two projects that used that approach to improve potency by multiple orders of magnitude in one step (150 uM to 70 nM, and 1000 uM to 5 uM) with no synthesis. Most fragment programs I have been on were into the hundreds of nanomolar within a few weeks to months of the initial hit. It just takes a project team willing to support it. We generally consider structural enablement (usually X-ray) a requirement; I have seen compounds advance without it, but it's harder.
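
    In practice, the hit expansion described above is essentially a substructure search. Below is a minimal sketch using RDKit; the toolkit choice, the indole query, and the tiny stand-in "library" are illustrative assumptions, not what any of the projects mentioned actually used:

        # Hit expansion: substructure-search a fragment hit against a compound
        # collection to find more elaborated analogs that can be tested directly.
        from rdkit import Chem

        fragment_hit = Chem.MolFromSmiles("c1ccc2[nH]ccc2c1")  # indole, a typical fragment

        # Stand-in for a corporate library of (ID, SMILES) pairs.
        library = [
            ("CMPD-001", "CC(=O)Nc1ccc2[nH]ccc2c1"),  # elaborated indole
            ("CMPD-002", "c1ccccc1"),                 # benzene: no match
            ("CMPD-003", "O=C(O)Cc1c[nH]c2ccccc12"),  # indole-3-acetic acid
        ]

        for cmpd_id, smiles in library:
            mol = Chem.MolFromSmiles(smiles)
            if mol is not None and mol.HasSubstructMatch(fragment_hit):
                print(cmpd_id, "contains the fragment hit")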

    The best programs are ones where the chemists have a vast array of information to draw from as they make decisions about how to proceed. I once found a fragment that had a very unusual and exciting binding mode to a target and suggested new ideas about how to make an inhibitor. Unfortunately, the fragment was a chemical nightmare, and there was nothing you could do to it to make it drug-like. The chemists on the project designed a new scaffold that maintained the novel interactions, attached to one end of it a piece of a different molecule from a competitor's patent and to the other end a piece from a previously developed in-house series, and rapidly arrived at nanomolar inhibitors with unique and patentable chemical matter. We would never have gotten there without the fragment.

    But every technique contributes. I saw another project where the HTS campaign delivered a 1 nM hit with excellent drug-like properties and completely unique chemical matter, and minimal development was needed to identify the clinical candidate. A fragment screen had been planned for that project as well, but when such an exciting molecule was found, all chemical resources were diverted to the HTS hit to progress it more rapidly. A fragment screen would most likely still be used in a backup project, though, since there weren't many other interesting hits from the HTS campaign to drive development of a new series.

    Every project ends up being really unique, and it's hard to say, "This is the way discovery should be done." It depends a lot on your company's infrastructure, experience, libraries, and resources. At the two places I've worked, the philosophy has been, in general, to hit targets with everything you can, including patents, literature, HTS, fragments, and creativity, to get a program started. In some cases that has meant a full HTS and fragment campaign, in others just HTS, in others just fragments. And in a few kinase programs I've seen lead series emerge from chemistry before any type of screening was available, because we had so many kinase-type molecules in the library, and so much data about them in the database, that the chemists were able to create something new by merging multiple series together.

    However, it is generally very exciting work, and I think you'll see fragments appearing more and more: adoption has increased as more examples of clinical candidates derived from fragments have come out, the techniques for finding them have improved, and more and more chemists have gotten experience doing it. Remember, the first fragment screens at Abbott didn't happen until the early '90s. Given that, as a rule of thumb, it takes a minimum of 10 years from discovery at the bench to the market, it takes a while for things to wind through the system, and it can be hard to publish details of such early-stage work.

    Speaking of papers, Hajduk from Abbott has presentations, and probably papers by now, describing the work at Abbott using HTS and fragments. What they show is that there are some targets where HTS gives you hits and fragments don't or can't (we're not good at membrane proteins yet), but there are also many targets where fragments delivered hits that HTS did not. After that, however, the attrition rate through the various hurdles of drug discovery is the same for HTS- and fragment-based hits. Thus, in a sense, neither is actually better at getting you to a final drug, but fragments seem to enable more targets overall.

    Hope that helps,
    Tony

  8. One point relevant to Mekie’s question about the market for fragment-based methods is that FBDD is a way that a small company (or academic group) can neutralize the advantages that come with the large screening collections that established pharmaceutical companies have built up over the years.

    There are a number of ways for folk to participate in this market. You might run an integrated project with the objective of generating something that you'll get ready for clinical trials. Alternatively, you might deliver a couple of lead series suitable for further optimization. Flogging leads can be quite tricky, especially if they're relatively unoptimized, so you might instead run a fee-for-service operation, for example providing crystal structures for hits from a fragment screen that you or somebody else has run using some technically demanding technique. Another option would be to sell or lease equipment for fragment work, or to supply compounds, either individually or, if your library design capability is up to the task, as complete screening libraries.

    I’d also like to comment on why FBDD doesn’t get used more. As Tony points out, the fragment-based approaches are less applicable to membrane proteins (there are rumors that some folk in Pharma may be interested in GPCRs and ion channels). This is partly a screening technology issue but also a protein structure issue because you normally need structural data to guide you when your 200 MW fragment hit binds at 2 mM.

    There are reasons why Pharma can appear slow to adopt new technologies. One problem is deciding whether the new approach is going to live up to the promises made by its supporters. There is a tendency to look for panaceas (I’m thinking human genome, combichem, high throughput screening, protein structure, docking-n-scoring, QSAR…) and novel approaches can occasionally get oversold. Over time, this overselling can lead to a more conservative mindset amongst those who have to make tough decisions about what to prioritize for funding. It also means that a technology might fail to find its niche because it has been presented as the solution to all problems.

  9. One of the main reasons that fragment-based approaches have found modest traction in big pharma is that in general terms they require resource--specifically synthetic and medicinal chemistry but also structural biology resource--to be committed earlier in the lead generation pipeline than HTS-based approaches. Committing lots of resource early on looks stupid to the R&D budget holder because this is where projects are most likely to suffer attrition and the resource is most likely to be wasted. Couple this with the fact that in many companies fragment screening has been deployed as a tool of last resort against already highly suspect targets and it's not hard to see why it's been a bit of an uphill battle to embed FBDD in the big pharma LG culture. We as practitioners cannot entirely evade blame for this, having often at least implied, if it was not stated outright, that FBDD might be a way of addressing 'intractable' targets.

    A more subtle assessment of the risk profile of FBDD might run along the lines that the early investment of significant structural, biophysical etc. effort in understanding intimately both the target and the lead series, and careful attention to retaining good phys props, ligand efficiency and so on, might pay off in reduced downstream attrition compared with an HTS-derived lead series that has not been designed, like a Ferrari, from first principles but, rather, crudely pimped from a second-hand Skoda. This, of course, is little more than speculation -- not to say an unfair characterisation of HTS which by many measures can be a very successful approach when done well. Sadly, we seldom get the chance to perform such an evaluation because of the limited database of comparative examples we have.

    One may be on firmer ground in asserting that one reason why some biotechs have appeared to be very successful in getting FBDD to work well is that they are looking at the problem from a different perspective. Whereas big pharma are driven to work on the targets with the highest degree of clinical validation and disease linkage, biotechs have had the luxury of being able to cherry-pick targets that conveniently yield to their chosen technology (at least for the purposes of dressing their shop window with exemplary wares). Don't get me wrong -- many of them are highly skilled and efficient at what they do, but a cursory look at their portfolios will usually reveal them to be populated with a roster of highly tractable targets covering pretty well-trodden ground.

    But it is not all gloomy news. There are some nice examples of FBDD success out there, and increasing cost-consciousness is focusing the minds of R&D budget holders on the very considerable consumables costs of running full collection HTS as a default. This, coupled with a reduced supply of novel targets that fall into well-precedented target classes, does seem to be stimulating a more imaginative and less monolithic approach to lead generation.

    NMR-Soul
