Boys will be boys, won't they? There is some gene buried on the Y chromosome that makes them think that if something works as is, the bigger you make it the more success you will have. I bring this up because, as we all know, many companies are slow to pick up FBDD; it is typically the "Hail Mary" pass, so success is slow in coming, as is uptake of the method as a primary tool. But once there is success, everybody jumps on the bandwagon. So, then the "crappy" little library you had needs to be increased in size (because obviously that's why it wasn't delivering superdrugs before); thank god for the medchemists!!!
So, what is the optimal size of a fragment library?
This is a table put together by the Good Folks® at Evotec for this fabulous book, with some added data by the Editor of said Tome. What does this show us? FBDD libraries range all over in size. They obviously know something. What do they know?
No matter your screening paradigm, fragment hits need to be confirmed by an orthogonal method. So, even though you can blow through 50-100k compounds by a biochemical screen, you are still limited by your ability to confirm the hits by NMR or SPR or whatever.
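To make the bottleneck concrete, here is a rough back-of-envelope sketch. Every number in it (library size, hit rate, confirmation throughput) is an invented illustration, not data from the post: even a modest primary hit rate on a 100k biochemical screen swamps a realistic NMR/SPR confirmation pipeline.

```python
# Back-of-envelope: why confirmation, not primary screening, is the bottleneck.
# ALL numbers below are hypothetical assumptions for illustration only.
library_size = 100_000        # compounds run through the biochemical screen
primary_hit_rate = 0.01       # assumed 1% primary hit rate
confirmations_per_day = 20    # assumed orthogonal (SPR/NMR) throughput

primary_hits = int(library_size * primary_hit_rate)
days_to_confirm = primary_hits / confirmations_per_day

print(f"{primary_hits} primary hits")            # 1000 primary hits
print(f"{days_to_confirm:.0f} days to confirm")  # 50 days to confirm
```

Under these made-up assumptions, a single week of high-throughput screening generates months of orthogonal follow-up, which is the point: the confirmation step, not the screen itself, caps the useful library size.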
If you don't think you need orthogonal methods (and really, you do), then your fragments need to be whistle-clean, because even low-level impurities screened at 1-10 mM can negatively impact the results of the screen. So, you think you have 10k-50k whistle-clean compounds?
Then comes this [Shoichet and Austin (J Med Chem. 2008 Apr 24;51(8):2502-11)]. Aggregators!!!
Egads, it's like the Vikings crashing across the North Sea. So, to filter them out, you still need an orthogonal method (or several).
So, creating (relatively) massive fragment libraries because you can screen them biochemically cheaply and quickly doesn't buy you anything, because you still need to confirm the hits (typically by the same methods you chose not to use in the first place).
In conclusion, it's not the size, but how you use it.