Fixing Big Pharma Research: Expected cost savings have to be recoverable

All too often the cost savings from efficiency improvements of various kinds are dramatically overestimated, especially when large numbers of small improvements are aggregated.

Companies of all kinds (not just Pharma) often justify investments in new equipment or software based on predicted time savings that result from the purchase. “If we buy a new AutoStamper-2000 for $10,000, we’ll save 1000 hours a year in applying postage stamps. That’s $50,000 in salary costs, about what we pay one employee, so we can reduce our headcount by one.” Sounds like a no-brainer.

But… what if those predicted savings are based on an estimate of 1000 employees saving 1.2 minutes per week of stamp-licking? Are those cost savings really recoverable? Will those weekly minutes actually be re-deployed on a more useful job-related activity? What about the potential additional cost of those 1000 employees walking 3 minutes every week to use the AutoStamper-2000 on the 3rd floor? And was the full cost of a new AutoStamper maintenance guy included? It sounds like a silly example, but this sort of thing happens all the time, whether it’s implementing a new piece of automation, hiring a new contract employee (who will do “low value” work so your high-paid employees don’t have to), or setting up a new outsourcing relationship. In the real world, I’ve seen proposed numbers like 15 or 30 minutes a week of time savings, summed over a few hundred employees. It’s almost certain that less than 100% of that time is actually meaningfully redeployed on work. Saving an hour or several hours a week per person — now that’s real time savings. But not 15 minutes per week.
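To make the arithmetic concrete, here’s a back-of-the-envelope sketch of the AutoStamper example. The $50/hour fully loaded rate and the 25% redeployment fraction are illustrative assumptions on my part, not figures from any real analysis:

```python
# Hedged back-of-the-envelope model: nominal time savings vs. what is
# actually recoverable once redeployment and hidden costs are counted.
# All figures are illustrative assumptions.
EMPLOYEES = 1000
MIN_SAVED_PER_WEEK = 1.2   # minutes of stamp-licking avoided per employee
MIN_WALK_PER_WEEK = 3.0    # minutes walking to the 3rd-floor machine
HOURLY_RATE = 50.0         # assumed fully loaded cost, $/hour
WEEKS = 52
REDEPLOYED = 0.25          # assumed fraction of freed minutes spent on real work

nominal = EMPLOYEES * MIN_SAVED_PER_WEEK / 60 * WEEKS * HOURLY_RATE
walking = EMPLOYEES * MIN_WALK_PER_WEEK / 60 * WEEKS * HOURLY_RATE
recoverable = nominal * REDEPLOYED - walking

print(f"nominal savings:   ${nominal:,.0f}")    # the number in the pitch
print(f"recoverable (net): ${recoverable:,.0f}")  # what you might actually keep
```

Under these assumptions the "$50,000 a year" pitch nets out negative: the three minutes of walking costs more than the redeployed fraction of the 1.2 minutes saved.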

Here’s another related example. Companies often tally up the fully loaded cost per employee, including the cost of their benefits, office space, computer, and other allocated overhead. In industrial research, the laboratory reagents and supplies that an actively working scientist might use are also sometimes included in the total. Things can go wrong, however, when you use that entire cost number to calculate the savings from using more automation or cheaper contract employees. The problem is that a lot of those costs might be fixed, and won’t actually go away unless you close a building or make other major changes. The new contract employee still needs a computer and an office. The research building doesn’t get any smaller, and the same reagents will need to be purchased if the same research projects are active. As a result, the true cost of the entire operation is not reduced as much as expected — and in the worst case, costs might actually rise.

Bottom line: Make sure calculated cost savings are actually recoverable, especially if you need to spend real money to make the proposed improvement!

Posted in Fixing Big Pharma Research, Management

How much science can you fit in 6 seconds?

GE’s 6 Second Science Fair has produced a really neat compilation of short videos that illustrate a wide range of scientific principles.

Visit Joe Hanson’s blog for a (nearly) complete list of explanations.

There are potato batteries, antacid-propelled rockets, a Tesla coil that generates enough electricity through empty air to illuminate a lightbulb, “instant snow”, several things on fire, and much, much more!

Posted in Fun Science

Late summer day on Jordan Pond, Acadia National Park

[Photo: Jordan Pond]

Like every photo I’ve posted so far, this was shot with an iPhone 5 and has not been image processed.  Other cameras in my bag include a Panasonic Lumix DMC-TZ5 and a Pentax K-7 digital SLR.  But, as they say, the best camera is the one you have in your hand!

[Photo: popover]

And let’s not forget the tea & popovers on the lawn at Jordan Pond House. After an afternoon of bicycling on carriage path trails, of course.

Posted in Photography

Naturally fluorescent minerals at the Yale Peabody Museum

The same rocks, viewed under white light (top) and ultraviolet light (bottom).
Some pretty amazing colors!

[Photo: minerals under white light]

[Photo: the same minerals under ultraviolet light]

Posted in Fun Science

Fixing Big Pharma Research: The costs of delays need to be properly valued

For the pharmaceutical industry, the foregone revenue created by delays in bringing a drug to market can be surprisingly high — from $50,000 to $2,000,000 per day, depending on the phase of the pipeline. Such costs are often underappreciated.

Pharma companies the world over are suffering from low R&D productivity. As a consequence they are streamlining their organizations, automating, cutting headcount, outsourcing, and trying to become as efficient as possible. You can debate the relative scientific merits of particular cost-cutting strategies, but one thing that often isn’t included in the calculation — especially for early stage R&D projects — is how much a leaner operation delays progress on active programs.

One key feature of the pharma industry is that its products essentially expire — because patents have a limited lifetime. Before expiration, a pharma company can charge a high price due to patent exclusivity. Those high prices allow the company to make a profit (typically 15-20% return on equity after all expenses) and pay for the development costs of not only the successful drug, but all the failed ones. Afterwards, the price drops very quickly (especially for small molecule therapeutics), heading much closer to the marginal cost of production, which can be quite low. To a first approximation, sales in the final days of patent exclusivity act like a step function, so if a successful drug with $3.65 billion in annual revenue ($10 million per day) were to lose patent protection one day earlier than necessary, $10 million would be lost.

Now $10 million 20 years in the future has to be discounted to present day dollars. At a 7% cost of capital (a typical value for today’s industry), that $10 million is actually worth roughly $2.6 million today. But as pharmaceutical industry experience shows, it takes upward of 50 early stage projects to produce the one successful project that actually makes it onto the market. So, for every active early stage project, one day of delay costs over $50,000.

Here’s a table of the costs of a day of delay per project for the different pipeline stages (I’ve used some of Bruce Booth’s crowd-sourced estimates of the amount of time spent in the different phases of drug discovery, and the overall success rate of each phase). You can build your own model using the Excel formula indicated and change any of the assumptions.


[Table: present-day cost of one day of delay per project, by pipeline stage]

As you can see, time IS money. Even at very early stages, a day of delay can cost more than $50,000 per project in present-day dollars, a figure that quickly rises into the $250,000-500,000 range in Phase I and II. By the time you hit Phase III, it’s close to $2 million. In Phase III, this sort of thing is on everyone’s mind, and that’s one reason why so much effort is spent enrolling patients as quickly as possible (besides the race to, you know, save lives). In the early stages, however, these costs are often ignored.
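The present-value arithmetic above is easy to reproduce in code. Here’s a minimal sketch; the years-to-expiry and success-rate figures per stage are illustrative placeholders of my own (not the crowd-sourced estimates used in the table), so treat the output as a sanity check rather than the table itself:

```python
def delay_cost_per_day(annual_revenue, discount_rate, years_to_expiry, p_success):
    """Present-day cost of one day of delay for a single active project."""
    daily_revenue = annual_revenue / 365                    # $3.65B/yr -> $10M/day
    pv = daily_revenue / (1 + discount_rate) ** years_to_expiry  # discount to today
    return pv * p_success                                   # weight by P(success)

# Illustrative stage assumptions: (years until patent expiry, P(success))
stages = {
    "Early discovery": (20, 1 / 50),
    "Phase I":         (14, 0.10),
    "Phase III":       (12, 0.40),
}
for stage, (years, p) in stages.items():
    cost = delay_cost_per_day(3.65e9, 0.07, years, p)
    print(f"{stage}: ${cost:,.0f} per day of delay")
```

With these placeholder inputs the early-discovery figure comes out just over $50,000 per project per day, consistent with the calculation walked through above.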

The takeaway lesson: If you’re going to make a change that slows down all your company’s projects significantly, that change had better save you a LOT of cash outlay today. Otherwise your operation will actually be net MORE expensive after the change.

Posted in Management, Research

Fixing Big Pharma Research: Introduction

Lots of articles and blog posts get written about the research productivity crisis in the pharmaceutical industry.  A lot of Wall Street types argue that Pharma companies simply must spend less on R&D (Recently, yet another analyst called for a pharma company to just cut their R&D spending). That’s the equivalent of giving investment advice to a client by saying “make sure you buy low and sell high”.

So what should companies actually DO to either lower their R&D expenses or increase their productivity? Unfortunately, the problem is far more complex and multivariate than even the most comprehensive proposed solutions acknowledge (see, for example, Bruce Booth’s recent post on the subject). In fact, a lot of small (and not so small) problems have combined to create the overall suboptimal system that the industry is trying to change.

In this series of posts, I’ll describe the many (often interlocking) issues as I see them.  There’s no single magic bullet that will fix everything, but a lot of little fixes might add up to some significant improvement. I’ll do my part to build awareness using this blog.

(Note that many of these problems are not unique to Pharma; large and small organizations of various kinds can experience the same issues.)

Posted in Research

Fun Science: Just how close was that lightning bolt?

When you hear a storm approaching, with thunder rumbling and lightning flashing, have you ever wondered how far away the bad weather is?  Or how close a really big lightning strike was?  There’s an easy way to figure it out.  Just watch for a particularly bright flash, and count to yourself the number of seconds (one Mississippi, two Mississippi, etc.) before you hear the matching thunder clap.  Divide the number by five, and that’s how many miles away the lightning bolt was.

How’s it work?  Well, the light (traveling at 186,000 miles per second) reaches your eye just about instantly, but the sound from the lightning bolt takes quite a bit longer to arrive.  It travels at about 1,100 feet per second at sea level, or roughly a mile in 5 seconds.

Of course, if the delay you count is less than a second or two — you better make sure you’re in a safe spot under cover!
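The divide-by-five rule fits in a one-line function (a quick sketch; the function name is mine, and it uses the same sea-level approximation as above):

```python
def lightning_distance_miles(flash_to_thunder_seconds):
    """Estimate the distance to a lightning strike, in miles.
    Sound covers roughly one mile every 5 seconds at sea level."""
    return flash_to_thunder_seconds / 5

# A 10-second gap between flash and thunder puts the bolt about 2 miles away.
print(lightning_distance_miles(10))
```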

Bonus science trivia: Lightning is a really complex phenomenon.  A single lightning bolt is actually a number of different electrical discharges traveling down and up between the ground and a cloud (or between clouds) on a millisecond time scale.  The thunder clap you hear is caused by the superheated air along the lightning path expanding in a shock wave.  Particularly long bolts will have a rumbling thunder sound because the shock waves from the farthest parts of the bolt take slightly longer times to reach your ear.

Posted in Fun Science

Tutorial: Significant Figures

New (and not so new) scientists often don’t think about how many digits, or significant figures they should include when reporting numerical data.  Unless you’re using a finely calibrated instrument, most readings aren’t accurate beyond one or two percent (about two significant figures).  But that doesn’t stop a computer display or other electronic device from spitting out readings with a whole lot of digits.  As a consequence, people commonly report data with much more precision than is justified by the measurement technique they use.

Here’s the right way to report numerical data which has an error estimate (e.g. an average +/- standard deviation, or an average +/- 95% confidence interval):

1) Report error estimates with only one significant figure, unless the first digit is a 1 or 2.  In that case report two digits.  Round up or down as appropriate.

2) Report the average result using as many significant figures as are justified by the digits in the reported error.  Round as appropriate.

Examples:
Average = 43.695, standard deviation = 5.344, report 44 +/- 5
Average = 10.1113, standard deviation = 0.1249, report 10.11 +/- 0.12
Average = 23,765, standard deviation = 437, report 23,800 +/- 400
Average = 7,390,012, standard deviation = 251,912, report 7,390,000 +/- 250,000
Average = 0.00539, standard deviation = 0.00719, report 0.005 +/- 0.007
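The two rules can be automated. Here’s a sketch of a helper (my own function, not a standard library routine) that applies them; it reproduces the examples above:

```python
import math

def report(avg, err):
    """Round an average and its error per the rules above:
    keep 1 significant figure of the error (2 if its leading digit is 1 or 2),
    then round the average to the same decimal place."""
    exponent = math.floor(math.log10(abs(err)))   # order of magnitude of the error
    leading = int(abs(err) / 10 ** exponent)      # first significant digit
    sig = 2 if leading in (1, 2) else 1
    decimals = sig - 1 - exponent                 # decimal places to keep
    return round(avg, decimals), round(err, decimals)

print(report(43.695, 5.344))    # -> 44 +/- 5
print(report(10.1113, 0.1249))  # -> 10.11 +/- 0.12
print(report(23765, 437))       # -> 23,800 +/- 400
```

Note that Python’s built-in `round` accepts negative decimal places, which handles cases like 23,800 +/- 400 directly.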

If your data has no error estimate and is a continuous number — say a ruler measurement  — report only as many significant figures as can be accurately and reproducibly measured.  If you can truly measure with an accuracy of 1/100th of an inch, report 17.35 inches, but if a reliable measurement can be made only to within 1/10th of an inch, report 17.4 inches.

In the case of a single discrete measurement (e.g. an accurate count of two-story houses in a town), it is correct to report the actual measurement with all digits:  13,271 houses.  However, as soon as such measurements are averaged together and you have a variability estimate for the mean, follow the rules above.  If the average over 12 communities = 13,271 houses and the standard deviation = 598, report 13,300 +/- 600 houses (when randomly sampling a single town, this is your best guess of what the count of two-story houses will be).

Finally, when the accuracy of a measurement is  unknown, use common sense to limit the number of significant figures to two, unless the first digit is a 1 or 2, in which case report 3 significant figures. [This implies a precision of approximately 1%; very few natural measurements are more precise than this when examined across multiple individuals, locations or time points].  For example, researchers may well have accurately counted 75,456 responding patients and 23,499 non-responders, but a more sensible way to report those numbers is 75,000 and 23,500.  Similarly, 0.02987 would be reported as 0.0299 (three digits), and 0.0495 would be 0.050 (two digits).

Exceptions:  Sometimes using the same number of trailing or leading zeros is a useful technique to help facilitate comparisons.  For example, using Table 1 on the left makes it easier to compare results up and down the column, even though some entries have more significant figures than are justified by the error measurement (Table 2 has the correct number of significant figures).

Table 1              Table 2
Avg.      Error      Avg.      Error
0.0123    0.0007     0.0123    0.0007
1.2405    0.5674     1.2       0.6
0.0046    0.0030     0.005     0.003
0.0005    0.0004     0.0005    0.0004

And to make the comparison even easier, use a fixed width font!

Additional note on vocabulary:  Precision is the extent to which repeated measurements agree with each other; accuracy is the extent to which the measurements match the true value.

Posted in Data Analysis, Tutorials

Blog Content

Just a quick note for readers — this blog is brand new (obviously) and I’m still figuring out what I want to post and how to use the blog.  So at least for now, there will be quite a random assortment of content — photos, commentary, tutorials on analytical tools, miscellaneous notes, and the occasional interesting (at least to me) science trivia.  Please do post your comments — positive or negative — about what you’d like to see or not see, as the case may be.

Posted in Uncategorized

Fun Science: The Cricket as a Thermometer

The loud crickets chirping in unison tonight reminded me of A. E. Dolbear’s observation that crickets can tell you the current temperature.  According to his classic 1881 communication:

T = 50 + (N-40)/4

where N is the number of field cricket chirps per minute, and T is the temperature in Fahrenheit.  An easier to remember equivalent is

T = N + 40

where, in this case, N is the number of chirps in 15 seconds.

I gave it a try tonight.  I counted 91.7 +/- 2.1 chirps per minute, yielding a calculated temperature of 62.9 +/- 0.5 degrees.  The actual reading on the thermometer outside?  61 degrees.  Pretty good accuracy, and no instrumentation required.
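Both forms of Dolbear’s law are easy to check in code (a quick sketch; the function names are mine):

```python
def dolbear_temp_f(chirps_per_minute):
    """Dolbear's law: temperature in Fahrenheit from chirps per minute."""
    return 50 + (chirps_per_minute - 40) / 4

def dolbear_temp_f_15s(chirps_per_15_seconds):
    """Equivalent shortcut: count chirps for 15 seconds and add 40."""
    return chirps_per_15_seconds + 40

# The two forms agree, since N chirps/minute is N/4 chirps per 15 seconds.
print(dolbear_temp_f(100), dolbear_temp_f_15s(25))
```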

Bonus science trivia:  All nearby crickets of the same species will synchronize their chirping — despite the fact that the males are actually competing with each other for mates.

Extra bonus science trivia: The chirping rate of crickets (and many activities of cold blooded creatures) actually follows the Arrhenius equation for the temperature dependence of chemical reaction rates.  The chirping rate is a function of temperature because of the underlying biochemical reactions that give the cricket the energy it needs to chirp.

Posted in Fun Science