Six years ago, when I founded a company to serve scientists pushing the frontiers of biology research, large molecule R&D was by no means a sure thing. In 2012, biologics made up roughly 27% of pharmaceutical pipelines, and had been hovering around that level for the preceding ten years. The FDA hadn’t yet approved a single CAR-T or gene therapy. CRISPR-Cas9 gene editing hadn’t even been invented yet.
Today, of course, biologics are widely acknowledged as the drugs of the future. In 2017, biologics made up nearly 40% of pharmaceutical pipelines. In the first two months of 2018, investment in biotech startups exceeded the entirety of biotech investment in 2013. Industry consensus is that by 2022, the majority of the top 100 drugs on the market will be biologics.
Right now, we’re at the start of an even broader biotech revolution comparable to the rise of the computer in the ’70s and ’80s, and even to the industrial revolution of the 19th century. Over the next 10 to 20 years, biotech will fundamentally rewrite the way we live – and this goes beyond pharmaceuticals. The food we eat, the crops that make up our agriculture industry, the fuels that power our lives, and just about all everyday materials, from textiles to plastics, will be radically affected and improved by biotech.
But at the same time, the pioneering scientists who are actually doing this research are stuck using paper, spreadsheets, and software built for traditional small molecule research. These researchers routinely spend 30% of their time on busy work. Meanwhile, their responsibilities include not only making a new drug, but figuring out how to make an entirely new type of drug. Because they can’t easily and accurately report their progress and results, managers and executives end up having to base pivotal decisions on incomplete data.
Given the state of things, it’s a testament to the tenacity of today’s scientists that biotech has come this far. But as drugs and other biological products continue to get more complex and as R&D costs continue to rise, biotech’s only option is to take a different approach: to industrialize.
These impediments – ill-suited tools, insufficient record-keeping, and underdeveloped processes – are the same sorts of growing pains that preceded the rise of semiconductors, modern manufacturing, and even chemistry-based R&D. Over time, all of these paradigm shifts trended towards standardized, structured industrialization. In the context of biotech, this means more engineered processes, higher predictability, higher scalability, and ultimately faster time-to-market.
Compared to many previous paradigm shifts, large molecule R&D is distinguished by the incredible quantity and complexity of its data, and by the extent to which numerous teams need to work together. Given that biotech’s particular complexity centers on data and collaboration, it makes sense that software will have to be the driving force behind its industrialization. This software will need to do three central things:
1. Enable rapid, iterative development of new therapeutic techniques
When someone says “large molecule R&D,” or even “biologics R&D,” it can mean many different things. From CAR-T immunotherapy to genetically engineered crops, the sheer range of modalities that biotechs work in today is staggering. What’s more, there are numerous opportunities for these techniques to overlap. For example, if a company is using CRISPR-Cas9 to genetically engineer CAR-T cells, its R&D processes and needs will differ from those of a company doing CAR-T with lentiviral transduction. Biotechs need software that can support not only novel therapeutics, but novel R&D processes.
2. Empower faster and smarter decision-making
In drug discovery, time to market is key. Especially when you’re dealing with iterative workflows and novel processes, being able to make quick decisions and back them up with comprehensive data is a must. Large molecule R&D in particular involves complex work from multiple teams. Biotechs need software that can quickly synthesize data from across teams and surface decision-quality results. For example, it shouldn’t take days of data collation to figure out which parameters during fermentation lead to the highest quality materials after purification.
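To make the fermentation-to-purification example concrete, here is a minimal sketch of the kind of cross-team analysis such software would automate. All data, column names (`batch_id`, `temp_c`, `ph`, `purity_pct`), and the choice of pandas are illustrative assumptions, not a description of any particular product:

```python
import pandas as pd

# Hypothetical records kept by two different teams: per-batch fermentation
# parameters, and quality scores measured after purification.
fermentation = pd.DataFrame({
    "batch_id": ["B1", "B2", "B3", "B4"],
    "temp_c": [30.0, 32.0, 30.0, 34.0],
    "ph": [7.0, 6.8, 7.2, 6.5],
})
purification = pd.DataFrame({
    "batch_id": ["B1", "B2", "B3", "B4"],
    "purity_pct": [92.1, 95.4, 90.8, 97.0],
})

# Join the two teams' records on the shared batch ID, then check how each
# fermentation parameter tracks with final purity.
merged = fermentation.merge(purification, on="batch_id")
correlations = merged[["temp_c", "ph"]].corrwith(merged["purity_pct"])
print(correlations)
```

The point is not the two-line analysis itself but the join: when both teams record into the same structured system, this question takes seconds, whereas collating the same data from paper notebooks and scattered spreadsheets can take days.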
3. Accelerate the move to “labless” drug companies
Externalization and automation are two of the most hyped components of the future of R&D, and with good reason. The benefits of 24/7 experimentation and geographically advantageous lab space hardly need to be expounded upon. In the context of large molecule R&D, going labless means faster iteration and higher throughput. Biotechs need software that can manage external partners, obfuscate data when necessary, and surface partners’ data back to the sponsoring company in a digestible manner. If a company outsources its early antibody discovery work to a CRO, accessing the data produced by the CRO should be as easy as accessing the data the company produces internally. Ideally, all of that upstream data should be accessible to downstream teams in the same system they use to complete and record their own work.
Biotechnology will (in some cases literally) alter the fabric of our lives – if the industry can overcome its growing pains. For large molecule R&D, the hurdles are as high as the results are promising. But we can clearly delineate the challenges that stand in our way. And thankfully, modern software development is more than up to the task.