by Garrett Dunlap
figures by Rebecca Clements
It’s no secret – pharmaceuticals are expensive for the patients who need them. One major reason is the cost and time needed to bring a drug to market. In the current medication pipeline, most drug candidates ultimately fail during development, and the cost of those failures drives up the price of the drugs that succeed. But could a better, less costly drug development process be on the horizon?
Bringing a drug to market
The search for new pharmaceutical drugs generally begins in the lab, where scientists test thousands or tens of thousands of compounds to see if they have any effect on a specific disease (Figure 1). Once a few promising candidates are found, they are further studied to find their shape, how they work, and how the body breaks them down. Promising drugs are then tested, usually in animals such as mice and primates, in order to discover potential side effects and determine optimal dosing. Candidates that pass this first round of testing in non-human mammals are next taken into the clinic for human trials. These trials aim to assess the safety (Phase 1), efficacy (Phase 2), and utility in large patient populations (Phase 3). Only when a drug candidate passes all of these stages can it be brought to market to help those afflicted by a particular disease.
While the current system to develop a drug usually ensures that all drugs brought to market are safe and effective, it is a lengthy process. The average elapsed time from discovering a potential drug candidate to obtaining FDA approval is 14 years. The process is also expensive: when the cost of failed attempts is considered, an estimated $2 billion is spent on bringing a single drug to market! With an estimated failure rate exceeding 95%, it is easy to see how costs balloon.
Much of the failure in the drug development pipeline occurs at two critical points. The first comes when a potential drug candidate, initially characterized using cells grown in the lab, behaves differently once inside a living organism. The second comes when results seen in animal models fail to translate to humans during costly clinical trials. Given these significant barriers, methods that better model a drug’s function and toxicity in humans could help immensely by preventing money and time from being spent on a drug that is destined to fail.
Chip! Chip! Hooray!
One budding solution to this problem is organ-on-a-chip technology. Organ-on-a-chip refers to a collection of different devices that share a unifying goal: simulating human biological conditions by recreating the functions and natural environments of organs in miniature form. These chips, though typically smaller than a pack of gum, can already recreate some of the body’s most critical functions.
These chips rely on a process termed “microfluidics” in order to function. This process involves the movement of tiny (“micro”) volumes of liquid or air (“fluids”) through chambers inside the 3-dimensional chips, simulating the natural environment of an organ in the human body (Figure 2). After coating a chip with cells, the microfluidic process can then be used to answer a multitude of different questions. For instance, the addition of a potential candidate drug to the flowing liquid better simulates the environment it will face in the body, allowing a researcher to test how it functions in a 3-dimensional environment instead of the standard 2-dimensional dish routinely used in the lab.
The first of these technologies was a lung-on-a-chip designed at the Wyss Institute for Biologically Inspired Engineering at Harvard University; it has since been used to study the effect of bacterial infection on lung function. Since then, organ-on-a-chip technologies simulating the intestines, heart, skin, brain, kidneys, and female reproductive tract have been developed, and chips that will simulate even more organs in the human body are currently being planned (Figure 3).
Perhaps one of the most exciting new technologies for research and drug development will be the liver-on-a-chip. The liver plays a very important role in breaking down and processing drugs in the body, so it is also a common place to see harmful effects of drugs. If the liver-on-a-chip is successful, scientists could use it to study how a drug is broken down and absorbed in the body, as well as how toxic it may be. In effect, these chips may one day not only reduce the need for animal testing but also provide superior data that is more relevant to the human body than current approaches – a real win-win!
Organ-on-a-chip technologies may accelerate the journey to “personalized medicine,” which seeks to choose the best possible drug at the right dose based on the needs of an individual patient. Growing a patient’s cells on an organ-on-a-chip could reveal the unique response of their cells to a drug. Imagine this: a patient newly diagnosed with a lung disease has a small, painless sample of cells taken from their skin. These cells are then reverted to stem cells, which have the special ability to transform into many different types of cells in the body. The stem cells are grown in the lab, then coaxed to become the type of lung cells affected by the patient’s disease. The cells can then be transferred onto chips, which are hooked up to machines that feed liquid and air through the chambers, simulating the respiratory process. A multitude of different approved drugs can be added to the flowing liquid, all while scientists monitor how the cells respond. The drug that produces the greatest improvement in the cells’ health and strength could be prescribed first, saving the patient from a process of trial-and-error that can be ineffective and even dangerous.
Looking to the future
The current state of drug development is one of assumption. One assumption is that a drug will work the same in a dish of cells as in a living organism. Another is that animals will accurately simulate both the disease and the drug’s action. Finally, it’s assumed that a drug will be as effective in large populations of patients as it is in a single patient. Because these assumptions can prove false, a drug can pass through many stages of research and development only to be dropped late in the process due to efficacy or safety issues that weren’t observed in earlier tests. Although still in its infancy, organ-on-a-chip technology has the ability to limit this risk. While it may not mimic the body with complete precision, it could make us better at predicting which drugs will be effective and safe in humans. A future with organs on chips may be one in which drugs are developed more cheaply and quickly, all with less animal testing.
Garrett Dunlap is a second-year Ph.D. candidate in Biological and Biomedical Sciences at Harvard University. He can be found on Twitter at @dunlap_g.
For more information:
- To learn more about the current state and future of organ chip technology, check out these resources from the National Institutes of Health, Harvard University’s Wyss Institute, and Cell Press
This article is part of the 2018 Special Edition — Tomorrow’s Technology: Silicon Valley and Beyond