CDER researchers are developing methods based on deep learning, a kind of artificial intelligence (AI), to capture patient experience of medical products in a way that allows essential information to be communicated, retrieved, and analyzed.
The development, evaluation, and regulation of drugs, biologics, and medical devices hinge on capturing and understanding information about how patients experience these products. This information is both complex and vast. For example, regulators receive information about patient responses or reactions to drug products from the thousands of adverse event reports that physicians submit to FDA each year through the FDA Adverse Event Reporting System (FAERS). All these reports must be carefully evaluated to ensure the long-term safety of drug products. To cite another example, the clinical trial results in an application for a new drug may reflect the experiences of thousands of patients, each with individual differences that must be considered.
MedDRA, a common terminology for patient information
If patient information is to be communicated, retrieved, and analyzed efficiently and accurately, a common language—one that regulatory authorities and the drug development community share—is needed. For example, if the clinicians conducting drug trials around the world used inconsistent language to describe cardiac arrhythmia, it would be impossible to quantify the arrhythmic effects of drugs based on reports the clinicians submitted to FDA. MedDRA (Medical Dictionary for Regulatory Activities) is widely used as a standardized dictionary of terms to describe adverse events of drugs and therapeutic biologic products. In MedDRA, tens of thousands of medical terms are linked in a way that captures logical relationships among them, making it possible to create meaningful summaries and conduct quantitative analysis (e.g., determining the frequency of various categories of adverse events and how they are associated with certain drugs). For regulatory purposes, patient information must be coded into MedDRA. Currently, MedDRA coding via correct term selection (Figure 1) requires humans to aggregate and review unstructured data from various documents, a process that is inefficient and time-consuming. FDA reviewers devote considerable time and resources to checking the accuracy of the MedDRA-coded information that sponsors submit to the agency.
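The value of a linked terminology for quantitative analysis can be sketched in a few lines. The two-level hierarchy and verbatim phrases below are hypothetical stand-ins (MedDRA itself is licensed and links tens of thousands of terms), but they show how coding inconsistent report language to shared terms makes frequency counts possible:

```python
from collections import Counter

# Hypothetical stand-in for a linked terminology: verbatim report phrases
# map to a shared term, and each shared term rolls up to a broader category
# (imitating MedDRA's Preferred Term -> System Organ Class linkage).
PT_TO_SOC = {
    "Arrhythmia": "Cardiac disorders",
    "Tachycardia": "Cardiac disorders",
    "Headache": "Nervous system disorders",
}
VERBATIM_TO_PT = {
    "irregular heartbeat": "Arrhythmia",
    "heart skipping beats": "Arrhythmia",
    "racing heart": "Tachycardia",
    "head pain": "Headache",
}

# Reports use inconsistent language; coding maps them to common terms,
# which can then be summarized at any level of the hierarchy.
reports = ["irregular heartbeat", "racing heart", "head pain",
           "heart skipping beats"]
by_pt = Counter(VERBATIM_TO_PT[r] for r in reports)
by_soc = Counter(PT_TO_SOC[VERBATIM_TO_PT[r]] for r in reports)
print(by_pt)
print(by_soc)
```

Without the shared terms, "irregular heartbeat" and "heart skipping beats" would be counted as unrelated events; with them, both contribute to the same arrhythmia tally.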
Deep learning to code patient information accurately
Figure 2. In deep learning, artificial neurons (white circles) are connected in neural networks with one or more “hidden” layers of neurons between the input layer (where the data is first entered) and the output layer that signals which choice is to be made in a learning task (in this case, recognition of human faces). By using a training set in which the desired final output is known, the strengths of the connections between neurons (represented by the lines connecting circles) can be adjusted stepwise so that the network learns to make good decisions consistently. A key characteristic of this learning approach is that successive hidden layers learn to recognize features of greater and greater complexity by combining simpler features from the previous layer. Examples of features captured in successive hidden layers might be edge, eye, and face (as shown here for facial recognition) or waveform, phoneme, word, and sentence for speech recognition. In practice, these networks can consist of thousands of neurons with many hidden layers and features.
FDA researchers are developing computational approaches that use deep learning, a form of artificial intelligence (AI), to extract standard MedDRA terms automatically from large-scale free-text documents. Deep learning, a type of machine learning that relies on artificial neural networks, has been applied to many activities performed by humans, including image recognition, speech recognition, and game playing; in many highly complex tasks, it has surpassed human performance. Like neurons in the brain, artificial neurons in deep learning receive inputs, process them, and transmit signals to other neurons. “Deep” in deep learning refers to the organization of the artificial neurons into multiple layers, with each “hidden” layer learning to identify attributes or “features” that the next layer can compose into more complex features (Figure 2). Essentially, these networks learn to recognize patterns in data.
This learning approach depends on large data sets that allow for the training of the neural network to achieve a task. For example, if a set consists of thousands of photographs, some of which are of cats and some of which are not, the connections between neurons in a neural network can be modified in a systematic way based on the accuracy of the outputs so that eventually, the network can reliably distinguish between images that contain cats and those that do not.
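The training loop described above can be sketched with a tiny network. Everything here is illustrative: a toy dataset of two well-separated point clusters stands in for the photographs, and a single hidden layer stands in for a deep network, but the mechanism is the same stepwise adjustment of connections based on the accuracy of the outputs:

```python
import numpy as np

# Toy sketch of supervised training: a network with one hidden layer learns
# to separate two classes. Data and architecture are hypothetical.
rng = np.random.default_rng(0)

# Two well-separated 2-D clusters, labeled 0 and 1 ("not cat" / "cat").
X = np.vstack([rng.normal(-1.0, 0.3, (50, 2)), rng.normal(1.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):                     # stepwise connection adjustment
    h = sigmoid(X @ W1 + b1)              # hidden layer: learned features
    p = sigmoid(h @ W2 + b2).ravel()      # output: probability of class 1
    d_out = (p - y)[:, None] / len(y)     # cross-entropy gradient at output
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_hid = (d_out @ W2.T) * h * (1 - h)  # backpropagate to hidden layer
    dW1 = X.T @ d_hid; db1 = d_hid.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1        # adjust connections based on
    W2 -= lr * dW2; b2 -= lr * db2        # the accuracy of the outputs

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)
accuracy = (preds == y).mean()
print(f"training-set accuracy: {accuracy}")
```

After training, the network classifies the points it was trained on reliably; real applications would also check performance on data the network has never seen.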
Figure 3. Natural language processing encompasses multiple techniques (as indicated in the outer circles) for processing unstructured data into a form that can be used by a deep learning model. Combinations of these techniques are used, depending on the type of analysis that needs to be implemented. For instance, sentiment analysis can be used to analyze customer satisfaction (e.g., happy versus sad). After being structured with these techniques, the data are ready as input to a deep learning model for completion of the learning task (e.g., the coding of patient narratives into MedDRA terminology).
CDER researchers will use thousands of correctly coded patient narratives in new drug application (NDA) submissions to train a neural network to make correct MedDRA coding decisions. Before this can be done, the unstructured data in patient narratives must be extracted, processed, and structured so that it can be used by a deep learning model. CDER researchers are using diverse natural language processing techniques (Figure 3), including sentiment analysis, machine learning, summarization, and entity and fact extraction, to parse the free text and convert the data in a patient narrative to a usable and analyzable form for deep learning.
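As a minimal illustration of this preprocessing step (not CDER's actual pipeline), the sketch below normalizes a hypothetical narrative and extracts candidate terms by naive phrase matching. Its output also shows why such rule-based extraction falls short — the negated "no headache" still matches — which is part of the motivation for learning-based approaches:

```python
import re

# Toy sketch of structuring free text: normalize a narrative, then look for
# trigger phrases. The phrase-to-term map is a tiny hypothetical stand-in
# for a real terminology; the matching is deliberately naive.
PHRASE_TO_TERM = {
    "irregular heartbeat": "Arrhythmia",
    "skipped beats": "Arrhythmia",
    "headache": "Headache",
    "felt dizzy": "Dizziness",
}

def normalize(text):
    """Lowercase and strip punctuation: the simplest normalization step."""
    return re.sub(r"[^a-z\s]", " ", text.lower())

def extract_terms(narrative):
    """Return sorted candidate terms whose trigger phrases appear."""
    cleaned = normalize(narrative)
    return sorted({term for phrase, term in PHRASE_TO_TERM.items()
                   if phrase in cleaned})

narrative = ("Patient reported an irregular heartbeat and felt dizzy "
             "two hours after the dose; no headache.")
terms = extract_terms(narrative)
# Note: "Headache" is extracted even though the narrative says "no headache".
# Handling negation and context is exactly where naive matching fails.
print(terms)
```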
Developing a user-friendly tool for MedDRA coding
In applying deep learning to MedDRA coding, CDER researchers will assess a variety of network architectures to optimize coding performance, including various kinds of recurrent networks that can be thought of as neural networks with a memory capability (these networks may be especially suitable for natural language processing tasks). The coding results from a variety of network architectures will be compared and their performance measured against that of expert reviewers selecting MedDRA terms from patient narratives.
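The "memory" of a recurrent network can be sketched in a few lines: a hidden state is carried from word to word, so the final state summarizes the whole sequence. The weights below are random and untrained, and the vocabulary and dimensions are illustrative only:

```python
import numpy as np

# Minimal sketch of recurrence: the hidden state h is passed from one token
# to the next, so later outputs depend on earlier words. Untrained weights;
# vocabulary, embedding size, and state size are arbitrary choices here.
rng = np.random.default_rng(1)
vocab = {"patient": 0, "reported": 1, "severe": 2, "headache": 3}
embed = rng.normal(0, 0.1, (len(vocab), 8))   # token embeddings
W_xh = rng.normal(0, 0.1, (8, 16))            # input -> hidden
W_hh = rng.normal(0, 0.1, (16, 16))           # hidden -> hidden (the memory)
b_h = np.zeros(16)

def run_rnn(tokens):
    h = np.zeros(16)                            # memory starts empty
    for tok in tokens:
        x = embed[vocab[tok]]
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)  # mix new input with memory
    return h                                    # summary of the sequence

h_final = run_rnn(["patient", "reported", "severe", "headache"])
# Reversing the word order changes the final state: the network is
# sensitive to sequence, not just to which words appear.
h_other = run_rnn(["headache", "severe", "reported", "patient"])
```

In a trained model, a final output layer would map this sequence summary to a coding decision; variants such as LSTMs refine how the memory is updated, but the carried hidden state is the core idea.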
Ultimately, a user-friendly tool will be developed that lets users automatically select preferred terms from free-text descriptions in newly submitted regulatory documents. This tool could also be used to check the coding accuracy of previous submissions, such as adverse event reports from drug developers.