Seeking a revolution in clinical care through AI

Minnetronix shares its experience in applying AI and machine learning to medical devices.

February 8, 2021 – Medical Device Outsourcing Magazine, Author: Aaron McCabe, Ph.D., Director of Research and Technology, Minnetronix Medical

New computational techniques stemming from the field of data science, including machine learning, computer vision, and artificial intelligence (AI) techniques (collectively referred to here as ML), stand poised to revolutionize clinical care. That revolution spans the decision making leading up to and surrounding clinical interventions, the decision to use a medical device, and the ongoing therapies delivered through those devices. While these techniques are powerful, increasingly mainstream (already so in some industries), and genuinely exciting, they are not without risk and can incur considerable time and capital costs to develop and deploy. In fact, as others have noted, the vast majority of what goes into any ML system (cost, time, code, etc.) lies outside the ML algorithm itself.1

There’s been a profound uptick in medical device companies making, or wanting to make, use of these ML techniques to address unmet clinical needs and provide better therapies, since these techniques can offer richer and more adaptive insights than traditional methods. Meeting that demand requires a focus on generalizing and standardizing the approach to developing ML-enabled technologies in the medical device space. By focusing on process, generalization, and re-use, it’s possible to reduce the hidden time and capital costs of developing image-based ML algorithms.

Figure 1: Minnetronix believes the coupling of decision support tools with interventions will lead to better patient outcomes. All images courtesy of Minnetronix Medical.

Background

As an example, there have been many advancements in the neurointensive care pathway in recent decades, and both the neurosurgical and neurocritical care specialties are rapidly evolving. In our experience, newly developed interventions are often surrounded by questions about their optimal use: when? in whom? for how long? Coupling interventions with decision support therefore leads to better outcomes for the patient (Figure 1). In this article, we examine a tool currently under development to support decisions surrounding the use of Minnetronix Medical’s newly FDA-cleared expandable port for deep brain access, the MindsEye Expandable Port (Figure 2). This port, like others of its kind, is used to access a hematoma in a patient who has experienced an intracerebral hemorrhage (ICH), allowing evacuation of the bleed. Accordingly, Minnetronix sought to develop a decision support tool that provides additional insight into the critical choice between surgical intervention (with minimally invasive tools like the MindsEye port) and medical management in the ICH population.

Figure 2: Minnetronix’s FDA-cleared MindsEye Expandable Port

The treatment paradigm for a patient with ICH is the subject of numerous recent and ongoing clinical trials.2-4 The current standard of care ranges from aggressive evacuation of the blood to “watch and wait,” depending on size, location, severity, and other still-debated factors. Many of these factors may be clarified by careful, quantitative examination of the progression of the patient’s status on CT imagery. Unfortunately, this is time-consuming and not currently part of the standard of care in radiology. While there are manual tools to help evaluate some of the metrics of the CT image, there are, to date, no automated tools that cover the breadth of measures required.

Figure 3: Minnetronix’s DepiCT Neuroimaging Platform (currently under development).

With this in mind, Minnetronix sought to create an automated CT processing algorithm (CT segmentation) that calculates relevant anatomic and volumetric factors over time to assist the neurosurgeon in evaluating the hematoma and deciding when and how to evacuate it (Figure 3). This algorithm, called the DepiCT Neuroimaging Platform, was developed using well-documented ML techniques. During its development, the team encountered technical hurdles that are equally well-documented; as each was resolved, the focus was on generalizable solutions and tool creation to better serve medical device customers, who are increasingly asking for similar solutions to accompany their systems. The sections that follow highlight a few of these problems and their generalized solutions, drawn from the six phases of the ML algorithm development process (Figure 4).
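As a simplified illustration of the kind of volumetric measure such an algorithm produces (a generic sketch in Python, not the DepiCT implementation), a binary hematoma mask plus the scan’s voxel spacing is enough to estimate bleed volume in milliliters:

import numpy as np

def hematoma_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Estimate lesion volume (mL) from a binary segmentation mask.

    mask        -- 3D array (slices, rows, cols), nonzero where the lesion was segmented
    spacing_mm  -- voxel size in millimeters (slice thickness, row spacing, column spacing)
    """
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    lesion_voxels = int(np.count_nonzero(mask))
    return lesion_voxels * voxel_volume_mm3 / 1000.0  # 1000 mm^3 = 1 mL

# Example: a 512 x 512 x 40 scan with 0.5 x 0.5 mm pixels and 5 mm slices
mask = np.zeros((40, 512, 512), dtype=bool)
mask[18:22, 200:260, 200:260] = True  # stand-in "hematoma" region
print(f"{hematoma_volume_ml(mask, (5.0, 0.5, 0.5)):.1f} mL")

Tracking such a measure across serial scans is what turns a single segmentation into the kind of progression insight described above.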

Figure 4: Six phases of algorithm development

1. Data Acquisition
At the outset, we estimated that developing an algorithm robust across the many factors associated with the images or imaging equipment (e.g., technology, quality, time, anatomy, surgery) would require thousands of CT scans from a variety of clinical sites. These scans would be labeled to create “ground truth” (gold standard) data to train the ML algorithm. Timely, robust, and varied data collection is a well-known challenge in the biomedical space. By both law and best practice, hospitals are protective of patient data, so de-identification and accuracy must be top priorities. To be successful and keep pace, the most effective tool was extreme flexibility. We collaborated with nearly a dozen sites. Some preferred to collaborate under data-transfer agreements, while others preferred to operate as retrospective studies; we supported both approaches and followed the relevant processes to ensure success. In some cases, sites simply needed additional support navigating internal processes and procedures to expedite data transfer. Some sites preferred electronic transfer of imagery via custom secured tools, some via standard cloud tools (e.g., Dropbox, Box, OneDrive), and others preferred USB sticks or even crates of DVDs, whatever was easiest for them while following their internal processes to guarantee de-identification. We supported them all by setting up our own hosted, many-terabyte datastore that could receive data electronically or via USB, and by purchasing bulk DVD rippers to make shorter work of the discs.
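To give a flavor of the kind of de-identification support involved (a minimal sketch using the open-source pydicom library; real de-identification should follow the full DICOM PS3.15 confidentiality profiles and each site’s own procedures), the most obvious patient-identifying tags can be blanked before data ever leave a site:

import pydicom

# Tags commonly blanked during basic de-identification; a production profile
# covers far more (UIDs, dates, private tags, burned-in annotations, etc.).
PHI_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "OtherPatientIDs", "InstitutionName", "ReferringPhysicianName",
]

def deidentify(src_path: str, dst_path: str) -> None:
    ds = pydicom.dcmread(src_path)
    for tag in PHI_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""  # blank the value, keep the element present
    ds.remove_private_tags()                  # drop vendor-specific private elements
    ds.save_as(dst_path)

Because steps like this were ultimately governed by each site’s own procedures, the receiving infrastructure had to stay flexible rather than prescribe one pipeline.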

2. Data Cleaning and Organization
One side effect of such extreme flexibility during data acquisition was that the onus of cleaning (e.g., removing extraneous data) and organizing the data (creating a master database relating relevant clinical and technical factors to the imagery) fell to us. This was our preferred approach. We received data in DICOM format, as is typical for CT and most other medical imagery. This format is fraught with data management challenges and is best suited to residing within a PACS database. Rather than being boxed in by off-the-shelf image acquisition systems, we developed our own internal tools, tailored to the ML algorithm development workflow, to manage the data efficiently. For example, while off-the-shelf DICOM viewers were useful for viewing the images and their associated metadata, they were not particularly useful for subsequent steps in the algorithm development lifecycle. Specifically, rather than a database approach, both our custom tool for manual ground-truth segmentation and the ML training code operated better when the data were organized as a hierarchy of human-readable folders on the filesystem, which minimized complexity and increased the ease of use of their implementations. We therefore used customized off-the-shelf systems alongside our own file-management tools to bridge the gap between PACS database requirements and our filesystem needs, giving us the best of both worlds. Furthermore, as quickly as the process allowed, we moved away from the DICOM format to a more analysis-friendly format5,6 to facilitate deployment of the data into the analysis pipeline. These tools are completely generic and able to handle other types of imagery for us or for our medical device company customers.
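As a sketch of the kind of bridging tool described above (assuming NIfTI as the analysis-friendly format, a common choice in neuroimaging, though the cited references may describe a different one), a DICOM series can be stacked into a single volume and written into a human-readable folder hierarchy:

from pathlib import Path
import numpy as np
import pydicom
import nibabel as nib

def dicom_series_to_nifti(series_dir: str, out_root: str, patient_id: str, study_id: str) -> Path:
    """Stack one CT series into a NIfTI volume under out_root/patient/study/."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(series_dir).glob("*.dcm"))]
    # Sort slices by position along the scan axis, not by filename.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    # Apply the DICOM rescale to get Hounsfield units.
    volume = np.stack(
        [s.pixel_array * float(s.RescaleSlope) + float(s.RescaleIntercept) for s in slices],
        axis=-1,
    ).astype(np.float32)
    out_dir = Path(out_root) / patient_id / study_id
    out_dir.mkdir(parents=True, exist_ok=True)
    out_path = out_dir / "ct.nii.gz"
    # Identity affine used here for brevity; a real converter builds it from
    # ImageOrientationPatient, ImagePositionPatient, and PixelSpacing.
    nib.save(nib.Nifti1Image(volume, affine=np.eye(4)), str(out_path))
    return out_path

Keeping the output organized by human-readable patient and study folders is what lets both the manual segmentation tool and the training code walk the data without a database layer.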

3. Ground Truth
The critical component of the type of ML algorithm we developed for DepiCT was robust, gold-standard “ground truth” data, used both to train the algorithm and to test it. Without it, the algorithm would never train to appropriate performance levels. As previously mentioned, while manual tools of varying accuracy exist within the standard clinical workflow for labeling a minority of the measures we trained DepiCT on, the majority of the measures are simply not performed today. Thus, there are no off-the-shelf tools that make the ground-truth annotation process easy. Accordingly, Minnetronix developed its own tool to aid in labeling the ground truth for this algorithm. This is probably the most important time-saving step in the entire process, since training the algorithm required thousands of manually labeled images; without an efficient way to accomplish this, schedules and budgets could explode. To avoid that, we treated the development of our own proprietary manual segmentation tool as we would treat any clinical solution: by identifying the unmet need and the job to be done. We focused on a general-purpose, readily modifiable, and extensible manual image segmentation software suite, one purpose-built for the specifics of this challenge yet readily extensible to the next.
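Once ground truth exists, testing the algorithm against it typically comes down to simple overlap metrics; the Dice coefficient below is a generic sketch of that comparison, not a description of DepiCT’s actual validation protocol:

import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Overlap between a predicted mask and a manually labeled ground-truth mask.
    1.0 means perfect agreement, 0.0 means no overlap."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: trivially identical
    return 2.0 * np.logical_and(pred, truth).sum() / denom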

Furthermore, after experimenting with partnered labeling of ground truth, we found that two-person teams greatly outperformed any individual working alone. We paired radiologists with students trained as manual segmenters (Figure 5). The radiologists, trained to quickly read scan after scan in clinical practice, were very efficient at rapidly labeling features within a particular CT scan. Students then refined what the radiologists had labeled to “pixel perfection.” The whole process was overseen by neuroradiology to ensure the students interpreted the radiological inputs correctly. We therefore built a workflow management system into our tool, in which scans are routed first to a radiologist for coarse labeling and then to a student for polishing, with the capability for additional loops in the event of errors, tracking all interactions with the data, at a fraction of the cost of traditional approaches.
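To illustrate the routing idea (a hypothetical sketch only; the stage names and helpers are ours for illustration, not the workflow manager Minnetronix actually built), a per-scan state machine is enough to capture the radiologist-to-student handoff, the error loops, and the audit trail:

from dataclasses import dataclass, field
from datetime import datetime, timezone

# Ordered labeling stages: radiologist coarse pass, student refinement, final review.
STAGES = ("coarse_label", "refine", "review", "done")

@dataclass
class ScanTask:
    scan_id: str
    stage: str = "coarse_label"
    history: list = field(default_factory=list)

    def advance(self, reviewer: str, note: str = "") -> None:
        """Record who touched the scan and move it to the next stage."""
        self.history.append((datetime.now(timezone.utc).isoformat(), self.stage, reviewer, note))
        self.stage = STAGES[min(STAGES.index(self.stage) + 1, len(STAGES) - 1)]

    def send_back(self, reviewer: str, reason: str) -> None:
        """Loop a scan back to the previous stage when an error is found."""
        self.history.append((datetime.now(timezone.utc).isoformat(), self.stage, reviewer, reason))
        self.stage = STAGES[max(STAGES.index(self.stage) - 1, 0)]

task = ScanTask("ct_0001")
task.advance("radiologist_A")   # coarse labels complete
task.advance("student_B")       # refined to pixel-level masks
task.send_back("neuroradiologist_C", "boundary missed")  # loop back for correction

The history list is the audit trail: every handoff and correction is timestamped, which is what allows all interactions with the data to be tracked without adding review overhead.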