One of the most exciting things to behold at this year’s Personalized Medicine World Conference is the increasingly innovative ways that companies and healthcare providers are combining disparate data sources and deploying analytic tools to generate insights that are useful and actionable for clinicians.
This is no small feat.
It won’t surprise anyone reading this blog post that the great promise of personalized medicine (namely, the ability to identify and deliver the optimal treatment for an individual patient based on her genetic code and other individual attributes) has been tempered by the difficulty of processing the enormous volumes of data associated with human genetic and phenotypic makeup into information that is actually useful to clinicians on a practical level.
In other words, we know the data contain insights; they’re just hard to extract.
So, it’s encouraging to see how innovators are pulling together disparate data sets in new ways to overcome these barriers. For example, we heard more about an interesting partnership between the team at Genospace and the Washington, DC-area Inova Health System. Inova has collected thousands of whole-genome sequences along with corresponding clinical data from its patient populations, and has engaged Genospace to layer analytic tools over this petabyte of data and make it useful to the clinicians treating Inova’s more than 2 million patients.
And this is just one example; we heard others from organizations like Numedii, Syapse, and Oracle. Each highlights the same practical challenge: in this era of ever-expanding data, incorporating its insights into the clinical workflow is a formidable task.
This was the exact topic our founder, Mark Harris, chose to explore in his talk at PMWC16.
As readers know, we focus on one particular segment, genetic testing, and came face-to-face with this reality at a very early stage. We built a comprehensive database of every genetic testing product on the market, and when we set out to deliver it to hospitals, we discovered layers of complexity and variation in ordering workflows. Each hospital differed from the next in ways that were difficult to predict (or even imagine).
We learned quickly that we needed to become students of the workflow at each institution, approaching their genetic test ordering struggles with empathy and a sharp pencil. (Trust us, it is not easy to be a send-out lab manager at a hospital.)
Once we did, we were amazed by what we learned, not to mention the number of features we built to adapt to different workflows. Fast forward a couple of years, and we have installed our tools at enormous health systems, high-quality regional hospitals, and everything in between. Far from being a distracting challenge, incorporating our tools into the clinical workflow has been the gateway to greater adoption and greater impact.
And, if you’re wondering what we’re up to now, we’re focused on two things:
1. How to build systems that assess genetic testing order data for higher-level analysis, such as cost-effectiveness studies.
2. How to support and improve the process through which genetic tests are reimbursed, another process mired in complexity for everyone involved.
Stay tuned for more as 2016 unfolds.