Healthcare has evolved to recognize that health data needs to be centralized. A person-centered, universal store of one’s health data, combining the pieces of the story broken into the many different institution-centered fragments where it now lies, is the best way to deliver coordinated, intelligent healthcare to everyone. Moreover, as we move towards population health and value-based compensation for healthcare services, the ability to survey a population and capture all the data from wherever it has been created is central to being able to deliver high quality recommendations.
The burgeoning field of Artificial Intelligence, where machine-learning algorithms sift through massive amounts of centralized data in order to identify patterns that may not have been previously apparent, relies on aggregated, centralized data organized into a Medical Knowledge Graph.
Where are we in this journey? The current state of health data grew out of the legacy of paper record-keeping. Each doctor and each hospital kept its own chart on each patient seen. A given patient, therefore, would have multiple charts in multiple places, each one holding a segment of the story. Information-sharing meant making copies of pages from a paper chart and faxing them to a designated recipient when clinically needed for coordinated care.
As we moved from a paper-based legacy to an electronic one (over 80% of physicians and hospitals now use some form of Electronic Health Records system), the basic pattern did not change. Each institution had its own electronic system replacing the paper charts, but each institution continued to hold only a segment of the story. Significant strides have been made in the past 10 years in standardizing legacy methods of capturing health data, which has led to better ways of exchanging it. Standard vocabularies, standard message formats, and standard secure ways of passing data have improved each institution's ability to request and receive pertinent copies of data from elsewhere, but the overall architecture of data distribution remains basically unchanged: each institution keeps its own “chart” and sends copies of pieces of that chart to authorized requestors, not unlike the legacy method of faxing pages from a chart.
Moving from query-response to aggregated data
The standardized exchange of health information has mostly emerged as centralized hubs that can pass data requests from one subscribing institution to another, and do the job of mapping individuals in each system to facilitate exchange. These hubs engage in query-response interactions, and do not keep copies of the data transferred; they only keep the mapping information (this person in institution A with record number 1234 is the same person in institution B with record number 5678). They rely upon standard message packages which each endpoint can create and consume.
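The identity-mapping role these hubs play can be sketched in a few lines of code. This is a minimal illustration, not any vendor's actual implementation; the institution names and record numbers are hypothetical placeholders.

```python
class PatientIndex:
    """Sketch of an exchange hub's master patient index.

    The hub stores only identity mappings, never clinical data: it knows
    that record 1234 at Institution A and record 5678 at Institution B
    refer to the same person, so a query for "this patient's records
    elsewhere" can be routed to the right endpoints.
    """

    def __init__(self):
        self._to_person = {}    # (institution, record_no) -> person_id
        self._to_records = {}   # person_id -> set of (institution, record_no)

    def link(self, person_id, institution, record_no):
        """Register one institution's record number for a person."""
        key = (institution, record_no)
        self._to_person[key] = person_id
        self._to_records.setdefault(person_id, set()).add(key)

    def records_elsewhere(self, institution, record_no):
        """Given one institution's record number, return the same
        person's record numbers at every other institution."""
        person_id = self._to_person.get((institution, record_no))
        if person_id is None:
            return []
        return [(inst, rec) for inst, rec in self._to_records[person_id]
                if inst != institution]


index = PatientIndex()
index.link("p-001", "Institution A", "1234")
index.link("p-001", "Institution B", "5678")
print(index.records_elsewhere("Institution A", "1234"))
# prints [('Institution B', '5678')]
```

Note that the actual clinical documents never pass through this structure; the hub only brokers the lookup, after which standard message packages carry the data point to point.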
Though many public regional Health Information Exchanges (HIEs) were launched with HITECH in 2009, few of them have survived. There are some federal and non-federal agency efforts (eHealth Exchange), but the main HIEs seen in the marketplace today are vendor-associated (such as Epic’s Care Everywhere) or vendor-consortium-sponsored (CommonWell and Carequality).
The limitations of query-response have become evident: such exchange of copies of individual records from one institution to another does not do anything to break down the institution-centered data silos. It is very hard to build useful population-health and value-based dashboards on this kind of infrastructure. Aggregated data is needed.
There have been some emergent attempts at developing aggregated data. Epic, though it has one of the largest query-response hubs in the country (Care Everywhere), has also developed an aggregated-data approach for population health management (Healthy Planet). The pragmatic reality is that aggregated data is needed, and simple pass-through of individual records just won’t do. The main drawback of the Healthy Planet approach is that it is still institution-limited. A given institution can create its own aggregated data from all the sources it can capture, but that data store is separate from the ones accumulated by different institutions. A step forward, yes, but still a ways away from the goal.
Some other attempts at aggregated, centralized data have also emerged. In California, Cal INDEX is an attempt to structure aggregated data from multiple subscribers (health plans, delivery organizations and hospitals). It has struggled with getting subscribers, and has tried reorganizing its business model to promote its growth. But, as Metcalfe’s Law suggests, a network’s value grows with the square of its number of users; a network with few subscribers offers little value.
What does the future look like?
The challenges in aggregating data across institutions are more political/business than technological. Let’s assume that health data can be aggregated in very large ways, and include data pulled from multiple institutions. That opens up several “next generation” options for innovation:
EHRs no longer work with internal, institution-centered data. They work with external universal data through standardized connectivity (modern APIs), and update that data so that everyone connected can see it and use it. No more need for query-response pass-through of copies of data. Next-generation EHRs are more like a collection of apps, designed for the various workflow needs in healthcare. Since they all work off the same shared data, they can be swapped out and improved as needed.
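One concrete form such standardized connectivity already takes is HL7 FHIR's REST conventions, where every app addresses the same resources through the same URL patterns. The sketch below only builds those request URLs; the base address and patient ID are hypothetical, and a real app would also handle authorization and response parsing.

```python
# Sketch: how apps sharing one universal data store might address it
# using FHIR's REST conventions (read: GET [base]/Patient/[id];
# search: GET [base]/Observation?patient=[id]).
# The base URL and patient ID below are hypothetical placeholders.

BASE = "https://fhir.example.org"  # hypothetical shared data service


def patient_url(patient_id):
    """URL for the FHIR read interaction on a Patient resource."""
    return f"{BASE}/Patient/{patient_id}"


def observations_url(patient_id):
    """URL for a FHIR search for one patient's Observation resources."""
    return f"{BASE}/Observation?patient={patient_id}"


print(patient_url("p-001"))
# prints https://fhir.example.org/Patient/p-001
print(observations_url("p-001"))
# prints https://fhir.example.org/Observation?patient=p-001
```

Because every app speaks to the same resource model at the same endpoints, one workflow app can be swapped for another without touching the data underneath, which is the point of the architecture described above.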
The universal data can be used by AI in ways that are staggering. Insights into disease, mapping clinical disease with genomic data, identification of individualized care recommendations – truly intelligent decision support for clinicians and patients – these are all things that no longer belong in the realm of science fiction. They will be seen in the next few years.
Since the value is more in the data than in the particular user interface, business cases can be patterned after this understanding. All roadblocks to gathering data need to be removed. The use of the data is where the value lies. The data can be segmented and used in ways that can differ from one customer to another, with fees built accordingly. For example, insights from the Medical Knowledge Graph can be segmented into “skills” which can be subscribed to in ways akin to Amazon’s Alexa skills marketplace. One can subscribe to a “dermatology skill set” or a “cardiology skill set” or a “general medicine baseline skill set,” etc.
Clearly, the future of health data is in its aggregation, and liberation from the institutions which now cage it. We will get there soon. Progress is being made in this direction already. The next generation of health IT, based on this structure, will be staggering.
This article is published as part of the IDG Contributor Network.