Healthcare IT delivery systems company Health Gorilla recently published their State of Interoperability 2023 report. Please don’t click away! That term does sound scary and complex…but it’s talking about something we all deal with: how we get one computer’s stuff to work with another one’s.
Most of us have run into the challenge of using one computer-based app and needing its information to show up in another type of system. That’s the simplified way to look at this complex “interoperability” term…so read on to learn more about how it impacts the current world of laboratory data.
To give a sense of how their findings play out in the real world, Health Gorilla also analyzed common beliefs about healthcare data in 5 Myths Debunked by the 2023 State of Interoperability Report, which distills the essence of the full State of Interoperability 2023 report. So, skip the main one and consider this second report’s takeaways:
- Most health systems support sharing patient data for purposes beyond treatment.
- Both electronic health record (EHR) vendors and digital health companies want more non-traditional data types.
- Data quality remains a major hurdle.
- Digital health organizations rely on multiple methods to reliably acquire patient data, and standardizing across channels is key.
We’re actually witnessing much of this in the course of recent business at C4. For example, over the past year or two, we’ve been increasingly involved in patient data sharing efforts that aren’t related to treatment. So, the first point is certainly revealing itself in real-life project requests.
The second point relates to the “traditional data type,” which in healthcare would be HL7. We’ve noticed that medical and research professionals want data delivered into modern platforms like Microsoft SQL Server for things like visualizations in Tableau, not in old-school, native HL7 format.
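To make that contrast concrete, here’s a minimal Python sketch of the kind of translation involved: pulling a pipe-delimited HL7 v2 PID (patient identification) segment apart into a flat row that could be loaded into a SQL table. The sample message and the exact fields kept are hypothetical, for illustration only.

```python
# Minimal sketch: parse one HL7 v2 PID segment into a flat dictionary
# suitable for inserting as a SQL row. Sample data is illustrative only.

def parse_pid_segment(segment: str) -> dict:
    fields = segment.split("|")        # HL7 v2 separates fields with "|"
    name_parts = fields[5].split("^")  # components within a field use "^"
    return {
        "patient_id": fields[3].split("^")[0],  # PID-3: identifier list
        "last_name": name_parts[0],             # PID-5: patient name
        "first_name": name_parts[1] if len(name_parts) > 1 else "",
        "birth_date": fields[7],                # PID-7: YYYYMMDD
        "sex": fields[8],                       # PID-8
    }

pid = "PID|1||12345^^^HOSP^MR||DOE^JANE||19800102|F"
row = parse_pid_segment(pid)
```

Once the segment is a plain dictionary like `row`, handing it to a SQL insert or a Tableau-friendly table is straightforward; the hard part is exactly this unpacking of HL7’s nested delimiters.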
However, one thing has not changed: everything can still be “garbage in, garbage out.” In virtually all data usage scenarios, there remains a need to scrub the data and check it for accuracy. This supports their third finding.
Finally, the last point is where the real data expertise comes in: applying best-fit methods across data channels to deliver information in a consumable, easy-to-manage format. That’s the kind of work we’ve seen emerging and growing over the past two years or so.
When we do such work, we know we’ll need to include data checking and scrubbing, since users will continue to be sloppy. One must watch carefully for the disruptions that poor data quality can bring. As customers request new data mining or repurposing, it’s critical to fully understand their goals and the use cases for the data. Then database professionals can dig into their toolkits, pull out the right tools for the job, and create customized solutions that deliver precisely the requested data.
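The checking-and-scrubbing step described above can be sketched in a few lines of Python. The field names and validation rules here are hypothetical examples, not anyone’s actual pipeline: the point is simply that rows get normalized first, then validated, and anything that fails is set aside rather than passed downstream.

```python
# Minimal "garbage in, garbage out" defense: scrub incoming rows,
# then validate them before loading. Field names and rules are
# hypothetical examples.
import re

def scrub(row: dict) -> dict:
    # Normalize stray whitespace before validating.
    return {k: v.strip() if isinstance(v, str) else v for k, v in row.items()}

def validate(row: dict) -> list:
    errors = []
    if not row.get("patient_id"):
        errors.append("missing patient_id")
    if not re.fullmatch(r"\d{8}", row.get("birth_date", "")):
        errors.append("birth_date is not YYYYMMDD")
    return errors

rows = [
    {"patient_id": " 12345 ", "birth_date": "19800102"},
    {"patient_id": "", "birth_date": "Jan 2, 1980"},
]
clean, rejected = [], []
for r in rows:
    r = scrub(r)
    errs = validate(r)
    (clean if not errs else rejected).append((r, errs))
```

In practice the rejected pile is just as valuable as the clean one: it’s the feedback that tells you where upstream sloppiness is coming from.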
Of course, it all needs to be delivered in a user-friendly way too. Just as the term “interoperability” is needlessly complex, the goal with all this techie stuff is to reduce complexity so data becomes understandable, not mysterious.