Why do we believe healthcare still needs data models despite the advances in FHIR (Fast Healthcare Interoperability Resources)?
In recent years the healthcare system has been making great strides towards interoperability: the ability of two or more systems or components to exchange information and to use the information that has been exchanged. The goal is ‘semantic interoperability’, where computer systems can exchange data with unambiguous, shared meaning. The benefits of this are significant: data can be fed directly into clinical workflows, so it is more likely to be seen and acted on, and clinical decision support tools can use data aggregated from different sources.
An enabler of the recent advances in interoperability has been the accelerating maturity and adoption of the HL7 FHIR standard, which defines how healthcare information can be exchanged between different computer systems as standardised resources or ‘packets’ of information, regardless of how it is stored in those systems. Beyond FHIR, semantic interoperability in healthcare also needs medical terminologies, ontologies and nomenclatures – such as SNOMED CT, LOINC, and ICD-10-CM. These are used to encode healthcare data elements, giving data a shared, consistent, and well-understood meaning.
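To make this concrete, here is a minimal sketch of a FHIR Observation resource carrying a standard terminology code. The values are illustrative only; LOINC 8480-6 is the code for systolic blood pressure.

```python
# A minimal FHIR R4 Observation, expressed as a Python dict, showing how a
# standard terminology (here LOINC) gives the data unambiguous, shared meaning.
# Values are illustrative only.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8480-6",                    # LOINC: systolic blood pressure
            "display": "Systolic blood pressure",
        }]
    },
    "valueQuantity": {
        "value": 120,
        "unit": "mmHg",
        "system": "http://unitsofmeasure.org",   # UCUM units
        "code": "mm[Hg]",
    },
}

# Any receiving system can resolve the meaning from the coding system + code,
# independent of how the sending system stores the measurement internally.
coding = observation["code"]["coding"][0]
print(coding["system"], coding["code"])
```

The point is that the meaning travels with the data: the combination of coding system and code is unambiguous, no matter which vendor produced the record.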
This matters because there is enormous variation in the way information is collected, coded, and processed within different healthcare applications, with data stored in different formats and governed by different business rules. Enabling semantic healthcare information exchange therefore also requires building standardised terminologies natively into applications, or providing ‘cross-walking’ (terminology mapping) capabilities to map local system code sets to agreed, standardised code sets.
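As a sketch of what cross-walking involves: at its simplest it is a lookup from a local code set to a standardised one. The local codes below are hypothetical, and the SNOMED CT targets are illustrative examples, not a real site's mapping table.

```python
# A sketch of terminology 'cross-walking': mapping a local system's code set to
# a standardised terminology. Local codes are hypothetical; the SNOMED CT codes
# are illustrative examples only.
from typing import Optional

LOCAL_TO_SNOMED = {
    "HTN": "38341003",   # hypothetical local code -> SNOMED CT 'Hypertensive disorder'
    "DM2": "44054006",   # hypothetical local code -> SNOMED CT 'Type 2 diabetes mellitus'
}

def crosswalk(local_code: str) -> Optional[str]:
    """Return the standardised code for a local code, or None if unmapped."""
    return LOCAL_TO_SNOMED.get(local_code)

print(crosswalk("HTN"))   # mapped local code
print(crosswalk("XYZ"))   # unmapped local code -> None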
So, while these key foundations for healthcare interoperability are advancing and becoming more widely adopted, a key and often overlooked challenge remains: an organisation understanding its own data. There are numerous obstacles to semantic interoperability – data quality, data sharing agreements, consumer consent, clinical resistance, privacy, security, and so on – but let’s start with data modelling.
A well-structured data model for your organisation, independent (or agnostic) of any vendor’s data model, will help you understand the information your business relies on and will help define your business rules for interoperability, including how you will use the data you receive and what data you will share. It will help your organisation define the scope and boundaries of your different systems, your integration points, and thereby your service or contract requirements. It will also give your organisation a consistent and agreed definition of key data concepts, helping to ensure they are implemented and reported on consistently across systems and settings – so you can be confident you are comparing “apples with apples”. Finally, it gives your organisation confidence that it can meet privacy obligations and consumer expectations, reducing your potential risks around data breaches.
In healthcare organisations, data models are often derived from, and exist within, the structures of a contracted vendor’s database. This can make it hard for the organisation to extract and understand its own data requirements, and to develop and integrate them in a standardised way to build an ecosystem of high-quality, integrated information systems.
In larger healthcare organisations with multiple EMR (Electronic Medical Record) vendors (or multiple instances of the same EMR split geographically) and numerous other systems, the vendors’ physical data models have often grown and diverged organically over time, without clearly defined central data governance or controls, and often with limited documentation beyond that developed for a specific project. This can lead to databases that are less normalised than they could be, with poorly defined or duplicated data concepts and a plethora of data quality issues. Such an environment makes it hard for business, clinical and even technical stakeholders to understand the overall data model, and to introduce new data requirements efficiently without creating duplication. It creates challenges and extra effort for reporting, data and analytics activities, and adds workload for clinicians through additional data capture burden and duplicated processes. It also makes information exchange between settings much harder – for instance, where do you pull the data from, where should you store it, and what is the authoritative source?
Building your own data model will provide you with a governance mechanism and process to assess and manage the introduction of new data requirements, ensuring the re-use and normalisation of data as far as possible, and will support engagement with business and clinical users without reliance on your vendor or scarce technical teams. It has the added advantage that if you decide to replace a vendor product, the effort sunk into defining your organisation’s data model won’t be lost with it – there is often little appetite to extract a model from a vendor system, given the cost and complexity of doing so.
FHIR and clinical terminology alone will not define your organisation’s data model for you; they will only give you a consistent and well-understood format and structure for the subset of your data that you decide to exchange.
Your data model should be mapped to the functionality and data structures your vendor’s product provides, so you can be clear about how that product meets your business requirements. It should also be mapped to your common interoperability messaging model, so you can be confident your data exchange requirements are being met and that your exchanged data is of known quality.
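As a sketch, such a mapping can be as simple as a table tying each business data concept to its vendor storage location and its exchange representation. The concept names and vendor table/column names below are hypothetical; the FHIR element paths are real FHIR R4 elements used for illustration.

```python
# Hypothetical mapping of organisational data concepts to a vendor's physical
# schema and to the FHIR elements used for exchange. Vendor names are invented.
CONCEPT_MAP = [
    {"concept": "Patient date of birth",
     "vendor_location": "PAT_MASTER.DOB",        # hypothetical vendor table.column
     "fhir_element": "Patient.birthDate"},
    {"concept": "Systolic blood pressure",
     "vendor_location": "OBS_RESULTS.SYS_BP",    # hypothetical vendor table.column
     "fhir_element": "Observation.valueQuantity"},
]

# A simple completeness check: every concept should have both a storage location
# and an exchange mapping, so gaps in vendor coverage or in the messaging model
# are visible at a glance.
unmapped = [row["concept"] for row in CONCEPT_MAP
            if not row["vendor_location"] or not row["fhir_element"]]
print(unmapped)
```

In practice this lives in a data dictionary or metadata registry rather than code, but even a simple table like this makes gaps and duplication visible to business and clinical stakeholders.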
Building this data model doesn’t have to start from scratch. Existing open data models and specifications such as openEHR provide a good foundation for many health information concepts, and resources like the AIHW’s Metadata Online Registry (METeOR) provide nationally endorsed data definitions (metadata standards) for Australia. Your organisation will also have existing data requirements and specifications that can form the basis of your data model, before taking it to your stakeholders for review and validation.
Not having visibility and control of your organisation’s data and the associated data model is an enormous risk that most executives aren’t willing to take, especially with the increasing focus on health information privacy and security. Accountability for this must be identified and allocated, and the establishment of an overarching data model ultimately becomes a small price to pay.
Doll Martin Associates are specialists with many years of experience in information management, data architecture and classifications, data modelling, data dictionaries and metadata management.
If you would like to find out more about our data and information services, please get in touch.