In the media, we read about “big data” and how it will affect us for better or for worse. On an international scale, Norway has come a fairly long way when it comes to digitalization, and the healthcare sector has been moving full speed ahead with this initiative via its modernization programs. Almost all sectors are in favor of the agenda, but the uncertainty as to how these efforts should be carried out is palpable. Companies that have already started have found that the work is more about modifying business processes than introducing new systems and technologies.
From confusion to insight
Where does big data come from and where can we find it in the healthcare sector? As physical objects are increasingly being linked to the internet, more and more data and information is piling up. This growing trend is called the Internet of Things (IoT): objects communicate with one another and consequently generate data traffic.
In healthcare, this often involves large objects, such as transportation vehicles (ambulances), buildings (intelligent facility management), equipment in hospital units (measurement devices, beds or fixtures) or home healthcare equipment (safety and assistive devices). But the real inundation will come in the form of inexpensive, mass-produced products that will appeal to the healthcare system due to their cost, quality and simplicity. A lot of this equipment will depend on the sector and the business, such as, for example, fashion (clothing, shoes, glasses) or even things like communication equipment and light bulbs.
The raw scale of it presents new challenges, but big data isn’t just about quantity. The topic has been extensively researched, and we now understand that big data, in its simplest form, can be described in three dimensions: volume, variety, and velocity. Changes in one or more of these three dimensions can create sudden shifts, new combinations, or new phenomena that test the limits of the analytical models themselves. Our calculations can therefore encounter unanticipated conditions and deliver erroneous results. When estimates are calculated in areas that lie outside the predictive boundaries of the model, the result can be a “false positive”. When it comes to treating patients and carrying out care plans, this is a very unfortunate outcome.
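To make the extrapolation risk concrete, here is a minimal sketch (illustrative only, not drawn from any real healthcare model): a linear model fitted to measurements from a nonlinear process looks acceptable inside its training range, but its error explodes once we ask it to predict outside the boundaries it was built on.

```python
# Illustrative sketch: a linear model fitted to a nonlinear process
# (here y = x^2) behaves tolerably inside its training range but
# delivers badly wrong answers outside it - the kind of out-of-bounds
# estimate the text warns about.

def fit_linear(xs, ys):
    """Ordinary least-squares fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

# Training data: measurements of y = x^2 for x = 0..10
xs = list(range(11))
ys = [x ** 2 for x in xs]
intercept, slope = fit_linear(xs, ys)  # fitted line: y = 10x - 15

def predict(x):
    return intercept + slope * x

# Inside the training range the error is modest (10 at x = 5) ...
error_inside = abs(predict(5) - 5 ** 2)
# ... but far outside it the prediction collapses (error 215 at x = 20)
error_outside = abs(predict(20) - 20 ** 2)
```

The point is not the arithmetic but the shape of the failure: nothing in the fitted model signals that x = 20 lies outside its competence, so the erroneous estimate arrives with the same apparent confidence as a valid one.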
The technology provides us with a realm of opportunities. The question is whether we can exploit these opportunities in order to provide patients with a better quality of life at a lower cost and a reduced risk. Or are we making ourselves more vulnerable? This depends a lot on the credibility we assign to vulnerability analyses and our confidence in them. When it comes to big data and vulnerability, we need to think outside the box and see the big picture. Being confident that the risk level of our own processes is acceptable and manageable won’t protect us if hackers cut the power supply at the hospital or 911 dispatch center and demand ransom.
Data reduction techniques and big data
Today, we hear that the amount of data worldwide doubles every 18 months and that 90% of the information out there today wasn’t around two years ago. A couple of decades ago there were 16 million internet users; today more than 3 billion of us are surfing the net. Textual information is exploding, a plethora of new user communities have popped up, and processed health data have become a valuable trade commodity for commercial vendors. Where does your healthcare information go when you check the “share with other users” box on an online healthcare site, what is it worth, and what do you actually get in return?
In order for healthcare personnel and patients/residents to properly benefit from big data, we need access to technologies and methods that can separate the wheat from the chaff. We must be able to remove the “noise” - irrelevant information and data of suspect quality or low credibility. We call these processes “data reduction techniques” here at the SAS Institute, and they will serve as a platform in the future for utilizing big data in a meaningful way. Advanced analysis involves data reduction, and big data needs reduction techniques. Through R&D and practical testing via real-world projects, we’ve identified four methods we feel should help solve big data challenges within the healthcare sector:
- Move calculations close to the data source. This means the analyses are carried out at the measurement site whenever possible, so we avoid transferring unnecessary information.
- Get rid of text. We can do this by using advanced textual analysis that converts text into numbers.
- Use in-memory processing. Here the collection, quality control, harmonization, and analysis of big data are carried out in high-speed systems - what technology experts call “in-memory” or “in-database” operations. Processed results are transferred over to business processes.
- Take advantage of machine learning. Here we replace static models with models based on machine learning, where the validity of existing models is verified on an ongoing basis while new dynamic models are built to handle modifications and quality requirements.
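As a hedged illustration of the second technique above, here is one common way to convert text into numbers: the “hashing trick”, which maps each token into a fixed-length count vector without ever storing a vocabulary. (This is a generic sketch, not SAS tooling; the function name and vector size are our own choices for the example.)

```python
import hashlib

def hash_vectorize(text, dim=16):
    """Map free text to a fixed-length vector of token counts
    via a stable hash (the "hashing trick"). Because no vocabulary
    is stored, the memory footprint stays constant no matter how
    large the text stream grows - a simple form of data reduction."""
    vec = [0] * dim
    for token in text.lower().split():
        # md5 gives a hash that is stable across runs and machines,
        # unlike Python's built-in hash() with randomized seeding
        bucket = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16) % dim
        vec[bucket] += 1
    return vec

# Hypothetical free-text notes, used only to exercise the function
note_a = "Patient reports mild chest pain after exercise"
note_b = "Patient reports mild chest pain after exercise"
note_c = "No complaints, routine follow-up scheduled"
```

Identical notes always yield identical vectors, so downstream numeric analysis (clustering, classification, anomaly detection) can run on compact fixed-size inputs instead of raw text. The trade-off is that unrelated tokens can occasionally collide in the same bucket, which is why production systems tune the vector dimension to the corpus.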
Big data - a wealth of opportunities for a paradigm shift?
The healthcare system needs to modernize in order to meet the workforce and cost challenges it will face leading up to 2025. The guidelines for this work have been described in Government Report 27 “Digital agenda for Norway - ICT for a simpler everyday life and increased productivity”. The digitalization programs that have been implemented are extremely important, but there’s even more potential if the healthcare sector can manage to integrate external big data by using innovative digital technologies and delegating the role of data vendor and operator more and more to patients/residents. This provides more flexibility for patient care and tasks can be organized in a more effective way to reduce costs.
In order to use big data effectively, there’s an extensive need for training on the part of both healthcare personnel and patients/residents. Once this is in place, there will be a wealth of opportunities to implement better, more effective work processes. We can reduce and simplify manual routines, with the reward being more time for comprehensive patient care.
Alongside this, systemic work is required to better understand the risks associated with big data and introduce appropriate security mechanisms and initiatives to safeguard patient safety and treatment.
Do you have a data strategy for utilizing valuable data?
Read more here: The 5 Essential Components of a Data Strategy
Big data in practice
In April 2009, we had the first outbreak of swine flu in 90 years. The epicenter of the outbreak was Veracruz, Mexico. A mutant version of the H1N1 virus quickly infected people in the local environment, including tourists. Due to the large amount of tourist traffic, the virus spread first to other parts of Mexico and then on to other countries and continents. The virus spread like wildfire.
It wasn’t until August 2010 that WHO felt comfortable concluding their battle against the epidemic. By then, the official mortality estimate had already climbed to 285,000-560,000. The issue that continues to be discussed is how many of those lives could have been saved if the first case in Veracruz had been recognized as “patient zero”? Could subsequent outbreaks have been predicted ahead of time if the health authorities had been able to connect the dots on new outbreak sites by using big data analysis?
Today, we recognize that the information flow during the epidemic was chaotic, and that it took the health authorities a long time to improve the quality of the reported data enough to put adequate measures in place.
Visit our site to read more about data-driven management in the healthcare sector.