About a decade ago, the healthcare industry recognized, on a global basis, that the current state of healthcare delivery had to change. Why? Because the world’s aging population was growing dramatically and required support for chronic illnesses, the number of healthcare providers was not keeping pace with demand, and the cost of healthcare was skyrocketing. Nations around the world realized that immediate action was needed.
In turn, these nations began to study the problem to formulate a solution. Overwhelmingly, the studies pointed to the fact that technology would be at the core of the solution. At that time, another significant market trend drove many to this conclusion: the “smart phone.” Smart phones were an early success story in the broader trend of “ubiquitous computing” or “pervasive computing,” which has since evolved under other buzzwords like “edge computing” and “fog computing,” or, in its more current incarnation, the “Internet of Things” (IoT) and, specifically in healthcare, the “medical IoT.” One of the main reasons for the rampant success of smart phones was that they put the capability to communicate via voice and video at everyone’s fingertips, provided a computing platform for acquiring sensor data for the local user, and, equally importantly, provided a means to share that data around the world via the Internet.
Many countries launched initiatives to take advantage of this promising technological solution. Australia had the eHealth Transition initiative, the UK had the NHS National Programme for IT (NPfIT), Japan adopted these technologies to provide displaced citizens with healthcare during its recovery from the tsunami, and, like many other countries, the U.S. made this a core aspect of national economic planning through efforts like the American Recovery and Reinvestment Act (ARRA).
The rise of cybersecurity as a concern
Each of these initiatives involved some level of risk management vetting. For example, in the U.S., the FDA Safety and Innovation Act (FDASIA) was invoked to convene a working group of experts tasked with vetting the risks associated with balancing the regulatory approaches of the Food and Drug Administration (FDA), the Office of the National Coordinator (ONC), and the Federal Communications Commission (FCC). For the U.S. market, this group weighed the benefits of healthcare innovation against the potential risks, and similar analyses occurred in many geographies around the world. While all acknowledged that there was some level of risk, none were able to predict the extent to which cybersecurity would soon become a major concern for healthcare.
“Consumerization of medical”
One of the underlying factors that prevented this early recognition was that the adoption of consumer technologies like wearable devices (e.g., smart watches and augmented reality glasses) was driving a phenomenon that some called the “consumerization of medical.” This meant that the fundamental development lifecycle of medical devices needed to change. The traditional approaches used in medical device development for coupling “top down” risk analysis techniques like Fault Tree Analysis (FTA) with “bottom up” techniques like Failure Mode and Effects Analysis (FMEA) weren’t always part of the new wave of rapid development and deployment lifecycle models. Couple that with the realization that the Internet was not foundationally architected for security, and place it in the context of geopolitical unrest involving major terrorist organizations and shifts in political power, and we get a clearer understanding of why cybersecurity became a major issue for healthcare.
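For readers unfamiliar with the “bottom up” technique mentioned above, the sketch below illustrates the classic FMEA risk priority number (severity × occurrence × detection) used to rank failure modes. The failure modes, scoring scales, and values shown are hypothetical assumptions for illustration only, not drawn from any particular device analysis or manufacturer's process.

```python
# Minimal sketch of "bottom up" FMEA scoring.
# All failure modes and scores below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class FailureMode:
    description: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (certain to detect) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: severity x occurrence x detection
        return self.severity * self.occurrence * self.detection


modes = [
    FailureMode("Sensor reading dropped over wireless link", 6, 4, 3),
    FailureMode("Unauthenticated firmware update accepted", 9, 2, 7),
    FailureMode("Dose display renders stale value", 8, 3, 5),
]

# Rank failure modes so the highest-risk items are mitigated first.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {fm.rpn:4d}  {fm.description}")
```

In practice, a team would iterate on such a table as mitigations are added, re-scoring occurrence and detection to confirm that risk has actually been reduced.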
Evolving responses to cyber challenges
So, how did this evolve?
Many of the very same organizations that helped drive the rapid technological disruption needed to reduce costs and improve patient access and outcomes began to put measures in place to better protect the critical national infrastructure for the delivery of healthcare. Groups like the U.S. Department of Health and Human Services’ Healthcare Industry Cybersecurity Task Force were assembled to determine what needed to be done to improve the cybersecurity posture of the healthcare sector. The task force’s report collects recommendations similar to those beginning to emerge around the world.
In addition, domestic and international consensus standards began to emerge to support appropriately constrained product design to protect the critical infrastructure of the healthcare industry. Examples include UL 2900-1 (Software Cybersecurity for Network-Connectable Products) and UL 2900-2-1 (Part 2-1: Particular Requirements for Network Connectable Components of Healthcare and Wellness Systems), along with AAMI TIR57, all of which are recognized by regulators in the U.S., China, and other geographies. Moving beyond cybersecurity, newly emerging standards also help demonstrate to regulators and others that interoperable systems are both safe and secure, such as AAMI/UL 2800, the Standard for Medical Device Interoperability.
Aligning business processes to boost cybersecurity
Moving forward, one of the most significant steps that can be taken to improve the cybersecurity posture and the state of device interoperability in the healthcare sector is better technical and business process alignment between Healthcare Delivery Organizations (HDOs) and medical device manufacturers. That alignment comes through improved vetting during procurement, using standards that help address security risks, particularly where patient safety is involved.