How to Ward Off Cyber Attacks on Medical Products

Security controls, i.e. measures to increase the security of a product, are defined in regulatory documents.
Within the US administration – e.g. the Department of Defense, the Department of Homeland Security, the Department of Veterans Affairs, the Food and Drug Administration, and the National Electrical Manufacturers Association – documents have been developed that define measures to increase the security of a product. Examples are the NEMA standard HN 1–2013 “Manufacturer Disclosure Statement for Medical Device Security” (MDS2), the NIST standard SP 800–53A “Assessing Security and Privacy Controls in Federal Information Systems and Organizations: Building Effective Assessment Plans”, and NIST SP 800–53 “Security and Privacy Controls for Federal Information Systems and Organizations”.
From Europe, the ETSI TR 103 305–1 V2.1.1 “Critical Security Controls for Effective Cyber Defence” is worth mentioning. These publications are supplemented by non-binding documents such as the “Common Weakness Enumeration”, the “Common Criteria”, the “CIS Critical Security Controls” and many more.
Typical requirements stated in these documents are:
- Is the product capable of encrypting patients’ data?
- How is it ensured that such data cannot be manipulated in transit?
- Does the operating system run on the latest patch level, how is it installed, and how do you ensure that unwanted third-party software will not be installed on the product?
- Off-the-shelf software is on the latest patch level.
- Minimal administrative rights.
- Only the services, users, shares and ports that are actually needed are installed.
- No booting from removable storage devices.
- Limitation of network traffic to trusted IPs.
- Two-factor authentication for administrative users.
- An audit trail is established that can be archived and digitally signed.
- Personal data on USB data carriers has to be encrypted (a minimal sketch follows this list).
- Training of developers.
- Identification of duly authorized representatives for Incident Response Management.
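To make the encryption requirement from the list above a little more tangible, here is a minimal sketch of encrypting a patient record before it is written to a removable medium. It assumes the third-party Python package `cryptography`; the function name `encrypt_patient_record` and the inline record are purely illustrative, and real key management is deliberately omitted.

```python
# Minimal sketch: encrypting patient data before export, e.g. to a USB carrier.
# Assumes the third-party "cryptography" package; in a real product the key
# would come from the device's secure key store, not be generated ad hoc.
from cryptography.fernet import Fernet

def encrypt_patient_record(record: bytes, key: bytes) -> bytes:
    """Return the record as an authenticated, encrypted token."""
    return Fernet(key).encrypt(record)

key = Fernet.generate_key()  # illustrative only; load from a key store in practice
token = encrypt_patient_record(b'{"patient_id": "0815", "finding": "..."}', key)
# Only the encrypted token is written to the removable medium.
```

Authenticated encryption of this kind also helps with the integrity requirement above: a manipulated token will simply fail to decrypt.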
All of these documents have in common that lists of security aspects can be derived from them. Such lists are developed during the design phase of a product and have to be reviewed after every significant modification. These modifications can be internal (the product itself is modified) or external (the software in use receives a security-relevant patch). The review of security requirements does not end with the release, but extends over the entire life cycle of the product.
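To make the idea of such a living checklist more concrete, the following sketch shows one possible way to record a security aspect together with its review trigger. The names `SecurityAspect` and `needs_review` are assumptions made for illustration and are not taken from any of the standards cited above.

```python
# Illustrative sketch of a security-checklist entry that is re-reviewed
# after internal modifications or external events such as a vendor patch.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SecurityAspect:
    identifier: str            # e.g. a control number such as "AC-2" from SP 800-53
    description: str
    last_reviewed: date
    open_findings: list = field(default_factory=list)

    def needs_review(self, last_modification: date) -> bool:
        # A review is due whenever the product or its third-party software
        # changed after the last documented review.
        return last_modification > self.last_reviewed
```

Whether the trigger is an internal change or an external patch day, the decision stays the same: compare the date of the last modification with the date of the last documented review.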
The challenge is to define, for every single element on the list, its value and the consequence of that requirement for the product. If a network connection cannot be established, that is annoying for the service operator and can hopefully be worked around with removable storage devices, while the same outage will be harmful to a web application.
Secondly, the probability of such an occurrence has to be defined. Here the spying on passwords through the Meltdown/Spectre vulnerability serves as an example: through this gap, an attacker attempts to read the memory of another process or of the operating system. In principle, the probability is real whenever the product is in use.
Still, the preconditions for successfully carrying out such an attack are demanding. Generally, a normal user should not be able to open a command shell at all, and the system should be protected by an application whitelist. So the probability of a successful cyberattack is smaller than it appears at first sight.
As a third step, it has to be evaluated how much effort is needed to close the security gap. Meltdown/Spectre can only be closed externally, i.e. by patches from the operating-system and CPU vendors, but measures to impede the attack are recommended and practicable.
The risk and threat analysis consists of at least these three steps. Together they form a matrix of security characteristics for the product. For larger products, tool support is recommended. But that’s another story …
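As a rough illustration of such a matrix, the three steps can be reduced to a single score per security aspect. The 1–5 scales, the weighting and the example entries below are assumptions made for the sake of the sketch, not values taken from any of the standards above.

```python
# Toy risk matrix: combine severity of the consequence, probability of
# occurrence and the effort needed to close the gap into a single priority.
def risk_priority(severity: int, probability: int, mitigation_effort: int) -> int:
    """Higher values mean the aspect should be addressed first."""
    return severity * probability - mitigation_effort

aspects = {
    "unencrypted USB export": risk_priority(severity=5, probability=3, mitigation_effort=2),
    "Meltdown/Spectre side channel": risk_priority(severity=4, probability=2, mitigation_effort=4),
}
for name, score in sorted(aspects.items(), key=lambda item: item[1], reverse=True):
    print(f"{score:>3}  {name}")
```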
The steps described can only guarantee passive security. They apply to general security aspects, to lists derived from the documents mentioned above, or to previously identified security-relevant events such as software vendors’ patch days. Errors within one’s own product are not covered this way. Active security can only be achieved through product-specific tests: static code analysis, fuzzing and penetration tests. Again, another story to be told …
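As a very small taste of what such product-specific testing can look like, the sketch below fuzzes an input-handling routine with random byte strings and records every unhandled exception. `parse_hl7_message` is a hypothetical placeholder for whatever interface the product actually exposes; real fuzzing tools are considerably smarter about generating inputs.

```python
# Minimal fuzzing sketch: feed random byte strings into an input-handling
# routine and record every crash for later analysis.
import os
import random

def parse_hl7_message(data: bytes) -> None:
    ...  # placeholder for the product's real parsing logic

def fuzz(iterations: int = 10_000) -> list:
    crashes = []
    for _ in range(iterations):
        payload = os.urandom(random.randint(1, 4096))
        try:
            parse_hl7_message(payload)
        except Exception as exc:  # any unhandled exception is a finding
            crashes.append((payload, repr(exc)))
    return crashes
```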
One final point: the OWASP Testing Guide 4 already noted that while the time span between the discovery of a vulnerability and its patch stays the same, the time until active exploitation is shrinking. Consequently, there is a growing need to execute the risk-analysis steps described above and to apply security testing to the product continuously.
If you would like to learn more about this topic, please do not hesitate to contact me. I am looking forward to talking with you.