How will pharmaceutical companies and other stakeholders protect sensitive patient health data from privacy and cybersecurity risks?
Aligning expectations
Data usage is driving the future of pharma. The ability to collect, analyze, and use data in new and innovative ways is increasingly essential for developing new treatments, improving patient outcomes, and reducing costs. Moreover, it is nearly impossible to talk about data today without recognizing the growing role of AI. The use of AI-enabled systems that collect, store, and/or process sensitive personal data, particularly health information, raises important privacy and cybersecurity concerns. In addition, the use of health data beyond traditional clinical development opens the door to diverse health data projects involving a variety of partner and vendor relationships.
Managing data risks when pharma meets tech
To manage risk, pharmaceutical companies need to ensure that appropriate data governance processes are in place to secure the data and protect patient privacy. This is true whether or not AI-enabled systems are involved, but where they are, risks can be heightened and can present in novel ways. Proper data governance includes, at a minimum, internal training on responsible management of health data; robust, up-to-date policies; and appropriate guardrails for external relationships where data may need to be shared with, for example, a third-party app developer cooperating in developing a disease care model.
Compliance with health privacy laws can also raise significant cross-border coordination challenges. Companies must map their data flows: who is doing what, where the data comes from, and how it is being used. It is generally best practice to evaluate these flows up front (particularly whether transfers will remain within a single jurisdiction or cross borders) rather than retrofitting compliance later. On cross-border transfers, the EU General Data Protection Regulation (GDPR) adds a further layer of complexity for sensitive health data, including a review of local implementing laws for health data hosting. Moreover, "secondary use" of data, such as for research, innovation, policy making, regulatory purposes, and patient safety, is of increasing concern. This is especially worth assessing in the context of the (proposed) European Health Data Space (EHDS), as we have also recently described here. Finally, when AI is involved, further compliance and governance layers apply; we consider AI more deeply in the AI Governance chapter.
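The data-flow mapping described above can be tracked in a simple structured inventory. The following is a minimal illustrative sketch only, not legal advice or a reference to any particular tool; the `DataFlow` class and all field and function names are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class DataFlow:
    """One entry in a data-flow inventory (all fields are illustrative)."""
    source_system: str                     # e.g., "clinical trial EDC"
    recipient: str                         # e.g., "third-party analytics vendor"
    purpose: str                           # e.g., "secondary use: research"
    data_categories: list[str]             # e.g., ["health data", "identifiers"]
    origin_country: str                    # where the data is collected
    destination_country: str               # where the data is hosted/processed
    transfer_mechanism: str | None = None  # e.g., "SCCs", "adequacy decision"

    def is_cross_border(self) -> bool:
        """True if the flow leaves the jurisdiction where data was collected."""
        return self.origin_country != self.destination_country


def flows_needing_review(flows: list[DataFlow]) -> list[DataFlow]:
    """Flag cross-border flows with no documented transfer mechanism."""
    return [f for f in flows if f.is_cross_border() and f.transfer_mechanism is None]


# Example: a patient-app-to-US-cloud flow is flagged for review because it
# crosses borders and no transfer mechanism has been recorded yet.
app_flow = DataFlow(
    source_system="patient companion app",
    recipient="cloud hosting provider",
    purpose="disease care model development",
    data_categories=["health data", "device identifiers"],
    origin_country="DE",
    destination_country="US",
)
print([f.recipient for f in flows_needing_review([app_flow])])
```

Recording each flow this way makes the up-front evaluation concrete: cross-border transfers without a documented mechanism surface automatically, rather than being discovered during a later retrofit.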
Companies should consider the following questions (see the sketch after this list for one way to track them):
- What data is needed to develop and deliver the product/service?
- What is the source of the data used to develop any algorithms, and can rights be secured to use that data for all of the uses the company may have in the future?
- What technology access is needed to gain insights from the data?
- What retention rights are needed in relation to that data, and will these needs change over time as intended uses evolve?
- Is there sufficient experience in-house to analyze the data or will it be necessary to work with others?
- How will data be used by partner(s) and with what limitations?
- What are the selection criteria for the data?
- Where will the data be stored and transferred (from where to where) and with what security precautions?
- Is additional diligence required due to the involvement of an AI-based system?
- If the resulting software application will be a regulated medical device, what data security features will be required, which standards will apply, what data will need to be generated to demonstrate safety, efficacy, and security, and what regulatory authorizations will be needed?
- What notice and patient consent will be required to ensure appropriate levels of data protection as patients engage with, and develop trust in, the evolving digital health ecosystem?
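These questions can also be operationalized as a simple diligence record so that open items are visible at a glance. The sketch below is illustrative only; the `DataDiligenceRecord` class and its field names are hypothetical, and `None` simply marks a question that has not yet been answered:

```python
from dataclasses import dataclass, fields


@dataclass
class DataDiligenceRecord:
    """One record per data project; None marks a still-open question."""
    data_needed: str | None = None             # data required for the product/service
    data_source_and_rights: str | None = None  # provenance; rights for future uses
    technology_access: str | None = None       # access needed to gain insights
    retention_rights: str | None = None        # retention needs as uses evolve
    in_house_expertise: bool | None = None     # can the data be analyzed internally?
    partner_use_limits: str | None = None      # partner uses and their limitations
    selection_criteria: str | None = None      # criteria for selecting the data
    storage_and_transfers: str | None = None   # locations, routes, security measures
    ai_diligence: str | None = None            # additional diligence for AI systems
    device_requirements: str | None = None     # standards, evidence, authorizations
    notice_and_consent: str | None = None      # patient notice and consent approach

    def open_items(self) -> list[str]:
        """Names of the diligence questions not yet answered."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]


# Example: a new project with only the first question answered
record = DataDiligenceRecord(data_needed="de-identified outcomes data")
print(record.open_items())  # every other question is still open
```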