The use of artificial intelligence (AI) in healthcare is advancing rapidly, and with it comes a growing debate about how its development should be managed. A study from the University of California, Berkeley argued that advances in AI had rendered the Health Insurance Portability and Accountability Act of 1996 (HIPAA) obsolete, and that was before the COVID-19 pandemic. While other emerging technologies may be just as exposed to privacy and security issues in healthcare, AI is vulnerable in a different way. The first set of concerns relates to the access, use, and control of patient data held in private hands. Some recent public-private partnerships to implement AI have resulted in poor privacy protection, prompting calls for greater systemic oversight of health research involving big data.
Adequate safeguards must be in place to protect patients' privacy and agency. Private data custodians may face conflicting objectives and should be structurally incentivized to protect the data and prevent its use for other purposes. Another set of concerns relates to the external risk of privacy breaches through AI-driven methods. The ability to de-identify or anonymize patients' health data may be compromised, or even nullified, by new algorithms that have successfully re-identified such data, further increasing the risk to patient data held in private custody.
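To make the re-identification concern concrete, the sketch below shows a simple record-linkage attack in which nominally anonymized records are joined to a public dataset on shared quasi-identifiers (ZIP code, birth year, sex). All data, column names, and the auxiliary "voter roll" are hypothetical illustrations, not drawn from any real system.

```python
# Minimal illustration of linkage-based re-identification risk:
# an "anonymized" clinical table is joined to an auxiliary public record
# on quasi-identifiers. All values here are toy, hypothetical data.
import pandas as pd

anonymized = pd.DataFrame({
    "zip": ["94110", "94110", "10001"],
    "birth_year": [1985, 1985, 1972],
    "sex": ["F", "M", "F"],
    "diagnosis": ["asthma", "diabetes", "hypertension"],
})

public_voter_roll = pd.DataFrame({
    "name": ["A. Rivera", "B. Chen"],
    "zip": ["94110", "10001"],
    "birth_year": [1985, 1972],
    "sex": ["F", "F"],
})

# An inner join on the quasi-identifiers re-attaches names to diagnoses
# wherever the combination of attributes is unique enough in both tables.
linked = anonymized.merge(public_voter_roll, on=["zip", "birth_year", "sex"])
print(linked[["name", "diagnosis"]])
```

No machine learning is required for this basic attack; AI-driven methods compound the risk by inferring or constructing the linking attributes when they are not directly shared between datasets.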
AI-based products pose additional privacy challenges, especially when anonymized data are used to address potential bias. As more data are added to AI systems, the possibility of creating identifiable data also increases, particularly because the growing sophistication of AI systems has made it possible to establish links between data where no such links previously existed. As the number and variety of data items increase, it is important to continuously assess the risk that AI systems will generate identifiable patient data from data that were not previously identifiable. The benefits of AI in healthcare can far outweigh the security and privacy risks, but healthcare organizations must continue to account for these risks when developing cybersecurity programs and ensuring compliance with HIPAA's privacy requirements. The enormous volume of data, the ability to re-identify previously anonymized data, and the challenge of navigating the regulatory landscape make AI a distinct risk to healthcare security and privacy. Working together, regulatory agencies could fill the gaps and help implement safeguards that ensure AI technologies take security and privacy into account, especially when used in healthcare.
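One way to operationalize the continuous risk assessment described above is a k-anonymity check, which measures the smallest group of records sharing the same combination of quasi-identifiers; as more data items are linked into a record, k tends to shrink toward 1 (a uniquely identifiable patient). The sketch below is a minimal, assumption-laden illustration: the column names and values are invented, and k-anonymity is only one of several possible risk metrics.

```python
# Hedged sketch: estimating re-identification risk with a k-anonymity check
# as additional attributes (data items) accumulate. Data and column names
# are illustrative assumptions, not taken from any real dataset.
import pandas as pd

records = pd.DataFrame({
    "zip": ["94110", "94110", "94110", "10001", "10001"],
    "birth_year": [1985, 1985, 1985, 1972, 1972],
    "sex": ["F", "F", "M", "F", "M"],
    "device_model": ["X1", "X2", "X1", "X1", "X1"],
})

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Smallest group size sharing identical quasi-identifier values;
    k == 1 means at least one record is uniquely identifiable."""
    return int(df.groupby(quasi_identifiers).size().min())

# Risk grows as more data items are combined in the same record.
for qi in (["zip"], ["zip", "birth_year", "sex"],
           ["zip", "birth_year", "sex", "device_model"]):
    print(qi, "-> k =", k_anonymity(records, qi))
```

Re-running such a check whenever new attributes or data sources are added to an AI system is one concrete way an organization could monitor whether previously non-identifiable data are becoming identifiable.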
Recent research has also shown that AI can close gaps and mitigate risks in healthcare cybersecurity. Legal, regulatory, and ethical frameworks are needed to develop standards for AI and autonomous robotic surgery, and concerns about patient safety and privacy remain a priority for the healthcare sector as AI is adopted. Relevant efforts include a white paper from the Canadian Association of Radiologists on ethical and legal issues related to artificial intelligence in radiology, as well as the proposed regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (the Artificial Intelligence Act) and amending certain Union legislative acts. In conclusion, advances in healthcare AI are occurring rapidly, and the debate about how to manage their development continues to grow. Healthcare organizations must weigh these security and privacy risks when developing cybersecurity programs and ensuring compliance with HIPAA's privacy requirements, and legal, regulatory, and ethical frameworks must be established to set standards for AI and autonomous robotic surgery.