By Bryan Gaffin, Executive Creative Director, FCB Health New York
President Biden’s Executive Order on artificial intelligence (AI) is a powerful attempt to catch up to the dangers AI poses to society and the world. Overall, the order aims to balance promoting innovation against the potential risks to our society. It’s a thoughtful, multi-faceted strategy to establish and uphold ethics, build trust and safety, and protect our democratic American principles. But how will this Executive Order affect the healthcare industry and healthcare marketing?
There are some strong positives within the provisions, specifically around transparency, AI testing requirements, and monitoring of critical AI systems to improve safety and accountability. Requiring disclosure of AI safety tests and results can only help avoid potential problems, hold AI creators accountable for unforeseen issues, and build public trust through transparency.
Investments in privacy and security are also addressed in the order, along with consumer data protection and legislation that can help protect us from unethical data collection and unwanted surveillance. There is also considerable attention to preventing bias and discrimination, an essential subject of conversation within the health industry.
The FDA can help the healthcare industry seize the moment
There is an opportunity to streamline, update, and clarify many of the rules and regulations governing the healthcare, pharmaceutical, and healthcare marketing industries. The FDA can take this moment to write and integrate AI rules for each industry so that every company can follow clear, universal guidelines. The healthcare marketing industry is already very comfortable with FDA regulation and would welcome any effort the FDA makes to be unambiguous, rather than leaving an ad hoc approach that varies from company to company and project to project.
The advantages of leading in the AI space outweigh the risks of waiting and watching. It would benefit the industry to find expert partners to identify the best use cases for AI at large, not just in healthcare.
Properly and ethically implemented, AI presents some world-changing use cases for healthcare
A best-case outcome for the Executive Order could be that new AI technology positively impacts American health outcomes. Product safety, data privacy, and surveillance protections can be enforced while the industry creates guiding principles to handle new, unforeseen technology as it is discovered, designed, and developed. Additionally, healthcare rules can vary at the state and local level, so there is a real opportunity to craft legislation that bridges federal, state, and local laws and brings all regulations in line with one another.
Reducing medical errors and improving diagnostic accuracy by implementing AI decision-support tools can save lives immediately within hospitals. AI simulations can optimize the way healthcare professionals manage chronic conditions and could prevent thousands of deaths annually. AI chatbots and virtual assistants that automate appointment booking and patient education can free up overworked staff.
For pharmaceutical companies, AI modeling and analysis will shorten drug development and clinical trial timelines. Biotech companies can use AI to tailor treatments based on DNA and family history, improving outcomes. These are just a few scenarios, but each will require AI to be applied ethically, in a way that ensures the technology harms no one.
The Executive Order does have some gaps and risks that warrant observation
The regulations are wise, but they are mainly voluntary, without actual enforcement mechanisms. Congressional legislation, however, could support compliance by private companies and full adoption across the industry. AI safety testing and evaluations for algorithmic bias rely on self-policing, which has had mixed results in the past. As AI becomes more capable, the adoption of ethical guidelines, whether by individual companies or by the healthcare and marketing industries as a whole, will protect public health.
The biggest concerns for AI usage are ensuring that the privacy of personal data is vigilantly protected and that no lives are put at risk when technology is used to make health decisions. Without full transparency and industry-standard use of Electronic Health Records systems, the risks of database insecurity are significant. There is a delicate line between harnessing data to improve health outcomes and allowing personal surveillance and data collection. The government can monitor how personal data is collected and how databases are structured and shared to prevent leaks, misuse, or personal harm.
Overall, the Executive Order is an impressive plan for AI policy
The Executive Order provides a promising foundation for promoting fast, ethical innovation across industries, including health. It is a visionary document, showing the kind of thought leadership and action possible when government works in good faith. It will serve the US government well as it works at home and globally with other governments at events like the recent AI Safety Summit in the UK. Proper implementation of the plan will build bridges between our government, corporations, research and academic organizations, and civil rights advocacy groups, moving America toward leadership in AI in a way that uplifts society and benefits public health worldwide.