For years public concern about technological risk has focused on the misuse of personal data. But as firms embed more and more artificial intelligence in products and processes, attention is shifting to the potential for bad or biased decisions by algorithms—particularly the complex, evolving kind that diagnose cancers, drive cars, or approve loans. Inevitably, many governments will feel regulation is essential to protect consumers from that risk.
This article explains the moves regulators are most likely to make and the three main challenges businesses need to consider as they adopt and integrate AI. The first is ensuring fairness. That requires evaluating the impact of AI outcomes on people’s lives, whether decisions are mechanical or subjective, and how equitably the AI operates across varying markets. The second is transparency. Regulators are very likely to require firms to explain how their software makes decisions, but that logic often isn’t easy to unwind. The third is figuring out how to manage algorithms that learn and adapt; while they may be more accurate, they can also evolve in a dangerous or discriminatory way.
Though AI offers businesses great value, it also increases their strategic risk. Companies need to take an active role in writing the rulebook for algorithms.
The full version of this article appeared in the September–October 2021 issue of Harvard Business Review.
By: François Candelon, Rodolphe Charme di Carlo, Midas De Bondt, and Theodoros Evgeniou