By Mark Nitzberg, executive director, Center for Human-Compatible Artificial Intelligence (CHAI)
Throughout the pandemic, many businesses have turned to artificial intelligence at a faster pace than ever. Six years of data show that “COVID-19 has accelerated AI’s shift from experimental to widely adopted as a key lever of sustainable competitive advantage and profitability across businesses and industries around the world,” IBM reports.
The shift is helping deliver revenues, especially among companies that are investing more heavily in the technology, a McKinsey analysis found. These companies are likely to make AI an even bigger part of their operations in the coming years, and competitors are likely to follow suit.
These developments come with a major concern, however. The algorithms that drive AI can increase productivity and efficiency, but they can also cause a host of problems. They can get people addicted to tools and social media platforms. They can lead people to extremist content and misinformation. They can get people to unwittingly give up great amounts of private information.
They can also amplify inequality and bias. After all, an AI system is only as good as the data it is given: if the data reflects bias, the results will be biased. For example, the Harvard Business Review reported on one algorithm that led to Asian families being charged higher prices than non-Asians, and on studies showing that high-paying job opportunities were advertised disproportionately to men.
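This bias-in, bias-out dynamic can be made concrete with a toy sketch (all numbers here are invented for illustration, not drawn from the studies above): a system that targets job ads by historical click rates will simply reproduce any skewed exposure baked into its training data.

```python
# Toy illustration: a naive targeting rule trained on biased historical
# data reproduces that bias. All records below are invented.

# Hypothetical ad records: (group, clicked_high_paying_ad)
# Group "A" was historically shown the ads far more often, so it
# accounts for most of the recorded clicks.
history = (
    [("A", 1)] * 80 + [("A", 0)] * 20 +  # group A: 100 impressions, 80 clicks
    [("B", 1)] * 10 + [("B", 0)] * 40    # group B: 50 impressions, 10 clicks
)

def click_rate(group):
    """Observed click rate per group in the (biased) training data."""
    shown = [clicked for g, clicked in history if g == group]
    return sum(shown) / len(shown)

# A rule that targets ads by observed click rate inherits the skewed
# exposure: group A appears four times more "interested" than group B,
# so it keeps receiving the high-paying ads.
print(click_rate("A"))  # 0.8
print(click_rate("B"))  # 0.2
```

The model never sees an explicit instruction to discriminate; the disparity comes entirely from the historical data it was given.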
The moral reasons to tackle such problems are clear. But what are the business incentives? Does adopting a code of strict ethics mean giving up profits and share values? Actually, no. For several reasons, businesses can benefit greatly from designing AI with ethics in mind.
Winning over consumers
In recent years, surveys have found that consumers are becoming more concerned about how the companies they support are using technology. One found that 97% of consumers worldwide expect ethical use of technology from brands.
Awareness of inequality and racial bias has increased over the past couple of years, especially amid protests for racial justice. Now, consumers and investors are more aware than ever that bias can show up in numerous ways, including in technology. As VentureBeat reports, "AI trust" is a business opportunity, offering competitive differentiation: "Promoting anti-bias measures can set a business apart by establishing greater customer confidence and trust in AI applications."
On the flip side, when word gets out that a business has ethical problems in its algorithms -- particularly around bias -- research shows that the business loses out. "Word of mouth about algorithmic bias among customers will hurt demand and sales and cut into profits," according to Professor Kalinda Ukanwa of USC's Marshall School of Business.
Attracting and retaining employees
Meanwhile, workers are increasingly focused on the ethics of the algorithms that power their companies’ offerings. In some cases, they’re ready to start internal battles if ethical concerns are not addressed.
"The individuals who work at technology companies are coming forward to air their concerns," the MITRE Corporation notes in a study. "They are representing and responding to the ethical declarations that their organizations proclaim."
The departure of a lead of Google’s AI ethics team didn’t just create great controversy; it also reverberated across the AI industry.
Staying ahead of legal changes
Europe has been creating new laws aimed at ensuring ethics in AI. As they're enacted, these laws can force some companies back to the drawing board, requiring them to make major alterations or change course in their future offerings.
New laws in the United States and other parts of the world are expected. And in April, the Federal Trade Commission noted that it is already in a position to enforce laws that concern numerous ethical issues in AI, offering guidelines -- and something of a warning -- to businesses. In addition, states are increasingly taking their own initiatives. To avoid having to enact major changes later on, businesses would do best to ensure the highest ethical standards now.
In 2019, the Business Roundtable, consisting of CEOs from more than 150 of the largest U.S. corporations, announced its new "Statement on the Purpose of a Corporation." In it, the CEOs vowed to place the concerns of all stakeholders -- customers, employees, suppliers, communities and shareholders -- at the core of their mission. Since then, analysts have kept an eye on how well these corporations are fulfilling that newfound purpose.
In the months and years ahead, algorithms will be one of the chief determinants of just how much the needs of all these stakeholders are being taken into consideration -- not just by the Roundtable, but by businesses in general. As AI becomes a centerpiece of daily operations, organizations will likely face increasing pressure and expectations to “do well by doing good.” Those that move earlier in this direction will be best equipped to face the challenges of the future.
Mark Nitzberg is executive director of the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley and co-author of The AI Generation: Shaping Our Global Future with Thinking Machines.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.