
Board Perspectives on Navigating the AI Revolution

Nasdaq Center for Board Excellence

The transformative potential of generative artificial intelligence (AI), like that of the significant technological innovations that came before it, is at once exciting and alarming. While its implementation will undoubtedly bring tremendous benefit to companies and stakeholders, it also comes with inherent risks. As organizations explore the potential uses of generative AI to enhance their business, customer experience and competitiveness, boards must answer a call to action: providing effective risk oversight that supports their management teams as they build out an AI strategy.

The Nasdaq Center for Board Excellence and international law firm Mayer Brown recently hosted an informative and thought-provoking in-person program in London to discuss some of the key issues that should be on board agendas with respect to AI and the board’s responsibility for effective oversight. Nasdaq’s Byron Loflin and James Beasley were joined by board members from Europe and the US in a two-part panel discussion: “Exploring AI’s Impact on Corporate Governance and Humanity” and “Business and Governance Lens on AI.” We share here some of the key insights from their discussions.

Exploring AI’s Impact on Corporate Governance and Humanity

Artificial intelligence is on a no-turning-back trajectory. Perhaps not since the invention of the steam engine or the light bulb – or, more recently, 1969, when the first human walked on the moon’s surface – has there been as overwhelming a collective response to a technology with the potential to transform our understanding of the world and the way we live, learn, work, create and communicate.

Over the years, the direct financial costs associated with AI have fallen, facilitating significant corporate investment in its development and adoption. While it feels as though business leaders and boards have been talking non-stop about AI for quite a while, considering how it may be used to grow and even radically transform companies, we are still early in the application of generative AI to mission-critical areas of business. To prepare adequately for the AI journey that lies ahead, boards – which are responsible for overseeing the ethics, governance, regulatory compliance and strategies for competitive advantage involved – must take the time now to learn, understand and wrestle with the potential promise and dangers of AI, perhaps beginning with what AI is and what it isn’t.

Seeing Past the Hype

According to several of the panelists, the attention generative AI commands in the boardroom may be more a reflection of boards’ and companies’ fear of missing out on the promise of AI than of a real understanding of what that promise is and why the interest is warranted. With the hype comes tremendous pressure to use the new technology.

Jesús Mantas, Biogen board member and Global Managing Partner at IBM, cautioned that as boards and companies consider when and how to use AI, they should bear in mind an important lesson from the past: what was once the best technology will, at some point in the future, likely be replaced by an even better technology. Take Napster, for example, which taught the world about listening to music in digital form; like the steam engine, Napster’s technology has been replaced by newer technology.

As Napster’s rise and fizzle suggests, we can (and should) expect that today’s AI “party trick” capabilities will seem rudimentary, even quaint, in a year or two. The first task for boards and management teams is to move past the party tricks and focus their attention instead on understanding three critical, pragmatic use cases of AI for business: (1) improving quality, (2) improving productivity and (3) improving service.

Panelists noted that among companies that have begun to implement AI, use is generally trending toward “seamless” integration of the technology “where it fits in so that you don’t even realize that AI is part of the process” – chatbots, for example. Boards, however, must be fully aware of everywhere AI is being used within the enterprise; their ability to provide effective oversight depends on it. For this reason, every board should request from its management team a full inventory of the company’s AI uses, including an explanation of the intent behind each use and the costs, benefits and risks for the enterprise.

Data and Content Integrity and Trust

Gen AI is possible because large language models (LLMs) have been trained on massive, publicly available data sets. Because the content and data used to train LLMs are likely to reflect the biases of their authors, what responsibility do AI creators and users, who may be relying on AI in decision making, have to address those biases? Additionally, can copyright protection be claimed by the authors and creators of the content and data used to train LLMs? Are current laws clear on copyright protection? Do they sufficiently protect copyrighted material while also allowing the development of AI technology?

With few clearly discernible controls in place, and with questions about data ownership and bias among the many yet to be settled, it is hard for businesses to know which direction to take. As several panelists noted, trust is an essential component of excellence in governance and business and, therefore, a significant concern with respect to AI.

How and where can companies and boards bolster trust in AI and business when so many questions and doubts persist? A first step is being clear on the ownership of the data and algorithms on which Gen AI is being trained. Any input biases (of authors, contributors and so on) will affect the systems and processes on the other side. Just as digital platforms like Uber and Lyft operate as proxies of trust for their customers, for users of AI the proxy of trust will be knowing what data was used to develop the AI-supported system.

On navigating the issues of trust and AI, panelists offered guidance and questions boards and management should be asking and discussing:

  • Do we know what AI systems we are using or planning to use?
  • Do we have the right people across function areas looking at how the organization is planning to use AI?
  • Do we trust the technology and our values as an organization to use AI ethically?
  • Do we know what the sources of risk are?
  • Do we have the right people looking at the potential risks?
  • Do we (the board) trust management and ourselves?
  • Are we asking the right questions?
  • Do we trust the government and regulators to put the proper guardrails and framework in place?

Panelist Emily Spratt, Ph.D., art historian, lecturer on AI ethics and diplomacy, art curator of the Global Forum on AI for Humanity in Paris, and former High Technology Advisor for the Frick Collection in New York City, has used computer vision technology, a type of AI, since 2011. In the art world, AI is facilitating better understanding of collections and aiding research by putting images together in novel ways.

According to Dr. Spratt, while the art world is a great place to experiment and work with various generative AI models to create “new content,” doing so raises ethics, trust and integrity issues, as well as questions about copyright and “how we think about the integrity of data sets and whether artists should be required to disclose what their secret sauce is.” An important question being debated is whether all data sets, including art, should be copyrighted to protect them from what might be considered unauthorized use.

Patent and copyright protection vis-à-vis AI is an evolving legal field, and the strength of copyright laws varies from country to country. To develop effective regulations, clear information is needed about which data sets are being used and how authorship is attributed to them. Dr. Spratt emphasized that “there is a lot of work to do. We’re waiting for directives from boards and companies as to what should follow, but we really need to see more action from regulators.”

In March 2023, the US Copyright Office issued guidance stating that “[i]f a work’s traditional elements of authorship [i.e., literary, artistic, or musical expression or elements of selection, arrangement, etc.] were produced by a machine, the work lacks human authorship and the Office will not register it.” So, while machine-generated output cannot be copyright protected, human-created code is fully protected. This raises the question: what minimum human input ought to be required to qualify code for copyright protection? With the expectation that in about two years 70% of existing code will have been created with some amount of AI, the copyright question is significant. Authorship claims must be considered, but for now there are no definitive answers. According to Spratt, “We need regulatory flexibility to experiment in this space.”

In closing, Mantas and Spratt underscored the importance – for companies, boards and employees – of understanding what AI is and what it is not. “AI is not a technology journey; AI is a leadership journey. Like any leadership journey, it starts with a personal realization that hands-on education is key and the understanding that AI is not here to take your job but to help you be more creative and effective.”

Join the Nasdaq Center for Board Excellence to receive exclusive corporate governance insights and shape the future of corporate governance.

The views and opinions expressed herein are the views and opinions of the authors and do not necessarily reflect those of Nasdaq, Inc.

