Future-ready businesses don’t just outperform the average; they significantly outshine it, enjoying 33% higher profitability and a staggering 200% greater market capitalisation. When it comes to the future, all roads now seem to point to AI, yet its implications for organisations, and how best to adopt AI tools, remain less well understood.

Generative AI tools are so valuable that no organisation can afford not to prepare for them. Here are the key considerations for boards in that process (the term “AI” in this article refers specifically to the new breed of Generative AI technologies, which place distinct demands on organisations compared with more traditional, pattern-recognition forms of AI):

1.    Your company doesn’t need to ‘adopt’ AI; it will be everywhere. AI is coming to the tools your company already uses; it is being embedded directly into everyday technology, and your company will have no choice but to plan for and manage its use one way or another.

2.    Banning AI is futile and counterproductive. While many high-profile companies have banned AI tools, it is almost guaranteed that some of your people are already using them to assist with their work. Banning the tools simply drives their use underground, where it is riskier and harder to oversee. Security and privacy risks are real, but they are not solved by bans.

3.    A better way is to set up AI ‘playgrounds’ and upskill your people. Instead of banning the tools, set up environments with cutting-edge capabilities where your people can explore and experiment with AI safely and securely. All levels of the organisation will benefit from knowing more about the new AI tools: what they can do, and what they cannot. Vendor-neutral education will be important for increasing the effectiveness of AI investments.

4.    Your company needs a strategy for AI. It needs a clear view of, and plan for, how it will deal with AI, and it must ensure the use of AI is purposeful. AI plans need to align with the overall business strategy.

5.    Your company also needs an AI policy or governance framework. Given that third-party AI systems and products are being deployed within your company, it will at a minimum need an AI usage policy, and ideally a governance framework, to manage the risks, capture the opportunities, and drive accountability for AI. Establishing core principles for its use is a solid starting point.

6.    AI systems work in unexpected ways. Today’s AI differs fundamentally from human intelligence. For example, as impressive as ChatGPT is, it has no understanding of fact versus fiction, and it often inadvertently demonstrates that “a lie is best hidden between two truths”. This inherent unreliability means AI requires careful management.

7.    The ethical risks associated with using AI don’t mean it shouldn’t or can’t be used. The field of ethics can be complex and abstruse, but it doesn’t have to be. While there are hundreds of ethical AI frameworks that can help create a roadmap, they are not a substitute for rigorous ethical deliberation. Boards must ensure ethical concerns (such as bias) are addressed and thoughtfully integrated into their company’s AI operations.

8.    A lack of AI-specific legislation doesn’t mean AI is unregulated. Australia has limited AI-specific legislation, but that doesn’t exempt the use of AI from existing legal obligations. Privacy, anti-discrimination, and intellectual property laws are just some of those that apply to AI systems. Using AI may expose your company to more legislation than initially meets the eye.

9.    AI is a moving target, while simultaneously being out of date. ChatGPT’s training data ends in September 2021, so it thinks Boris Johnson and Donald Trump are still in power, and yet AI is evolving every day. Even the definition of AI is evolving. Your company’s processes and policies will need to evolve continuously alongside it.

10. Your customers will automate dealing with you. The ease and near-zero cost of AI will lead customers to use it to handle their interactions with your customer support team (no more sitting in long call queues!), and your support agents will not be able to tell these AI intermediaries from human customers. This could quite conceivably lead to a surge in complaints.

ABOUT THE AUTHORS

This article has been co-authored by Andrea Durrant, Managing Partner of BoardsGlobal and specialist Board advisor, and Sami Mäkeläinen, Founder of Transition Level and Senior Advisor at the Institute for the Future. Andrea and Sami have long had a passion for innovation and have held senior roles at market-leading global technology companies.

Andrea is a leading Australian-based expert on corporate boards. Her work is underpinned by 20 years of experience and deep insight, including from her corporate career as a CEO. She thinks strategically about how boards can add value and is committed to helping shape high-performing boards.

Sami is a seasoned visionary with over 25 years of multidisciplinary expertise in technology, AI, and strategic foresight with leading global companies. He assists leaders with actionable insights and guidance that enable organisations to thrive in an increasingly automated, AI-driven future.


REFERENCES 

[1] Rohrbeck and Kum, “Corporate foresight and its impact on firm performance: A longitudinal analysis”. See https://www.sciencedirect.com/science/article/pii/S0040162517302287. A 2023 McKinsey Global Survey on digital strategy found that the most innovative companies deploy generative AI six times more than their competitors. See https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/companies-with-innovative-cultures-have-a-big-edge-with-generative-ai.

[2] “Generative AI – a game-changer society needs to be ready for”, World Economic Forum (weforum.org).

[3] There are numerous ongoing lawsuits targeting the creators of Generative AI systems.  Experts believe this could lead to these companies being forced to abandon their current model. Relying on a system that may suddenly disappear or degrade in performance is another aspect of AI that needs to be managed.  See https://fortune.com/2023/08/17/openai-new-york-times-lawsuit-illegal-scraping/

[4] Major organisations such as Apple, Bank of America, Amazon, Samsung, and Goldman Sachs have banned the use of ChatGPT. See https://www.businessinsider.com/companies-issued-bans-on-openai.

[5] 68% of workers say they are secretly using AI. See “AI Is Already Transforming the Workplace: Here Are 10 Examples”, Business Insider (businessinsider.com).

[6] About two-thirds of Australian businesses are using or planning to use AI this year. See https://www.uts.edu.au/human-technology-institute/news/report-launch-state-ai-governance-australia.

[7] Research shows that many employees believe they lack the skills to use gen AI safely and effectively, and expect their organisations to provide training. See https://www.computerworld.com/article/3705373/workers-are-embracing-ai-but-want-more-guidance-on-how-to-use-it.

[8] According to a white paper from the Berkman Klein Center for Internet and Society at Harvard, the OECD’s statement of AI principles is among the most balanced approaches to articulating ethical and rights-based principles for AI. 

[9] On bias in AI, see “Dissecting racial bias in an algorithm used to manage the health of populations”, Science.

[10] There are many publicised ethical risks associated with the use of AI, such as unintended bias and invasion of privacy. It is worth considering options for the ethical use of AI, such as establishing principles for its use, drawing on the expertise of ethicists, or establishing an Ethics Committee, as some of our clients have already done. See “Why You Need an AI Ethics Committee”, Harvard Business Review (hbr.org).

[11] Safe and responsible AI in Australia. See: https://consult.industry.gov.au/supporting-responsible-ai.  

[12] In May 2023, the OECD was in the process of revising its definition of AI systems.
