AI and the role of the company board

King IV™ explains that the board must set the ethical tone from the top and provide leadership and guidance on how the organisation should exploit AI technologies. File photo

Published Jul 25, 2023

In the digital age, almost every business has been influenced by technology. As the pace of technological development has increased, technology has ceased to be a mere business enabler and has become a source of a company's future opportunities.

In particular, artificial intelligence (AI) has developed at a remarkable pace over the last few years, with numerous technological innovations since the release of ChatGPT in November 2022. AI is becoming more powerful by the day and is gaining popularity in business processes and operations. As AI and its uses expand into new areas, companies may find it imperative to adopt AI to retain a competitive advantage.

AI is therefore often at the heart of automation, reducing the need for human monitoring and supervision. But AI no longer only performs routine tasks independently; it now also makes crucial decisions on its own, which exposes companies to a myriad of risks and strains traditional corporate governance systems and accountability.

King IV and the responsibility of the board

Principles 11 and 12 of the King IV Report on Corporate Governance require, respectively, the governance of risk and the governance of technology and information in line with the company's strategic objectives. King IV also places an imperative on company boards to ensure sound data and information governance.

The adoption of the increasingly popular generative AI technologies should, therefore, be led by the board as opposed to allowing employees to use AI experimentally and without oversight.

Regulatory compliance

The King IV principles also tie in with accountability under the Protection of Personal Information Act of 2013. A company should categorise its data and then list which categories are confidential, sensitive or personal. Employees may not upload or use data in these categories when accessing AI technologies provided as a third-party offering, such as AI as a Service (AIaaS) or Software as a Service (SaaS). Currently, the majority of AI technologies are owned by third parties, which creates a company risk: by using AI software, employees may disclose confidential information or trade secrets to unauthorised parties.
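
To illustrate, the Python sketch below shows one way such a rule could be enforced in software before any prompt reaches an external AI service. It is a minimal sketch under simplified assumptions; the data categories and the guard_upload function are hypothetical, not part of any vendor's API.

```python
# A minimal, illustrative guard that blocks restricted data categories
# from being sent to a third-party AI (AIaaS/SaaS) endpoint.
# All names here are hypothetical, not a real vendor API.
from enum import Enum

class Category(Enum):
    PUBLIC = "public"
    CONFIDENTIAL = "confidential"
    SENSITIVE = "sensitive"
    PERSONAL = "personal"

# Categories that, per company policy, may never leave the organisation.
BLOCKED = {Category.CONFIDENTIAL, Category.SENSITIVE, Category.PERSONAL}

def guard_upload(text: str, category: Category) -> str:
    """Return the text only if its category may be sent externally;
    otherwise raise, forcing human review before any upload."""
    if category in BLOCKED:
        raise PermissionError(
            f"{category.value} data may not be uploaded to third-party AI"
        )
    return text

# Example: a prompt tagged as personal data is stopped here, before it
# ever reaches the external service.
try:
    guard_upload("Client ID number: ...", Category.PERSONAL)
except PermissionError as err:
    print(err)
```

A guard of this kind is only as good as the data classification behind it, which is why King IV's emphasis on sound data and information governance comes first.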

An important governance task of the board is to ensure that, when AI is used, the storage and processing of personal and other company data comply with the applicable laws and regulations. The use of third-party AI and the storage of confidential information in third-party databases may increase the cybersecurity risk and the exposure of sensitive information.

When using third-party AI, carefully drafted service level agreements (SLAs) must be in place to ensure that the service providers adhere to South African laws and regulations. Boards must ensure that these SLAs do not include exclusions of liability or unduly restrictive liability provisions.

Intellectual property

The board should ensure that the intellectual property (IP) of the company is adequately protected. This governance responsibility is, however, complicated by the use of generative AI, since the ownership of the data and of the generated outputs is currently a somewhat "grey" area. At the moment, there are disputes about the ownership of the data (text and images) scraped from the web for the training of AI models.

Boards will therefore have to confirm that the company, and not the third party, owns the data, outputs and intellectual property generated with generative AI. Otherwise, the use of AI could result in claims from a number of sources, including clients, users, third parties and even regulators.

Bias and discrimination

Boards will have to be alert to any bias in the company's machine-learning data and to algorithmic discrimination by AI (e.g. in facial recognition, recruitment and decision-support systems). Bias could lead to the reinforcement of stereotypes, discrimination against certain groups of people, or exclusionary norms.

In South Africa, we face unique challenges stemming from historical and structural biases, as well as from the inaccuracy of collected data. Institutions such as banks and insurance companies often use machine-learning algorithms to recommend applications for approval. Even when algorithms are "deliberately blinded" to an applicant's race, gender or class, they may still carry the fingerprints of the programmer and of the data on which they were trained. AI decision-making can be impaired if models are trained on inaccurate or biased data.
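
To make this concrete, the Python sketch below illustrates one common bias check, demographic parity: comparing approval rates across groups in a decision system's output. The sample data is invented for illustration; a real review would use the institution's actual decision records and more than one fairness metric.

```python
# A minimal demographic-parity check: compare approval rates per group.
# The decision data below is invented purely for illustration.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs
    -> approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"demographic parity gap: {gap:.2f}")
```

A large gap does not by itself prove discrimination, but it is exactly the kind of signal a board should expect management to monitor and explain.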

The risk of probability

Despite the value and benefits of AI in business, it is important to remember that at the core of generative AI (e.g. ChatGPT) is a probabilistic model. When generating a response, a probabilistic model draws on examples from an extensive training dataset to infer "answers". By learning the probability distribution, patterns and structures within the data, these models acquire the ability to create fresh instances.

Once the model completes its training, it becomes proficient in generating new content by drawing from the learned distribution. Nonetheless, as this distribution is merely an approximation of the actual data distribution, the produced samples may not perfectly represent reality. Instead, they are probabilistic approximations that embody the fundamental statistical characteristics of the training data.

Due to their probabilistic and inference-based nature, these models often yield surprising results. If you were to train a text generation model and prompt it to produce a sentence repeatedly, you would likely receive slightly different sentences on each occasion while still maintaining an overall theme or style.
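
The toy Python sketch below illustrates this behaviour. A tiny, invented next-word distribution stands in for what a real model learns from a vast corpus; sampling from it repeatedly yields different but thematically similar sentences, just as described above.

```python
# Toy next-word model: each word maps to candidate next words with
# probabilities. A real generative model learns such a distribution
# (over far more than single words) from its training data.
import random

model = {
    "<start>": [("the", 0.7), ("a", 0.3)],
    "the":     [("board", 0.5), ("company", 0.3), ("risk", 0.2)],
    "a":       [("board", 0.4), ("risk", 0.6)],
    "board":   [("decides", 0.6), ("oversees", 0.4)],
    "company": [("decides", 0.3), ("oversees", 0.7)],
    "risk":    [("grows", 1.0)],
}

def generate(max_words=6):
    """Sample a sentence word by word from the learned distribution."""
    word, sentence = "<start>", []
    for _ in range(max_words):
        choices = model.get(word)
        if not choices:  # no continuation learned for this word
            break
        words, probs = zip(*choices)
        word = random.choices(words, weights=probs)[0]
        sentence.append(word)
    return " ".join(sentence)

# The same prompt, sampled three times, gives varying outputs:
for _ in range(3):
    print(generate())
```

The outputs are plausible because they follow the learned statistics, not because the model has checked them against reality, which is precisely the risk described in the paragraphs that follow.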

A company relying on generative AI also runs the risk that the data used by the AI is outdated or inaccurate, which could lead to incorrect responses. Given the disinformation and fake news notoriously spread via social media and tech platforms, the data scraped from the web may well be inaccurate.

A further risk is that employees may use generative AI to generate advice and provide it to clients without a thorough review, resulting in unqualified or incorrect advice being given by an unauthorised entity.

The board will therefore have to carefully consider the ethical implications of the AI used by the company.

Mitigation of risk

Although generative AI has seen major improvements, it can still make mistakes or "hallucinate". Since real-world environments are not deterministic, there will be business scenarios in which accuracy is critical and where the probabilistic nature of generative AI should be handled with care.

King IV™ explains that the board must set the ethical tone from the top and provide leadership and guidance on how the organisation should exploit AI technologies. Boards will therefore have to ensure that appropriate policies, guidelines and practices for the use of AI are implemented to mitigate the risks involved. More than ever before, AI and its ethical implications need to be a standing item on the board's agenda and an integral part of its oversight responsibility.

In the spirit of responsible corporate citizenship, the board is tasked with ensuring that the company's AI practices align with the Constitution, laws and established standards, as well as with the company's own policies, procedures, strategies, and Code of Conduct and Ethics. The board must also ensure that the Code of Ethics guides the responsible development of AI in products and services.

Unfortunately, the application of corporate governance to AI by South African companies and their boards has been quite slow. This raises significant risks for boards and organisations.

Professor Louis C H Fourie is an Extraordinary Professor in Information Systems at the University of the Western Cape.

BUSINESS REPORT