We recently had the pleasure of working for Dr. Tanya Kant, a Senior Lecturer in Media and Cultural Studies (Digital Media) at the University of Sussex. Our market research team explored the application of generative text AI tools in the advertising, PR, communications, and marketing industries. We sought to understand what ethical practices and frameworks are guiding the implementation of emerging tools like OpenAI’s ChatGPT. Is regulation or more education needed, and what role should employees and employers play?

It’s a big topic, and one that is front of mind for many businesses right now, sparking interest, curiosity, anxieties and concerns. Before you read on, ask yourself: do you think there should be ethical limitations on the use of generative AI, and why?

The risks inherent in using generative AI tools are no secret. Despite the enthusiasm for generative AI, there are thorny issues to resolve: the production of toxic content; the entrenchment of biases based on gender, sexuality, race or other characteristics; copyright infringement; and changes in the labour market that we are not prepared to handle. Other risks arise when models trained on inaccurate or misleading sources of information cause damage, expose brands, or even move stock markets.

With AI’s rapid development and adoption across fields, it has become increasingly important to establish ethical frameworks around its usage. As both public and private organisations have scrambled to define their AI strategies, debate has arisen about the need to limit or regulate the use of generative AI tools.

  • Many international organisations, such as UNESCO (2023), have issued statements about the rapid development of AI systems and the ethical concerns that need to be addressed.
  • Forbes (2023) reports that in Canada, more than 75 AI researchers and CEOs of AI startups signed a letter urging the Canadian federal government to pass the Artificial Intelligence and Data Act (AIDA). The Future of Life Institute published an open letter, signed by more than 1,000 technology leaders, in March 2023 calling for a six-month pause in the development of the most powerful generative AI systems.
  • In 2022, the White House Office of Science and Technology Policy (OSTP) published the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. The AI Bill of Rights outlines five principles to foster policies, practices, and automated systems that protect civil rights and promote democratic values.
  • To prompt informed discussion of the development and potential of generative AI tools and systems, the World Economic Forum held “Responsible AI Leadership: A Global Summit on Generative AI” (2023). It concluded with 30 action-oriented recommendations, emphasising open innovation and international collaboration as essential to the responsible use of generative AI tools and systems.
  • The EU AI Act aims to harmonise the rules on artificial intelligence and establish a global standard. It is the first ever attempt to enact horizontal regulation for AI, classifying AI systems under a “risk-based approach” that assigns different requirements and obligations according to the level of risk they pose.
  • In response to guidelines issued by other countries, the UK government released a white paper titled “A pro-innovation approach to AI regulation” (2023). The white paper sets out a principles-based framework that relies on existing legislative regimes, rather than introducing new legislation, so that regulation can adapt to AI trends, opportunities, and risks.

The principles set out by the UK government to govern AI regulation are: (a) safety, security and robustness of AI systems; (b) appropriate transparency and explainability of AI decision-making, to promote public trust; (c) fairness, so that AI systems comply with existing law and do not produce discriminatory or unfair outcomes; (d) accountability and governance, establishing effective oversight of AI systems; and (e) contestability and redress, giving affected third parties and regulators routes to challenge harmful AI outcomes or decisions.

There are many more examples of organisations, global institutions and governments moving towards the creation and implementation of policies, frameworks or guides, but we would love to hear from our community on this issue. What are the clearest risks you have encountered in using AI in your business? What do you believe is the ethical approach to those risks and challenges?

If you’d like to share your insights, want to read our research on “Ethical uses of generative text AI in advertising, PR, communications, and the marketing sector”, or are interested in learning more about your market and how AI may affect your business, do not hesitate to contact our research team.