Generative AI (GenAI) is a category of AI that can create new content, including video, audio, images and text. GenAI has the potential to change the way we approach content creation.

GenAI has received considerable attention lately. Take ChatGPT, for example. The AI chatbot has captivated the public’s imagination with clever answers, creative writing and helpful problem-solving – all driven by GenAI technologies. Google has announced the release of the Bard chatbot. Salesforce has teased an EinsteinGPT solution. Prisma Labs became a household name when its Lensa image editing app took over social media in late 2022. As recently as February, Runway (a co-creator of Stable Diffusion) announced Gen-1 – one of the first video-based generative AI tools. Some organizations even offer complete generative AI product suites. Moreover, the global market for GenAI is expected to reach US$110.8 billion by 2030!

So how does GenAI work, and what should organizations consider before putting it to use?

An introduction to GenAI

GenAI is built with a range of technical approaches, such as deep learning, reinforcement learning and transformer architectures. While the specifics vary, the models work in conceptually similar ways.

The most common approach uses deep neural networks: multiple layers of interconnected nodes that take in large quantities of data and identify patterns or structures within them. Training on millions of data points allows the model to generate new data similar in structure and content to the training data. These data sets are incredibly cumbersome to accumulate and require significant oversight to maintain data quality.
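
To make that idea concrete, here is a minimal sketch in Python (using PyTorch) of the learn-then-generate loop described above: a tiny network learns which character tends to follow which in a short training string, then samples new text with a similar structure. The data, architecture and hyperparameters are purely illustrative, not drawn from any production system.

```python
import torch
import torch.nn as nn

text = "the cat sat on the mat. the cat ate the rat. "  # toy "training data"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}  # character -> integer id
itos = {i: c for c, i in stoi.items()}      # integer id -> character

# Training pairs: each character is used to predict the next one.
xs = torch.tensor([stoi[c] for c in text[:-1]])
ys = torch.tensor([stoi[c] for c in text[1:]])

# A "deep" network in miniature: embedding -> hidden layer -> output scores.
model = nn.Sequential(
    nn.Embedding(len(chars), 16),
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, len(chars)),
)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Identify patterns in the data by minimizing next-character prediction error.
for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()
    opt.step()

# Generate: repeatedly sample a plausible next character from the model.
idx, out = stoi["t"], ["t"]
with torch.no_grad():
    for _ in range(60):
        probs = torch.softmax(model(torch.tensor([idx])), dim=-1)
        idx = torch.multinomial(probs, 1).item()
        out.append(itos[idx])
print("".join(out))  # new text with a structure similar to the training text
```

The full-scale version of this loop is what consumes the terabytes of training data discussed next.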

The quantity of data necessary is mind-boggling. McKinsey reports that OpenAI used approximately 45 terabytes of text data to train GPT-3. Data at that scale is also expensive to maintain: by some estimates, storage alone can cost more than $250,000 annually, not including processing costs.

The market's rapid growth has inspired a barrage of utopian and dystopian responses about the possibilities and risks of GenAI technologies. Dozens of authors have written about the incredible potential of GenAI to answer tax questions, provide translation services, diagnose Alzheimer’s, reduce health care spending and serve as a 24/7 communication tool.

At the same time, research has shown that users tend to over-trust automated systems. This automation bias amplifies the risks of GenAI tools, as individuals may inadvertently make decisions based on misinformation, fake content or dubious facts promoted by the algorithms. Many authors have explored how GenAI can perpetuate misinformation, empower malicious actors, breach privacy, infringe on intellectual property and threaten jobs.

While many articles have raised valid concerns, others have highlighted the genuine benefits of the technology. This conflicting coverage has created a need for more clarity about GenAI and whether organizations should embrace this technological advancement. We recently published a blog post that dives into the core values guiding development at SAS: human-centricity, inclusivity, accountability, transparency, robustness, and privacy and security. In it, readers learn how these values are reflected in our people, processes and products.

This blog post lays out a practical framework for organizations adopting GenAI tools.

So you want to use GenAI?

The novelty of GenAI offers a first-mover advantage to the most agile businesses. However, most innovation introduces uncertainty and risk to an organization. Organizations should weigh their use cases’ potential risks and rewards before investing capital in GenAI business improvements and divulging proprietary information to a new tool. The best way for a business to establish an effective strategy is to start with organizational values. At SAS, we established our principles to help us answer the questions of “can we” and “should we” adopt this new technology. We propose this model and the questions below as a starting point for the responsible use of GenAI. Some of the questions may not be relevant for every GenAI use case; in those situations, organizations should decide whether to adopt alternative principles.

Human-centricity: Putting people at the forefront

AI tools should never cause harm; they should promote human well-being, agency and equity. While most developers do not create GenAI with the intent of generating hateful content or enabling harassment, we should always be intentional about prioritizing human well-being. Consider:

  • Do the relevant employees understand how this GenAI tool can assist the organization?
  • Does the project align with the organization’s ethical principles?
  • Does this use case have positive intent for society?
  • Who may be harmed by the use of GenAI?
  • What is the impact on individuals and society over time?

Transparency: Understanding the reasons behind development

GenAI tools will undoubtedly change the global business landscape. Transparency ensures that we understand the reasons and methods for these changes. When using any GenAI tool, it is important to openly communicate the intended use, potential risks and decisions made. Consider:

  • Can the responses of the GenAI tool be interpreted and explained by human experts in the organization?
  • What are the legal, financial and reputational risks of GenAI-generated content?
  • Should the organization indicate when content was created by GenAI? (A minimal labeling sketch follows this list.)
  • Would it be clear to people if they were interacting with a GenAI system?
  • What testing satisfies expectations for audit standards [FAT-AI, FEAT, ISO, etc.]?
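
As one way to approach the disclosure question above, here is a minimal sketch of labeling GenAI-produced content before it is shared. The label format and the generated draft are hypothetical; the right disclosure depends on the organization’s policies and any applicable regulations.

```python
from datetime import datetime, timezone

def label_genai_content(content: str, model_name: str) -> str:
    """Prepend a human-readable disclosure to GenAI-produced text."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    disclosure = (
        f"[AI-generated content: produced with {model_name} on {stamp}; "
        "reviewed by a human before publication.]"
    )
    return f"{disclosure}\n\n{content}"

# Hypothetical output from a GenAI tool, labeled before it is shared.
draft = "Quarterly results improved across all regions."
print(label_genai_content(draft, model_name="example-llm-v1"))
```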

Robustness: Awareness of limitations and risks

Most tools (analog or digital) carry a warning to use them only as the designers intended so that they operate reliably and safely. Systems used beyond their intended purpose may cause unforeseen real-life harm. Many GenAI systems still have significant limitations that affect their accuracy. Since there is considerable room for improvement, all users should be cautious when using GenAI tools. Consider:

  • Was the GenAI sufficiently trained on data for the organization’s specific use case?
  • Has the creator of the GenAI system documented any limitations, and is the use case within those limitations?
  • Can solution results be reliably reproduced?
  • What guardrails are required to ensure safe operation? (A minimal sketch follows this list.)
  • How might the solution go awry, and what should be the response?
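
One way to operationalize the guardrail and reproducibility questions above is to wrap every model call in a validation layer. The sketch below assumes a hypothetical call_model() client and an illustrative deny-list; a real deployment would use its vendor’s actual API and far richer output checks.

```python
import random

def call_model(prompt: str, temperature: float, seed: int) -> str:
    """Hypothetical stand-in for a real GenAI API client."""
    random.seed(seed)  # a real client would pass seed/temperature to the API
    return f"Draft answer (temperature={temperature}) to: {prompt!r}"

BLOCKED_TERMS = {"guaranteed cure", "risk-free"}  # illustrative deny-list

def generate_safely(prompt: str) -> str:
    # Reproducibility: fixed temperature and seed make reruns comparable.
    text = call_model(prompt, temperature=0.0, seed=42)
    # Guardrails: reject empty or policy-violating output instead of shipping it.
    if not text.strip():
        raise ValueError("Empty model output; escalate to a human.")
    if any(term in text.lower() for term in BLOCKED_TERMS):
        raise ValueError("Output tripped the deny-list; route to human review.")
    return text

print(generate_safely("Summarize our product warranty policy."))
```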

Privacy and security: Keeping everything safe

Users and businesses may engage with GenAI under a false assumption of confidentiality and accidentally disclose inappropriate information about themselves or others. For example, doctors using ChatGPT to write clinical notes may inadvertently violate HIPAA, and developers using GitHub Copilot may accidentally contribute proprietary data to the model. Remember that any input into a GenAI tool may be retained permanently and is unlikely to remain private. One practical mitigation, sketched after the list below, is to scrub sensitive details before a prompt ever leaves the organization. Consider:

  • Is there a risk of sharing any private or sensitive information?
  • Is there a risk of sharing intellectual property or other information that should not be disclosed to the algorithm?
  • What legal and regulatory compliance measures apply?
  • How might cyber or adversarial attacks exploit this solution?
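
The sketch below illustrates that pre-submission scrubbing idea with a few regular expressions. The patterns are illustrative only; real PII/PHI detection requires far more thorough tooling (note, for instance, that this version misses the patient’s name entirely).

```python
import re

# Illustrative patterns only; each maps a likely identifier to a placeholder.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # US SSN-like
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),      # email address
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),  # US phone-like
]

def scrub(prompt: str) -> str:
    """Replace likely identifiers with placeholders before the prompt is sent."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

note = "Patient Jane Roe (jane.roe@example.com, 919-555-0100, SSN 123-45-6789)"
print(scrub(note))  # -> "Patient Jane Roe ([EMAIL], [PHONE], SSN [SSN])"
```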

Inclusivity: Recognizing diverse needs and perspectives

GenAI – just like humans – carries bias. If this bias is not identified and mitigated, we could exacerbate power imbalances and marginalize vulnerable communities. Responsible innovation requires being aware of potential bias and accounting for diverse needs, perspectives and experiences. Consider:

  • Does the solution perform differently for different groups? Why? (See the disaggregated check sketched after this list.)
  • Do people with similar characteristics experience similar outcomes? If not, why?
  • Are all the people for whom the solution is intended prioritized equally?
  • How might the training data impact the inclusivity of the GenAI response?
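
One concrete way to probe the first question above is a disaggregated evaluation: compute the same quality metric per group rather than a single overall average. The records and pass/fail labels below are fabricated for illustration.

```python
from collections import defaultdict

# Each record: (user's group, whether reviewers rated the GenAI response
# acceptable). Fabricated data; a real evaluation needs many more samples.
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, passes = defaultdict(int), defaultdict(int)
for group, acceptable in results:
    totals[group] += 1
    passes[group] += acceptable  # True counts as 1, False as 0

for group in sorted(totals):
    rate = passes[group] / totals[group]
    print(f"{group}: {rate:.0%} acceptable ({passes[group]}/{totals[group]})")
# A large gap between groups is a signal to revisit the training data and
# prompts before deploying more widely.
```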

Accountability: Prioritizing feedback

Organizations using and developing GenAI systems are responsible for identifying and mitigating adverse impacts of decisions based on AI recommendations. Consider:

  • How are outcomes monitored, and can they be overridden and corrected?
  • Can the end users and those impacted raise their concerns for remediation?
  • How can the organization ensure the tool does not promote misinformation or perpetuate harmful content?
  • Is the organization familiar enough with the use case to spot errors in the GenAI output?
  • When are humans involved in the decision-making process? Should that change?
  • What mechanisms exist to provide feedback on the results to improve the technology? (One logging sketch follows this list.)
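
As one possible feedback mechanism, the sketch below logs every GenAI response together with the user’s rating so outcomes can be monitored, audited and corrected over time. The log location and record fields are illustrative assumptions, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

FEEDBACK_LOG = "genai_feedback.jsonl"  # hypothetical audit log location

def record_feedback(prompt: str, response: str, rating: str, comment: str = "") -> None:
    """Append one auditable feedback record per interaction."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "rating": rating,      # e.g. "helpful", "incorrect", "harmful"
        "comment": comment,    # free-text remediation notes from the user
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_feedback(
    prompt="Summarize our returns policy.",
    response="Returns are accepted within 30 days...",
    rating="incorrect",
    comment="Policy is 14 days; response cites outdated text.",
)
```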

While generative AI can provide a competitive edge, organizations should weigh the potential risks and rewards before investing capital. That consideration should start with organizational values so an effective strategy can be established. We propose a model based on the principles of human-centricity, transparency, robustness, privacy and security, inclusivity and accountability as a starting point for the responsible use of generative AI. By implementing generative AI intentionally, organizations can mitigate adverse impacts and promote positive outcomes for individuals and society.

Read more stories on generative AI, including:

  • ChatGPT brings AI into popular culture
  • Exploring the origins of generative AI and natural language processing
  • Exploring the use of AI in education

Kristi Boyd and Allie DeLonay contributed to this article.

About Author

Kristi Boyd

Trustworthy AI Specialist

Kristi Boyd is the Trustworthy AI Specialist with SAS' Data Ethics Practice (DEP) and supports the Trustworthy AI strategy with a focus on the pre-sales, sales & consulting teams. She is passionate about responsible innovation and has an R&D background as a QA engineer and product manager. She is also a proud Duke alumna (go Blue Devils!).
