As the newest piece of the ever-evolving AI puzzle, generative artificial intelligence (AI) is set to transform many facets of the way we live and work. It can be both a game changer for business and a threat to humanity, making it exciting and frightening at the same time.
All technology comes with risk, responsibility, and the need to think through intended and unintended consequences. There are ethical concerns at each “level” of the AI landscape, and they require a more thoughtful approach to addressing issues of bias and ethics.
Generative AI brings a unique set of challenges and ethical considerations. As the use cases for this new technology grow, it helps to understand generative AI in the larger context of artificial intelligence, and where the potential fault lines exist at each level.
What is generative AI?
At the broadest level, AI refers to the ability of machines to perform tasks and make decisions that would normally require human intelligence. The following infographic outlines where generative AI fits into the broader AI ecosystem.
Across all types of AI, bias is a critical ethical concern. The quandary lies in the potential for consumers to see the predictions, judgments, and outputs generated by AI as objective or having scientific credibility. The vast scale of AI systems can compound issues relating to ethics or biases, because any interpretations—if taken at face value—can have an outsize impact.
As we drill down further into more specific, advanced types of AI, deciphering the pros and cons grows more complex.
Machine learning
Machine learning (ML) is the umbrella term for a subset of AI that involves the use of complex algorithms and techniques that allow systems to learn from data, identify patterns, improve performance, and make decisions without explicit instructions or programming. In ecommerce, ML is often used for data-crunching tasks like forecasting, predicting, or clustering shoppers into segments.
Supervised machine learning models rely on labeled training data and inputs, which require a degree of human oversight and are resource intensive.
Unsupervised machine learning models rely on raw, unlabeled training data, which accounts for the vast majority of data available in the world. It’s often used to identify patterns and trends in raw and unstructured datasets.
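The contrast between the two approaches can be sketched with a toy example. Everything here is hypothetical illustration, not a production pipeline: the shopper records, the 1-nearest-neighbour classifier, and the tiny two-cluster k-means loop are all stand-ins chosen for brevity. The supervised path needs human-supplied labels; the unsupervised path groups the same raw points with no labels at all.

```python
# Minimal sketch (hypothetical toy data): the same shopper records handled two ways.
# Supervised: labeled examples train a 1-nearest-neighbour classifier.
# Unsupervised: the raw, unlabeled points are grouped by a tiny k-means loop.
import math

shoppers = [(12, 1), (15, 2), (14, 1), (90, 9), (95, 8), (88, 10)]  # (spend, visits)

def dist(a, b):
    return math.dist(a, b)

# --- Supervised: labels ("casual"/"loyal") must be supplied by a human ---
labels = ["casual", "casual", "casual", "loyal", "loyal", "loyal"]

def predict(point):
    # 1-nearest neighbour: copy the label of the closest training example
    nearest = min(range(len(shoppers)), key=lambda i: dist(point, shoppers[i]))
    return labels[nearest]

# --- Unsupervised: no labels, just group similar points (k = 2 clusters) ---
def kmeans2(points, iters=10):
    c1, c2 = points[0], points[-1]  # naive initial centroids
    for _ in range(iters):
        g1 = [p for p in points if dist(p, c1) <= dist(p, c2)]
        g2 = [p for p in points if dist(p, c1) > dist(p, c2)]
        c1 = tuple(sum(x) / len(g1) for x in zip(*g1))  # recompute centroids
        c2 = tuple(sum(x) / len(g2) for x in zip(*g2))
    return g1, g2

print(predict((13, 2)))  # → casual
low_spenders, high_spenders = kmeans2(shoppers)
print(len(low_spenders), len(high_spenders))  # → 3 3
```

Note where the human effort sits: the supervised model is only as good as the labels someone provided, while the unsupervised model finds structure on its own, which is exactly why its groupings need scrutiny before being acted on.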
Ethics issues to watch out for: Particularly in the case of unsupervised ML and the large data sets used to train and run these models, bias and scale are an ongoing concern, along with additional ethical considerations around privacy and surveillance. In healthcare, for example, underrepresented data of minority groups has been shown to lead to lower-accuracy results in computer-aided diagnosis systems for Black patients than for white patients. Other examples include issues of bias in predictive policing, predictive medical care, and mortgage-approval algorithms.
Deep learning
A more sophisticated subset of ML, deep learning mimics the structure and processing power of the human brain. It uses artificial neural networks to identify complex patterns and reach conclusions without (much) human intervention.
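To make the "artificial neural network" idea concrete, here is the single building block that deep networks stack by the millions: a neuron that computes a weighted sum of its inputs and passes it through an activation function. The inputs, weights, and bias values below are arbitrary placeholders for illustration only.

```python
# Minimal sketch of one artificial neuron: weighted sum + sigmoid activation.
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, squashed into (0, 1) by a sigmoid activation
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# A deep network chains layers of these; with millions of learned weights,
# tracing why a particular output emerged becomes impractical by hand.
print(round(neuron([1.0, 0.5], [0.4, -0.2], 0.1), 3))  # → 0.599
```

One neuron is trivially interpretable; the opacity arrives only at scale, when layer upon layer of these units transform the data, which is the root of the "black box" concern raised with deep learning.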
Deep learning is being deployed in applications such as facial recognition, object recognition that makes autonomous vehicles possible, and even in tasks like cancer diagnoses based on medical imaging. But as powerful and groundbreaking as the technology is, it can also produce worrisome results, such as false negatives or positives in facial recognition, and, more alarmingly, false diagnoses in healthcare.
Unlike some forms of “explainable AI,” which allow us to understand and interpret the factors that lead to a particular output, deep learning can be a black box that requires a much bigger leap of faith. That is, due to the increased complexity of models and algorithms at this level of AI, deep learning produces results that are often much harder to explain: we see the output, but we don’t know how the algorithms got there.
Ethics issues to watch out for: With the massively large data sets and a labyrinth of neural networks that deep learning requires, inaccuracies or ethical concerns, including bias and false results, are even more difficult to root out. In a study published in Nature Machine Intelligence , for example, researchers at the University of Cambridge determined that DL models for diagnosing and predicting patient risk for COVID-19 using medical imaging were not yet fit for clinical use. In one cited case, a model was trained on data that included patients who were lying down while being scanned—making them more likely to be sick—resulting in the algorithm mistakenly assessing COVID risk based on the position of the patient during scanning.
Generative AI
Generative AI is a subset of deep learning that focuses on creating new or original content, including text, images, and video, often based on user prompts.
Harnessing the power of ML, deep learning algorithms, and neural networks, generative AI models are capable of producing human-like creative outputs. By analyzing relationships and patterns within data and refining results based on iterative training, generative AI models can also improve their outputs over time, delivering content that’s more contextually relevant.
Popular generative AI models include generative adversarial networks used in image generation, variational autoencoders used in image and audio synthesis, and large language models like ChatGPT, commonly used in language-based applications.
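The core generative idea, learn patterns from existing data and then sample new output from those patterns, can be illustrated with something far simpler than a GAN or an LLM. The sketch below is a bigram Markov chain, a deliberately toy stand-in (the corpus and function names are invented for this example); real generative models learn vastly richer patterns, but the loop is conceptually the same.

```python
# Toy illustration (not an LLM): learn which word tends to follow which,
# then generate a new sequence by sampling from those learned patterns.
import random

corpus = "the cat sat on the mat the cat ran".split()

# "Training": record every observed word-to-next-word transition
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, length, seed=0):
    # "Inference": repeatedly sample a plausible next word
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(random.choice(follows.get(out[-1], corpus)))
    return " ".join(out)

print(generate("the", 5))
```

Even this toy shows the ethical crux: the model can only recombine what its training data contains, so whatever is skewed, missing, or false in the data shapes everything it generates.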
With the capacity to create things that don’t exist, generative AI brings its own, often larger, set of ethical concerns.
Ethics issues to watch out for:
- Potential for misuse: The ability of generative AI models to create realistic fake content at scale raises many issues. To name just a few:
- Deep fakes, including fake news and videos, and the increasing misuse of visual and voice likenesses of celebrities and politicians can be used for hyper-targeted disinformation and propaganda.
- Generative AI, deep fakes, and voice spoofing can be used for fraud, including identity theft and extortion.
- Generative AI also allows bad actors, including novices, to write malicious code at scale, opening up a growing list of cybersecurity threats. Hackers, for example, are using generative AI to create vicious malware.
- Unreliable outputs: Generative AI models are known to be prone to AI hallucination, delivering realistic-sounding but incorrect or misleading results with a level of confidence that makes them that much harder for users to discern.
- Lack of transparency: For the time being, generative AI is the biggest black box of them all, offering little visibility into how content is created or how conclusions are reached. The lack of interpretability can lead to a lack of trust, potentially limiting wider adoption in consumer-facing contexts.
- Copyright and IP issues: There are still a lot of unknowns around generative AI in the intellectual property space, including issues like authorship, attribution, and infringement risks. Many generative AI models don’t identify the source of content that they’re drawing from, raising complicated copyright and attribution concerns. President Biden recently issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, in which he directed the US Copyright Office to study the copyright risks and issues raised by AI, “including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training.”
- Bias: Like other forms of ML and deep learning, generative AI models rely on the data that they’re trained on, which in many cases can include large swaths of the internet. Generative AI models learn patterns from existing data, then use this knowledge to generate new outputs. If that data contains biases or limitations, they can be reflected in the outputs.
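How bias travels from training data into model outputs can be shown with a deliberately crude sketch. Everything here is invented for illustration: a trivial "model" that learns an approval threshold per group from past human decisions. Because the historical decisions were skewed against group "B", the learned rule treats two identical applicants differently.

```python
# Hypothetical sketch: a trivially simple "model" trained on biased history
# reproduces that bias on identical new applicants.
history = [
    # (group, income, approved) — past human decisions, skewed against "B"
    ("A", 50, True), ("A", 40, True), ("A", 30, True),
    ("B", 50, False), ("B", 40, False), ("B", 60, True),
]

def learn_threshold(group):
    # "Learn" the lowest income ever approved for this group
    approved = [inc for g, inc, ok in history if g == group and ok]
    return min(approved)

def model(group, income):
    return income >= learn_threshold(group)

# Two applicants with the same income get different answers
print(model("A", 45), model("B", 45))  # → True False
```

No one wrote a discriminatory rule; the skew was inherited silently from the data, which is why auditing training data and testing outputs across groups matters as much as testing the model logic itself.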
Pressure test AI results
Whether you’re using generative AI to create content at an enterprise level, optimize quality control in manufacturing, sort through resumes, or model your business’s finances, testing before you publish is critical to protecting against ethical and bias issues.
“One of the dangers of AI is unintentional harm,” said Adobe’s Executive Vice President, General Counsel, and Chief Trust Officer Dana Rao in a recent Adobe webinar about using generative AI . “People don’t set out to make AI that does something wrong. You’re building AI to make beautiful pictures, and then all of a sudden you realize something in the dataset made it show you images that are unsafe or biased in some way or misrepresentative of the society you live in. That wasn’t something the developer did on purpose. The AI learned it because a lot of the data it’s trained on is biased, and the AI is only as good as the data it’s trained on.”
Dana suggests that businesses considering generative AI as a tool come up with a series of questions to ask any potential AI vendor to make sure it aligns with the company’s brand and values.
“You have to test it,” Dana said. “So you want your vendors to have that testing program before you bring in AI. Whether it’s Adobe or anyone else, you want to ask them those questions: What are you testing for? How did you test it for bias? How did you test it for harm? How was it trained? Do you have a review program? What’s your governance structure, and do you keep the data safe?”
What businesses should be thinking about
To help ecommerce businesses prepare for this rapidly evolving technology, Buy with Prime UX design and research team members share their advice.
“Our focus is how you can create purposeful, ethical, and delightful customer experiences with this technology,” says Kim Lewis, Principal UX Designer for shopper experiences.
“This is just the beginning of the conversation,” adds Signe Slater, Sr. UX Program Manager for UX design and research. “We’re all learning together about the best way forward. But along the way, by lending as much weight to the ethical considerations of AI as we do the business considerations, we can ensure that the most fundamental principle never gets lost: Do no harm.”
If you’re considering adding generative AI to your ecommerce business to help automate tasks, assist customers, or create baseline content, be diligent in your approach to avoid ethical issues or biases that could harm your brand.
________________
Learn more about how Buy with Prime can help your ecommerce business.