
October 27, 2023

Responsible AI: Balancing Innovation with Ethics

Discover the world of Artificial Intelligence and clear up confusion about basic concepts. Explore the distinctions between pattern detection and generative AI.


by Kevin Gray

President at Cannon Gray

Marketing scientist Kevin Gray asks Dr. Anna Farzindar of the University of Southern California some important questions about Artificial Intelligence.


Artificial Intelligence (AI) is evolving at an ever more rapid pace, and many of us are confused about basic concepts and terms related to this subject. For example, what are the differences between traditional pattern detection and generative AI?

Traditional AI works mostly through pattern matching or pattern detection. For example, if you have a specific pattern and ask the AI to find it among millions of images, it will analyze the data and return the nearest neighbors to that pattern.
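To make the nearest-neighbor idea concrete, here is a minimal sketch (my illustration, not from the interview) using scikit-learn; the feature vectors and dataset size are invented placeholders standing in for real image features.

```python
# A minimal sketch of nearest-neighbor pattern detection, assuming images
# have already been converted to fixed-length feature vectors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
image_features = rng.random((10_000, 64))  # stand-in feature vectors for 10,000 images
query_pattern = rng.random((1, 64))        # the pattern we want to find

index = NearestNeighbors(n_neighbors=5).fit(image_features)
distances, matches = index.kneighbors(query_pattern)
print(matches[0])  # indices of the five images closest to the pattern
```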

We can also ask machines to do classification tasks for us. When a model is trained on labeled data, we can provide it with numerous examples of each class, such as apples, oranges, and bananas. The machine can then classify a new fruit into one of these categories based on the patterns it has learned.
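As a hedged sketch of that fruit example: train on labeled examples, then predict the class of a new fruit. The features here (weight in grams, a color score) are invented purely for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Invented toy features: (weight in grams, color score from green=0 to red=1)
X_train = [[150, 0.80], [170, 0.70],   # apples
           [140, 0.40], [130, 0.45],   # oranges
           [120, 0.10], [115, 0.15]]   # bananas
y_train = ["apple", "apple", "orange", "orange", "banana", "banana"]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(model.predict([[145, 0.75]]))  # -> ['apple']
```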

However, with generative AI, the work of complex neural networks is more than simple pattern detection from historical data. By utilizing a large volume of data, the AI can generate original output that has never been seen in past data. To illustrate, one can ask the AI to generate a new image in the style of a Picasso painting, one that was never in the database!

Can you briefly explain what Large Language Models (LLMs) are and how they are used in ChatGPT and similar tools?

Large Language Models (LLMs) power ChatGPT and similar AI systems that can generate human language. The machine's output is new content in various forms, such as question answering, machine translation, or even storytelling.

Now, with the huge amount of data available on the internet and increases in computing power and memory, we can estimate billions of parameters and capture very complicated patterns and small nuances in language. Furthermore, LLMs can be tuned for a specific task or domain, such as the legal or medical fields.
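As a rough illustration of the generation step (my example, not the interview's), the Hugging Face transformers library can run a small pretrained model locally; GPT-2 is a tiny stand-in for the far larger models behind systems like ChatGPT.

```python
# A minimal sketch of text generation with a pretrained language model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Large Language Models can", max_new_tokens=30)
print(result[0]["generated_text"])
```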

LLMs have been criticized for their tendency to confabulate ("hallucinate"). Can this be fixed, or is it an inherent characteristic of the technology?

I would not use the terms confabulate or hallucinate for machines. These concepts are related to consciousness; specifically, to human memory errors that can produce distorted recollections. AI simply generates text based on the information in its training data.

Biases and incoherent content are likely to be present in LLM-generated responses because the models learn from biased data on the internet. There are efforts to mitigate this by filtering the training data and removing inappropriate or offensive content, but biases can persist.
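A deliberately simplified sketch of what such filtering might look like, assuming a plain keyword blocklist (real pipelines use trained classifiers and human review; a keyword list alone misses context, which is one reason biases persist):

```python
# Drop any document containing a blocklisted term. The terms below are
# placeholders for illustration only.
BLOCKLIST = {"offensive_term_1", "offensive_term_2"}

def keep(document: str) -> bool:
    words = set(document.lower().split())
    return not (words & BLOCKLIST)

corpus = ["a harmless sentence", "text containing offensive_term_1"]
filtered = [doc for doc in corpus if keep(doc)]
print(filtered)  # only the harmless sentence survives
```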

Hundreds of plugins have been developed for ChatGPT and similar tools. How do plugins enhance these tools' capabilities? Does using them also entail risk?

As I’ve mentioned, general LLMs or ChatGPT-type models are trained on data available on the internet, some of which may be inaccurate or otherwise problematic. So the output can contain errors, and the AI may sometimes struggle to understand context and maintain contextual information over longer conversations.

But when we use LLMs for specific applications, such as chatbots for financial institutions, it is possible to tune the model for targeted audiences and train it on domain-specific data. This results in better, more accurate responses when interacting with users.
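One lightweight way to target a general model, short of full fine-tuning, is to prepend domain instructions to every request. The sketch below is hypothetical (no particular model API is assumed); it only shows how the domain-specific prompt might be constructed.

```python
# Hypothetical prompt construction for a financial-services chatbot.
DOMAIN_PROMPT = (
    "You are a customer-service assistant for a financial institution. "
    "Answer only questions about accounts, cards, and payments, and "
    "escalate anything else to a human representative."
)

def build_request(question: str) -> str:
    # The full prompt that would be sent to whichever model is used.
    return f"{DOMAIN_PROMPT}\n\nCustomer: {question}\nAssistant:"

print(build_request("How do I reset my card PIN?"))
```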

What potential dangers to individuals, communities and society does AI in general pose?

Several AI technologies are now publicly available, but the danger lies in how we use them as individuals, communities, and society at large. For instance, AI-generated content such as text, images, audio, and video can be used for fraudulent or malicious purposes.

AI-powered surveillance systems can infringe upon individuals' privacy rights and be used for unethical or authoritarian purposes. Biased AI systems, such as error-prone facial recognition systems, can perpetuate discrimination in areas like law enforcement.

Additionally, massive automation could lead to job displacement in many industries, potentially causing economic and social disruption. For example, AI can write code like a junior programmer. But if companies stop hiring junior programmers and never give them opportunities to work on complex problems, in five or ten years there will be a shortage of senior programmers and program managers who can handle complicated problems. So we need to be very careful in workforce development programs and consider the long-term consequences.

How can we make certain AI benefits individuals, communities, and society? How should we use the technology to minimize the potential of AI to cause harm?

Something important to consider is distinguishing between human work and the output of machines. If we are not careful about the AI implementation process from the beginning, we will not have control over machine output or be able to distinguish it from human work. For instance, we should know whether an artwork is the result of a human thought process or was created by a machine that combines patterns.

For example, in customer service, when a response is created the customer should know whether it was automatically produced or whether the question was answered by a human representative. Imagine someone staying at an Airbnb in a city they have never visited before. Naturally, the person will ask their hosts questions about attractions, things to do, and so on. Personal messages between host and guest carry a sense of trust and comfort, in contrast to robot-toned AI-generated content.

Will AI ever be able to truly think and feel?

Sentient machines and robots are mostly the subjects of science fiction. However, developing sentient AI that could experience deep emotions and possess self-awareness and consciousness raises profound ethical and philosophical questions and concerns. For these reasons, we need to develop specific guidelines on the ethics of AI for technology developers, corporations, and users. We must also include responsible AI and ethics in education.

Thank you, Anna!


