Artificial Intelligence in ChatGPT and Use Cases

How much Artificial Intelligence is in ChatGPT and what are the Use Cases?

Since ChatGPT became known, a collective perception has developed that artificial intelligence (AI) is synonymous with ChatGPT and that its capabilities will fundamentally transform our everyday working lives. Both are only partially true.

Classification

Let's classify the key terms: ChatGPT, generative AI, and Large Language Models.

ChatGPT

ChatGPT generates human-like text in natural language and can respond to questions, queries or conversations. It uses machine learning and neural networks to understand text and generate contextual responses based on input. ChatGPT belongs to the category of generative artificial intelligence (AI) - a powerful and transformative technology that can advance many industries from manufacturing to retail and from finance to medicine.

Generative AI

Generative AI aims to produce realistic text, program code, designs, images, and even videos that are similar, but not identical, to the material used to train the AI. To do so, it learns and reproduces patterns and structures from large data sets. Generative models are particularly useful for automating tasks that require content to be generated in a natural way. The technology is commonly used in applications such as text generation, image synthesis, and music composition.

Large Language Models (LLMs)

Large Language Models in particular are specialized in language processing and are a form of generative AI. They are based on neural networks that learn and represent the probabilities of word or syllable sequences (tokens). This allows them to provide human-like responses to natural language queries. Because LLMs are trained on huge amounts of data, the generated texts read amazingly naturally. This opens up many new use cases and has made ChatGPT, Google Bard, and similar apps that use LLMs fast-growing and very popular.
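The idea of learned token-sequence probabilities can be illustrated with a toy bigram model - a minimal sketch, vastly simpler than a real LLM, using an invented mini-corpus:

```python
from collections import Counter, defaultdict

# Toy corpus: the "training data" of our miniature language model.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token (bigrams).
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def next_token_probs(token):
    """Relative frequencies of the tokens observed after `token`."""
    counts = follow_counts[token]
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

print(next_token_probs("the"))
# "cat" is the most probable continuation of "the" in this corpus.
```

A real LLM does the same thing in spirit - predict the next token from what came before - but with billions of learned parameters instead of a frequency table.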

Function

How does it work?

What can ChatGPT & Co. do thanks to LLMs? At the core it is always the same thing: a natural language input (optionally spoken) passes instructions to the LLM. The model tries to recognize the intention from the transmitted tokens (the basic units into which a text is broken down for machine processing) and generates natural text. This happens on two levels and is additionally influenced by other parameters:

  • At a higher level, called the meta-level, the LLM receives instructions about how to proceed in general, for example: «Respond to customer inquiries. Avoid offensive language, do not quote prices, do not make binding commitments.» The meta-level is defined by the so-called meta-prompt, which is usually set by the platform that makes the AI available to users.
  • With the so-called prompt, the LLM receives the specific task, for example: «Hi! I am not satisfied with the product you delivered last week.» The prompt is created either directly from the user's input or from that input enriched with additional information by program code.
  • Additional parameters determine how precise or formal the generated text is. The range spans from precise/formal to creative/informal and influences how strictly the probabilities of successive tokens are followed.
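The effect of such a parameter - commonly called temperature - can be sketched with a small softmax example. The logit values below are invented for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into token probabilities.
    Low temperature -> sharp/"precise", high temperature -> flat/"creative"."""
    scaled = [logit / temperature for logit in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
precise = softmax_with_temperature(logits, 0.2)
creative = softmax_with_temperature(logits, 2.0)
# At low temperature the top-scoring token dominates almost completely;
# at high temperature the probabilities even out, so less likely tokens
# get sampled more often - the text becomes more varied and "creative".
```

This is only the sampling-side mechanism; real services combine it with further controls, but the precise-versus-creative trade-off works in exactly this way.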

Examples

Let's look at some concrete examples.

During a session with ChatGPT, instructions already given and text already generated are preserved. The user can continue the conversation without having to repeat the context and instructions each time. Referring to the example of the complaining customer above, ChatGPT recognizes the input «Ich möchte mein Geld zurück!» («I want my money back!») and responds with an appropriate answer.

 

Another example is shown here:

Example «Generate text»

Prompt (user): Explain ChatGPT to a four-year-old child

ChatGPT: ChatGPT is a computer program that can talk to you. It learns how people talk by reading many texts that people have written. It can answer your questions, tell you jokes, write stories and much more. ChatGPT is like a friend who always has something interesting to say. 😊

 

Abstractly, these examples share a common pattern: they create a new text from the instructions in the prompt (the specific input or request sent to an AI model). In the process, intentions are derived from the input text in order to generate an appropriate response.

Example «Detect speech, determine speech code and translate response»

Prompt (user): In what language is «Chasch mer no d'Suppechelle gäh?» spoken, what is its ISO language culture name (for example, en-US), and what does it mean? Answer in Castellano (Spanish).

 

The following intentions are derived from the prompt. With Bing Chat - the Microsoft variant of ChatGPT - the detected intentions are immediately displayed in the chat:

  • Searching for: Chasch mer no d'Suppechelle gäh?
  • Searching for: Chasch mer no d'Suppechelle gäh? translation to Spanish
  • Searching for: Chasch mer no d'Suppechelle gäh? language
  • Searching for: Chasch mer no d'Suppechelle gäh? language ISO Code

...and, analogously to ChatGPT, the following answer is generated:

Bing Chat: «Chasch mer no d’Suppechelle gäh?» es una frase en suizo alemán, que es un dialecto del alemán hablado en Suiza. El código ISO para el suizo alemán es «de-CH». La traducción al español de esta frase sería «¿Me puedes dar la cuchara de la sopa?».

Translating the answer back into English with DeepL - an AI translation service - confirms that the task was passed with flying colors:

DeepL: «Chasch mer no d'Suppechelle gäh?» is an idiom in Swiss German, a dialect of German spoken in Switzerland. The ISO code for Swiss German is «de-CH». The English translation of this phrase would be «Can I have the soup spoon?».

Example «Conversation with a Customer»

Meta prompt (system): You are Janina from customer service and answer customer inquiries. Avoid offensive language, don't quote prices, don't make binding promises. Keep it short but friendly.

Prompt (customer): Hallo! Ich bin mit dem Produkt, das ich letzte Woche erhalten habe, nicht zufrieden. («Hello! I am not satisfied with the product I received last week.»)

Note: The customer writes in German. ChatGPT recognizes this and responds in the appropriate language.

ChatGPT: Hallo Peter, es tut mir leid zu hören, dass Sie mit dem Produkt, das Sie letzte Woche erhalten haben, nicht zufrieden sind. Könnten Sie bitte mehr Details über das Problem oder Ihre Bedenken mitteilen? Ich werde mein Bestes tun, um Ihnen zu helfen und auf Ihre Bedenken einzugehen. Mit freundlichen Grüssen, Janina

(English: «Hello Peter, I am sorry to hear that you are not satisfied with the product you received last week. Could you please share more details about the problem or your concerns? I will do my best to help you and address your concerns. Kind regards, Janina»)

In the course of the conversation, the customer switches back to his native English out of sheer annoyance.

Prompt (customer): I want my money back.

The context of the conversation was preserved, and Janina was able to reply in the language of the input.

ChatGPT: I am sorry to hear that you are dissatisfied with the product. In order to get your money back, we need to know the exact details and reason for your dissatisfaction. Please provide more information so I can help you further. Yours sincerely, Janina
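How the context is preserved can be sketched as a growing message history that is sent along with every request. The structure below is modeled on common chat-completion APIs; it is an assumption for illustration, not the exact wire format of any particular service:

```python
# The conversation is carried as a growing list of messages; each new request
# sends the full history, which is how the context is "preserved".
messages = [
    {"role": "system", "content": (
        "You are Janina from customer service and answer customer inquiries. "
        "Avoid offensive language, don't quote prices, don't make binding promises.")},
    {"role": "user", "content": "Hallo! Ich bin mit dem Produkt ... nicht zufrieden."},
]

def add_turn(history, assistant_reply, next_user_input):
    """Append the model's reply and the next user message to the history."""
    history.append({"role": "assistant", "content": assistant_reply})
    history.append({"role": "user", "content": next_user_input})
    return history

add_turn(messages, "Hallo Peter, es tut mir leid zu hören ...", "I want my money back.")
# The next request contains all four messages, so the model still "knows"
# it is Janina talking to a dissatisfied customer - whatever language comes next.
```

Because the meta-prompt travels with every request as the system message, Janina's constraints hold across the whole conversation, including the language switch.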

Prompt Engineering

Importance of Prompt Engineering and the Dangers of Prompt Hacking

[Image: A man at a laptop asking a chat AI questions via prompts]

Prompt engineering is the process of optimizing prompts, and developing good prompts takes practice. It includes choosing the appropriate words, structure, and context of the prompt so that the model performs well on specific tasks and delivers the desired information or answers.

Prompt hacking, on the other hand, refers to the practice of manipulating AI models such as ChatGPT by cleverly formulating prompts to produce undesirable or inappropriate results. This can lead to AI systems providing erroneous, biased, or unethical responses. Users must be prevented from cracking the meta-instructions in this way; in principle, it should not even be possible to phrase a question that elicits non-permissible answers.

Use Cases

Application Scenarios for the Large Language Model (LLM)

[Image: Table listing ChatGPT use cases]

The various capabilities of LLMs give rise to a great many use cases. Here is a selection and one possible grouping of such cases.

Intelligent Document Processing

Intelligent Document Processing extracts specific elements from documents or large texts and processes them further.

Example: «Key Value Extraction» with Bing Chat

Prompt (user): Extract the five best-selling car brands according to units sold from Global Cars Brands.


If we compare against the given web page as a source, it becomes clear that the LLM detects the intent very accurately and significantly transforms the information contained there.

Bing Chat: The top five selling car brands are:

  • Toyota: 9.53 million
  • Volkswagen: 9.31 million
  • General Motors: 6.83 million
  • Hyundai: 6.71 million
  • Ford: 4.27 million
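Downstream code can turn such an answer into structured data for further processing. A minimal sketch, assuming the answer arrives as plain «Brand: X million» lines:

```python
import re

# The chat answer, received as plain text (bullets optional).
answer = """\
Toyota: 9.53 million
Volkswagen: 9.31 million
General Motors: 6.83 million
Hyundai: 6.71 million
Ford: 4.27 million"""

def parse_key_values(text):
    """Turn 'Brand: X million' lines into {brand: units_in_millions}."""
    result = {}
    for line in text.splitlines():
        match = re.match(r"\s*(?:•\s*)?(.+?):\s*([\d.]+)\s*million", line)
        if match:
            result[match.group(1)] = float(match.group(2))
    return result

brands = parse_key_values(answer)
# brands is now a dict, e.g. brands["Toyota"] -> 9.53
```

This hand-off - LLM extracts, conventional code validates and stores - is the typical shape of an Intelligent Document Processing pipeline.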

 

Example: «Document Analysis» with Bing Chat

Prompt (user): Group car manufacturers by continent from Global Cars Brands.


Artificial Intelligence is able to extract the elements and groupings from the link and provides the following response:

Bing Chat: Here is a possible way to classify car manufacturers by continent:

  • Africa: Giad Auto (Sudan), Innoson Vehicle Manufacturing (Nigeria), Kantanka (Ghana), Laraki (Morocco),...
  • Asia: Chery, Geely, Great Wall, SAIC-Motor (China), Daihatsu, Honda, Lexus, Mazda, Mitsubishi, Nissan,...
  • Europe: Aston Martin, Bentley, Jaguar, Land Rover, Lotus Cars, McLaren, Mini, Rolls-Royce (UK), Audi, BMW,...
  • North America: Buick, Cadillac, Chevrolet, Chrysler, Dodge, Ford, GMC, Jeep, Lincoln, Ram Trucks (USA), Acura,...

Customer Service & Operations

The «Conversation with a customer» example above falls into the Customer Service & Operations category. It serves as a building block for automating a business process or its sub-processes. Combined with other LLM capabilities, it creates additional benefits for users.

 

Example: «Sentiment Analysis as a Preliminary Step to Customer Inquiries»

With an upstream sentiment analysis, for example, incoming e-mails can be triaged into those to be answered manually and those to be answered automatically. The more demanding, non-standard customer queries are answered by natural intelligence, while the simpler queries can be processed using AI.
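Such a triage step can be sketched as follows; the keyword-based sentiment stub merely stands in for a real sentiment model or LLM call:

```python
# Minimal triage sketch. A production system would call an LLM or a trained
# sentiment model; the keyword stub below is a placeholder for illustration.
NEGATIVE_MARKERS = {"not satisfied", "money back", "complaint", "angry"}

def sentiment(text):
    """Stub classifier: 'negative' if a marker phrase occurs, else 'neutral'."""
    lowered = text.lower()
    return "negative" if any(m in lowered for m in NEGATIVE_MARKERS) else "neutral"

def triage(email_text):
    """Route demanding (negative) mails to a human, the rest to the AI."""
    return "human" if sentiment(email_text) == "negative" else "ai"

print(triage("I want my money back."))           # routed to a human agent
print(triage("When do you open on Saturdays?"))  # answered automatically
```

The routing logic stays trivial on purpose: the value lies in placing a cheap classification step in front of the expensive human channel.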

Information Discovery & Knowledge

Information Discovery & Knowledge Mining refers to use cases in which, for example, knowledge databases provide the fact base for the LLM. An LLM-based chat on top of a knowledge database - for example, a catalog of norms or quality standards - offers a completely new user experience for researching and finding information.

Microsoft is leading by example by making all of its product documentation available internally as a chat, using Azure OpenAI ChatGPT in conjunction with other services.

 

Other categories

Services like GitHub Copilot even generate program code based on an LLM, or explain existing, rather hard-to-read code to developers.

Transformation

Fundamental Transformation through Generative AI?

The use cases of generative AI described here are a type of weak AI (also known as «narrow» or «applied» AI). Weak AI is focused on specific tasks and can only work within its given framework, which is usually defined by humans. In contrast, strong AI (also known as «general AI» or «artificial general intelligence») refers to systems that possess human-like intelligence and are capable of solving a wide range of cognitive tasks. To date, there is no strong AI, but research in this area is advancing rapidly.

The transformation of the way we work is within reach, as the paragraph above proves: it was created using a prompt in Bing Chat. However, the creative work has only shifted - from writing text to writing prompts and reviewing the generated text. Both activities are still driven by human intelligence.

In general, it can be summarized that not everyday life, but the way of interacting with information systems is fundamentally changing and will continue to change. Conceivably, there will be a partial shift away from elaborately designed input forms with text fields, buttons, sliders, and options to interaction via natural language. The latest developments of services that can be added to existing applications and interact via natural language support this thesis.

Contact

Want to learn more or exchange ideas? Contact me.

Martin Schreiber

Senior Business Consultant
Dipl. Technician HF Computer Science

martin.schreiber@isolutions.ch