Approaches in using Generative AI for Workflow Automation — Part 1
Co-authors: Luigi Pichett, Pierre Feillet, Yazan Obeidi
See other related stories in the “Approaches in using Generative AI” series.
In the previous two articles, we discussed using Generative AI in Enterprise Content Management and in Decision Automation. In this article, we tackle the topic of using Generative AI as part of a Workflow solution.
Before we talk about the solution, let’s first examine what Generative AI with Large Language Models (LLMs) allows us to do today.
Generative AI Core Capabilities
There are 6 core capabilities:
- Generate — Generate content based on a request, e.g., write an email, draft a proposal, write code, etc.
- Classify — Read and classify input based on a few examples. Can be used to categorize customer issues or group inputs.
- Summarize — Transform dense text into your personalized overview, capturing key points.
- Extract — Pull the information you want from large documents. Identify named entities, parse terms and conditions, etc.
- Translate — Convert text in one language into semantically equivalent text in another language.
- Check — Verify a text and propose fixes to improve readability or to satisfy other specified criteria.
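The six capabilities above can all be driven through prompting. As a minimal sketch (the template wording and the `build_prompt` helper are illustrative assumptions, not a specific product API), each capability maps naturally to a prompt template that a workflow can fill with business data:

```python
# Illustrative prompt templates for the six core capabilities.
# These strings are examples only; real prompts would be tuned per model.
PROMPTS = {
    "generate":  "Write a short {artifact} about: {input}",
    "classify":  "Classify the following text into one of {labels}:\n{input}",
    "summarize": "Summarize the key points of the following text:\n{input}",
    "extract":   "Extract the named entities from the following text:\n{input}",
    "translate": "Translate into {language}:\n{input}",
    "check":     "Review the following text and propose fixes:\n{input}",
}

def build_prompt(capability: str, **fields) -> str:
    """Fill the template for one of the six core capabilities."""
    return PROMPTS[capability].format(**fields)
```

The resulting string would then be sent to whichever LLM the solution uses; the workflow layer only supplies the structured fields.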
Ultimately, it becomes apparent that Generative AI, powered by Large Language Models (LLMs), is grounded in probabilistic modeling, incorporating elements of reinforcement learning from human feedback. However, it exhibits certain limitations. Generative AI, for instance, lacks the ability to:
- Think: It cannot engage in cognitive reasoning or ponder abstract concepts.
- Reason: The AI cannot draw logical inferences or apply deductive or inductive reasoning.
- Check for Facts: It does not possess the capability to independently verify information or validate the accuracy of statements.
- Coordinate Tasks: Generative AI cannot orchestrate multiple tasks or manage complex workflows.
- Perform Real Work: It is not equipped to execute practical, physical tasks in the real world.
In essence, while Generative AI with LLMs offers impressive language generation capabilities, it falls short in the above crucial domains.
Generative AIs, on their own, are not process automation tools. They can provide insights, generate text, and assist in decision-making, but they do not automate the execution of entire processes. Handling sensitive data with LLMs requires careful attention to data privacy and security. Moreover, customizing LLMs for specific tasks and training them to perform effectively may require domain expertise and time. Finally, LLMs may not be the most efficient solution for high-volume, repetitive, real-time process automation tasks.
Generative AI alone is not enough to build enterprise solutions.
Use Cases
In this article series, we will cover six possible use cases and describe how Generative AI can be used together with Workflow to address them:
- Classify inputs and perform requests
- Generate responses as part of a business process
- Automate LLM fine-tuning and exception handling
- Provide answers to questions on business performance
- Generate workflow from a description
- Comparative analysis and Explanation of existing workflows
We will cover the first three use cases here and go over the rest in the next article.
1. Classify inputs and perform requests
Description: In this use case, we leverage Generative AI to classify and/or extract input data from incoming business information and use that decision to perform work. The LLM returns a structured, typed list of parameters, independent of the input's syntax and format, which the workflow uses to evaluate business conditions and route the request to the appropriate work.
Pro: This approach is straightforward and requires little or no fine-tuning. It allows the workflow to operate directly on natural language inputs without pre-processing (e.g., classifying the intent of an email request): the LLM handles the NLP slice of the decision, while the workflow engine automates the routing and actions according to organizational policies.
Con: This might require experimenting with different LLMs to find the one that performs best for your industry domain. There is always a chance that the extraction step does not find the expected data in the input text, so appropriate guardrails must be included to handle such cases. Some risk always remains when translating natural language into a formal interpretation.
2. Generate responses as part of a business process
Description: In this use case, we use Generative AI to create responses (e.g., email messages, work summaries, task notifications) directly from business data. We leverage LLMs to phrase well-written text in a target language from a structured context.
Pro: Supports a more intuitive engagement experience for users and can save effort when responses must be generated in multiple languages. It also enables better responses for non-API-based interactions with personal assistants such as Alexa or Siri, or for voice-based applications using text-to-speech.
Con: The LLM prompt must be crafted carefully so that the generated output reflects the intended business outcome. Due to LLM hallucination, generated responses may include information they should not use, or may misstate the facts they were given.
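One common way to mitigate the hallucination risk above is to ground the prompt strictly in the structured workflow data. The sketch below is an assumption about how such a prompt might be built, not a prescribed format; the point is that the LLM is instructed to phrase only the facts the workflow supplies.

```python
def build_response_prompt(task: dict, language: str = "English") -> str:
    """Render a grounded prompt from structured workflow data so the
    LLM phrases only the facts it is given (reducing hallucination risk)."""
    facts = "\n".join(f"- {key}: {value}" for key, value in task.items())
    return (
        f"Write a short, polite customer notification in {language}, "
        "using ONLY the facts below. Do not add any information that is "
        "not listed.\n"
        f"Facts:\n{facts}"
    )
```

For example, `build_response_prompt({"order_id": "A-123", "status": "shipped"}, "French")` yields a prompt whose facts section lists exactly those two fields, so any extra detail in the generated text is detectable as hallucination.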
3. Automate LLM fine-tuning and exception handling
Description: In this use case, we leverage workflow to provide human-in-the-loop review and approval before knowledge data is fed to the LLM for fine-tuning. We use an LLM to classify the input text and determine whether filtering and human review are necessary.
Pro: This is a must-have for any fine-tuning or indexing exercise that uses enterprise data.
Con: Incurs additional review and approval costs. Sensitive-data classification can go wrong, producing false positives or false negatives.
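The human-in-the-loop gate can be sketched as a simple triage step. The field names (`sensitive`, `confidence`) and the 0.8 threshold are assumptions for illustration; in practice they would come from the classification LLM's output schema and the organization's risk policy.

```python
def needs_human_review(classification: dict, threshold: float = 0.8) -> bool:
    """Gate a document before it enters a fine-tuning dataset:
    anything flagged sensitive, or classified with low confidence,
    goes to a human reviewer instead of straight to training."""
    if classification.get("sensitive", False):
        return True
    return classification.get("confidence", 0.0) < threshold

def triage(docs):
    """Split (document, classification) pairs into a human-review list
    and an auto-approved list for the fine-tuning pipeline."""
    reviewed, auto_approved = [], []
    for doc, cls in docs:
        (reviewed if needs_human_review(cls) else auto_approved).append(doc)
    return reviewed, auto_approved
```

Note that false negatives in the `sensitive` flag are exactly the failure mode the Con above warns about, which is why the confidence threshold acts as a second, independent gate.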
Summary
What we have outlined here are the first three use cases; we will discuss the remaining ones in the next article.