“Open the pod bay doors, HAL.”
“I’m sorry, Dave. I’m afraid I can’t do that.”
Those are two classic lines from Stanley Kubrick’s 1968 film “2001: A Space Odyssey”.
Over the years, many movies about Artificial Intelligence (AI) have attempted to introduce us to AI while sowing fear, uncertainty, and doubt in our minds. But a general lack of understanding, transparency, and explainability around AI has helped develop and spread mistrust.
AI continues to demonstrate its value by enabling organizations to gain deeper insights into their data. It can help identify new trends and business opportunities that would have otherwise been missed. It can help accelerate time to value when introducing new products and services to market, help predict outcomes with greater accuracy, faster and more consistently than humans alone, and help prescribe actions – all of which create the potential for smarter business outcomes.
According to the McKinsey report “The state of AI in 2022—and a half decade in review”, published December 2022, AI adoption and AI capabilities in organizations have doubled since 2017, and budget allocations for AI as part of a digital transformation strategy have grown from 5% in 2018 to 52% in 2022.
I see the market being split into two groups: organizations that successfully embrace and exploit AI for business advantage, and those that do not. The latter may struggle to compete and may eventually become extinct.
AI for the Masses
In recent months there has been a lot of buzz around public generative AI engines that automatically generate text from written prompts in a way that appears very advanced, creative, and conversational in nature. This technology uses large language models trained on data from the internet, with an interface simple enough for the public to use.
Putting AI into the hands of the masses is exciting. However, the answers from these seemingly impressive conversational chatbots raise many issues, including but not limited to:
- accuracy of the answers they generate – as the internet contains misinformation, inaccurate data, conspiracies, hate speech, etc.,
- accountability issues – often referencing resources and scientific papers that may or may not exist,
- lack of transparency and explainability – as to why these systems arrived at the answers they did.
These issues led to restrictions being imposed on many of these experimental systems.
As these systems evolve, I’m sure each iteration will become more impressive in what it can achieve. That said, the technology still lacks knowledge of events that occurred after its training data cutoff – and does not learn from its experiences. Many of these systems also make simple reasoning errors and accept obviously false statements from users.
Trustworthy, Explainable AI – Enterprise Ready
An increasing number of enterprises are transitioning their data integration and analytics to “AI-first” capabilities (prediction, automation, machine learning).
The impact of AI is being felt across industries as chatbots are used for everything from personal assistant applications to automated customer support. With continued advancements in natural-language text understanding (the AI capability that fuels chatbots), even more applications will embrace this capability.
Organizations need their AI systems to deliver accurate insights on their ever-changing and ever-growing enterprise data. Imagine the chaos and brand damage that could ensue across an enterprise if inconsistent advice, insights, and outcomes were generated from inaccurate data and misinformation.
Accountability and explainability help build trustworthy AI. Only by embedding ethical principles into AI applications and processes can we build systems based on trust. For reference, IBM has laid out its perspective on AI Ethics.
Organizations must have a full understanding of their AI decision-making processes – with model monitoring and accountability – and not trust them blindly. Explainable AI can help humans understand and explain machine learning (ML) algorithms, deep learning, and neural networks.
ML models are often thought of as black boxes that are impossible to interpret. Neural networks used in deep learning are some of the hardest for a human to understand. Bias, often based on race, gender, age, or location, has been a long-standing risk in training AI models. AI model performance can also drift or degrade because production data differs from training data. An organization must have the ability to continuously monitor and manage models to promote AI explainability while measuring the business impact of its algorithms. Explainable AI can also help promote end-user and brand trust, model auditability, and productive use of AI. It can also help mitigate the compliance, legal, security, and reputational risks of production AI.
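To make the drift point concrete, below is a minimal sketch of one common drift check – comparing a feature’s production distribution against its training baseline with a two-sample Kolmogorov–Smirnov test. This is an illustrative example using scipy, not a feature of any particular IBM offering, and the 0.05 threshold is an assumption:

```python
# Illustrative drift check: does a model input's production distribution
# differ significantly from its training baseline? (Names and the alpha
# threshold are assumptions for this sketch, not any product's API.)
import numpy as np
from scipy.stats import ks_2samp

def feature_drift_detected(train_values, prod_values, alpha=0.05):
    """True if the two samples are unlikely to come from the same distribution."""
    statistic, p_value = ks_2samp(train_values, prod_values)
    return p_value < alpha

# Example: production values for a feature have shifted upward.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training baseline
prod = rng.normal(loc=0.4, scale=1.0, size=5_000)   # drifted production data
print(feature_drift_detected(train, prod))          # True -> investigate or retrain
```

A check like this would run continuously against live scoring payloads, feeding alerts into the monitoring and accountability processes described above.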
Explainable AI is one of the key requirements for implementing responsible AI, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability.
Insights from AI are fundamentally changing every part of the way we work. And AI tools, like some of the chatbots referred to earlier – while imperfect – are showing the way toward our tech future.
IBM has spent decades building a portfolio of business-ready tools, applications, and solutions designed to help reduce the hurdles of AI adoption while optimizing for outcomes and responsible use. Combining technology with feedback from the field into a proven framework helps solve the world’s most pressing business problems, regardless of where those solutions come from. IBM actively engages clients through its ecosystems to incorporate deep industry knowledge and technical expertise to meet the business needs of an organization. And researchers continue to invest in developing the next big advances in software and hardware to bring frictionless, cloud-native development and use of foundation models to enterprise AI.
AI for Business
IBM Watson Assistant and IBM Watson Discovery are two of IBM’s primary offerings that contain IBM’s natural language understanding (NLU) and natural language processing (NLP) capabilities.
Watson Assistant is designed to learn as it goes, improving automatically over time by gaining knowledge from every conversation, through a process called autolearning. Watson is designed to surface the most relevant responses to customer queries, improving its capabilities with no human supervision.
Designed to accurately recognize what users want, Watson Assistant comes out of the box with the latest NLP techniques. Watson understands the flow of natural language, helping organizations build robust assistants that understand natural conversations.
It can learn the vocabulary of an industry and even internal terminology unique to an organization. It can be customized to understand nuances like regional dialects and colloquialisms.
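As a concrete illustration, here is a minimal sketch of sending a user utterance to Watson Assistant with the ibm-watson Python SDK (v2 API). The API key, region URL, and assistant ID are placeholders to replace with your own values:

```python
# Minimal Watson Assistant v2 interaction via the ibm-watson Python SDK.
# Credentials, service URL, and assistant ID below are placeholders.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_API_KEY')
assistant = AssistantV2(version='2021-06-14', authenticator=authenticator)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

# Each conversation happens within a session.
session = assistant.create_session(assistant_id='YOUR_ASSISTANT_ID').get_result()

response = assistant.message(
    assistant_id='YOUR_ASSISTANT_ID',
    session_id=session['session_id'],
    input={'message_type': 'text', 'text': 'What are your support hours?'},
).get_result()

# The output includes the recognized intents and entities alongside the reply.
print(response['output'])
```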
Watson Assistant benefits from the integration of advances at the forefront of artificial intelligence by IBM’s AI research and development teams. Aiming to increase precision, decrease the amount of training data, and shorten the time to production, this continual investment in research and innovation positions Watson Assistant to be the heart of an organization’s customer service operations.
For ‘answer generation’, Watson Assistant provides the ability to extend organizations’ conversational AI capabilities with an offering called NeuralSeek by Cerebral Blue. Organizations can take queries asked in Watson Assistant and use them to retrieve content via Watson Discovery. Generative pretrained transformer technology is then deployed to generate a response based on the retrieved content, the query, and the full context of the conversation. I view this as a positive way to leverage other generative AI technologies, because a business is still able to capture context and relevant domain language within the responses generated.
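To illustrate the retrieve-then-generate pattern (and only the pattern – this is a toy sketch, not the actual NeuralSeek or Watson APIs), naive keyword overlap stands in for Watson Discovery’s semantic search and a template stands in for the generative model:

```python
# Toy retrieve-then-generate flow. Real systems replace retrieve_passages()
# with a semantic search service and generate_answer() with a generative
# transformer conditioned on the query, passages, and conversation context.
DOCUMENTS = [
    "Our support line is open 9am-5pm Eastern, Monday through Friday.",
    "Refunds are processed within 10 business days of a return.",
    "Premium subscribers receive 24/7 chat support.",
]

def retrieve_passages(query, top_k=2):
    """Rank documents by term overlap with the query (toy retrieval)."""
    terms = set(query.lower().split())
    scored = sorted(DOCUMENTS, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:top_k]

def generate_answer(query, passages):
    """Stand-in for answer synthesis grounded in the retrieved content."""
    return "Based on our documentation: " + " ".join(passages)

query = "When is the support line open?"
print(generate_answer(query, retrieve_passages(query)))
```

Because the generated response is grounded in the retrieved enterprise content, the domain language and context survive into the answer – which is the property that makes this pattern enterprise-friendly.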
Watson Discovery makes it possible to rapidly build cognitive, cloud-based exploration applications that surface unseen and actionable insights hidden in unstructured data with features powered by natural language processing and machine learning.
Smart Document Understanding (SDU) is a visual machine learning tool that enables users to label text so that the tool builds an understanding of critical components inside enterprise documents, such as headers, tables, and more. Once a few pages of a document are annotated, SDU can teach itself the rest, retrieving answers and information only from relevant content.
Traditional enterprise search engines perform keyword searches and provide users with links to documents. Watson Discovery provides specific passages that contain the relevant information from its source documents using semantic search. The design of this platform helps ensure that an enterprise’s information is easily accessible.
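As an example, here is a minimal sketch of a passage-level query using the ibm-watson Python SDK’s DiscoveryV2 client. The credentials, region URL, and project ID are placeholders, and the exact response fields can vary with service version:

```python
# Minimal semantic passage query against Watson Discovery (v2 API).
# Credentials, service URL, and project ID below are placeholders.
from ibm_watson import DiscoveryV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_API_KEY')
discovery = DiscoveryV2(version='2020-08-30', authenticator=authenticator)
discovery.set_service_url('https://api.us-south.discovery.watson.cloud.ibm.com')

results = discovery.query(
    project_id='YOUR_PROJECT_ID',
    natural_language_query='What is our parental leave policy?',
).get_result()

# Rather than just linking to documents, Discovery can return the specific
# passages it judged most relevant to the question.
for doc in results.get('results', []):
    for passage in doc.get('document_passages', []):
        print(passage.get('passage_text'))
```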
Content mining uses natural language processing to understand context and relationships in text. An organization can search across its documents to surface patterns, trends, and anomalies in content in near real time.
The Watson Discovery platform’s out-of-the-box NLP enrichments include entity extraction, sentiment analysis, emotion analysis, keyword extractions, category classification, concept tagging, and more.
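These enrichment types mirror what IBM exposes programmatically through Watson Natural Language Understanding. As an illustrative sketch (assuming the ibm-watson Python SDK, with placeholder credentials), a few of them can be requested like this:

```python
# Requesting several NLP enrichments from Watson Natural Language
# Understanding. Credentials and service URL below are placeholders.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, EntitiesOptions, KeywordsOptions, SentimentOptions, CategoriesOptions)
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_API_KEY')
nlu = NaturalLanguageUnderstandingV1(version='2022-04-07', authenticator=authenticator)
nlu.set_service_url('https://api.us-south.natural-language-understanding.watson.cloud.ibm.com')

analysis = nlu.analyze(
    text='IBM Watson Discovery surfaces actionable insights hidden in unstructured enterprise data.',
    features=Features(
        entities=EntitiesOptions(limit=5),      # entity extraction
        keywords=KeywordsOptions(limit=5),      # keyword extraction
        sentiment=SentimentOptions(),           # sentiment analysis
        categories=CategoriesOptions(limit=3),  # category classification
    ),
).get_result()

print(analysis['entities'])
print(analysis['keywords'])
```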
Powered by Watson Discovery, IBM Watson Assistant search skill enables virtual agents to respond to a user’s queries quickly and efficiently. This helps provide a better customer experience by delivering passages of text from relevant content.
Intuitive tooling empowers subject matter experts to teach IBM Watson industry-specific language with no previous programming or coding skills required.
Combined, Watson Assistant and Watson Discovery help deliver powerful NLP and NLU capabilities – and are able to interface with an expanding range of third-party technologies, data sources, and APIs.
Summary and Next Steps
Generative AI, while impressive, is just one element of the bigger AI landscape. When used within an enterprise business setting, organizations must have Trustworthy AI. Accuracy, accountability, explainability, and ethics become the foundation of successful AI-based applications that the public can trust. Watson Assistant and Watson Discovery are just two offerings within IBM’s natural language understanding and processing portfolio that serve the Digital Labor market. Continual investment in research, deep industry and domain knowledge, along with feedback from consulting teams and the field all help build robust, relevant, trustworthy AI services. While these AI services provide value individually, only when combined within an integrated, unified, governed Data and AI platform can the value of AI and an organization’s data estates be fully realized. IBM Cloud Pak for Data can help provide this value.
If you found this blog interesting and want to learn more, I invite you to click on the links below to enjoy a trial of the following offerings:
IBM Watson Assistant Trial
IBM Watson Discovery Trial
IBM Cloud Pak for Data Trial
And for more information on IBM’s leadership in conversational AI read this blog post: “IBM again recognized as a Leader in the 2023 Gartner® Magic Quadrant™ for Enterprise Conversational AI Platforms”