Importance of trust in adopting GenAI for knowledge base

Technical writers produce a single source of truth by coordinating with Subject Matter Experts and stakeholders. They draft clear articles and remove ambiguity, producing artifacts such as user manuals, standard operating procedures, troubleshooting guides, process manuals, and software “how-to” guides. These artifacts are treated as the absolute truth, so “trust” is inherent in them. Some are used for compliance and regulatory filings, for example for manufactured hardware and medical devices. Technical writers’ workflows are therefore critical to ensuring that user manuals carry accurate information. Quality control workflows ensure that technical writers’ work is peer-reviewed and vetted by Subject Matter Experts before publication. End users of the documentation rely on a PDF or an online knowledge base to access accurate information.

A traditional search engine based on lexical keyword matching returns relevant articles for the keywords the user types. End users then always go to the source article to get the information they need.

[Image: Example of lexical search]
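
Under the hood, lexical search simply matches the user’s query terms against the words in each article. Below is a minimal sketch of that idea in Python; the articles and queries are hypothetical examples, not taken from a real knowledge base.

```python
import re

# Hypothetical knowledge-base articles, for illustration only.
articles = {
    "Reset your password": "Open Settings, choose Security, and click Reset Password.",
    "Troubleshoot login errors": "If login fails, clear the browser cache and retry with your password.",
    "Export a report": "Use the Export button on the Reports page to download a PDF.",
}

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into bare words, dropping punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def lexical_search(query: str) -> list[str]:
    """Rank articles by how many of the query's keywords they literally contain."""
    keywords = tokenize(query)
    scored = [
        (len(keywords & tokenize(title + " " + body)), title)
        for title, body in articles.items()
    ]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

# The user's exact keywords drive the ranking.
print(lexical_search("reset password"))  # ['Reset your password', 'Troubleshoot login errors']
```

Because the ranking depends on literal keyword overlap, the user still has to open the returned article to read the actual answer.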

However, things have changed with the introduction of GenAI technology such as ChatGPT, Gemini, and Claude. GenAI-powered search engines based on semantic search are now taking over.

[Image: Example of semantic search]
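
Semantic search, by contrast, compares the meaning of the question with the meaning of each article using embeddings, so a question can match an article even when they share no keywords. Below is a minimal sketch that assumes the sentence-transformers library as the embedding backend; this is an illustrative choice, not something the article prescribes.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical article chunks, for illustration only.
articles = [
    "Open Settings, choose Security, and click Reset Password.",
    "If login fails, clear the browser cache and retry with your password.",
    "Use the Export button on the Reports page to download a PDF.",
]
article_vectors = model.encode(articles)  # one embedding per article

def semantic_search(question: str, top_k: int = 2):
    """Rank articles by cosine similarity between the question and each article embedding."""
    q = model.encode([question])[0]
    scores = article_vectors @ q / (
        np.linalg.norm(article_vectors, axis=1) * np.linalg.norm(q)
    )
    best = np.argsort(scores)[::-1][:top_k]
    return [(articles[i], float(scores[i])) for i in best]

# "I can't sign in" shares no keywords with the articles, yet the login
# troubleshooting chunk should rank highest.
print(semantic_search("I can't sign in to my account"))
```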

GenAI-based agents now act as an interface between end users and the documentation. Given the generative nature of this technology, it is harder to control what is generated in response to user questions, and this directly impacts trust.

Limitations of Current GenAI

A defining characteristic of generative AI is that it is non-deterministic. GenAI tools are based on Large Language Models (LLMs) that predict the next token (a token is roughly ¾ of a word on average), as sketched below.
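
The toy example below (pure Python, not a real LLM) illustrates why generation is non-deterministic: the model assigns probabilities to candidate next tokens and samples one, so the same prompt can produce different continuations on different runs. The prompt and probabilities are made up for illustration.

```python
import random

# Hypothetical next-token probabilities after the prompt "The device will".
next_token_probs = {
    "restart": 0.45,
    "shut": 0.30,
    "beep": 0.15,
    "fail": 0.10,
}

def sample_next_token() -> str:
    """Sample one next token according to the model's probability distribution."""
    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Running this several times shows different tokens being chosen for the same prompt.
print([sample_next_token() for _ in range(5)])
```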

GenAI tools generate responses based on user prompts. The Retrieval Augmented Generation (RAG) architecture supplements the knowledge gaps in LLMs: it retrieves relevant context and up-to-date facts to help the LLM generate better responses (a minimal retrieve-then-generate sketch follows the list below). The quality of the response depends on:

  • Quality of the prompt

  • Quality of the content

  • Underlying LLM
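
The sketch below shows the retrieve-then-generate flow: retrieved article chunks are pasted into the prompt so the LLM answers from the documentation rather than from memory. The generation step assumes the OpenAI Python SDK purely for illustration; the article does not prescribe a specific model or provider.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_with_rag(question: str, retrieved_chunks: list[str]) -> str:
    """Ground the LLM by placing retrieved article chunks into the prompt."""
    context = "\n\n".join(retrieved_chunks)
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hypothetical retrieved chunk, for illustration only.
chunks = ["Open Settings, choose Security, and click Reset Password."]
print(answer_with_rag("How do I reset my password?", chunks))
```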

Quality of the prompt

The quality of the prompt determines the quality of the generated response. If the prompt is vague or contains ambiguous terms, the retrieval step may pull back chunks from loosely related articles with conflicting information, and the LLM may then produce a response that is not grounded in the documented facts. The short example below, building on the semantic search sketch above, shows how prompt wording changes what is retrieved.
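
As a small illustration, reusing the semantic_search function from the earlier sketch, the two hypothetical queries below show how prompt wording changes which chunks are retrieved, and therefore what the LLM is grounded on.

```python
# Assumes semantic_search() from the earlier sketch is already defined.
vague_prompt = "it is not working"  # ambiguous: which feature? which error?
specific_prompt = "login fails after I reset my password"

print(semantic_search(vague_prompt))     # likely loosely related, mixed chunks
print(semantic_search(specific_prompt))  # chunks about login failures and password reset
```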
