Assessing AI Chatbots for MRO: Capabilities, Limitations, and Potential

AI chatbots, powered by large language models (LLMs), can generate coherent responses but lack true understanding: they rely on pattern recognition over past data rather than reasoning. While LLMs improve with more data, larger models become less explainable and require greater computational power. Despite these limitations, chatbots hold immense potential for MRO activities by providing quick access to technical documentation and sensor data, provided the security of the underlying datasets is ensured. Although chatbots cannot replace human expertise, they can significantly enhance efficiency and decision-making for trained MRO technicians, making them valuable tools for maintenance, repair, and overhaul operations.

ChatGPT has both strengths and weaknesses, so let us examine and make a quick assessment of the utility of AI chatbots for MRO activities.

Albert Einstein said “It would be possible to describe everything scientifically, but it would make no sense. It would be a description without meaning—as if you described a Beethoven symphony as a variation of wave pressure.” (Albert Einstein on the Limits of Scientific Description, n.d.)

This applies to the ‘intelligence’ we see in the responses of a chatbot. A response may seem to be given after a clear understanding of the subject by the AI chatbot, but that is not so.

To find out why, let us look a little deeper into the technology that works behind a chatbot’s responses.

At its core, an AI chatbot generates a response to your query by correlating it with the data it holds and what it has learned from that data. However, it is not able to understand the meaning of the response it has generated.

When a machine learns, it goes through two phases: the training phase and the inference phase. As the machine trains, its responses keep getting better. The basis of this training is the use of large language models (LLMs).

The machine keeps learning as it accesses more information, but this learning is different from human learning: it does not actually understand the topic. For example, it will not understand the way an engine functions and fails in the human sense. It will look for a response in its past datasets and give out the best possible fit it can garner from them. If you ask for its reasoning, it will again go back to its dataset and give an answer if one is available in the LLM; it will not be able to think the problem through and offer an insight of its own.
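To make this concrete, here is a deliberately tiny illustrative sketch of those two phases, not how production LLMs are actually built: a “training” step that merely counts which word follows which in past text, and an “inference” step that emits the most frequent continuation. The corpus, words and function names below are invented for illustration only.

```python
# Toy illustration only: a tiny bigram "language model" that mimics the two
# phases described above. Real LLMs use neural networks, but the principle of
# "learn patterns from past data, then emit the best statistical fit" is the same.
from collections import Counter, defaultdict

# --- Training phase: count which word tends to follow which ---
corpus = (
    "the engine failed due to bearing wear "
    "the engine failed due to oil starvation "
    "the pump failed due to seal leakage"
).split()

next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

# --- Inference phase: given a prompt word, emit the most frequent follower ---
def predict_next(word: str) -> str:
    followers = next_word_counts.get(word)
    if not followers:
        return "<unknown>"          # nothing in the "past data" to fall back on
    return followers.most_common(1)[0][0]

print(predict_next("engine"))   # -> "failed"  (a pattern, not a diagnosis)
print(predict_next("failed"))   # -> "due"
print(predict_next("turbine"))  # -> "<unknown>" (never seen, no insight of its own)
```

The model “answers” about engines without knowing what an engine is; it only reproduces the statistics of its past data, which is the point being made above.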

Let us find out more about LLMs.

An LLM is “a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets. Large language models are among the most successful applications of transformer models.” (Lee, 2023)
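As a hedged illustration of what “generate text based on knowledge gained from massive datasets” looks like in practice, the sketch below drives a small open-source model (GPT-2) through the Hugging Face transformers library. The model choice and prompt are our own stand-ins, not anything prescribed by the cited source.

```python
# Minimal sketch of driving an LLM for text generation, using the open-source
# Hugging Face `transformers` library and the small GPT-2 model as a stand-in.
# (Requires: pip install transformers torch)
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A large language model can summarize maintenance reports by"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```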

How does it feel when we talk to an AI Chatbot?

To quote: “Engaging in dialogue with the latest generation of AI chatbots, based on “large language models” (LLMs), can be both exciting and unsettling. It is not an experience many people have had yet – these models are still too computationally demanding to be widely available – though this will certainly change over the next few years as new chips are developed to run them at low cost.” (Arcas, 2022)

So cost is also a constraint: the larger the datasets, the greater the computational power needed and the higher the cost. In due course it may become cheaper, though.

LLMs use transformer models to generate their responses. What are these?

Transformer models used in LLMs “are a type of artificial neural network architecture that is used to solve the problem of transduction or transformation of input sequences into output sequences in deep learning applications.” (Rogel-Salazar, 2022)
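The following is a conceptual sketch, with made-up toy matrices, of the scaled dot-product self-attention step at the heart of transformer models. Real transformers stack many such layers with multiple heads, positional encodings and feed-forward blocks; this only shows the core operation by which every position in a sequence weighs every other position.

```python
# Conceptual sketch of scaled dot-product self-attention, the core operation
# inside transformer models: each position in the input sequence weighs
# ("attends to") every other position when building its output representation.
import numpy as np

def self_attention(X: np.ndarray, Wq, Wk, Wv) -> np.ndarray:
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # queries, keys, values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of every token pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                              # e.g. 4 tokens, 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

print(self_attention(X, Wq, Wk, Wv).shape)           # (4, 8): one vector per token
```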

The output of an LLM is in plain English and it makes sense. Even though the chatbot does not understand its own output, it is amazing for the user, who feels that the chatbot is talking coherently.

As the size of the datasets behind the LLMs increases, the responses contain greater detail. However, when we increase the size of the model, the level of “unexplainability” and “incomprehensibility” also increases. It has been observed that “as performance increases, explainability tends to decrease.” (Singh, 2021)

So this is another barrier in present-day AI chatbots.

The power of AI chatbots, notwithstanding these limitations, is immense, and it can be leveraged for MRO systems. Large datasets of documentation, sensor readings and failure records can form the basis of MRO-specific chatbots.
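As a purely hypothetical sketch of how such an MRO-specific chatbot might ground its answers in maintenance data, the snippet below ranks a handful of invented maintenance-record snippets against a technician’s question using TF-IDF similarity (scikit-learn); the top matches would then be handed to an LLM as context. Every record, name and score here is illustrative only, not a reference implementation.

```python
# Hypothetical sketch of the retrieval step an MRO-specific chatbot could use:
# rank maintenance-record snippets by similarity to the technician's question,
# then hand the top matches to the LLM as context. The records below are made up.
# (Requires: pip install scikit-learn)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

maintenance_records = [
    "Hydraulic pump replaced after pressure dropped below limits on test stand.",
    "Bearing vibration exceeded limits; borescope inspection found spalling.",
    "Fuel nozzle coking observed during hot section inspection; nozzles cleaned.",
    "Oil filter bypass indicator popped; metal particles found on chip detector.",
]

question = "What should I check if vibration levels on the bearing are high?"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(maintenance_records)
query_vector = vectorizer.transform([question])

scores = cosine_similarity(query_vector, doc_vectors)[0]
best = scores.argmax()
print(f"Most relevant record (score {scores[best]:.2f}):")
print(maintenance_records[best])
```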

As the datasets are central to the chatbot, their security will be crucial and would need to be ensured.

However, with the level of technology already available, a trained MRO technician can ‘talk’ with the chatbot and get excellent support, thus increasing both the efficiency and effectiveness of MRO activities.

Albert Einstein on the Limits of Scientific Description. (n.d.). Retrieved from Autodidactproject.org: http://www.autodidactproject.org/quote/einstein_11_sci_description.html

Arcas, B. A. (2022). Do Large Language Models Understand Us? Retrieved from MIT Press Direct: https://direct.mit.edu/daed/article/151/2/183/110604/Do-Large-Language-Models-Understand-Us

Lee, A. (2023). What Are Large Language Models Used For? Retrieved from NVIDIA: https://blogs.nvidia.com/blog/2023/01/26/what-are-large-language-models-used-for/

Rogel-Salazar, D. J. (2022). Transformers – Self-Attention to the rescue. Retrieved from DOMINO: https://www.dominodatalab.com/blog/transformers-self-attention-to-the-rescue

Singh, R. (2021). An in-depth analysis of Explainable AI. Retrieved from Druva: https://www.druva.com/blog/an-in-depth-analysis-of-explainable-ai/
