
When AI Solves the Wrong Problem

  • Writer: Mustafa Nourallah, PhD
  • Feb 21
  • 2 min read

Updated: 8 April 2026


[Image: Eye-level view of a classroom filled with engaged students discussing startup ideas]

On a sunny spring day, I needed to call my bank about an important matter.


As is often the case, the interaction began with the usual identification step: am I calling as a private individual, as a company representative, or for some other purpose? Normally, the automated system asks me to press 1 if I am a private customer, 2 if I am calling on behalf of a company, or 3 for anything else.


It is a simple process. One button, one second, and the call moves forward.


This time, however, the bank had introduced a voice-based AI assistant. Instead of pressing a number, I had to state my request aloud. The assistant responded that it was not sure it had understood me and asked me to repeat myself. At that point, I could not help but laugh.


Not because the technology failed, but because this is not the kind of problem I would expect AI to solve in such a complicated way.


The experience reminded me of a study I read some time ago about the limited level of AI literacy among corporate boards. Weill, Woerner, and Banner (2025) concluded that only 26% of boards had sufficient knowledge of AI. This raises an important question: how many decisions about adopting AI are driven by a clear understanding of its strategic value, and how many are driven by fashion, competitive pressure, or fear of being left behind?


Financial institutions today appear deeply committed to implementing AI applications. In many cases, this is understandable. Market pressure is intense, customer expectations are changing, and the potential for service innovation and cost reduction is real. Yet the decision to adopt AI should not be confused with the decision to adopt it well.


In my ongoing research project, I collaborate with several institutions and banks across multiple countries. What is becoming increasingly clear is that there is a genuine need to strengthen AI-related knowledge, not only among employees, but also among leaders, executives, and decision-makers.


This is not a simple task. The field is moving rapidly, and institutions must make sense of a wide range of developments, tools, and competing priorities. In some cases, the Frugal AI Hub at Cambridge Judge Business School (see https://frugalai.org/) may be highly relevant to a bank’s future operations. In other cases, it may be even more important to understand the behavioral factors that foster bank customers’ trust in AI solutions, and to recognize that one size does not fit all (Nourallah, 2023).


AI in financial institutions should not be adopted because it is available, but because it solves the right problem in the right way!



Note: I used AI assistance (ChatGPT) to review the language and to develop the image attached to this blog post.

 

References


Nourallah, M. (2023). One size does not fit all: Young retail investors’ initial trust in financial robo-advisors. Journal of Business Research, 156, 113470.


Weill, P., Woerner, S. L., & Banner, J. S. (2025, December 8). AI-savvy boards drive superior performance. MIT Sloan Management Review.


