Wealth Think

Will smooth-talking AI avatars replace human advisors?

Will AI make financial advisors obsolete?  

There are those who believe it will. But there were also many who predicted that robo advisors would quickly turn human advisors into museum-bound artifacts. Once the hubbub died down, the few robos that survived carved out just a tiny segment of the market. Human advisors barely noticed.

Scott MacKillop, strategic advisor for GeoWealth

But that generation of robos was like stick figures drawn by a 5-year-old compared with the new wave of AI-created advisors that is just around the corner. Or so we are told.

A quick Google search of "AI-generated avatars" will give you a sense of the possibilities. Using any number of available programs, in minutes you could conjure up a human-looking image with the precise features you describe. They are not your typical explainer-video, two-dimensional cartoon figures — these 3D creatures could easily pass for one of the gang.

The coming generation of financial advisor bots will be embodied in handsome or gorgeous avatars with soothing voices, capable of expressing empathy and possessing the emotional intelligence of a trained therapist. They will be active listeners that can discern your feelings from your body language and the timbre of your voice. They will respond immediately to your deepest needs like the dream partner you never had. And, of course, they will have instant access to everything that has ever been written about investment management, financial planning, taxes and estate planning.

That all sounds great, but I have a few — OK, more than a few — questions.

The uncertainty factor

The work of a financial advisor is all about dealing with the future, which, by definition, is shrouded in uncertainty. It certainly would be helpful to know everything that has ever happened in the past, but that knowledge has only limited value in predicting the future. In financial services, the historical record is full of conflicting information.

Even though the beautiful bots can sort through it in a nanosecond:

  • Which gurus will the AI avatars rely upon, and which will they dismiss?
  • When they encounter conflicting research, how will they decide which best captures truth?
  • Will they give equal weight to data from the 1950s and data from the 2020s?
  • Whose predictions about the markets and the economy will they subscribe to?
  • How will they determine what is overvalued and what is undervalued?
  • Will they be conservative or aggressive? Passive or active?
  • Will they seek simplicity or "sophistication"?

In a world with so many differences of opinion, gray areas and conflicting perspectives and so few established norms, how will they find the "right" answers among the data? After all, advisors can't always agree, but at least they can explain how they reached their conclusions. How will the bots decide — and can we trust their conclusions?

Regulatory questions

The Securities and Exchange Commission has taken a more guarded view of AI than it took when the original wave of robo advisors arrived some years ago. The SEC is leery of AI-washing and the potential for conflicts of interest that are inherent in the technology.

Advice-offering avatars are most likely to be deployed by large, well-funded firms with something to sell. What biases, tilts and preferences will be built into the avatars' brains?   

How does an agency that consists mostly of lawyers and accountants get comfortable with the advice rendered by an AI-powered advisor? Will it set standards for acceptable avatar behavior? Will it require the bot to take the Series 65 exam? (It may pass with flying colors.) Would the bot be subject to the firm's code of ethics?

Will the SEC hold bots to a fiduciary standard, a best interest standard or a suitability standard? How will the SEC satisfy itself that the bot can or will determine what is in a client's best interest? What does objectivity even look like to a computer program?

Sorting all this out will take time and tools that the commission may not possess today. In the meantime, the SEC's skepticism about AI is likely to slow, if not permanently restrict, the adoption of unsupervised avatar-delivered advice. At a minimum, there will likely need to be disclosure, so people know they are dealing with an artificial advisor. 


Entering the uncanny valley

Which brings up a fundamental question and an intriguing hypothesis. 

Will people want to receive advice from a robot, even an attractive, hyperrealistic one? In his essay "The Uncanny Valley," roboticist Masahiro Mori posits that robots are perceived as more appealing as they approach humanlike appearance — up until a point where they almost resemble a human. 

Then they are perceived as eerie, creepy or repulsive.  

However, as the robot becomes less distinguishable from a human, the reaction to it once again becomes positive and approaches human-to-human empathy levels. What role the uncanny valley effect will play in the adoption of AI-based advice delivery models is unknown, as studies attempting to confirm or measure the phenomenon have had mixed results. Some suggest the effect may be generational, that those who have grown up with digitally created imagery will be less affected by it than their elders. So, those who have the most money today (older folks) may be less inclined to rely on avatars for advice than those who will inherit the money later (younger folks).

We don't know where the first generation of humanlike avatars will land. Will they fall into the valley of repulsiveness, or be so human in character as to warm the cockles of our hearts? Will we turn and run or invite them to dinner? 


We know from the robo advisor experience that some level of human involvement is important for most investors. 

Robos were eventually forced by consumer preferences to develop hybrid advice models that incorporated some level of human interaction. A recent Cerulli study confirmed the point, finding that 5% or fewer of the investors surveyed favored advisory practices offering only online engagement options.

It may take a while before large numbers of wealthy investors are ready to rush into the virtual arms of digital advisors, no matter how smart, attractive and empathetic they are. 

Advisors are safe — for a while

AI can be a powerful tool. Certainly, it will make advisors more efficient and free them from many of the tasks they must perform today. Maybe advisors will even sit side by side with an avatar in a conference room someday while dispensing advice to clients.

The ability to simulate the look and sound of a human already exists. Simulating trustworthy judgments in areas of uncertainty may take a little longer. Overcoming regulatory hurdles and the general preference that most people have for dealing with a human advisor will take a while, too.

The upshot for advisors: Don't start training for an alternative career just yet — but do prepare for AI-driven change.
