What do you mean by the future? They are already the present.
More than half of B2B companies already have chatbots implemented, and B2C companies are close to 50%. Some industries are nearing 75% adoption.
Overall, when more than half of medium-to-large companies are already using them, I'd say it's no longer about the future, but the present.
Some reports estimate that general adoption will reach as high as 80% sometime during 2025.
The future is already here, my froggy friend.
The main question is how much it will improve. Also, not all implementations are equal. I was recruited to help one very large organization (double- or triple-digit billions of dollars in just Treasury and market equity holdings, not counting other assets) with their chatbot. I was asked to try to break it or exploit loopholes so they could address them and keep it on target for their intended use case: staying on topic and within a limited scope. I can tell you that some companies have put far more effort and refinement into their chatbot implementations than others. Some are much more helpful than others. Some even have agentic abilities and properties, including access to different API functions. Others are nearly useless generalists without a useful knowledge base included as their primary library.
I have some other clients who are building new startups with a core foundation of generative AI. Some of them are actually really interesting in ways a lot of anti-AI people would approve of as a positive step. For example, with one company I'm helping, one of the main reasons they made generative AI the foundation was so they could build a sort of firewall around it that locks the AI out of ever accessing any personal information at all. If it needs to review anything containing a name, phone number, address, payment information, or anything else people would consider sensitive, those fields are tokenized into something totally unrecognizable before the record is given to the AI. The AI does its analysis and sends the result back to the system, which then maps the tokens back to the original information so the human can understand it. The AI is completely blocked from accessing personal information in any way.

That kind of complete separation should honestly be required for pretty much every company, but especially for industries like healthcare. Right now everybody just lets the major generative AI companies access everybody's data. In some cases they'll sign a BAA (business associate agreement) or something, but the problem is that there have been a lot of accidental leaks, and sometimes the AI has shared confidential material with random other people. So the idea here is to never allow the AI to touch certain confidential information in the first place. Then, in addition to tokenizing critical information for various kinds of data analysis, local model hosting should be a priority for most businesses, in my opinion.
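That field-scrambling flow can be sketched in a few lines. Strictly speaking it's a reversible token map rather than a one-way hash, since the system decodes the values back for the human at the end. Everything here (the `PIIFirewall` class, field names, token format) is my own illustrative assumption, not the actual company's design:

```python
import secrets

class PIIFirewall:
    """Swaps sensitive field values for opaque tokens before a record
    reaches the AI model, and restores them afterwards for the human."""

    def __init__(self, sensitive_fields):
        self.sensitive_fields = set(sensitive_fields)
        self._token_to_value = {}   # token -> original value

    def tokenize(self, record):
        """Return a copy of `record` with sensitive values replaced by tokens."""
        safe = dict(record)
        for field, value in record.items():
            if field in self.sensitive_fields:
                token = f"<PII:{secrets.token_hex(8)}>"
                self._token_to_value[token] = value
                safe[field] = token
        return safe

    def detokenize(self, text):
        """Map any tokens in model output back to the original values."""
        for token, value in self._token_to_value.items():
            text = text.replace(token, str(value))
        return text

# Example: the model only ever sees the tokenized record.
firewall = PIIFirewall({"name", "phone", "address"})
safe = firewall.tokenize({"name": "Jane Doe", "phone": "555-0100", "order_total": 42.50})
model_output = f"Refund approved for {safe['name']}."   # what the AI would produce
print(firewall.detokenize(model_output))                # "Refund approved for Jane Doe."
```

The key property is that the token-to-value map never leaves the local system, so even if the model logs or leaks its input, there's nothing identifying in it.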
If companies did a lot more of both of those things, I think a lot of people would feel much more comfortable with AI having access, because data analyzed by a locally hosted model can't possibly go anywhere else, and the extra safeguard is that critical data, especially anything identifying, is never available to the model in the first place.
After that core foundation, which I think is critical, companies should have their chatbots follow various kinds of looping instructions with mild to moderate agentic permissions to interact with the API. If companies did all of the above, then in some cases I might even prefer a chatbot over a human for the majority of my interactions, assuming the chatbot has the authority and ability to process the things I need, such as refunds, corrections to services, replacements, warranty processes, etc. If there's a clearly outlined policy that any human would have to follow anyway, I don't have a problem with a bot being told to carry out the same policy. A human who reads scripts and follows a set policy is basically a human bot to me already. But at least a digital chatbot will be consistent and predictable and do things right more often. It should also have a policy for when to escalate to a human supervisor in rare cases. I really don't have a problem with any of that. Quite the contrary.
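The "follow the same policy a human would, escalate when outside it" idea is easy to sketch. This is a toy example with invented policy numbers and function names, not any real company's logic:

```python
from dataclasses import dataclass

# Illustrative policy table -- the threshold is invented for this example.
REFUND_POLICY = {"max_auto_refund": 100.00}

@dataclass
class Resolution:
    action: str   # "refund" or "escalate"
    note: str

def handle_refund_request(amount: float, within_warranty: bool) -> Resolution:
    """Apply the same written policy a human agent would have to follow.
    Anything outside the policy's clear scope escalates to a person."""
    if not within_warranty:
        return Resolution("escalate", "Out-of-warranty claims need human review.")
    if amount <= REFUND_POLICY["max_auto_refund"]:
        return Resolution("refund", "Auto-approved per the written refund policy.")
    return Resolution("escalate", "Amount exceeds the bot's permission; routing to a supervisor.")

print(handle_refund_request(45.00, True))    # in policy: auto-refund
print(handle_refund_request(450.00, True))   # over the cap: human supervisor
```

The point is that the bot's agentic permissions are bounded by the same policy document a scripted human agent would be bound by, plus an explicit escalation path for the rare cases.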