In contacting support recently, I’ve noticed that the email and chat responses sound a lot like an AI. For example, when asking about the voltage for my new doorbell, the response was:
Just to let you know, the device only requires an AC power supply of 16-24 volts. Isn’t that great?
What human would think it was “great” that your power supply voltage can be 16-24 volts?
Maybe the support techs are just picking from automated responses to send. But it really makes it sound like we’re not dealing with real people here.
I know that the ChatBot is AI, or rather AUI (Artificial UnIntelligence). But I would hope that when connected to a Wizard by Email or Chat, Wyze wouldn’t be so brazen as to purposely disguise a chatbot as a real person.
If they aren’t bots, they certainly do need some serious communication skills training, or at least scripts written and peer reviewed by a group of users who have actually used the product.
I think that a lot of things are scripted, but rarely verified by real users. I say this from lots of customer service experience. The problem is that, as with a lot of customer service, quality assurance teams want people to stay on script as closely as possible, and when they do, they can sound unnatural, because the scripts are written by someone who isn’t actually the one saying them to customers. I saw plenty of this in my day, long before there was any AI. Believe it or not, lots of policy writers will produce scripts that sound exactly that cringey, and they were actual human writers.
At this point, it is very possible that whoever is in charge of writing scripts for various issues and topics leveraged an AI to draft them, and now all the reps are stuck using that output as a template.