Conversational AI – Coming to Retail Soon…

By Retail Systems | January 18, 2022

We don’t know about you, but all the talk about conversational AI seems to us to be mostly talk, and there is an irony in that observation. The Alexa, Google Mini, and Pixel assistants are woefully inadequate at understanding nuance or building even simple suggestions from repeated tasks. They will change the volume, but they never remember what we have done a dozen times before. There is no persistence; it is like carrying variables through a multi-step web process. You would think they might have some form of cookies to give them historical context, but they don’t. Still, just the other day we saw that Checkers is rolling out AI assistants.
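To make the "cookies" complaint concrete, here is a toy sketch (entirely our own illustration, not any vendor's API) of the kind of lightweight memory a voice assistant could keep so that a task repeated a dozen times finally becomes a suggestion:

```python
# Hypothetical sketch: a per-user "cookie" for a voice assistant.
# Every name here is our own invention for illustration purposes.
import json
from collections import Counter
from pathlib import Path

class AssistantMemory:
    """Persists a count of past requests across sessions to a file."""

    def __init__(self, path: str = "assistant_memory.json"):
        self.path = Path(path)
        if self.path.exists():
            self.history = Counter(json.loads(self.path.read_text()))
        else:
            self.history = Counter()

    def record(self, intent: str) -> None:
        # Remember the request and write it back out, like a cookie.
        self.history[intent] += 1
        self.path.write_text(json.dumps(self.history))

    def suggest(self) -> list:
        # Once a request has repeated enough, surface it as a habit.
        return [i for i, n in self.history.items() if n >= 3]

memory = AssistantMemory()
for _ in range(3):
    memory.record("set volume to 40%")
print(memory.suggest())  # the repeated task becomes a suggestion
```

Nothing about this is hard; the point is simply that today's assistants start every conversation from zero.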

Along with query variations and intent, there are also multiple dialects to handle. We can talk with a Rochester, NY accent or an Okie accent, or we can talk like we are from Tyler, Texas.

From fluencycorp —

There are roughly 30 major dialects in America. Go here if you’d like to see a map of the various regions with an example of what each dialect might sound like. On the East Coast, we have many very small regions, with slightly varying dialects in each one. New England and the East Coast are more densely populated, with little pockets of immigrants from other countries. For this reason, we have Boston Urban, Bonac, New Yorker, Hudson Valley, Pennsylvania German-English, Inland Northern, and North Midland, all within about 5 hours’ driving of each other. Once you start going west, many of the regional dialects span 3-4 states, with Texas alone having just two: Southwestern and Gulf Southern. The entire West Coast encompasses only three dialects, and these areas are also known for having a more neutral accent: Pacific Northwest, Pacific Southwest, and some Southwestern (just like in Texas).

Google AdWords has expanded its backend and now allows segmenting search based on intent: what is it the user hopes to accomplish?
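As a rough illustration of what intent segmentation means (our own toy bucketing, not Google's actual classifier), queries can be sorted by what the user hopes to accomplish:

```python
# Toy intent bucketing: keyword cues mapped to intent categories.
# The cue lists are illustrative assumptions, not a real taxonomy.
INTENT_CUES = {
    "transactional": ("buy", "order", "coupon", "price"),
    "navigational": ("login", "near me", "hours", "directions"),
    "informational": ("how", "what", "why", "best"),
}

def classify_intent(query: str) -> str:
    """Return the first intent bucket whose cue appears in the query."""
    q = query.lower()
    for intent, cues in INTENT_CUES.items():
        if any(cue in q for cue in cues):
            return intent
    return "unknown"

print(classify_intent("order a burger near me"))   # transactional
print(classify_intent("how do drive-thru AI assistants work"))  # informational
```

Real systems use trained models rather than keyword lists, but the segmentation idea is the same: route the user by goal, not just by words.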

Microsoft just announced an Azure AI milestone: new Neural Text-to-Speech models that more closely mirror natural speech.

The latest version of the model, Uni-TTSv4, is now shipping into production on a first set of eight voices (shown in the table below). We will continue to roll out the new model architecture to the remaining 110-plus languages and Custom Neural Voice in the coming milestone. Our users will automatically get significantly better-quality TTS through the Azure TTS API, Microsoft Office, and Edge browser.
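For a sense of what "through the Azure TTS API" looks like in practice, here is a minimal sketch of the SSML an application might send to the service to request one of the neural voices. The voice name is one real example from Azure's catalog; the actual request would also need a subscription key and region endpoint, which we omit here.

```python
# Build the SSML body for an Azure Text-to-Speech request.
# "en-US-JennyNeural" is an example Azure neural voice; swap in
# whichever voice the application needs. Authentication and the
# HTTP call itself are omitted from this sketch.
def build_ssml(text: str, voice: str = "en-US-JennyNeural") -> str:
    return (
        "<speak version='1.0' xml:lang='en-US'>"
        f"<voice name='{voice}'>{text}</voice>"
        "</speak>"
    )

ssml = build_ssml("Welcome to the drive-thru. What can I get you?")
print(ssml)
```

The same SSML payload works whether the caller is a kiosk, a drive-thru lane, or an in-store assistant; only the voice name and text change.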


Author: Retail Systems

Craig Allen Keefner is an influential figure in the self-service technology industry, best known for his leadership in kiosks, digital signage, and retail automation. Based in Denver, Colorado, Keefner has managed the Kiosk Industry Group (Kiosk Manufacturer Association) since 2014, supporting self-service professionals and overseeing projects in kiosks, point-of-sale systems, thin client technology, and related fields.

Over his career, Keefner has served in various executive and managerial roles, including as owner and CEO of pioneering kiosk and retail tech companies, as well as managing key industry websites such as kioskindustry.org and thinclient.org. His experience also includes significant contributions to the deployment and advancement of interactive technology in healthcare, retail, and smart cities.

Keefner holds a BA from the University of Tulsa and has earned credentials in electronics and technology from institutions like the Missouri Institute of Technology and DeVry. Often recognized as “Mr. Kiosk,” he is noted for his expertise, industry advocacy, and innovation in digital self-service solutions.