Dave Limp, Amazon’s SVP of Devices and Services, recently announced that the company’s digital assistant, Alexa, will soon incorporate a custom-built large language model (LLM) in nearly all new Echo devices. Amid a flurry of announcements about Amazon tablets and Alexa-enabled gadgets, Limp said the LLM was designed around five core abilities, one of which is making interactions feel “conversational.” Amazon argues that a good conversation involves more than just words — it also draws on body language, an understanding of the speaker, eye contact, and gestures. Critics are waiting to see how Amazon translates those ideas into its Echo devices.
Judging by recent demonstrations at an Amazon event, Alexa still needs refinement. When Limp asked Alexa to send a casual BBQ invitation to his friends, the assistant phrased the message as an invite for “BBQ chicken and sides” — hardly the most natural wording. Alexa also intermittently failed to respond to Limp’s commands during the presentation. To be fair, these hiccups may simply reflect the inherent difficulty of demonstrating voice assistant technology in a live, unpredictable setting.