I’ve been dictating papers, notes, and emails for more than 15 years (and have a copy of Dragon NaturallySpeaking v1 and a closet full of mics to prove it). Some of you know that I regularly dot my spoken language with dictation hyphen speak comma and frequently don’t know that I’m doing it exclamation mark.
So, I think there is great potential for the voice interface movement (see the Amazon Echo, Siri, Cortana, etc.) to revolutionize the way that we interact with technology. We’re still in that early haphazard phase, in which companies are trying to inject voice into every box, app, tool, and small animal monitoring device — all to see what sticks. This is pretty common in the lifecycle of new technologies (see wearables — who exactly needs pulse ox?), and I think we’ll soon see that voice interfaces have huge potential in digital health. Voice will help us improve accessibility, a much-overlooked challenge for digital health apps. But just imagine the improvements we can make in hard-to-monitor factors like eating, activity, symptoms, and mood.
And with price points dropping on voice tools (like Amazon’s Echo Dot), there is potential to make voice entry ubiquitous.
(This is not an Amazon commercial — really — but) Amazon is making it easier than ever to make conversational voice (and text) agents with their new Lex framework.
I’ve played with several similar frameworks, but the sophistication of the language parsing, the interoperability, the flexibility (same logic for Messenger or Twilio), and the cost efficiencies really make Lex stand out.
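To make that concrete, here’s a minimal sketch of what defining a Lex intent looks like with boto3 and the Lex V1 model-building API. The bot idea (logging meals for the kind of eating-and-symptom tracking mentioned above) and all the names in it — "LogMeal", "MealType", "MealTypeValues" — are hypothetical illustrations, not a real deployment:

```python
# Hypothetical Lex intent for logging meals, shaped for the Lex V1
# put_intent API. The intent name, slot name, and custom slot type
# ("LogMeal", "MealType", "MealTypeValues") are invented for this sketch.
log_meal_intent = {
    "name": "LogMeal",
    "sampleUtterances": [
        "I just ate {MealType}",
        "log {MealType} for me",
        "I had {MealType} for lunch",
    ],
    "slots": [
        {
            "name": "MealType",
            "slotType": "MealTypeValues",  # custom slot type, defined separately
            "slotTypeVersion": "$LATEST",
            "slotConstraint": "Required",
            "valueElicitationPrompt": {
                "messages": [
                    {"contentType": "PlainText", "content": "What did you eat?"}
                ],
                "maxAttempts": 2,
            },
        }
    ],
    # Return the parsed intent to the calling app rather than a Lambda hook.
    "fulfillmentActivity": {"type": "ReturnIntent"},
}


def create_intent(payload):
    """Push the intent definition to Lex (requires AWS credentials)."""
    import boto3  # imported here so the definition above works without AWS installed

    client = boto3.client("lex-models")
    return client.put_intent(**payload)


# create_intent(log_meal_intent)  # uncomment with AWS credentials configured
```

The appeal is exactly the flexibility noted above: the same intent and slot logic serves whichever channel (voice device, Messenger, Twilio SMS) you wire the bot into.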
Next time, #thinktwicebeforeyouapp and go voice.