Following the company’s introduction of sound-sensing lenses, Snap Inc. has now added speech recognition-based lenses to Snapchat. While the previous addition let sounds trigger changes in active lenses, the newest implementation reportedly listens for specific keywords. As many as six of those will be added to the Android application over the course of the next week, appearing in the lens carousel alongside their counterparts. Once added, each lens will show users a prompt explaining how to activate its associated animation. For the time being, each keyword is in English, though support for other languages will likely arrive via a future update. For now, the feature also applies only to lenses that are already present in the app and have special effects. Uttering the word “no,” for example, prompts the app to place the user in an infinite regress tunnel, while the word “yes” is tied to a zoom lens.
In the meantime, no timeframe has been provided for any additional features that might accompany the newest addition to the Snapchat app, or for when more keywords will be released. The company has also not indicated whether the feature will make its way to more lenses. It is possible, however, for speech recognition to be used in conjunction with the previously added lens-triggering actions. Although speculative, that means Snap could eventually roll out voice activation for each of its current and future animated lenses without losing facial expression or noise-level triggers, giving Snapchat users far more ways to interact with the app.
Simultaneously, it would benefit users by reducing the chances of unwanted blur or shaking caused by having to physically tap on-screen software keys. It’s a concept the company has tested the waters with before, using facial expression triggers and the above-mentioned sound cues. Beyond end-user benefits, intelligent voice-based activations could eventually see use by advertisers or make their way to the company’s Lens Studio. The latter would allow lens creators to incorporate speech recognition for any animations or sounds they want to include. However, none of that is guaranteed at this point.