For decades, vendors of server-based voice picking systems have touted the accuracy of their proprietary speech engines and their architecture of individual user profiles. At the time, that was a genuine improvement over the early generations of user-independent engines. It also offered benefits (for the vendor, that is): the ability to charge by the user rather than by the session or the device, and in most cases a requirement for proprietary hardware, another revenue boon. But times have accelerated forward. The new user-independent speech recognition engines have evolved, are constantly being improved, and today are every bit as good as, or dare I say even BETTER than, the user-dependent proprietary products.
Modern user-independent voice recognition engines have been through numerous revisions and upgrades in the last five years alone, backed by hundreds of millions of dollars of investment over the past decade. These newer technologies incorporate substantial improvements in noise reduction, speech recognition, language understanding, dialogue, text-to-speech, voice biometrics, and many other facets of a mobile voice interface.
Newer acoustic models deliver significant accuracy improvements, simplify usage, and enable on-device dictation as well. Other advances in these speaker-independent engines include better barge-in performance, improved wake-up-word recognition, superior word segmentation, integrated speech triggers, the ability to update field contexts, context compression, and faster development and deployment of acoustic models.
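To make the idea of a "field context" concrete: voice-picking front ends commonly constrain a general-purpose, speaker-independent engine to a small task vocabulary and a confidence threshold, which boosts accuracy without any per-user training. The sketch below is purely illustrative; the vocabulary, threshold, and function names are hypothetical and do not reflect any specific vendor's API.

```python
# Hypothetical sketch: filtering an engine's n-best hypotheses through a
# small pick-task vocabulary (a "field context"). The engine itself is
# assumed to return (word, confidence) pairs; everything here is illustrative.

PICK_VOCABULARY = {
    "zero", "one", "two", "three", "four",
    "five", "six", "seven", "eight", "nine",
    "ready", "repeat", "skip",
}

def interpret(hypotheses, min_confidence=0.6):
    """Return the best in-vocabulary hypothesis, or 'repeat' to re-prompt.

    hypotheses: list of (word, confidence) pairs from the engine's n-best list.
    """
    # Keep only words the current dialogue step can accept, above threshold.
    in_vocab = [(w, c) for w, c in hypotheses
                if w in PICK_VOCABULARY and c >= min_confidence]
    if not in_vocab:
        return "repeat"  # nothing usable: ask the worker to say it again
    # Choose the highest-confidence surviving hypothesis.
    return max(in_vocab, key=lambda wc: wc[1])[0]
```

For example, given the hypotheses `[("won", 0.95), ("one", 0.9)]`, the out-of-vocabulary "won" is discarded and "one" is accepted; a single low-confidence hypothesis like `[("nine", 0.3)]` triggers a re-prompt instead of a guess.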
Quite honestly, there is a very low probability that boutique, custom-made, user-profile-dependent speech engines come anywhere near the robustness, feature richness, and deployment flexibility that the latest user-independent engines bring to the world of mobile industrial applications.
The latest user-independent speech recognition engines deliver a new level of capability with a broader array of features and benefits: superior functionality, accuracy, and performance across a variety of applications that benefit from speech control. Designed as modular, scalable technologies, these new engines can accommodate a wide range of embedded and mobile applications, and deployments can be customized with optimized footprints as dictated by the functionality each application and hardware device requires. Building on these modern user-independent voice recognition engines, companies like AccuSpeechMobile provide a set of tools that makes it easier than ever to integrate these features and capabilities into the mobile applications and devices that support mobile inspection, MRO, and warehousing.
By making it easy to voice-enable existing, optimized applications, these engines support a development, user acceptance, testing, and production rollout experience that accelerates ROI, significantly reduces the time to train new and seasonal employees, and delivers the accuracy and performance that has been promised for years. Evaluating mobile voice interfaces and how they can impact your operations can be a challenging task. The really good news is that the companies behind independent speech recognition engines continue to pour investment into research and design, so companies like AccuSpeechMobile can use every feature to deliver the best experience for our customers.
So, if you have looked at legacy systems and rejected them out of hand because of the high cost and engineering effort of creating a user profile for every regular employee and seasonal worker, it's time to reconsider adding voice to your operations with a device-based speech recognition technology that eliminates the need for "voice training," significantly reducing the cost and time of implementation. And modern speaker-independent voice technology is at a point where recognition may well be better than in server-based, profile-dependent applications anyway. Better tech at a much lower price: that's how it's supposed to evolve, correct?