New artificial intelligence technology developed by researchers at the University of Waterloo may bring us closer to a world in which machine-learning AI runs on everyday devices.
Researchers at DarwinAI and the University of Waterloo have worked together to develop cost-efficient speech-recognition software.
“Cost and efficiency are two of the biggest bottlenecks to the widespread adoption of machine-learning AI,” said Alexander Wong, a professor of systems design engineering at Waterloo and co-founder of DarwinAI. “This technology significantly addresses those issues and enables a new class of voice assistants for everyday devices with energy-efficiency needs.”
The researchers' AI algorithms can generate speech-recognition software compact enough to run on chips smaller than a postage stamp that cost only a few dollars.
This level of efficiency is made possible by giving the AI algorithm specific requirements, such as recognizing words like "yes," "no," "on" and "off." The AI is then tasked with finding the simplest way to meet those requirements.
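The idea of growing a model only until it meets a stated requirement can be illustrated with a toy sketch. This is not DarwinAI's actual method; the keyword set, synthetic "audio features," and nearest-centroid classifier below are all stand-in assumptions used purely to show the search pattern.

```python
import numpy as np

# Toy illustration (not DarwinAI's actual design process): given a requirement
# -- distinguish a handful of keywords at a target accuracy -- search for the
# smallest "model" that satisfies it.

rng = np.random.default_rng(0)
KEYWORDS = ["yes", "no", "on", "off"]          # the required vocabulary
N_PER_CLASS, N_FEATURES, TARGET_ACC = 50, 32, 0.95

# Synthetic stand-in for audio features: one Gaussian cluster per keyword.
centers = rng.normal(size=(len(KEYWORDS), N_FEATURES)) * 3
X = np.vstack([c + rng.normal(size=(N_PER_CLASS, N_FEATURES)) for c in centers])
y = np.repeat(np.arange(len(KEYWORDS)), N_PER_CLASS)

def accuracy(n_dims):
    """Classify using only the first n_dims features (a crude proxy for model size)."""
    Xd = X[:, :n_dims]
    cd = np.array([Xd[y == k].mean(axis=0) for k in range(len(KEYWORDS))])
    pred = np.argmin(((Xd[:, None, :] - cd[None]) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

# Grow the model only until the requirement is met, then stop.
for n_dims in range(1, N_FEATURES + 1):
    if accuracy(n_dims) >= TARGET_ACC:
        break
print(f"smallest model meeting {TARGET_ACC:.0%} accuracy uses {n_dims} features")
```

The point of the sketch is the stopping rule: capacity is added only as long as the stated requirement is unmet, so the result is "just big enough" for the job.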
“The resulting software is just big enough to do a particular job well,” said Wong, director of the Vision and Image Processing (VIP) Research Group. “That is the goal, the essence of our approach.”
Deep-learning building blocks developed by the researchers allow for greater efficiency and performance of speech-recognition AI. These building blocks, known as attention condensers, focus the software on the important information in sound waves.
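A heavily simplified sketch of how an attention-condenser-style block might work: condense the input activations into a compact representation, compute a small attention map from it, expand the map back to full size, and use it to re-weight the input so the network focuses on the most informative parts of the signal. The pooling factor, random projection weights, and activation choices below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def attention_condenser(V, pool=4):
    """Simplified attention-condenser sketch. V: (channels, time) activations."""
    C, T = V.shape
    # 1) Condensation: shrink the activations (here, max-pooling over time).
    Q = V.reshape(C, T // pool, pool).max(axis=2)
    # 2) Embedding: a small projection computed on the condensed map
    #    (random weights stand in for learned ones).
    W = rng.normal(scale=0.1, size=(C, C))
    K = np.tanh(W @ Q)
    # 3) Expansion: stretch the condensed attention back to the input length.
    A = 1 / (1 + np.exp(-np.repeat(K, pool, axis=1)))  # sigmoid gate in (0, 1)
    # 4) Selective attention: re-weight the original activations.
    return V * A

V = rng.normal(size=(8, 16))      # 8 feature channels, 16 time steps
out = attention_condenser(V)
print(out.shape)                  # same shape as the input
```

Because the attention map is computed on the condensed representation rather than the full-resolution input, the block stays cheap, which is what makes it attractive for tiny on-device networks.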
This cost-efficient AI could be used to control home appliances, vehicles and devices for people with disabilities.
Additionally, because it runs on the device itself rather than sending audio to remote servers, this speech-recognition AI offers users greater privacy than cloud-based voice assistants like Amazon Echo and Google Home.
The paper on this groundbreaking research, titled "TinySpeech: Attention Condensers for Deep Speech Recognition Neural Networks on Edge Devices," was presented at a Neural Information Processing Systems (NeurIPS) workshop.