
Speechmatics, the Cambridge-based speech technology company, has received investment from multiple leading investors to accelerate the commercial roll-out of its products.
Founded by Chief Technology Officer Dr Tony Robinson, whose pioneering PhD research on recurrent neural networks began in the 1980s, Speechmatics has developed a unique machine learning technology to harness the full potential of speech technology.
“This is an exciting time to be in speech recognition,” comments Robinson.
“We are at the forefront of how deep neural networks are changing speech recognition. With our ever-expanding and highly experienced R&D team, we continue to push the boundaries in speech technology, especially around languages, accuracy and deployment.”
The firm received investment from several sources: technology venture capitalist IQ Capital; AI and machine learning specialist Amadeus Capital Partners; and a number of leading technology investors, including Laurence Garrett (Highland Capital Europe), Cambridge Professor Ted Briscoe, a world expert in Natural Language Processing, co-founders of CSR, and Richard Gibson (previously Executive Chairman at SwiftKey).
Benedikt von Thüngen, CEO of Speechmatics, said:
“Over the past two years Speechmatics has seen substantial growth, funded entirely through cash flow. The addition of these highly experienced investors as advisors to the business will help accelerate the ongoing commercialisation of our technology and place speech recognition at the heart of all communications.”
Ed Stacey, Partner at IQ Capital Partners, added:
“Speechmatics’ disruptive technology has significantly greater accuracy than competitors such as Google, IBM or Microsoft, which opens up many new commercial opportunities for speech recognition – from call compliance driven by legislation such as MiFID II and PCI DSS, to content discovery and speech analytics in order to spot trends and understand the voice of the customer.”
With a simple set-up and a combination of cloud and on-premises solutions, Speechmatics technology enables businesses to generate data about customers and employees that can be harnessed to improve processes, increase efficiency and benefit the bottom line. Speechmatics is now even better equipped to help clients navigate complex changes in technology, generate business insights and facilitate more powerful communications in the digital age.

In 2016, Speechmatics launched its AI framework, Auto-Auto, which enables the company to add almost any language automatically. Since building the framework, Speechmatics has released a new language every two weeks, including most European languages as well as Greek, Russian and Arabic*.

Speechmatics has a successful track record of delivering exceptional results across a wide range of applications and industries: for example, language assessment with Cambridge English and content discovery with Udemy. Other applications include call centre analytics, call compliance, subtitling, interview and lecture transcription, and media monitoring.

Speechmatics has developed highly accurate universal models that work across use cases and industries and do not need to be individually trained. Tony Robinson’s world-class research team utilises the latest machine learning and AI technology to offer continual improvements in accuracy, an ever-increasing range of languages and new business applications. This technology is also designed to integrate with other applications in the workplace to generate value and insight.
Richard Gibson, recently appointed Chairman, said:
“I am excited to be working with another successful game-changing deep technology company out of Cambridge. With this funding, Speechmatics will be able to harness the true potential of its technology and accelerate its commercial roll-out.”
*Arabic currently unavailable as of 24/09/2018. For more information contact us at sales@speechmatics.com