Apple has developed a technique that improves Automated Speech Recognition (ASR) on older devices while reducing computational time. ASR systems must identify rare and user-specific words, a problem the new system tackles through neural context biasing (NCB). NCB processing normally consumes significant compute and memory, but Apple's model uses vector quantisation to keep those costs down. The new approach cuts processing time by 20% and reduces observed errors by 71%.

Earlier this year, Apple introduced the DenoisingLM (DLM) to improve ASR accuracy. Even so, voice assistants like Siri remained far from perfect, particularly with certain regional accents.

Despite concerns about quantisation loss, Apple's use of finite scalar quantisation (FSQ) to improve ASR is seen as promising (a minimal sketch of the idea follows below). While Apple Intelligence, which includes Siri, holds promise, it is still less mature than competitors such as OpenAI's Whisper. Moonshine has developed other techniques to reduce the computing demands of accurate ASR, and Assembly AI's latest ASR model, Universal-2, also improves proper-noun recognition and formatting accuracy.
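For intuition on what FSQ does, the sketch below is a minimal, illustrative PyTorch implementation of finite scalar quantisation as described in the research literature: instead of a learned vector-quantisation codebook, each dimension is squashed into a bounded range and rounded to a small fixed grid of values. The function name `fsq_quantise` and the choice of levels are hypothetical for illustration and are not taken from Apple's system.

```python
import torch

def fsq_quantise(z: torch.Tensor, levels: int = 5) -> torch.Tensor:
    """Minimal finite scalar quantisation (FSQ) sketch (hypothetical,
    not Apple's implementation). Each dimension is bounded and then
    rounded to one of `levels` fixed values, so no learned codebook
    is needed."""
    half = (levels - 1) / 2                  # e.g. levels=5 gives grid {-2,...,2}
    bounded = torch.tanh(z) * half           # squash into (-half, half)
    rounded = torch.round(bounded)           # snap to the integer grid
    # Straight-through estimator: the forward pass uses the rounded
    # values, while gradients flow through `bounded` as if rounding
    # were the identity, keeping the layer trainable.
    return bounded + (rounded - bounded).detach()

# Example: quantise a batch of 4-dimensional context embeddings.
z = torch.randn(2, 4)
print(fsq_quantise(z))  # every entry lies on the fixed grid {-2,...,2}
```

Because the grid is fixed, each quantised vector can be stored as a handful of small integer codes rather than full floating-point embeddings, which is where the memory and compute savings for context biasing would plausibly come from.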