They're not doing it, because it would either cause very high CPU usage on the device (obvious from heat and battery life) or a constant stream of network traffic - which would affect battery life and is easily monitored at the gateway. It's not happening.
As there is no network traffic and no CPU usage, how exactly are they doing it? Phones run on physics, not pixie dust.
I'm gonna assume most people's phones aren't sending a constant stream of audio data, as that would be very noticeable - though I suspect some more dubious apps (not the likes of Google, etc.) possibly do.
Modern phones could probably run a rudimentary speech recognition system with very little CPU, especially with the dedicated functionality in today's chipsets aimed at exactly that kind of task. Back in the day there was speech-to-text software that ran on 486s with reasonably minimal CPU use.
EDIT: Interesting first hit on Google: https://qz.com/170668/intels-voice-...of-the-water-because-it-doesnt-use-the-cloud/ I find it a bit hard to believe it would take that much CPU for rudimentary speech-to-text that would be completely OK for this kind of use.
The traffic is tiny - and the CPUs are in the data-centres.
Nate
Even at a fairly low, mono bit rate, streaming audio to a data-centre would be fairly noticeable - off the top of my head, for voice recognition you'd need around 32 kbit/s of raw audio data, possibly around 8 kbit/s with compression.
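To put numbers on "fairly noticeable", here's a quick back-of-envelope sketch (my own arithmetic, not from the thread) of how much data a phone would upload per day if it streamed audio continuously at the bit rates mentioned above:

```python
def daily_megabytes(kbit_per_s: float) -> float:
    """Data uploaded in MB for 24 hours of continuous streaming
    at the given bit rate (1 kbit = 1000 bits, 1 MB = 10^6 bytes)."""
    bits_per_day = kbit_per_s * 1000 * 86400  # 86400 seconds in a day
    return bits_per_day / 8 / 1e6             # bits -> bytes -> MB

for rate in (32, 8):
    print(f"{rate} kbit/s -> {daily_megabytes(rate):.0f} MB/day")
# 32 kbit/s works out to roughly 346 MB/day; 8 kbit/s to roughly 86 MB/day
```

Either figure would stand out clearly in a phone's per-app data usage stats or on a gateway traffic log, which is the point being made.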