*** The official macOS Sequoia thread ***

Surely by now someone has downloaded it on an M3/M4 MBP and figured out the RAM footprint of Apple Intelligence, now that it's available in UK English?

I imagine I would never use something like this, but if I did, I'd want to make sure it doesn't slow down the rest of my machine! I have an 18GB M3 Pro.

Edit: Waiting....



rp2000
 
Last edited:
OK, it's installed and activated now. It doesn't seem to use much RAM for the basic tasks I gave it. From watching Activity Monitor, it used 1-2GB while I was simultaneously using Siri/ChatGPT and Image Playground.

It's OK, but not majorly different from Copilot on Windows, ChatGPT on many platforms, or Gemini on mobile phones.
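If anyone wants to sanity-check the Activity Monitor numbers from Terminal, here's a rough sketch. The process-name patterns are guesses on my part; adjust the grep/awk pattern to match whatever shows up on your machine:

```shell
# Sum resident memory (RSS, reported in KB) of matching processes.
# "Siri" and "Playground" are assumed process names - check yours first
# with: ps -ax -o rss=,comm=
ps -ax -o rss=,comm= | awk '/[Ss]iri|Playground/ { total += $1 } END { printf "%.1f MB\n", total/1024 }'
```

RSS over-counts a bit on macOS because shared frameworks get counted per process, so treat it as a ballpark, not gospel.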



rp2000
 
The key difference for me is that it's all processed locally. You can turn it on and off easily, and you can choose whether or not to hook it into ChatGPT.
In my limited testing, I found the Siri experience quite poor, to be honest. I reckon that without enabling the ChatGPT integration (which I did), it would be much the same as the old Siri! It was a bit annoying having to confirm each time it wanted to use ChatGPT, but maybe that's an option I missed.

I forgot to check my free space before and after enabling it. As you sort of mentioned, many of the models are stored locally, so it would be interesting to know the storage footprint. I reckon it's a few GB, which most people won't even notice.
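For anyone who hasn't enabled it yet: you can get the footprint the crude way by snapshotting free space before and after the download finishes. This uses only stock commands and makes no assumptions about where Apple actually stores the models:

```shell
# Snapshot free space on the boot volume (df -k reports 1K blocks).
# On macOS the writable data lives on /System/Volumes/Data, but / works
# as a rough proxy. Run once before enabling, once after, and subtract.
df -k / | awk 'NR==2 { printf "%.1f GB free\n", $4/1024/1024 }'
```

Not precise (background caches churn too), but good enough to spot a multi-GB model download.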


rp2000
 