There's a paper that demonstrates that as LLMs get larger they become more inaccurate. So it's not a simple case of more is better.
That claim isn't so simple either, and the field has moved on a bit. There are known issues with transformers (as Yann LeCun has been vocal about), and it's true that in theory errors can compound as LLMs grow in size - but does that happen in practice? Not necessarily: GPT-4 makes fewer errors than GPT-3, let alone GPT-2. And for some months now we've also had reasoning models that incorporate planning and chain-of-thought reasoning, which helps reduce errors further - see OpenAI's o1 and o3 models.
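As a minimal sketch of what chain-of-thought prompting looks like in practice (assuming the `openai` Python package and an API key in the environment; the model name and the trick question are just illustrative):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = ("A bat and a ball cost £1.10 in total. The bat costs £1 more "
            "than the ball. How much does the ball cost?")

# Plain prompt: the model answers directly, and this kind of trick
# question is a classic place for a direct answer to slip up.
direct = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: asking for step-by-step working before the
# final answer tends to reduce errors on multi-step problems.
cot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": question + " Think step by step, then give the answer."}],
)

print("Direct:", direct.choices[0].message.content)
print("CoT:   ", cot.choices[0].message.content)
```

Reasoning models like o1 effectively bake this step-by-step behaviour into the model itself rather than relying on the prompt.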
The models of AI are all closed and owned by US companies. This means the UK needs to start, from scratch, to make its own cloud and AI.
That's false too - I presume you mean generative AI? As far as LLMs are concerned, Meta has released open models, and likewise China's DeepSeek; DeepSeek-R1 is even a reasoning model that rivals OpenAI's o1, though that particular model isn't open source yet (they do apparently intend to release it, and their other models are open).
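To show how open these weights actually are, here's a minimal sketch of loading one with Hugging Face's `transformers` library (the repo ID is illustrative - Meta's models are gated behind a licence acceptance on the Hub - and `device_map="auto"` needs the `accelerate` package installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative repo ID; requires accepting Meta's licence on Hugging Face.
model_id = "meta-llama/Llama-3.1-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spreads the weights across available GPUs/CPU
    torch_dtype="auto",
)

inputs = tokenizer("Open-weight models matter because", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point being: the weights sit on your own hardware, with no US cloud provider in the loop.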
Likewise, on the imaging side, one of the early open success stories was Stable Diffusion, granted that Stability AI has fallen behind a bit recently.
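Stable Diffusion's weights are similarly a pip install away; a minimal sketch with Hugging Face's `diffusers` library (the model ID is illustrative, and this assumes a CUDA GPU):

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative model ID for an openly released Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a watercolour painting of the London skyline").images[0]
image.save("skyline.png")
```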
Also generative AI only knows the data you have given it.
That's not quite true: the models can reason now and can make leaps beyond their training data - though of course, if a subject area is completely missing from that data, then not so much.