It’s never been easier to run generative AI models locally [1]. The catch is that average users have to invest significantly in hardware to make inference fast enough to be useful. While Apple’s commodity hardware is relatively good at this, the vast majority of users (myself included) don’t have a GPU capable of AI workloads.
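
For what it’s worth, once Ollama is installed the whole thing reduces to talking to a local HTTP endpoint. Here’s a minimal sketch, assuming `ollama serve` is running on its default port (11434) and a model has already been pulled; the model name "llama3" and the prompt are placeholders, not anything prescribed by the linked article:

```python
# Minimal sketch: query a locally running Ollama server over its REST API.
# Assumes the server is up on the default port 11434 and that the model
# named below (a placeholder) has been pulled with `ollama pull`.
import json
import urllib.request

def generate(prompt: str, model: str = "llama3") -> str:
    """Send a one-shot prompt to the local Ollama /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for the full response in a single JSON object
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Why is local inference slow without a GPU?"))
```

On CPU-only machines this works, it’s just that time-to-first-token and tokens/second are where the missing GPU shows up.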

[1] https://fedoramagazine.org/running-generative-ai-models-locally-with-ollama-and-open-webui/