All the big tech companies have released something in the AI space, including Meta, Google, Anthropic, Adobe, NVIDIA, OpenAI, and Microsoft. The only company that has been quiet is Apple, but not for long.
Considering that Siri has been mostly useless with little innovation, especially compared to the latest AI tools, I sure hope Apple makes proper use of their hardware for AI.
Such a well thought-out scenario. Past performance is not indicative of future outcomes, but I also see a clear parallel in the way smartphones became a commodity in our world. With Palm and BlackBerry, smartphones were used by the elite on Wall Street and in the Valley. But with the iPhone and Android, 2008-10 became an inflection point that led to rapid development and adoption of smartphones. The world we live in today, where everyone has a mini computer in their pocket, was barely imaginable even in 2005.
Similarly, now with LLMs and diffusion models acting as the primary drivers, and with "miniaturization" initiatives such as Stanford's work on Alpaca, it seems more likely than ever that powerful hardware like the M series of chips will bring this power to the end consumer really quickly. The neural engines on M chips, or even Google's Tensor chips, are ready-made for such applications, and all it will take is 1-2 generations for the hardware to get insanely capable at the edge.
This world is on its way and I'm looking forward to its arrival.
Just makes you think - if the applications that are running on a supercomputer today will run on a consumer device tomorrow; what kinds of outrageous applications will tomorrow's supercomputers be running? 🤯
Just wanted to say hi, and thanks. I enjoy your tweets and posts about your passion. (and mine -Open AI, Midjourney...)
Emad, CEO of Stability AI, replied in a Q&A on his Twitter regarding a highly personalized AI collaboration between Stability AI and Apple.
Here is the conversation link:
https://twitter.com/bseb__/status/1639603980883070976?s=20