Monday, April 10, 2023

Major announcements from Nvidia, Anthropic, and using long-term memory in GPTs

There have been three important developments and announcements in AI in roughly the last week. The most salient quote comes from the TechCrunch article on Anthropic mentioned in the second point, on what is at stake for the companies building these models: "We believe that companies that train the best 2025/26 models will be too far ahead for anyone to catch up in subsequent cycles."

-      The first big news is actually at least a couple of weeks old: Nvidia made several major announcements at their GTC conference that are as important for AI as the introduction of ChatGPT or GPT-4. They showcased their A100 and H100 chips, designed specifically for machine learning, with unprecedented speedups in processing. These chips can be grouped together to form a DGX system, and DGX systems can in turn be grouped into pods and SuperPODs. These are major breakthroughs with major implications for AI and computing in general. https://www.youtube.com/watch?v=tg332P3IfOU

-      Second, the company Anthropic announced plans to raise as much as $5 billion over the next four years to build a system at least 10X more capable than today's most powerful AI, with a focus on AI self-teaching and alignment. Whether they can deliver remains to be seen, but Anthropic has a lot of momentum. https://techcrunch.com/2023/04/06/anthropics-5b-4-year-plan-to-take-on-openai/

-      Lastly, and not on a par with the announcements by Nvidia and Anthropic but interesting nonetheless, is a small project called MemoryGPT. It is especially interesting in light of projects I talked about last week, like AutoGPT, that are creating the idea of autonomous GPT agents. Incidentally, watching an autonomous AI work is wild. You can give it a very general goal; it will create tasks for that goal, prioritize them, and then relentlessly iterate to achieve it (a minimal sketch of this loop follows below). To achieve those goals it will write and execute its own code, search the internet, and access APIs. The repository has rocketed to be one of the top repos on GitHub.
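To make that agent loop concrete, here is a minimal sketch in Python of the goal -> tasks -> prioritize -> execute cycle described above. This is not AutoGPT's actual code; the llm() function is a hypothetical stand-in for a chat-completion call, and the real project layers tool use, web search, code execution, and memory on top of a loop like this.

from collections import deque


def llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion request."""
    raise NotImplementedError


def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    # Ask the model to break the goal into concrete tasks, one per line.
    tasks = deque(llm(f"Break this goal into tasks, one per line: {goal}").splitlines())
    results = []
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()
        # Execute the current task (in AutoGPT this step can also search the
        # web, call APIs, or write and run code).
        results.append(llm(f"Goal: {goal}\nTask: {task}\nDo the task and report the result."))
        # Let the model add new tasks and reprioritize what remains.
        plan = llm(
            f"Goal: {goal}\nJust completed: {task}\nRemaining tasks: {list(tasks)}\n"
            "Reply with the updated task list, one per line, or DONE if finished."
        )
        if plan.strip() == "DONE":
            break
        tasks = deque(line for line in plan.splitlines() if line.strip())
    return results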

Back to MemoryGPT: this very nascent project attempts to give GPT long-term memory. Instead of being limited to the current session's context window of 4,096 tokens (or 32,000 tokens for GPT-4) in ChatGPT, conversations would be stored in a vector database from which they could be recalled and reused. Pinecone, a vector database that has been around for a while and is widely used in these kinds of applications, is probably the closest existing implementation of this idea.
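Here is a minimal in-memory sketch of that long-term-memory idea: embed each conversation turn, store the vector, and recall the most similar past turns to feed back into the next prompt. The embed() function is a hypothetical placeholder for a real embedding model, and in practice the vectors would live in a vector database like Pinecone rather than a Python list.

import numpy as np


def embed(text: str) -> np.ndarray:
    """Hypothetical embedding call; replace with a real embedding model."""
    raise NotImplementedError


class ConversationMemory:
    def __init__(self):
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, text: str) -> None:
        # Store the raw text alongside its embedding.
        self.texts.append(text)
        self.vectors.append(embed(text))

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Return the k stored turns most similar to the query.
        if not self.vectors:
            return []
        q = embed(query)
        mat = np.vstack(self.vectors)
        # Cosine similarity between the query and every stored turn.
        sims = mat @ q / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q) + 1e-9)
        top = np.argsort(sims)[::-1][:k]
        return [self.texts[i] for i in top]

The recall step is just cosine similarity over stored vectors, which is essentially what a vector database does at scale using approximate nearest-neighbor indexes.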

It should be apparent where this is going once you combine ideas like AutoGPT, GPT with long-term memory, and some of the ideas I talked about last week about GPT applications working together as an orchestrated whole.

https://the-decoder.com/memorygpt-is-like-chatgpt-with-long-term-memory/
https://www.pinecone.io/
