Sunday, December 17, 2023

AI Book Club

I have a new post on my podcast/YouTube channel here

I started this podcast almost three years ago, significantly before the release of ChatGPT. At the time, there were open source models like GPT-J and GPT-Neo that could be used through web interfaces, so work was being done by Hugging Face, EleutherAI, and others, but the latest version from OpenAI was GPT-3.

The aim of the podcast was twofold:

1) Measure the progress in large language models 

2) Have some fun with LLMs by using them to hold a "discussion" like the ones people have when discussing books.

I used different LLMs and did some fine-tuning to try to give the AIs unique personalities. I created two different AIs, Marie and Charles, interacting with the program I built - none of which was scripted beyond my having an idea of the topics and questions I wanted to ask. I then ran the text of our conversations through Google's text-to-speech service and video-syncing Python code that created deepfake renditions for the two video avatars.

All of these versions performed fairly well (at least I thought so at the time), but they did have their shortcomings - many of which I outlined in this video in March of 2022. One of the biggest issues was that they made up their own facts about the book we were discussing. At the time, I don't remember anyone using the term "hallucinating" the way it's now commonly applied to LLMs, but that's what they were doing.

However, they did create some very original and often surprising discussion.

In this newest podcast, where we discuss Alice in Wonderland, I added a third AI that I called Beth. LLMs are much better now than when I first started out. Even though everyone likes to talk about LLM hallucinations, they are much more factually grounded than they were two years ago.

Before, I was very hesitant to ask them about plot details, because they might make up parts of the story that never happened, so I would steer the conversation toward ideas from the plot. Having them be creative about their "interpretation" of what I described works; having them invent, for example, characters that never existed in the story does not.

Now, however, the newest LLMs know the story: they can repeat plot points and comment on them, and what they create is not the plot but their "ideas" about the plot and the characters. It makes for a much better conversation.

In addition, I'm now loading all of the conversations into vector databases and working on using MemGPT to give the AIs long-term memory, so they can have some continuity across episodes. I hope this will not only keep them consistent, but also, since they make up their own backstories when I ask what they've been up to, keep them from contradicting in a current video something they said they were doing in a previous one.
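The long-term memory idea can be sketched with a tiny vector store: embed each past conversation turn, then retrieve the most similar turns to prepend to a new prompt. This is only a minimal illustration of the retrieval concept, not MemGPT's actual mechanism or my production setup; the embedding here is a toy word-count vector (a real system would use a model-based embedding), and the conversation snippets are invented.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words embedding: word -> count.
    # A real pipeline would use model embeddings instead (assumption).
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class EpisodeMemory:
    """Stores past conversation turns and retrieves the most
    relevant ones for a new prompt."""

    def __init__(self):
        self.turns = []  # list of (text, vector) pairs

    def add(self, text):
        self.turns.append((text, embed(text)))

    def recall(self, query, k=2):
        qv = embed(query)
        ranked = sorted(self.turns,
                        key=lambda t: cosine(qv, t[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

# Hypothetical snippets from earlier episodes (invented for illustration).
memory = EpisodeMemory()
memory.add("Marie: I have been rereading Alice in Wonderland this week.")
memory.add("Charles: Last episode I said I was learning to paint.")
memory.add("Beth: I enjoyed our discussion of the Cheshire Cat.")

# Retrieve the turn most relevant to a new question.
print(memory.recall("Charles, are you still learning to paint?", k=1))
```

The retrieved turns would then be stitched into the system prompt for the next episode, which is roughly how continuity is kept without retraining the model.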

You can view the latest video or any of the videos here.


"Superhuman" Forecasting?

This just came out from the Center for AI Safety  called Superhuman Automated Forecasting . This is very exciting to me, because I've be...