Sunday, April 23, 2023

Day 4 of PyCon 2023

Last day of PyCon. Again I got there early for the lightning talks and the keynote. The keynote today was by Margaret Mitchell, a researcher interested in the intersection of machine learning and ethics. Her specialties are natural language generation, assistive technology, computer vision, and AI ethics. She currently works at Hugging Face as Chief Ethics Scientist. Previously, she worked at Google AI, where she founded and co-led Google's Ethical AI group. She has also worked at Microsoft Research, focusing on computer vision-to-language generation, and at Johns Hopkins, focusing on Bayesian modeling and information extraction.

Her talk was very relevant given the current climate of generative models. There were several talks at the conference on machine learning bias, but she brought up examples I'm sure most people there had not seen or thought of before. She also stressed the need for organizations to "operationalize ethical data development": just as data scientists put effort into evaluating their models, they should put the same effort into evaluating their data sources for bias, representation, and ethical use.

Some of the talks I went to today:
  • "Supercharging Pipeline Efficiency with ML Performance Prediction"
  • "Getting Around the GIL: Parallelizing Python for Better Performance"
  • "An Overview of the Python Code Tool Landscape 2023"
The session "An Overview of the Python Code Tool Landscape 2023" was by Al Sweigart. I am a big fan of his. He has written many books on Python, and I've watched him live code on Twitch many times. Probably his best-known book is Automate the Boring Stuff with Python (https://automatetheboringstuff.com/).

He's also a great speaker. He went through many of the coding tools out there, but in the end he said you could get away with just Black, mypy, and Ruff. Ruff is a new package that replaces many of the older tools; it's written in Rust and is incredibly fast (https://github.com/charliermarsh/ruff).
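As a rough illustration of how those three tools divide the work (my own toy example, not anything from the talk): Ruff catches lint issues like unused imports, mypy catches type errors, and Black only reformats.

    import os  # Ruff would flag this unused import (pyflakes rule F401)


    def greet(name: str) -> str:
        # mypy would reject this function: returning 0 contradicts the "-> str" annotation
        if not name:
            return 0
        return f"Hello, {name}!"


    # Black doesn't care about any of the above; it only normalizes formatting,
    # e.g. rewriting greet( 'world' ) as greet("world").
    print(greet("world"))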



I also used this last day as an opportunity to walk around Salt Lake City and take some pictures:






The last talk was by Guido van Rossum, the creator of Python. Since this was the 20th anniversary of PyCon, he gave us the history of PyCon from its difficult early days to the present.

And that wrapped up the last day of PyCon for me. Next year it will be in Pittsburgh and I'm already looking forward to it.

Saturday, April 22, 2023

Day 3 of PyCon 2023

I got to today's session early enough for breakfast and the lightning talks. A nice bonus of PyCon is that breakfast and lunch are included as part of the conference, which means attendees don't have to leave the building to forage for food and there is more time to talk about Python. Having the food right next to the exhibit hall also created many more opportunities to talk with other attendees while eating. Plus, PyCon provides lots of snacks - like PyCon-themed cupcakes!

The keynote for today was by James Powell of "Don't Use this Code" - which is a great name for a consulting company (http://www.dontusethiscode.com/). James walked through writing Python code for Newton's method, a subject that in anyone else's hands might be very dull, but he made it extremely entertaining.
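For reference, a bare-bones Newton's method looks something like the sketch below (my own version, not James's code):

    def newton(f, f_prime, x0, tol=1e-10, max_iter=50):
        """Find a root of f starting from the guess x0 using Newton's method."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / f_prime(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("Newton's method did not converge")


    # Example: the square root of 2 is a root of f(x) = x**2 - 2
    print(newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0))  # ~1.41421356...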

The sessions I went to today were:
  • Python Linters at Scale
  • How Pydantic V2 leverages Rust's Superpowers
  • Quicksort, Timsort, Powersort - Algorithmic ideas, engineering tricks, and trivia behind CPython's new sorting algorithm
  • The Lost Art of Diagrams: Making Complex Ideas Easy to See with Python
  • Approaches to Fairness and Bias Mitigation in Natural Language Processing
I really wanted to go to the "Biohacking con Python" session, but it was in Spanish. I thought I could probably piece together enough from my poor high school Spanish combined with my understanding of Python, but ultimately I decided to wait for a translated version. There's a whole track of talks in Spanish each day, called Charlas, which is cool.

One of the interesting features of PyCon is the "Open Spaces." These are ad hoc meetings put on every day by people for whatever they are passionate about - everything from typical Python topics like pytest, Django, or DevOps to something non-Python such as juggling. I spent some time in a few of these Open Space meetings, meeting some great people and talking about biotech and healthcare.

Friday, April 21, 2023

Day 2 of PyCon 2023

I had to do a meeting in the morning so I missed breakfast and the keynote for today, but I made it in time for a talk titled "How Python is Behind the Science of the James Webb Space Telescope." The speaker works as an Associate Astronomer at the Space Telescope Science Institute in Baltimore.

He went through how a variety of programming languages were used during the development and operation of the James Webb Space Telescope (JWST), but Python is the language of choice for most of the science conducted with the telescope.

He talked about some of the history of JWST, showed many of the cool images it has sent back, and explained how Python is used to process the data.
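He didn't walk through the actual pipelines, but a typical first step for poking at space-telescope data in Python is opening a FITS file with astropy (a generic sketch with a made-up filename; real JWST products follow a longer naming convention):

    from astropy.io import fits

    # hypothetical filename, used only for illustration
    with fits.open("jw_example_cal.fits") as hdul:
        hdul.info()                   # list the extensions (HDUs) in the file
        data = hdul["SCI"].data       # the science image as a NumPy array
        header = hdul["SCI"].header   # observation metadata

    print(data.shape, header.get("INSTRUME"))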

I should have stayed for the next talk in the same room, which was on PyScript - I really want to learn more about PyScript - but I knew there were going to be other talks on it, and I was hungry and wanted something to eat.

After lunch, I sat in on a talk by Dave Aronson titled "Kill All Mutants! (Intro to Mutation Testing)." He's a funny guy, and his talk was easily the most entertaining. Basically, mutation testing takes your code and creates versions of it (mutants) with slight changes, then checks whether the tests still pass - because if they do, the testing is probably too lax. So it not only tests the code but, more importantly, tests the quality of the tests.
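A tiny hand-rolled illustration of the idea (mine, not from the talk):

    def is_adult(age: int) -> bool:
        return age >= 18


    def test_is_adult():
        # A weak test: it never checks the boundary at exactly 18.
        assert is_adult(30)
        assert not is_adult(5)


    # A mutation tool would generate mutants like this one, flipping >= to >:
    def is_adult_mutant(age: int) -> bool:
        return age > 18


    # test_is_adult still passes against the mutant (the mutant "survives"),
    # which tells you the test suite is missing a case like is_adult(18).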

I also spent some time in the Exhibit Hall talking to some of the sponsors like Microsoft, Anaconda, and JetBrains.

In the afternoon, I sat in on talks like:

  • "Reproducible molecular simulations with Python"
  • "Create interactive games using MicroPython and electronics"
  • "Using Python and PyQgis to make cool maps"
  • "Three Musketeers: Sherlock Holmes, Mathematics and Python"

Now I realize this is probably a pretty esoteric mix of talks, but it's what I wanted to see, and there were plenty of other "normal" talks happening - it just shows how wide a variety of talks PyCon offers.

I ended the day in the main ballroom listening to the lightning talks, which I realized are some of the best talks at PyCon. Each person has five minutes to go through their topic. Some speakers had code, others slides, or even a quick live demo - whatever they were passionate about. For example, one person talked about how she uses AWS Lambda and Python in her volunteer role at an animal rescue shelter. It shows that you don't need a long time to communicate something really meaningful and interesting.

Thursday, April 20, 2023

Day 1 of PyCon 2023

Today I attended the first day of PyCon 2023 in Salt Lake City at the Salt Palace Convention Center - the 20th anniversary of PyCon. It was all pretty cool. After flying in early in the morning, I quickly checked into my hotel, grabbed some roasted vegetable stir fry for lunch, and headed straight to the afternoon talks.

There were lots of sessions to choose from, but the first talk of the day I picked was about advancements in high-performance AI/ML through PyTorch's Python compiler. The speaker talked about how, as GPUs continue to become faster, PyTorch, one of the most widely used frameworks in AI/ML, has faced challenges keeping up with performance demands. The PyTorch development team recognized the need to address these challenges while maintaining PyTorch's Python roots, and set ambitious goals: improve performance, decrease memory usage, enable state-of-the-art distributed capabilities, and ensure more of PyTorch's code is written in Python. To achieve these goals, they developed a Python compiler, resulting in a reported 43% speedup in performance.
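I'm assuming the compiler he was referring to is the torch.compile entry point that shipped with PyTorch 2.0; if so, the minimal usage is a one-line opt-in (the reported speedups are averages over benchmark suites, so results vary by model and hardware):

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

    # One-line opt-in: the compiler captures the Python code and generates
    # optimized kernels behind the scenes.
    compiled_model = torch.compile(model)

    x = torch.randn(32, 128)
    out = compiled_model(x)  # first call triggers compilation; later calls are fast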

I would've liked to have heard more specifics about PyTorch's plans, but it was interesting nonetheless.

With LLMs and ChatGPT being such a hot topic, I was anxious for the next two talks I had lined up. Since these talks were accepted back in November, I'm sure many of the speakers had to rewrite and add to their material given everything that has happened since in AI, and with LLMs specifically.

The second talk, sponsored by the team behind Haystack, was about building LLM-based agents, which was particularly interesting to me considering all of the excitement around autonomous agents and how Large Language Models (LLMs) like ChatGPT have taken the Internet by storm.

Using LLMs as so-called "agents" lets them act as the decision maker in your application, reacting to all sorts of user requests. In this talk, we learned how to build an agent-driven application with Haystack, accompanied by code examples. By the end of the talk, we had seen these concepts applied in practice and were better equipped to build an agent-driven application for our own use cases.
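I won't try to reproduce the Haystack API from memory, but the core decision loop behind any LLM agent looks roughly like the sketch below (the canned call_llm and the toy tool are placeholders I made up, not real integrations):

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model call; a canned reply keeps the sketch runnable.
        return "ANSWER This is where the model's reply would go."


    TOOLS = {
        "search": lambda query: f"(pretend search results for {query!r})",
    }


    def run_agent(question: str, max_steps: int = 5) -> str:
        history = f"Question: {question}"
        for _ in range(max_steps):
            # The LLM decides at each step: call a tool or answer directly.
            decision = call_llm(
                history + "\nReply with 'TOOL <name> <input>' or 'ANSWER <text>'."
            )
            if decision.startswith("ANSWER "):
                return decision[len("ANSWER "):]
            _, name, arg = decision.split(" ", 2)
            history += f"\nObservation from {name}: {TOOLS[name](arg)}"
        return "No answer within the step budget."


    print(run_agent("What is the capital of Utah?"))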

The third talk I saw, which was particularly interesting and relevant for me, was from a company called Cape Privacy about what they titled the "ChatGPT Privacy Tango." The talk delved into a system designed to help users navigate privacy and PII concerns, striking a balance between preserving privacy and maximizing utility with ChatGPT and LLMs.

During the talk, we learned about the significance of protecting personally identifiable information (PII) and maintaining data security while still enjoying the advantages of AI-powered language models.
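Cape Privacy's actual system wasn't something we could take home, but the general pattern of scrubbing PII before a prompt ever leaves your environment can be sketched with a toy redactor (real systems use trained NER models and reversible tokenization, not a handful of regexes):

    import re

    # Toy patterns for illustration only.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }


    def redact(text: str) -> str:
        """Replace likely PII with placeholder tokens before sending to an LLM."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text


    prompt = "Email jane.doe@example.com about the claim filed under 123-45-6789."
    print(redact(prompt))
    # -> "Email [EMAIL] about the claim filed under [SSN]."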

However, I did notice that the sponsor was pushing their own solution without mentioning that OpenAI's API (and Copilot) is opt-in for data use, whereas the ChatGPT UI requires users to opt out - which made some of the risks appear inflated. Nonetheless, the talk shed light on the path toward more privacy-aware applications with Large Language Models.

Overall, the first day of PyCon 2023 was pretty good, and I am looking forward to the next three days.

Monday, April 10, 2023

Major announcements from Nvidia, Anthropic, and using long-term memory in GPTs

There have been three important developments and announcements in AI over roughly the last week. The most salient framing comes from the Anthropic quote in the second point below, taken from the TechCrunch article on the impact of AI for companies: “We believe that companies that train the best 2025/26 models will be too far ahead for anyone to catch up in subsequent cycles.”

-      The first big news was actually at least a couple of weeks ago, but Nvidia made several major announcements at their GTC conference that are as important for AI as the introduction of ChatGPT or GPT-4. They showcased their A100 and H100 chips, designed specifically for machine learning, with unprecedented speedups in processing. These chips can be grouped together to form a DGX system and then further grouped into pods and super pods. This represents a major breakthrough with major implications for AI and computing in general. https://www.youtube.com/watch?v=tg332P3IfOU

-      Second, the company Anthropic announced they are seeking to raise another $300 million to build a system at least 10x more powerful than OpenAI's current models, with a focus on AI self-teaching and alignment. All of which remains to be seen, but Anthropic has a lot of momentum. https://techcrunch.com/2023/04/06/anthropics-5b-4-year-plan-to-take-on-openai/

-      Lastly, not on a par with the Nvidia and Anthropic announcements but interesting nonetheless, is a small project called MemoryGPT. It's especially interesting in light of projects I talked about last week, like AutoGPT, that are creating the idea of autonomous GPT agents. Incidentally, watching an autonomous AI work is wild. You can give it a very general goal; it will create tasks for that goal, prioritize them, and then relentlessly iterate to achieve it. Along the way it will write and execute its own code, search the internet, and access APIs. The AutoGPT repository has rocketed to become one of the top repos on GitHub.

Back to MemoryGPT: this very nascent project attempts to give GPT long-term memory. Instead of being limited to 4,096 tokens (or 32,000 tokens for GPT-4) or whatever fits in the current ChatGPT session, conversations would be stored in a vector database that could be recalled and reused later. Pinecone is probably the closest existing implementation of this idea; it has been around a while and is used quite a bit in these kinds of applications.
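The mechanics are easy to sketch: embed each exchange, store the vectors, and retrieve the most similar ones to prepend to the next prompt. Below is an in-memory toy version with a fake embedding function; Pinecone and real embedding models do the same thing at scale:

    import numpy as np


    def embed(text: str, dim: int = 64) -> np.ndarray:
        # Toy stand-in for a real embedding model: hash words into a fixed-size
        # vector so texts sharing words land near each other.
        vec = np.zeros(dim)
        for word in text.lower().split():
            vec[hash(word) % dim] += 1.0
        return vec / (np.linalg.norm(vec) or 1.0)


    memory: list[tuple[str, np.ndarray]] = []  # (text, embedding) pairs


    def remember(text: str) -> None:
        memory.append((text, embed(text)))


    def recall(query: str, k: int = 3) -> list[str]:
        q = embed(query)
        scored = sorted(memory, key=lambda item: float(item[1] @ q), reverse=True)
        return [text for text, _ in scored[:k]]


    remember("User's dog is named Biscuit and is afraid of thunderstorms.")
    remember("User prefers answers with code examples.")
    print(recall("What is the dog called?", k=1))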

So it should be apparent where this is going when combining ideas like AutoGPT, GPT with long-term memory, and some of the ideas I talked about last week of GPT applications working together as an orchestrated whole.

https://the-decoder.com/memorygpt-is-like-chatgpt-with-long-term-memory/
https://www.pinecone.io/

Monday, April 3, 2023

New trends in GPT models: Reflection, derived models, and autonomous AI

These last seven days have seen more monumental change in AI than probably the last 5-10 years combined. It’s not the release of GPT-4 and how it’s being used – it’s the release of hundreds of papers and products driven by GPT-4 that are now outperforming GPT-4. Here are a few of the more exciting things happening, with links below.


One of the biggest trends is models doing self-improvement: getting GPT-4 to do “reflection.” Different groups are building models that outperform GPT-4 by having them reflect on and improve their own outputs, without specific prompting on how to do it. Another paper that came out this past week shows a system doing self-improvement by pairing a “Decider” agent with a “Researcher” agent. One example they give is a model using this process to come up with a healthcare plan for a patient and then improve it.
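None of the papers' code is reproduced here, but the basic reflection loop is simple to sketch (this assumes the pre-1.0 openai Python client and an API key in the environment):

    import openai  # assumes openai<1.0 and OPENAI_API_KEY set in the environment


    def ask(messages: list[dict]) -> str:
        resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
        return resp["choices"][0]["message"]["content"]


    def answer_with_reflection(question: str, rounds: int = 2) -> str:
        # Draft, then alternate critique and revision for a few rounds.
        draft = ask([{"role": "user", "content": question}])
        for _ in range(rounds):
            critique = ask([{
                "role": "user",
                "content": (
                    f"Question: {question}\nAnswer: {draft}\n"
                    "List any mistakes or gaps in this answer."
                ),
            }])
            draft = ask([{
                "role": "user",
                "content": (
                    f"Question: {question}\nAnswer: {draft}\n"
                    f"Critique: {critique}\nWrite an improved answer."
                ),
            }])
        return draft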

Another trend is a GPT-4 “controller” model being hooked up to other language models, APIs, and even non-AI systems to meet goals. This is very loosely being described by people like noted researcher Andrej Karpathy as giving AI the tools for “meta-cognition.”

In a similar vein in this post-GPT-4 world, large models are being used to train smaller models on their outputs - smaller models that are nearly as good as the larger ones but are quicker and cheaper to train and can even run on local machines (Llama, Alpaca, and Vicuna are examples).

Finally, the ultimate result of all this is people now creating “autonomous AI”: an AI that can access the internet and, without intervention, pursue general goals. Yes, that’s happening now.

Then, beyond all this, there’s the potential release of GPT-5 at some point, the resulting models built on top of that, and, I think most likely, improvements to the modeling itself so that it is no longer primarily based on auto-regressive transformers.

As disorienting as this rate of change can be, and not to totally ignore the widely discussed possible negative impacts, there are tremendous positive outcomes to be had from bringing these ideas together, aligning them, and executing really well on them.

https://nanothoughts.substack.com/p/reflecting-on-reflexion
https://arxiv.org/pdf/2303.17071v1.pdf
https://github.com/nomic-ai/gpt4all
https://arxiv.org/pdf/2303.17491.pdf
https://arxiv.org/pdf/2303.17580.pdf
https://arxiv.org/pdf/2303.16434.pdf
https://arxiv.org/pdf/2210.11610.pdf
https://github.com/Significant-Gravitas/Auto-GPT

"Superhuman" Forecasting?

This just came out from the Center for AI Safety, called Superhuman Automated Forecasting. This is very exciting to me, because I've be...