Last Week in AI (01.22.24 – 01.26.24)

Welcome to Last Week in AI, a post I publish every Friday to share a few things I discovered in the world of AI over the past week. I spend way too much time on Discord, on Twitter, and browsing Reddit so you don’t have to!

If you have a tip or something you think should be included in next week’s post, send an email to keanan@floorboardai.com with more info.

This week we’ve got a set of AI agents that build you a personalized newspaper and some research on what happens when we need large language models to “forget” information.

Let’s dive in!

A completely personalized newspaper

This week I found out about GPT-Newspaper, a project that creates a completely personalized virtual newspaper covering only the topics you care about. Under the hood, it uses AI agents (basically focused AI bots that each do one job well) to handle every step of a newspaper’s writing and editing process. There’s a “search agent” that scours the web for information about the topics you’ve selected, a “writer agent” that writes each article, an “editor agent” that actually puts the newspaper together, and more!

The demo video in their GitHub repository is a great way to see it all working in less than 60 seconds.

This pattern, where a human splits a larger task into smaller pieces (decomposition is not something AI is particularly good at yet) and hands each piece to an AI agent before assembling the final output, is only going to grow in popularity as we throw more and more difficult problems at AI. Here’s a rough sketch of what that kind of pipeline can look like.
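To be clear, this is a minimal sketch of the idea, not GPT-Newspaper’s actual code: it assumes the OpenAI Python client (openai>=1.0), and the agent roles, prompts, and topic are all hypothetical. A real pipeline would also give the search agent an actual web-search tool rather than a bare LLM call.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_agent(role_prompt: str, task: str) -> str:
    """Each 'agent' is just a focused LLM call with one narrow job."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # hypothetical model choice
        messages=[
            {"role": "system", "content": role_prompt},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

# The human decomposes the job into steps; each agent runs one piece,
# and the outputs are chained together into the final product.
topic = "advances in battery technology"  # hypothetical topic
notes = run_agent("You are a search agent. Summarize what is known about the topic.", topic)
article = run_agent("You are a writer agent. Write a short news article from these notes.", notes)
page = run_agent("You are an editor agent. Assemble this article into a newspaper page.", article)

print(page)
```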

The challenges of a language model “unlearning”

Large language models take a long time and a lot of money to train, on huge data sets encompassing much of the Internet. The downside is that if some of that data isn’t supposed to be in the training set (which is what the NYT is claiming in its lawsuit against OpenAI), the only sure way to remove it has been to retrain the entire model from scratch.

This is still pretty early research, but instead of retraining the model from scratch, researchers were able to fine-tune it (much faster and cheaper) on the data they wanted removed, with the proper nouns swapped out for more generic terms. They basically overwrote what the LLM “knew” about Harry Potter.
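Here’s a toy sketch of the core idea: build a fine-tuning corpus where the distinctive names have been replaced with generic stand-ins. The substitution table, passages, and file name below are all made up for illustration, and the actual research uses a more involved pipeline to pick the replacements.

```python
import json

# Hypothetical substitution table: distinctive terms -> generic stand-ins.
GENERIC_TERMS = {
    "Harry Potter": "John Smith",
    "Hogwarts": "a boarding school",
    "Hermione": "a classmate",
}

def genericize(text: str) -> str:
    """Replace each distinctive proper noun with a generic term."""
    for proper, generic in GENERIC_TERMS.items():
        text = text.replace(proper, generic)
    return text

# Passages the model should "forget" (illustrative placeholder data).
passages = [
    "Harry Potter boarded the train to Hogwarts with Hermione.",
]

# Fine-tuning on the genericized versions nudges the model's predictions
# away from the memorized associations and toward the generic ones.
with open("unlearning_corpus.jsonl", "w") as f:
    for p in passages:
        f.write(json.dumps({"text": genericize(p)}) + "\n")
```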

Just as GDPR forced companies to give people a “right to be forgotten”, I think there will eventually have to be a way to remove certain information from an LLM’s knowledge base.

See you next week!

If you’ve made it this far and enjoyed “Last Week in AI”, please drop your email down below so next week’s edition goes straight to your inbox.