Last Week in AI (04.08.24 – 04.12.24)

Welcome to Last Week in AI, a post I publish every Friday to share a couple of things I discovered in the world of AI over the past week. I spend way too much time in Discord, on Twitter, and browsing Reddit so you don’t have to!

If you have a tip or something you think should be included in next week’s post, send me an email with more info.

This week, we take a look at how the expanding context windows of LLMs are making journaling more valuable, plus a way of thinking about AI for anyone looking to build AI features or businesses of their own.

Let’s dive in!

LLMs are getting larger context windows

It seems like every release of a large language model comes with a larger context window than the last (the context window of the latest Gemini model is finally into seven figures!). With that larger context window comes the ability to analyze bigger documents and datasets in a single conversation.

With one million tokens of context, and assuming you journal 500 words per day (roughly 650–700 tokens, at the common estimate of about 1.3 tokens per word), you could feed around 1,500 days’ worth of your journaling (~4 years) into a Gemini chat. You could then talk through how you changed in that time, ask it to help you pull out things you’ve learned, and generally get to know yourself on a macro scale that has been difficult to achieve before.
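The back-of-the-envelope math here is simple enough to sketch. This is just the arithmetic from the paragraph above, using the rough (and very approximate) rule of thumb of about 1.3 tokens per English word; actual token counts vary by tokenizer and writing style.

```python
# Rough estimate: how many days of journaling fit in a 1M-token context window?
# Assumes ~1.3 tokens per word, a common (approximate) rule of thumb.
CONTEXT_WINDOW_TOKENS = 1_000_000
WORDS_PER_DAY = 500
TOKENS_PER_WORD = 4 / 3  # ~1.33 tokens per word

tokens_per_day = WORDS_PER_DAY * TOKENS_PER_WORD  # ~667 tokens/day
days_of_journaling = CONTEXT_WINDOW_TOKENS / tokens_per_day

print(f"~{days_of_journaling:.0f} days, or ~{days_of_journaling / 365:.1f} years")
```

Swap in your own daily word count to see how far a million tokens takes you.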

Future models will likely be able to take even more context, which opens up the door to even more possibilities for self-exploration. I’m interested to see how these sorts of tools are used in therapy practices and the like as well, although there are definitely concerns there.

To fine-tune or not to fine-tune?

What Matt is saying here is an interesting framework for folks looking to build AI features into their software or build AI-enabled companies.

When someone starts down this path, they usually begin by writing a detailed prompt that helps the LLM give them what they want. However, depending on the work being done, it can get quite expensive to use the state-of-the-art models (like GPT-4 and Claude 3) that deliver the accuracy they need.

Oftentimes, to get around this cost, they look at fine-tuning a cheaper model for their specific use case. However, fine-tuning a model means a larger up-front investment and makes it harder to pivot and experiment if the model isn’t giving you the results you want.

Even though it’s more expensive in the beginning, the right compromise seems to be this: use a prompt with a state-of-the-art model until you’re sure you’re building a product or feature your customers want, collect examples of “good” responses along the way, and then use those examples to fine-tune a cheaper model and bring your costs down as you scale.
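As a minimal sketch of the “collect good responses along the way” step: assuming you’ve been logging user prompts and the responses your customers rated as good, you can turn those pairs into a JSONL training file in the chat-message format that fine-tuning APIs like OpenAI’s expect (the `build_finetune_jsonl` helper and the example data below are hypothetical, for illustration only).

```python
import json

def build_finetune_jsonl(examples, system_prompt):
    """Convert logged (user_prompt, good_response) pairs into JSONL lines
    in the chat fine-tuning format (one JSON object per line)."""
    lines = []
    for user_prompt, good_response in examples:
        record = {
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt},
                {"role": "assistant", "content": good_response},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

# Hypothetical logged examples from the prompt-based version of the product
examples = [
    ("Summarize this journal entry: ...", "Here is a concise summary: ..."),
    ("What themes appear in my writing?", "Three themes stand out: ..."),
]
jsonl = build_finetune_jsonl(examples, system_prompt="You are a journaling assistant.")
print(jsonl)
```

Once you have a few hundred of these, you can upload the file and fine-tune a cheaper model on exactly the behavior your expensive prompt already produces.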

See you next week!

If you’ve made it this far and enjoyed “Last Week in AI”, please drop your email down below so next week’s edition goes straight to your inbox. Talk soon!