Welcome to Last Week in AI, a post I publish every Friday to share a couple of things I discovered in the world of AI over the past week. I spend way too much time on Discord, on Twitter, and browsing Reddit so you don’t have to!
If you have a tip or something you think should be included in next week’s post, send an email to keanan@floorboardai.com with more info.
This week, we’ve got a couple more cool examples of what AI can do: creating a 60-second video from a text prompt and near-human levels of English-to-Spanish translation.
Let’s dive in!
Text to…video?
Probably the biggest news this week is that OpenAI publicly announced Sora, their text-to-video model. Sora is capable of creating videos up to 60 seconds long and seems to be able to capture a lot of detail.
You should definitely check out their landing page to see more in-depth examples. To me, video was the last type of media that wasn’t easily fakeable for the average person, and it seems like we might be on the path to that no longer being true.
Fortunately, OpenAI is currently opening Sora up to researchers and red teamers to help test its boundaries and hopefully make it a bit safer before it launches to the general public.
ChatGPT beats a human translator in an informal test
Paul got a friend to blindly review two translations of the same passage, one done by ChatGPT and one done by a human translator, and the ChatGPT version came out on top (earning an 8/10 rating versus 6/10 for the human-translated version). Obviously there are a lot of potential variables here (we don’t know the human translator wasn’t using ChatGPT themselves, it’s only a single test of a single passage, etc.), but it’s still pretty wild to me that these kinds of results are possible already.
See you next week!
If you’ve made it this far and enjoyed “Last Week in AI”, please drop your email down below so next week’s edition goes straight to your inbox.