
Ep 255: Meta’s Llama 3, Microsoft’s VASA-1 – AI News That Matters

22 Apr, 2024

Resources

Join the discussion: Ask Jordan questions on AI

Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup

Connect with Jordan Wilson: LinkedIn Profile

Meta’s AI Chatbot and Llama 3

In an exciting reveal, Meta announced the public release of its Meta AI chatbot and Llama 3, which reportedly surpasses some closed-source models on benchmarks, presenting a strong case for businesses to consider this solution. The hosted chatbot requires no specialized technical proficiency or high-end computing hardware, making it accessible to businesses of any size. Moreover, its availability for public use at no cost could be the game-changer that business owners were waiting for.

Knowing the importance of timely information, Meta reports a knowledge cutoff of December 2023 for its AI. For those inclined towards customization, Meta offers the opportunity to download the model weights and fork them, enabling businesses to build bespoke AI solutions.
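For readers who want to experiment with the downloadable weights, here is a minimal sketch of loading the 8-billion-parameter Llama 3 with Hugging Face's transformers library. It assumes you have accepted Meta's license for the meta-llama/Meta-Llama-3-8B-Instruct checkpoint on the Hugging Face Hub, have the transformers and accelerate packages installed, and have a GPU with enough memory; it is an illustration, not Meta's official quickstart.

```python
# A minimal sketch of running Llama 3 8B Instruct locally via Hugging Face
# transformers. Assumes license acceptance on the Hub and a capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" (via the accelerate package) spreads the weights across
# available GPU/CPU memory automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "In two sentences, what can open-weight models offer a small business?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

From there, a forked copy of the weights can be fine-tuned or wrapped in a retrieval pipeline to bring a company's own data into the model's responses.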

Remarkable Performance of Meta’s New Models

One of Meta's remarkable recent achievements is the introduction of a set of new models: an 8-billion-parameter model, a 70-billion-parameter model (reportedly the version served at meta.ai), and a 400-billion-parameter model that is still in training. This places Meta's technology on the same benchmarks as other powerful models, presenting businesses with a wide array of options to choose from based on their requirements.

Microsoft’s New VASA-1: A Visual Revolution

For businesses centered around visual media and gaming, an innovation from Microsoft, VASA-1, may be the breakthrough you’re looking for. VASA-1, a framework for creating hyperrealistic talking faces from a single portrait photo and an audio file, could revolutionize industries from gaming to social media and film production.

AI Advancements in Robotics

In the realm of robotics, Boston Dynamics’ fully electric Atlas robot could bring transformative changes to your manufacturing process. With greater strength and a wider range of motion than its hydraulic predecessor, this robot can elevate advanced manufacturing operations.

Apple’s On-Device Generative AI Features

Apple has set itself apart by unveiling plans to develop on-device generative AI for iPhones. Businesses should take note of Apple’s intended offline functionality and potential integration across its ecosystem.

A Note on OpenAI’s Upcoming Models

With speculation swirling around an imminent release of OpenAI’s GPT-5, businesses must remain cautious about embracing rumors without substantial confirmation.

To sum up, the AI industry is continuously evolving, and it’s crucial for businesses to stay updated on the latest developments. This April proved to be eventful with Meta, Microsoft, and Apple introducing monumental updates that could reshape business operations. So when considering AI adoption, remember to keep an eye on the horizon for the next big thing.

Topics Covered in This Episode

1. Introduction of Meta Llama 3
2. Microsoft’s Introduction of VASA-1
3. Miscellaneous AI and Tech News

Podcast Transcript

Jordan Wilson [00:00:16]:
Is Meta’s new Llama 3 the best open model that we’ve seen? Is Microsoft’s new VASA-1 too good? And is it scary? And also, are we going to be seeing a new GPT-5 today? Well, I’m gonna answer all of those questions, well, as best as I can today and more on Everyday AI. What’s going on y’all? My name is Jordan Wilson. I’m the host of Everyday AI, and this is for you. We are your guide to learning and leveraging artificial intelligence. So we do this every single day, Monday through Friday: livestream, podcast, free daily newsletter to help all of us grow with generative AI, to grow our companies and to grow our careers. So if that’s you, thank you for joining us. If you’re listening on the podcast, we do this usually every single Monday where we bring you the AI news that matters, because you could literally spend hours every single day trying to keep up, worrying about what’s this new update, how can I use this to grow my company, grow my career. Or you can just tune in on Mondays and we’ll give it to you straight. Right? So extremely excited, and a lot to cover today.

Jordan Wilson [00:01:28]:
But as a reminder, if you haven’t already, if you’re listening on the livestream or the podcast, please go to youreverydayai.com and sign up for that free daily newsletter, because, yes, we’re gonna be covering about a handful of some of the top stories in generative AI. However, every single day, we bring you the most relevant AI news, as well as a lot more today in the newsletter. So if you haven’t already, make sure to check that out on our website as well. More than 250 recaps that you can go back and watch for free. Alright. So let’s get into it. And, hey, if you are joining us live, like Michael’s joining us from Brooklyn on YouTube, and Mike Forji joining us as well. It’s sunny in Milwaukee for Raul, and that’s great.

Jordan Wilson [00:02:13]:
So, hey, thank you to everyone joining us live. I’d love to hear what you think of some of these news stories as we go along. But, hey, without further ado, let’s get started. So, probably one of the biggest AI news stories in a long time, actually: Meta has introduced not just the Meta AI chatbot, but also Llama 3, which I think could be a game changer for large language models. So Meta, the parent company of Instagram, Facebook, WhatsApp, and, I don’t know, just about everything else, they just introduced their Llama 3 model, a somewhat open source AI model used to develop their AI chatbot. And they’re claiming it to be the best open model of its class. And potentially, it looks like it’s already surpassing some closed source models from OpenAI and Google, so we’ll take a look at that here in a second. So here’s what this means.

Jordan Wilson [00:03:06]:
Meta has unveiled, and I think this is probably one of the bigger pieces that people are talking about, its standalone AI chatbot, Meta AI. So you can go check that out at meta.ai. And they’ve also integrated Meta AI into all of its major apps, such as, you know, Instagram, Facebook, and WhatsApp. So you can go and chat with the new meta.ai. So the launch of Meta AI outside of Meta’s social media ecosystem positions the company as a strong competitor in the chatbot market, challenging leaders like OpenAI’s ChatGPT and Google Gemini. So, essentially, here’s what you need to know for this. So previously, with previous versions of Meta’s Llama, so with Llama 2, its previous most powerful model, it wasn’t available for the general public.

Jordan Wilson [00:03:58]:
I mean, it was. Right? So you could go in and you could download the model, but that did require a little bit of tech know-how. It required a pretty fast computer with a good GPU. So most people, you know, you and me, average people, you couldn’t really run it. Right? Even on most of, you know, I use a couple computers. I think Meta’s Llama could only run on my one fastest computer. Right? So this is different now. So Meta, I think, is really shifting its strategy here with Llama 3 and now Meta AI.

Jordan Wilson [00:04:27]:
So, yes, anyone can go to meta.ai and try the model out. What I like is, you know, you don’t even have to log in. So you can log in if you want to save all of your chats. So this is, you know, now very similar to ChatGPT, where you can just go on and start using it for free. So that’s the other thing to know right now: Meta’s models are free to use. We’ll see, because there’s another one coming, which I’m gonna talk about here in a second. But right now, if you go to Meta AI, you can even log in with your Facebook account if you want to save your chats to go back and use them later. However, right now, you don’t even need to.

Jordan Wilson [00:05:03]:
So you can go and give Llama 3 a try. So let’s look at some things I think that are important here. So like we talked about, you can go in and, for our livestream audience, I kind of have an example here: like ChatGPT and Gemini, when you log on to Meta AI, they give you kind of some example prompts. And then also a couple other things worth noting: the knowledge cutoff is December 2023, which is pretty nice. Right? When we talk about using large language models, one of the biggest things you have to worry about is hallucinations. And is it giving you up-to-date information? Is it just making things up? So the December 2023 cutoff date is pretty good. Right? So that’s only about 4 months ago now. And right now, it’s only ChatGPT’s newest update that has it to December 2023 as well.

Jordan Wilson [00:05:53]:
So Meta is, I think, making some noise here with the open source model. And we’re gonna get into this a little bit more in the newsletter, because I don’t wanna go into the details on, like, is it truly open source? You know, yes and no. So we’ll get into that in the newsletter. But here’s what else this means: you can download the entire model right now and you can essentially fork it. Right? So what that means is you can kind of create versions, you know, or variations of this model. You can connect your company’s data to it as well. You know, you’ll have to have a little bit of tech know-how, you know, to take advantage of that and, you know, use RAG to bring your own company’s data in. But, you know, already, Meta’s Llama 3 is benchmarking.

Jordan Wilson [00:06:39]:
I won’t say off the charts, because it’s still on the charts, but for a pretty much open model, this is pretty, like, I’d say it’s unexpected. Right? Like, you normally don’t expect open source models to compete with closed proprietary models from, you know, OpenAI, from Google, from Anthropic, but let’s actually look at that. So, for our podcast audience, I’m sharing a chart now. But essentially, right now we have the 2 different variations of Llama 3. So you have their 8-billion-parameter model and their 70-billion-parameter model. So let’s just call these small and medium. Alright? There’s a large. I’ll tell you about that here in a second.

Jordan Wilson [00:07:21]:
Alright. So right now, the 8-billion-parameter version of Llama 3. So, kind of the comparison that Meta shares in its benchmarking. This is its own internal benchmarking. Right? But we’ll share a little bit more on that here in a minute. So, right now, it is hitting well above kind of Google’s Gemma 7-billion-parameter model, and also Mistral’s 7-billion-parameter model. So, you know, when we talk about MMLU, which is one of the most popular benchmarks that we talk about a lot here on the show, that’s the massive multitask language understanding benchmark.

Jordan Wilson [00:08:01]:
Essentially, like, hey, is this model as smart or smarter than a human? Right? More or less. So, right now, the Meta 8-billion-parameter model is, you know, punching well above its weight class against those smaller models. And then we look at the kind of, quote, unquote, medium model. So that is the 70 billion parameter. And what Meta is comparing it to here is Google’s Gemini Pro 1.5. So that’s kind of its middle model, as well as Anthropic’s middle model.

Jordan Wilson [00:08:30]:
So that’s Anthropic’s Claude Sonnet. So as well, that 70-billion-parameter model there is just barely getting above Gemini Pro in the MMLU and then, you know, a couple points ahead of Claude Sonnet. So what most people were talking about when this came out is, oh, well, what about ChatGPT? You know, GPT-4? What about, you know, Claude’s most powerful model, Opus? What about Gemini Ultra? Right? So that’s the thing. These are technically Meta’s medium models. Okay? So they do have another model as part of this Llama 3 release, which is a 400-billion-parameter model. Right? So we have the 8-billion, 70-billion, and 400-billion-parameter models. So without getting too technical, the parameter count is roughly the number of learned weights a model carries with it. And, obviously, when you think about downloading and using these models locally, 70 billion is already a pretty big model to try to run locally.
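For a rough sense of why that’s big, here is some illustrative back-of-the-envelope math, not an official figure from Meta: at 16-bit precision each parameter takes 2 bytes, so the weights alone, before any runtime overhead, work out roughly as follows.

```python
# Illustrative memory math for holding Llama 3 weights locally.
# fp16 stores each parameter in 2 bytes; 4-bit quantization uses ~0.5 bytes.
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    return num_params * bytes_per_param / 1024**3

for name, params in [("8B", 8e9), ("70B", 70e9), ("400B", 400e9)]:
    print(f"Llama 3 {name}: ~{weight_memory_gb(params, 2.0):.0f} GB fp16, "
          f"~{weight_memory_gb(params, 0.5):.0f} GB at 4-bit")
```

That puts the 8B model within reach of a single consumer GPU when quantized, while the 70B and 400B models remain well beyond typical desktop hardware.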

Jordan Wilson [00:09:28]:
Right? I mean, even the smallest model is a pretty big model to run locally. So we should be hearing more about this 400-billion-parameter model, because, and I did share about this, you know, about an hour after it came out. You know, if you follow along on LinkedIn, I shared a pretty long breakdown, actually, of this new unreleased model from Meta, what it could mean, and the potential benchmarks. Because Mark Zuckerberg, Meta’s CEO, did already share some kind of in-progress benchmarks for the 400-billion-parameter model. And he was saying that it was already hitting near GPT-4 and some of these other models. So pretty interesting news here from Meta. And, yes. So, Jackie, yes, you can go download it.

Jordan Wilson [00:10:18]:
So, yeah, we’ll have those links in the newsletter. So, you know, for our livestream audience, if you are new to Llama, yeah, you can download it. But also, unlike previous versions of Llama, you can just go on to meta.ai right now and use the, I believe that’s the 70-billion-parameter, flavor that they have available on meta.ai. So, yeah, you can download it and you can play around with it. And, you know, there are obviously different apps also that you can download for your desktop that make it much easier to run these models locally, and we’ll be sharing about those in the newsletter as well. Alright. Let’s keep this thing going. Actually, no.

Jordan Wilson [00:11:02]:
I do wanna talk about this as well. So, a lot of people don’t see this, but Meta did say that Llama 3 was primarily trained for English. So if you go on to the Chatbot Arena leaderboard, which I’m probably gonna have a dedicated episode on at some point. You know? So, essentially, it takes these Elo scores, which is a scoring system kind of borrowed from chess and other competitive games. But, you know, essentially an Elo score is when you kinda pit 2 models blindly side by side, and, you know, there’s been tens of thousands, or actually now 500,000, votes on, you know, which model is stronger. I do this all the time. Anyone can go on.
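For the curious, here is a minimal sketch of the classic Elo update that head-to-head leaderboards like this are built on. This is the textbook chess formula shown for illustration; Chatbot Arena’s exact rating method and K-factor may differ.

```python
# Classic Elo rating update after one head-to-head matchup.
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """score_a is 1.0 if model A wins, 0.0 if it loses, 0.5 for a tie."""
    # Expected score for A given the current rating gap.
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

# Example: a 1200-rated model beats a 1250-rated one in a blind vote;
# the winner gains more points because the upset was less expected.
print(elo_update(1200.0, 1250.0, 1.0))
```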

Jordan Wilson [00:11:44]:
We’ll share the link in today’s newsletter, and people essentially, you know, it’s like the blind Pepsi taste test. I don’t know if anyone remembers that, you know, marketing from, like, 20 years ago. You know, you get outputs. You can put a prompt in. It’ll give you 2 outputs from 2 different models. They don’t tell you which model. You vote on which one’s better. But already, Llama, if you just look at English.

Jordan Wilson [00:12:04]:
Right? So if you don’t look across all languages, Llama is already beating out the most powerful available model from Google Gemini, which is their Gemini Pro, and also Anthropic’s Claude 3 Opus. So, already, Llama is hitting heavy. Alright. Our next piece of AI news. This one is also kinda scary, but Microsoft has unveiled its VASA-1 image-to-talking-video model. I don’t know if that’s what we’re calling it, image to video, but it is for, essentially, real-life human avatars. Alright. So they did this via research paper.

Jordan Wilson [00:12:43]:
So Microsoft’s VASA-1 AI research paper has introduced a cutting-edge framework for creating hyperrealistic talking faces by converting a single portrait photo and an audio file into a live animated talking-head video. So VASA-1 showcases impressive lip sync, realistic facial features, and head movement, setting a new standard in AI-driven animation technology. So, unlike other existing tools, VASA-1 can work with photos facing various directions and incorporate factors like eye gaze direction, head distance, and emotion for enhanced control and realism. So potential applications of VASA-1 include advanced lip syncing for games, creating virtual avatars for social media, and enhancing AI-based movie making for more realistic content. And I’m curious, for our livestream audience, or, you know, hey, you can always check your show notes if you’re listening on the podcast. Hit me up. Have you all seen this? It’s actually wild.

Jordan Wilson [00:13:47]:
I’m gonna show it here in a second. So if you haven’t seen it, don’t worry. But despite this being just a research preview, VASA-1 has demonstrated exceptional performance. Like, it’s really scary good. That includes perfect lip syncing to songs and handling different image styles like the Mona Lisa, generating 512 by 512 pixel video at 45 frames per second in just about 2 minutes using an NVIDIA RTX 4090 GPU. So this part’s important. Right now, there are no immediate plans for public release, thankfully. Right? Like, I don’t think you wanna see something like this, something this powerful, released right now, especially right before the election here in the US.

Jordan Wilson [00:14:32]:
But the technology’s potential impact on industries like gaming, social media, and entertainment is significant. And this is hinting at a future where AI-driven animation could become more accessible and widespread. Alright. So let’s go ahead, for our livestream audience, let’s go ahead and take a look here. So I’m gonna go ahead. Let’s just do this. I’m trying not to block, we’ll just do the middle one here.

Jordan Wilson [00:14:55]:
So, essentially, this is, let me recap and tell you what this is in a nutshell. So based off one image, just a single flat image, this new VASA-1 technology is able to create these videos. So it does need one base image as well as a voice. And then from there, it can create this. So I’m gonna go ahead and play a couple examples here for our livestream audience. Again, this is based off one still photo and a training voice. So you do need a voice to start with. I don’t know if VASA-1 has kind of default voices that they use, but I’m pretty sure a person will upload their own voice.

Jordan Wilson [00:15:40]:
So let’s go ahead and take a watch and a listen. We’ll just do 2 examples. We’ll just do about half of them, because they’re each about a minute.

AI [00:15:47]:
Have you ever had maybe you’re in that place right now where you wanna turn your life around, and you know somewhere deep in your soul, there could be some decisions that you have to make. Like, you know like, it’s like things It’s like the invitation is to make the decision, commit to that.

Jordan Wilson [00:16:13]:
Y’all, have you seen this? This is wild. This is wild. Like, okay, if you’re listening on the podcast, obviously, you can tell the quality of the audio. Right? And even, I wouldn’t say stutter. Right? Like, sometimes I’m stuttering around and mumbling around, but it has this inflection in the voice that you don’t get right now from any, you know, text-to-speech AI, text-to-speech generators, any, obviously, video generators. The quality is mind-boggling. Yes.

Jordan Wilson [00:16:47]:
This is wild. Yeah. And I do know, like, you know, Matthew here shared, like, you know, an example of HeyGen. So, yeah, there’s a lot of programs right now that do this, but I will say right now, at least compared to this VASA-1 model, which is not publicly available, this model is so far ahead of others, I would say. Pretty, yeah, scary real. Yeah. TC here says that’s terrifying. Matthew says, but my video editor friends are scared.

Jordan Wilson [00:17:17]:
Yeah. It is scary good. Let’s take a look at one more example here before we go on with more AI news. Here we go.

AI [00:17:29]:
The first thing we need to look at is the letter h, so the sound at the beginning. It depends what country you’re from, but many native languages have a problem with, putting too much tightness in the throat, and it can become more of a sound. So it’s very important not to over exaggerate this sound. It’s a very soft, very relaxed sound in English. So just softly release the sound help. K? So that’s the first thing.

Jordan Wilson [00:17:57]:
Alright. It’s almost like, it is. Yeah. It’s so good. It’s scary, almost. Right? So, in that example, and, again, we will have the links so that you can actually go and download these videos if you want. Right? So I do like that Microsoft is making this available, as well as they have just a lot more information. Right? So, kind of scrolling through here, this paper talks a little bit about how this new technology works, how it takes a single image, an audio clip, and then, essentially, control signals that you can put in there as well, in terms of if or how you might want the avatar.

Jordan Wilson [00:18:36]:
It’s so weird calling it an avatar because it looks way, way human, but you can kind of have a certain level of control. Alright. Let’s keep this thing going. Alright. Our next piece of AI news for today: Boston Dynamics has unveiled a fully electric Atlas robot for real-world applications. Alright. So Boston Dynamics has introduced a fully electric version of its humanoid robot, Atlas, designed for real-world and commercial applications. So Boston Dynamics, a leading robotics company, aims to showcase the enhanced capabilities of the electric Atlas in lab settings, factories, and everyday life.

Jordan Wilson [00:19:19]:
So, previously, Boston Dynamics, I’m sure most of you have seen and heard, they had an Atlas robot before that they kind of retired, making way for its new, much more humanoid, but electric, iteration of Atlas. So the transition from hydraulic to electric power was made possible through a partnership with Hyundai, focusing on advanced automotive manufacturing capabilities. The development of the new Atlas robot showcases the continuous evolution of AI technology in the field of robotics, paving the way for enhanced capabilities and applications in various industries. The electric Atlas boasts increased strength and a wider range of motion compared to its hydraulic predecessor, enabling it to handle diverse manipulation tasks in various environments. And despite the technological advancements, the release of the electric Atlas has sparked skepticism on social media. Obviously, people are drawing parallels to science fiction scenarios like the Terminator franchise. So let’s go ahead. We’re gonna do this one as well.

Jordan Wilson [00:20:26]:
Give everyone a quick little 20-, 30-second video here that Boston Dynamics shared on their Twitter. So let’s go ahead and give that a play. So I don’t think we really need the audio here. There’s not really audio. So, essentially, what we have here is, yeah, a humanoid robot doing a backbend in a very scary fashion, and then walking, turning its whole body around, displacing its hips. Right? It looks like it can walk in all directions. There’s no front or back. Presumably, I mean, there’s a lot of AI in this latest iteration from Boston Dynamics, so we’ll be sharing about that.

Jordan Wilson [00:21:08]:
Obviously, more in today’s newsletter. Alright. There’s more. Yes. Big Apple news. Hey, any other day, this could have been our lead story, if I’m being honest. Hey.

Jordan Wilson [00:21:21]:
Thanks for what Matthew just said here from the livestream. He said Jordan is helping us manage information overload effectively. Well done and fun to follow along. Alright. Well, hey, thanks, Matthew. Make sure to check out the daily newsletter as well today. That’ll make it even easier.

Jordan Wilson [00:21:36]:
But let’s talk about this new report. So Apple is going to be introducing on-device generative AI features for iPhones. So Apple is developing its own large language model to power on-device generative AI features for the upcoming iPhone series, according to a recent report from Bloomberg. So this new AI model is expected to run entirely on the device. So we’re talking edge AI, on-device AI, not cloud. So enabling offline functionality without Internet connectivity. So Apple plans to leverage the neural processing unit, or NPU, of the Apple Silicon chip to offer new AI capabilities. Apple, so, again, this is where it’s like, wait.

Jordan Wilson [00:22:20]:
Which direction is Apple going? Right? Because we talked about here on the show, like, 8 months ago, a report that said Apple was spending millions, with an s, millions of dollars a day on its own generative AI technology, presumably for this exact reason. And then a couple weeks ago, we got a report that Apple was actually, no, they’re partnering. You know? They’re gonna be partnering with maybe Google and, you know, either using Google’s Gemini Nano, you know, model or maybe one of its Gemma models. So there has been, obviously, Apple reportedly still exploring partnerships with Google, with Microsoft-backed OpenAI, and with China’s Baidu for AI tools integration into its next-generation operating system. But this Bloomberg report is saying no, they’re actually going in a different way. So, you know, they’re not saying if this is going to be for the next iteration.

Jordan Wilson [00:23:09]:
So we could see, as an example, the next iteration of the iPhone maybe partnering with a Google or an OpenAI or China’s Baidu for the model, and then maybe the next iPhone or the next device after that might have Apple’s own internal model. So we’ll have to see. But regardless, Apple’s approach to marketing its AI features will focus on how they can assist users in managing daily routines rather than emphasizing speed and power. So we should see a lot more news, or kind of which direction Apple is officially going on this, during the Worldwide Developers Conference, so that’s WWDC, this June, starting June 10th, where Apple is expected to unveil not just the next generation of its operating systems, presumably powered by AI, but also now hardware. Right? On-device AI, pretty big news. But Apple is also planning, reportedly, to integrate these generative AI capabilities into Siri, Messages, Apple Music, Pages, Keynote, essentially across all of its ecosystem there. But Apple’s reported aim is to enhance the user experience with features like text summarization, suggestions, and more thorough AI integration. Alright.

Jordan Wilson [00:24:28]:
Let’s, I mean, what do you all think, though? I was taken aback by the report about a month ago that said that Apple was looking into exploring third-party partnerships. I don’t know. I’ve been extremely underwhelmed by Google’s model, and that’s what the last report said, is that Apple was going to be kind of, you know, partnering or working with Google’s model. And I was like, I don’t think that’s a good call. So we’ll see if that’s what’s happened, or if, you know, who knows? Maybe Apple is just gonna have a lighter version of generative AI features, you know, announced in June, and maybe there’ll be a more in-depth one, you know, the year after, or, you know, at a later announcement. Alright. So our next piece of AI news. Oh, yeah.

Jordan Wilson [00:25:12]:
Here we go. We’re talking Internet conspiracy theories. Yeah. I don’t know if you guys saw this, but when I was sharing my screen here on the Boston Dynamics story, you’ll see on the right-hand side, so this is on Twitter, or X, or whatever people call it now. But you’ll see it says, oh, trending. What’s happening? GPT-5. And you might have seen this and you’re like, wait, why is that trending? Well, let’s talk about it. So there’s been a lot of speculation over the last week, because OpenAI posted a photo of a throne with the number 22 last weekend.

Jordan Wilson [00:25:50]:
I’ll get to what that means. But, essentially, all these, you know, geeky people like me, but Internet conspiracy theorists, are convinced that that means OpenAI’s GPT-5 will be released today, April 22nd. Right? Throne with a 22. The 22nd. Why today? Well, today is, Apple, or, sorry, OpenAI CEO Sam Altman’s birthday, April 22nd. So that’s what everyone’s been, you know, saying, like, oh, it’s coming today. So, yeah, if you’re on social media, seeing this on Twitter or LinkedIn or anywhere else, I’d say this: don’t believe it. It’s not happening, because guess what else you notice there? There’s a basketball. Alright? So, also, if you look at the caption, or the prompt that was used to generate this, it’s a queen’s throne.

Jordan Wilson [00:26:40]:
So what does that mean? A basketball plus a queen’s throne plus the number 22. Does that ring a bell for anyone? Maybe Caitlin Clark. Right? Because also, along with this timing, and this was posted to OpenAI’s Instagram story last week. I don’t follow, you know, anyone on Instagram. I just saw all the, you know, rumors swirling around, but it just so happened to be the exact same timing of, Iowa had kind of a celebration for Caitlin Clark and the Iowa women’s basketball team, as well as she just went number 1, the overall number one pick in the WNBA draft. So most people, I think, are looking at this as kind of, like, you know, an ode to her and her success. But, yeah, so watch out. Everyone out there today is saying, oh, the new GPT-5 is coming out.

Jordan Wilson [00:27:33]:
You know, look at what Llama just did. And, you know, a couple weeks ago, we got updates from Google’s Gemini, and, you know, Claude 3 now has been out for about 2 months. So people are like, oh, today is the day. I’ll tell you this. No. Absolutely not. I don’t know. I could be wrong, but I give it about a 0.0% likelihood that we get a new GPT-5 today.

Jordan Wilson [00:27:56]:
Sam Altman has talked about, in, you know, some recent interviews, that there are going to be multiple kind of refreshes or updates to their platform. He has signaled that, yes, there will be a new model this calendar year. I don’t think we’re getting it today. I don’t even know if we’re gonna be getting any updates from OpenAI today. I wouldn’t think so, but, presumably, we will first be seeing updates probably with OpenAI’s Whisper voice translation, maybe with their DALL-E 3. Maybe we’ll get a dumbed-down version of Sora available. So, OpenAI’s, you know, AI video project. But, yeah, I don’t see anything happening there today with GPT-5.

Jordan Wilson [00:28:40]:
Alright. In our last story, TED Talks went all in on a Sora-looking future. So TED Talks just posted this over the weekend: a little AI video made with Sora. I’m gonna show you guys just the front half of this here, because it’s, like, a 3-minute video. Very, very impressive video, but let’s just take a look at this one. So give me a second here, livestream audience.

Jordan Wilson [00:29:09]:
You can take a look. I will put the audio on for this one. We’re just gonna let it play for 20 or 30 seconds so you can see. Another great example. So this is TED Talks, and we’ll link this tweet in our newsletter. So it says: what will TED look like in 40 years? For TED 2024, we worked with artist Paul Trillo and OpenAI to create this exclusive video using Sora, their unreleased text-to-video model. Alright. So let’s just take a look at what this looks like.

Jordan Wilson [00:30:04]:
Alright. And then, hey, we’ll end there on this strange, you know, looks like some lab-grown meat or something, that we have going on there. So, yeah, it didn’t get the best reception. Right? I think the video was actually surprisingly well done. Right? And if you’re listening on the podcast, you can always go back and watch the video of today’s episode so you can see, or check it out in the newsletter. But, essentially, it’s flying through, you know, a set of cornfields and then through these, you know, futuristic buildings. Presumably, you know, people are giving TED Talks in these various settings, but it’s just kind of a continuous one-shot, kind of flying through, you know, a series of inside and outside, but very futuristic-looking, venues. But it didn’t get the best reaction.

Jordan Wilson [00:30:52]:
Didn’t get the best reaction here on Twitter, where someone said: this is what TED would look like in 40 years, a randomly generated video that doesn’t make much sense, plus meat slabs. So, yeah. Hey, I thought it looked pretty cool, but it didn’t get the best response there in the Twitterverse. So, what do you guys think? So much happening today in the world of AI news. So, hey, we’ll just give you the quick 30-second recap here. So Meta has introduced Llama 3, its new, pretty much open source, model that is benchmarking off the charts, as well as its Meta AI chatbot, rolling it out across its different platforms.

Jordan Wilson [00:31:39]:
Microsoft unveiled its scary-good VASA-1 image-to-talking-video model. We showed you some examples of that. It is not publicly available, luckily, because it’s too good, and it would not be a good time right before the election here in the US. So you can go read the paper and download some examples that we’ll have in our newsletter. Boston Dynamics unveiled a fully electric Atlas robot powered by a lot of AI, after retiring its hydraulic version of Atlas, which has been, you know, out in the wild there for a couple of years. Apple, according to a Bloomberg report, is introducing on-device generative AI features for its upcoming iPhones, which kinda goes against some previous reports that they would be, you know, partnering with maybe a Google or an OpenAI instead. Rumors are swirling that OpenAI is gonna release GPT-5 today after a, not very cryptic, but you could say it’s cryptic, you know, story on social media. I’d say no. I’d say that’s just more Caitlin Clark.

Jordan Wilson [00:32:48]:
And last but not least, TED Talks is kinda getting roasted a little bit for its take at a look at the future with Sora. So that is it. A lot more in our newsletter, so make sure you go to youreverydayai.com. And please join me tomorrow. We’re gonna come super hot for Hot Take Tuesday, so join me tomorrow and every day for more Everyday AI. Thanks, y’all.
