
Ep 109: LLM Showdown – ChatGPT, Bing Chat, Google Bard, Claude 2 and Perplexity

  • 26 Sep, 2023

Resources

Join the discussion: Ask Jordan questions about LLMs

Upcoming Episodes:
Check out the upcoming Everyday AI Livestream lineup

Connect with Jordan Wilson: LinkedIn Profile

Overview

In today's rapidly evolving technological landscape, Artificial Intelligence (AI) is becoming an indispensable asset for businesses across industries. Among various AI applications, large language models are revolutionizing the way businesses interact with and harness the power of AI. In a recent episode of the "Everyday AI" podcast, host Jordan Wilson explored the vast potential of large language models, shedding light on their impact on business productivity and decision-making. Join us as we delve into the highlights of the episode and discuss how businesses can leverage these models for growth.

Understanding the Advantages of Large Language Models:

Large language models, such as ChatGPT with plugins, empower businesses to access an array of capabilities and streamline their operations. These models serve as virtual assistants, with the ability to process massive amounts of data and generate valuable insights. With the ability to upload documents for training on platforms like Anthropic's Claude, users can tap into the vast knowledge base of these models, enhancing their problem-solving capabilities.

The Evolution of AI Models:

The podcast episode highlights the ever-changing landscape of AI models and the substantial investments being made by companies like OpenAI and Microsoft. The forthcoming release of Google's Gemini and innovative features like ChatGPT's multimodal capabilities indicate a continuous drive towards enhanced functionality and improved user experience. Staying informed about these advancements empowers businesses to better identify the right tools to meet their needs.

Enhancing Decision-Making with Large Language Models:

Large language models offer unparalleled capabilities, significantly impacting decision-making processes for businesses. By leveraging these models, businesses can gain access to a vast knowledge base derived from extensive data training. However, it is crucial to understand the underlying models being utilized and their limitations, ensuring reliable and accurate outputs.

Making Informed Choices: AI Chat Comparisons:

The host of the "Everyday AI" podcast draws attention to different AI chat options, including Bard, Bing Chat, and Perplexity. Each model comes with unique features and interfaces, ultimately catering to distinct user preferences. Understanding the strengths and weaknesses of these models empowers business owners and decision-makers to make informed choices that align with their objectives.

Conclusion:

As AI continues to revolutionize the business landscape, large language models have emerged as powerful tools for driving growth, boosting productivity, and enhancing decision-making processes. The "Everyday AI" podcast episode explored the immense potential of these models and shed light on the different offerings available in the market. Business owners and decision-makers are urged to embrace the capabilities of large language models, stay informed about new advancements, and leverage them to gain a competitive edge. By harnessing the power of everyday AI, businesses can unlock new possibilities and shape the future of work.

Topics Covered in This Episode

1. Discussion on different AI chat models and companies
2. Productivity benefits and reliability of large language models
3. Applications and limitations of large language models
4. Comparison of user experience and extensions

Podcast Transcript

Jordan Wilson [00:00:18]:

What is the best large language model to use? Should we be using ChatGPT, Google Bard, Microsoft Bing, or something else? Don't worry. Today we're going to tackle that and talk about probably the five most popular models and maybe put that question to rest by showing you the pros and cons of each of those. So, welcome. My name is Jordan Wilson, and this is Everyday AI. This is your daily livestream, podcast and free daily newsletter helping everyday people like me and you not just learn about AI, but how we can actually leverage it. All right, so I'm extremely excited today. Thank you if you're joining us live.

Jordan Wilson [00:01:12]:

So if you are new to generative AI, or if you have a lot of questions about large language models, what they are, how they’re used, which ones are best, this is definitely the show for you. If you use large language models every single day, I hope you can join the conversation, drop in, let me know what your favorite large language model is. And as a reminder, if you’re joining us on the podcast, check the show notes. You can always come in, click, and come join this conversation that we mainly have going on LinkedIn and other places. But if you’re an avid podcast listener, go ahead, come join us. All right, so good morning. Thank you everyone for joining. Extremely excited today for a little large language model showdown.

Daily AI news

Jordan Wilson [00:02:01]:

And as a reminder, like I said, if you're catching us on the livestream, make sure to sign up for the free daily newsletter. Maybe if you're on the podcast, come join us on the livestream. We do this each and every single day at 7:30 a.m. Central Standard Time. We go live, and then we put it out on podcasts and shoot the newsletter out shortly thereafter. All right, so before we dive into all things LLMs, large language models, let's first quickly take a look at what's going on in the world of AI news. We do this every single day. So first, Pika Labs has released some new features.

Jordan Wilson [00:02:36]:

So the text-to-video platform and Runway competitor released some big updates. So Pika Labs' AI video generator has two main new features. One is video typing, and the other one is the ability to embed a company logo into a video. So video typing is essentially being able to insert text or words into an AI-generated video, and then there's also the option for the logo. So pretty cool. And we'll probably do an AI in 5 on that next week. All right, next, a traditional stock photo giant is getting into the gen AI game. So Getty Images is partnering with Nvidia to start offering AI-generated images.

Jordan Wilson [00:03:19]:

So Getty has released their own AI photo generation tool called Generative AI by Getty Images. Not the catchiest name, but maybe they'll work on that. So currently it is paywalled on the Getty website, and also it looks like it's going to be available through an API as well. So, interesting development there, especially as Getty famously took Stability AI, another AI image-generating company, to court for allegedly misusing their copyrighted photos. But now they're embracing the tech. All right, Spotify is going all in on AI. So some recent updates, which I think we shared about in the newsletter, but I wanted to talk about it here on the show.

Jordan Wilson [00:04:01]:

So they're allowing podcasters to replicate their voice with AI and to translate it into other languages. So maybe Everyday AI will be coming to a language closer to your home. Also, Spotify has slightly reversed course and said it won't remove AI-generated music from their platform. So, interesting. I'm going to go ahead and do a fourth story today. Normally, I cap it at three, but this one was extremely interesting. So a new Harvard study showed the impact of GPT-4 on productivity. This Harvard-led study showed a 40% performance boost when consultants used GPT-4.

Jordan Wilson [00:04:40]:

So this study was done with Boston Consulting Group, one of the most well-known consulting groups in the world. My only question is, why only 40%? I'm wondering what kind of training everyone had beforehand, but to only have a 40% performance boost while using GPT-4 probably means you weren't using it correctly. Sorry. So, BCG, holler at me, I'll teach you all how to use it correctly. All right, so let's get back to the topic at hand. Let's talk about large language models. What are your questions? What do you want to know? About ChatGPT? About Bard? About Bing, Claude and Perplexity?

Top 5 AI-powered chats dominate the market

Jordan Wilson [00:05:24]:

Those are the five. I think there's others, don't get me wrong, there's probably a dozen or so great AI-powered chats, but I'd say those are probably the top five. I'm making that list on my own. But, I mean, ChatGPT, Bard and Bing are undoubtedly the top three. And then we just saw Claude yesterday rake in, I believe, a $4 billion investment from Amazon. So they've already catapulted themselves up into that top tier. And then Perplexity, I think Perplexity is a large language model chat that not a lot of people talk about. So I want to know, what are your questions about large language models? We're going to go over it live.

Jordan Wilson [00:06:18]:

I'm going to show you quickly the pros and cons. This isn't going to be one of those episodes where it accidentally goes to 45 minutes, but I want to say what's up to everyone joining us. So, Parimi, thank you for joining. Val, good morning. Mike, thank you for joining. Harvey Castro, happy to be here. Happy to have Harvey on the show, I think in two days. Josh saying good morning from Dallas.

Jordan Wilson [00:06:42]:

Thank you. Thank you all for joining. And if you do have questions, get them in. What’s up? Brandon Cox, Sanford. Good to see you. All right, let’s get it going. What questions do you have? I see a couple of questions. I’m going to go ahead and start them, right.

Jordan Wilson [00:07:05]:

At least on my end. Don't worry. If you have a specific question, let me know. Harvey says throw in Poe.com. Yeah, Poe's good. Except, at least I think the last time I checked, like, two weeks ago, you still can't use plugins on Poe. I don't think so, but I could be wrong.

Jordan Wilson [00:07:25]:

All right, let's get into it. Let's start talking about LLMs. So, very quick overview, especially if you're new. What is a large language model? The simplest way for me to explain a large language model to the everyday person is that it is the world's most advanced form of autocomplete, trained on the history of the Internet, usually billions or trillions of data points, essentially. Right. Without getting super technical and talking Transformers and neural networks, I'd say that's a fairly easy way to describe it. Right.

What are Large Language Models?

Jordan Wilson [00:08:03]:

All these large language models, they're trained on heaps and heaps of data. Essentially anything that's probably ever existed on the internet is in some of the largest large language models, right? But essentially they are models. That's what I always teach people when working with large language models. You shouldn't be using it like you would use a traditional search engine, which is one input, one output. That's not how a large language model works best, right? Contrary to what influencers on the Internet who are trying to sell you a book of prompts might tell you, you should never just be copying and pasting one input and looking for a great output. That might only give you a 40% increase in productivity, when I think you could easily, easily double your productivity or more using a large language model. All right, so that's the simplest overview. Now, let's talk a little bit about models, right? So let's look at OpenAI, Bing Chat and Perplexity, right? It's also important to understand the relationship between Microsoft Bing Chat and OpenAI, the maker of ChatGPT and the GPT-4 technology, right? Microsoft is the biggest investor.
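
For readers who want to see what "training a chat" instead of one-input-one-output looks like in practice, here is a minimal sketch, assuming the OpenAI Python SDK; the model name and prompts are illustrative placeholders, not from the episode. The key idea is that each new request carries the whole prior conversation, so the model builds on context you have already given it.

```python
# Minimal sketch of multi-turn "priming" vs. a single one-shot prompt.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
# the model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are a marketing copywriter for a B2B software company."},
    # Prime the chat with context and ground rules before asking for real output.
    {"role": "user", "content": "Here is our brand voice: concise, friendly, no jargon. Confirm you understand."},
]

# First turn: let the model absorb the context, and keep its reply in the history.
reply = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Only now ask for the actual deliverable, building on the earlier turns.
messages.append({"role": "user", "content": "Draft three subject lines for our product launch email."})
reply = client.chat.completions.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)
```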

Jordan Wilson [00:09:28]:

I don't know the exact number. I think it's like 49% or something like that. I could be wrong, so don't quote me on that, but I believe it's nearly a half ownership stake in OpenAI. So it's important to know that those two do work hand in hand. Perplexity, I believe recently they also announced the option in the paid version to connect to Claude 2, as well as GPT-4. All right? So essentially, we have ChatGPT, Bing Chat and Perplexity all using GPT-4. Also, it's important to know Bing Chat uses other models.

Jordan Wilson [00:10:03]:

They're not telling us everything, but it is based off of GPT-4 and other models. All right, Bard from Google is based on PaLM 2, and then there's Claude from Anthropic. Their model is Claude 2. Okay? So that piece is important to understand, that some of these share models, and some have multiple models that they're using, right? But that's important too, because it's going to sometimes produce similar results. So you might get somewhat similar results at times if you use OpenAI's ChatGPT and if you use Microsoft Bing Chat. And we're going to do that live, so don't worry. But I'm going into models because of this. If you are working heavily in large language models, which you should, because that is the future of work, right? Obviously Copilot, we did an episode on that yesterday, but that's ultimately based off of large language models, right? So it's important to know which offerings or which chats, AI chats, whatever you want to call them.

Jordan Wilson [00:11:17]:

I don't always know what to call them, AI chats, but some of them share very strong similarities and some are very different. Right. The reason I say that is a lot of times, even in my workflow, I have two screens here and I'll split each screen in half, right? So this is how I'll normally work. If I'm doing some heavy, deep work, I'll usually have ChatGPT with plugins on at least two to three of those four, and then one is usually Perplexity, or I'll have either Claude or Bard. So the reason I do that also is to switch up the models, because even if you're training a chat in the same way, which is what you should always do because it's a model, you'll still get very different results. All right, I hope that's a good enough overview. And yes, Gemini, coming soon from Google, is allegedly supposed to be much more powerful than GPT-4. So, yeah, that's also important to understand, that these models and these companies are constantly changing and that's where all the money is going right now.

Google Bard breakdown

Jordan Wilson [00:12:25]:

We talked about a $4 billion investment into Anthropic from Amazon to improve and build upon Claude. So these models are constantly changing and constantly improving as well. That's a great point to bring up. All right, let's go here and see them in action. All right, we're going to do something just super quick, all right? Just so we can see pros and cons. So I'm going to do pros and cons, but also show you some of the best features. All right, so here we go. Here is Google Bard.

Jordan Wilson [00:13:05]:

If I'm being honest, I'm not very impressed with Google Bard. If I'm being super honest. I do think when they kind of upgrade to Gemini, it will be much better. But something that I really like about Google Bard, and I'm going to go ahead and put this test prompt in and I'll tell you what this test prompt is and why I'm running it. So all I'm saying is: please tell me about Microsoft's recent announcement on Microsoft 365 Copilot and what the November 1 release entails? Please keep it simple and give me a brief bullet-pointed list. Right, so, super simple stuff here. And luckily Google handles it with ease. So I will say this, even though personally I'm not the biggest fan right now of Google Bard, it is probably the easiest large language model to learn on.

Jordan Wilson [00:13:59]:

It has the easiest-to-understand user interface. The user experience is actually pretty good. If we're talking about quality of outputs and functionality and utility, it's not in the top two, by far. Also, people were initially raving about the new extensions from Bard, which essentially help connect Google Bard to other Google products like YouTube, your Gmail, Google Drive, I believe, Google Flights, et cetera. Right? Here's the thing: on Workspace accounts, it's not available for everyone yet, right? You'll even see my account right here. I don't have extensions. I have to log into my personal Gmail, because we use G Suite, or Google Workspace, for our email at Accelerant Agency, the digital strategy company that I own. So that's a downside. And even when I tested the extensions out, to tell you the truth, they're not that great, right? At least for what I would want to expect.

Jordan Wilson [00:15:08]:

As an example, you can't summarize YouTube videos, and obviously Google doesn't want that because they want you to spend time watching those YouTube videos so they can get their ad revenue, right? You can't interact with and write in Google Sheets. So these are all things you can do in ChatGPT. So when people say, oh, Google Bard is great, and look at these extensions, well, the extensions, at least compared to ChatGPT, lack utility, period. It's probably tough news to swallow if you're a huge Google Bard fan. However, I do love the user interface. It is the easiest. So if you want to teach a parent or an aunt or uncle or a coworker that's maybe not as tech savvy about large language models, I'll say 100% Bard. It is the easiest interface. You can click to upload an image, which is great, kind of that multimodal capability that OpenAI just announced yesterday for ChatGPT and will be releasing.

Jordan Wilson [00:16:05]:

You can speak your commands and then hear the response by default. Again, these are things coming to ChatGPT, but not here yet. So in terms of user interface, user experience, ease of use, best to learn on, I'd say definitely Google is where it's at. They also do have this double-check response, which they just announced, I think last week. If I'm being honest, I don't think it's very useful. If anything, it's just a marketing angle, I'd say, because half of the time... So when I click that here, if you're listening on the podcast: out of the bullet points it gave me, when I asked it to give me bullet points, it gives me sources for some and highlights them. But then you get to thinking, like, okay, then what about the ones that aren't highlighted? Does that mean Google Bard just made it up? Right? So that's why this whole fact-check thing has people talking, because one of the biggest things with large language models is they lie, they hallucinate, they make stuff up.

Anthropic Claude 2 breakdown

Jordan Wilson [00:17:12]:

Well, is this really useful if you click the quote-unquote kind of fact-check button and it just highlights 50% of what's on there? Not really. It's not really useful to me. Yeah, like Val said, you can already tell, nothing like plugins from ChatGPT. Absolutely. All right, so let's go next. So we gave Google Bard a little run there. All right, so now I'm going to run the same prompt. Here we are in Anthropic Claude 2.

Jordan Wilson [00:17:45]:

All right, so what's interesting here, and I'm actually a little shocked, I do these live too, right? I love learning things along with you. So this isn't like I spent 10 hours preparing this episode. It's a daily show. I can't spend 10 hours. But what's actually surprising here, which I don't know how, so I'll have to investigate this later: Claude actually got some things right about a recent event. So it did say Microsoft announced a new AI-powered feature called Copilot coming to Microsoft 365 on November 1. So that could in theory be inferred from the prompt that I gave it, which I said, please tell me about Microsoft's recent announcement.

Jordan Wilson [00:18:38]:

Right. And everything else? There is nothing else specific that lets me know that Claude is actually accessing up-to-date information. Everything else is extremely general. So let's quickly talk about Claude. One advantage that Claude has, I think, over all other competitors. Well, there's two things. One, if you look at the bottom of the screen, it has the built-in capability to upload documents, right? The biggest thing is PDFs.

Jordan Wilson [00:19:20]:

Okay? Again, you can do this in ChatGPT. If you've listened to the show ever before, you probably know I'm a huge fan of ChatGPT with plugins. That is probably one of the best advantages for Claude. You can upload documents, which is great because, in theory, you can quickly train Claude on your company data. Again, never upload private documents to any large language model, because most all of them use anything you upload to train their model. So don't upload anything sensitive, proprietary, classified. Don't do that. However, you can in theory upload your entire website or your entire writing catalog into Anthropic Claude. The other great thing about Claude is it has a bigger memory, right? We're not going to get too dorky into tokens, but essentially Claude has a 100,000-token memory, which all that means is it can remember things for a much longer period than all of these other large language models.
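
As a rough illustration of what that bigger memory buys you, here is a minimal sketch, assuming the Anthropic Python SDK plus a placeholder model name and file path; the episode only demonstrates the claude.ai web upload, not the API. A long, non-sensitive document is passed directly in the prompt so Claude can answer questions about it.

```python
# Minimal sketch of questioning a long document via Claude's large context window.
# Assumes the Anthropic Python SDK and ANTHROPIC_API_KEY in the environment;
# the model name and file path are placeholders, not from the episode.
from anthropic import Anthropic

client = Anthropic()

# A long, public, non-sensitive document (see the warning above about private data).
with open("company_site_export.txt", "r", encoding="utf-8") as f:
    document = f.read()

message = client.messages.create(
    model="claude-2.1",  # placeholder; use whichever Claude model you have access to
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": f"Here is our website copy:\n\n{document}\n\n"
                   "Summarize our three main service offerings as a short bullet list.",
    }],
)
print(message.content[0].text)
```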

Perplexity breakdown

Jordan Wilson [00:20:25]:

So those are two pretty big benefits of Claude, and they did just recently announce kind of their paid Pro plan. I also believe it's $20 a month. Correct me on that if I'm wrong, because I know we have some avid large language model fans in the house. So let's go ahead and go to Perplexity, which, I love Perplexity. So I'm going to go ahead and turn on Copilot. Okay? So everyone uses this Copilot name, which is confusing. You have Microsoft Copilot, GitHub Copilot.

Jordan Wilson [00:20:59]:

Now you have Perplexity Copilot. But anyways, that enables a more advanced model. So I'm going to go ahead and click that, put my little prompt in there. Again, just showing you guys this live. I'm saying, please tell me about Microsoft's recent announcement on Microsoft 365 Copilot and what the November 1 release entails? Please keep it simple and give me a brief bullet-point list. Right. So you'll see here, Perplexity is a little slower, which is a funny thing to say, because these large language models, when you think of what they're actually computing, are lightning fast. So Perplexity is a bit slower.

Jordan Wilson [00:21:34]:

However, if you're talking about out-of-the-box accuracy, out of the box avoiding hallucination and lies, Perplexity hands down is head and shoulders above everyone else. And it's not even close. I'm actually perplexed, accidental wordplay, I'm perplexed that someone hasn't acquired Perplexity specifically. Either OpenAI, Microsoft or Google, or maybe Anthropic. Well, I don't know. But Perplexity, I think, is a huge player. And you look at the response here that you got, it's amazingly detailed and accurate, right? So as an example, if you compare this to the response that we got on Claude, it's like, I don't know if it's lying to me or not, if I'm being honest, because it did give me some recent information.

Jordan Wilson [00:22:31]:

But it looks like it just inferred that from my prompt, right? When we look at Perplexity... Perplexity, I haven't even done this yet, I haven't run this on all models, and I already know Perplexity will win in terms of, well, 1A and 1B. But it is extremely accurate. So Perplexity is not as creative, it's not as flexible. But if you want the best of a large language model and accuracy, Perplexity is where it's at. I love Perplexity. I always will have at least one Perplexity window open whenever I'm doing any kind of large language model work. There's other great features.

Jordan Wilson [00:23:16]:

And again, yes, recently they did announce down here that you can use Claude 2. If you're on a paid plan, you can use Claude 2 or GPT-4. So you can choose between those two different models when you use the Copilot feature. All right, so important to know, but Perplexity is huge. What do you all think? Does anyone in here like Perplexity? Just me? Am I dorking out? All right, Shannon. Shannon, thank you for joining us.

Jordan Wilson [00:23:44]:

Shannon says Perplexity came so close to my voice at one point I forgot for a second I wasn't typing to myself. Yeah, if I'm being honest, I don't use Perplexity a whole lot for more creative writing or content writing. I do think, at least for me, ChatGPT is the best at that. But yeah, I'd say pretty much, aside from Google Bard, I've been able to do a pretty good job replicating a tone of voice or a company brand voice in most large language models. Pretty good at that. Yeah, Harvey said: Perplexity Chrome plugin.

Bing Chat breakdown

Jordan Wilson [00:24:17]:

Fantastic. Agree. I use that as well. Val loves Perplexity. Same. All right, let's keep it going with Bing, y'all. Unfortunately, I've got to switch browsers here. So some news, it's not even that big of news, but we had to wait for a very long time to get Bing Chat.

Jordan Wilson [00:24:39]:

So Microsoft's AI model, we had to wait for a very long time to get it on Chrome. And we were so excited, we're like, yay, it's on Chrome. But now they took it away. So I'm in Microsoft Edge now. So, yeah, you used to be able to access all of these in Chrome, but recently, within the past couple of days, now you have to access this model in Microsoft Edge, just FYI, so you all know. All right, let's go ahead and run this. I hope Microsoft Bing Chat nails this about Microsoft Copilot.

Jordan Wilson [00:25:21]:

It would be kind of bad if it didn't, right? All right, so put in the same prompt. And again, you can choose kind of if you want to be more creative, more balanced, or more precise, which is a great option inside Bing Chat. It's similar to selecting a model, but it is more focused on accuracy, which I like for the most part. I usually use balanced. All right, so you'll see the response here? Obviously it nailed it, right. Everything here, which I love, especially with Bing. I think Bing does the best job aside from Perplexity at citing sources, which is extremely important. So you'll see here, it brought up all these different sources at the bottom and you can hover over, which actually, normally I don't see it like this.

Jordan Wilson [00:26:17]:

Normally I don't see where every single thing is sourced. It could be because I asked for a bullet-pointed list that it decided to source everything. Whereas if I asked for more of a paragraph or a quick article on something, I don't think it would have sourced line by line. But Bing Chat is actually really good. I like that you can always, as you can see here, I'm hovering over, you can always do the thumbs up, thumbs down, copy, export, share or continue on phone, which I think is a nice feature. The one thing that I'm not crazy about is sometimes, even when you're on search and you toggle between the search and the AI chat, the two can kind of run into each other. So as you're scrolling on the search or scrolling on the chat, it'll jump over to the other one, which is a little confusing.

Jordan Wilson [00:27:15]:

But aside from that, some other features. So you can just go ahead and click on a new topic. You can add an image, which is great. The ability inside a large language model to upload an image gives you so much flexibility, so many just cool use cases, right? And that's coming to ChatGPT pretty soon here. And then you can also use the microphone. So at least right now, Bing Chat is, I'll say this, it is a little limited compared to others. But I do see, with Copilot's release today, right? Not the enterprise one with all the apps, 365 Copilot, but basic Copilot features are being released today. So I do see there is going to be an enterprise version of this, of Bing Chat, as well as ChatGPT. So I do see some big advancements and improvements coming to Bing Chat soon.

ChatGPT breakdown

Jordan Wilson [00:28:14]:

Right now it gets the job done. It's not my first or second choice usually, but I do think it's similar to Google Bard. It has a nice, easy interface, but there are a few quirks. But I do expect Microsoft to be improving those soon. All right, last but not least, we saved, let's get the right one here, we saved my favorite for last.

Jordan Wilson [00:28:43]:

It's fine, I am a little biased, so I'm going to go ahead and put that same prompt into ChatGPT with plugins, okay? I've done a whole episode on plugins, and this is why I always come back to ChatGPT. The ability to have more than a thousand plugins that actually can push the envelope of your day-to-day business development and business processes is invaluable. I sound like old man Jordan on the porch screaming at people like, why aren't you running your complete day inside of ChatGPT with plugins? Even, if I'm being honest, even when I'm using a great large language model, and all of these are great, all five of these are great, if I'm in Bard or Microsoft Bing Chat or Perplexity or Claude for too long, I feel limited, if I'm being honest, because you're still working in a silo, right? Whereas when you're working with ChatGPT, you're not as siloed. You can tap in other plugins that give you access to the Internet, which is what I'm using here, right? So if anyone is wondering, like, oh, how did you access the internet? Microsoft and Bing, they took Browse with Bing away, which used to be a mode inside ChatGPT. Well, I have the BrowserOp plugin and the VoxScript plugin, which are both tied to the Internet and have different capabilities. And if you are signed up, FYI, for the Pro Course for Prime Prompt Polish, we have a free course, Prime Prompt Polish, so type in PPP, I'll send it to you. But in the Pro Course, we actually go pretty deep into these different plugin packs and their capabilities. But let's go ahead and look at the result here.

Jordan Wilson [00:30:33]:

So obviously, ChatGPT nailed it. It says here's a bullet-point list. It has the announcement, new visual identity. What I like about ChatGPT's response here, working off the same prompt, is it did a little bit better job of categorizing and formatting things, which I know sounds weird, but even if you look at the response from Microsoft Bing Chat, it was just unsorted, uncategorized bullet points. The upside is they were all sourced and linked. But that's why I like ChatGPT. I think it does a much better job of formatting without asking, without prompting. I think it always does a much better job than other large language models of formatting information in a way that our brains want to take it in, or in a way that we as humans learn best.

Jordan Wilson [00:31:21]:

I know that's such a fine detail, but it's important. Right. I've lost track of how many hours. A thousand or more? Probably a lot more. Thousands of hours inside of large language models. And the ability to learn and to take in information quickly is such an undervalued part of a large language model. So when we look at this, obviously a fantastic recap, we do get a couple of sources in the prompt. I could have asked for more sources from these plugins, but we get the sources down there, which is great.

Answering audience questions

Jordan Wilson [00:32:01]:

You can even Google through them. So if you're not using ChatGPT plugins, you definitely should be. All right, so that's the roundup. Went over the top five. I'm going to get to a couple questions. I think we've got time to go over a couple. Might not be able to get to them all. Maybrett, absolutely, says Perplexity.

Jordan Wilson [00:32:22]:

Sounds like a great one for technical tasks. It is good for technical. I always use it essentially for fact-checking. Or sometimes I'll start there. I'll say this, I have chats within ChatGPT that I trained to help myself and our team learn things, right. Which is great, but sometimes I'll just start in Perplexity because it's a little faster. I think it's great for that. All right.

Jordan Wilson [00:32:52]:

Have a couple questions here. Let’s get to them. And I want to make sure I get them. Ben, I think we have the old man Jordan meme in an old thread somewhere. Also get it ready. All right. So Val says, Why use double web browsing plugins? Great question. And I go over this more in the pro course.

Jordan Wilson [00:33:15]:

Are you coming to the pro course, Val? I've got to make sure I send you the information. So we have one today and tomorrow, and we'll get another one going in a couple of weeks. But different web browsing plugins have different capabilities. We actually have a huge chart, ran through more than 20 internet-connected ChatGPT plugins and tested each and every one of their abilities across four major categories. So for a lot of the work that I use ChatGPT for, it's a lot of internet research. It's a lot of learning things, and different plugins have different capabilities. So even BrowserOp and VoxScript in our testing, and we literally, I don't know anyone else that has put 20-plus internet-connected plugins through testing and has created a spreadsheet on their capabilities.

Jordan Wilson [00:34:04]:

But as an example, some internet-connected plugins have the ability to read PDFs, some have the ability to summarize YouTube videos, and you can only have three plugins active at any time inside ChatGPT. So you do have to be wise about not just getting a one-trick pony. You do want a Swiss Army knife that you've tested. All right, I hope that makes sense. Need the pro course? All right, I got you. All right, let's see. I think we have another question here.

Jordan Wilson [00:34:37]:

All right, let's go. Kevin, thank you for joining. So, Kevin says, have you ever used a combination of them to craft something? If so, what do you tend to start with? Which one do you tend to end with? Oh, such a good question. Yes, all the time. All the time. I kind of started to answer that, but usually I'll do two things, Kevin. So one is I'll run a parallel chat, and again, I'm always training the chat. It's never one input, one output.

Jordan Wilson [00:35:08]:

So I'm saying, if I'm going to sit down and go through the work of training a new chat, to give it a new skill, to share resources, all of these things, I'm going to usually do the same thing in a separate large language model with a different model. Right? So not just doing it in those that are using GPT-4. Sometimes I'll run a parallel in Bard or Claude, because if you're going through the process, essentially you're going to be just doing a lot of copying and pasting, so it's not a lot of extra time, but you get double the output, or double the possibilities, and you will get wildly different outputs. When using even the GPT-4 models from Bing Chat, ChatGPT, Perplexity, you'll get wildly different results than Bard and Claude. So, yes, all the time. I tend to start, actually, a lot of times in Perplexity. It's quick. That actually goes to user interface, because ChatGPT is great, I love it, but the user interface is clunky, you have to scroll through.
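
That "parallel chat" habit can be approximated in code as well. Here is a minimal sketch, assuming the OpenAI and Anthropic Python SDKs and placeholder model names; the episode does this by hand across browser windows, not via APIs. The same prompt goes to two different underlying models so you can compare the outputs side by side.

```python
# Minimal sketch of sending one prompt to two different model families and comparing.
# Assumes the OpenAI and Anthropic Python SDKs with API keys in the environment;
# the model names are placeholders.
from openai import OpenAI
from anthropic import Anthropic

PROMPT = ("Please tell me about Microsoft's recent announcement on Microsoft 365 Copilot "
          "and what the November 1 release entails? Please keep it simple and give me a "
          "brief bullet-pointed list.")

gpt_reply = OpenAI().chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": PROMPT}],
)

claude_reply = Anthropic().messages.create(
    model="claude-2.1",  # placeholder model name
    max_tokens=500,
    messages=[{"role": "user", "content": PROMPT}],
)

print("--- GPT-4 ---\n", gpt_reply.choices[0].message.content)
print("--- Claude ---\n", claude_reply.content[0].text)
```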

Jordan Wilson [00:36:16]:

I have probably 1,000-plus chats, and I even need to do a better job of saving them, putting them in a spreadsheet. But the UI/UX isn't the best in ChatGPT, or if you need to add a plugin, you have to search for it. I spend so much time uninstalling plugins because I have so many installed, they're hard to find. So a lot of times I'll start in Perplexity and take those results and bring them into ChatGPT or Bard. All right, hopefully that answered the question. Brian, what's going on? Brian? So, Brian with a quick question. If you are not accessing the Internet for an analysis of data, which has the best memory and analytical ability? Well, memory, you can definitively say Claude, just the 100,000 tokens, no one else has that.

Jordan Wilson [00:37:09]:

So memory, hands down, is Claude. Analytical ability is probably up for interpretation. In my experience, it depends on also what your input is, right? Because if you're going heavy data, that's going to change the answer. If you're just looking for strategy or competitive analysis, that changes the answer. I'd say for the most part, analytical ability without knowing what the input is, I'd say normally ChatGPT with the correct plugins runs laps around everyone, right? Because if you're talking analytical ability, it's like, okay, well, we can use the Wolfram plugin inside ChatGPT and you literally give it like genius superpowers. Right? So I will say analytical, I would say ChatGPT, just because of the ability to connect different plugins. All right, let's see.

Jordan Wilson [00:38:12]:

Question from Michael. What's up, Michael? So Michael says, seems like you are mostly into ChatGPT, Perplexity and Bard. What do you use Claude for? Great question. Sometimes I'll use Claude when I need to quickly chat with a PDF, right? If I'm in a time crunch, or if I need to have a very long conversation with a PDF, then Claude is the best for that. Claude is the only one by default where you can click and add a PDF, which is actually a pretty big benefit, because even I ran through, I'm going to have a video coming out this week with our AI in 5. We do it every single day. So sign up for the newsletter.

Jordan Wilson [00:39:03]:

FYI, I'll put that plug in now. Go to youreverydayai.com and sign up for the newsletter. But I did do a huge PDF plugin recap inside ChatGPT, and they're great, but a lot of times you have to first upload a PDF to a server somewhere or separately have an account with one of the PDF plugins. So if I'm ever in a time crunch, or if I just don't want to go through the hassle of uploading a PDF, navigating to it, right-clicking, copying the PDF URL for ChatGPT, then I might go into Claude. All right, Ben, I think I'll do one more question. I'll get to the rest of them after. So, Ben's asking, when you've trained a large language model, do you find it forgets the training and runs out of... Yes. Oh, gosh, yes, it does run out of memory.

Jordan Wilson [00:40:03]:

Also have another video showing this probably this week, but even within ChatGPT it starts to lose its memory pretty quickly. OpenAI has said that 32K, or 32,000 tokens, which I think is like 26,000 words, is available to everyone in GPT-4, but it's not. It's available via the API. But at least in my testing and everyone else's, if you're using GPT-4 inside ChatGPT, I believe it still has an 8K memory, which isn't a lot. 6,000 words goes quickly, Ben. So that's why I said there is a pretty big advantage to Anthropic Claude if you're working in very long chats. What we teach in Prime Prompt Polish, which is our PPP course for ChatGPT, we teach everyone how to do a memory recall. Because, again, I'd rather do a memory recall when you start to kind of, quote unquote, run out of memory and still have all of the amazing business capabilities of ChatGPT with plugins than to use Anthropic Claude, which has a longer memory, if I'm being honest.
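
To make the 8K/32K/100K figures concrete, here is a minimal sketch, assuming the tiktoken library (not something shown in the episode), that counts how many tokens a chat history consumes and how full each context window would be. A rough rule of thumb is about 0.75 words per token, which is where figures like "8K tokens is roughly 6,000 words" come from.

```python
# Minimal sketch of estimating context-window usage with tiktoken.
# The window sizes mirror the limits discussed in the episode (GPT-4 default,
# GPT-4 32K via the API, Claude 2); tiktoken itself only models OpenAI tokenizers.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

chat_history = "\n".join([
    "You are a helpful research assistant.",
    "Background material: " + "lorem ipsum " * 2000,  # stand-in for pasted documents
    "Now summarize the key risks as bullet points.",
])

tokens_used = len(enc.encode(chat_history))
for window in (8_000, 32_000, 100_000):
    print(f"{tokens_used:,} tokens used of a {window:,}-token window "
          f"({tokens_used / window:.0%} full)")
```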

Jordan Wilson [00:41:09]:

Right? You can literally, with one click of a button, automate very long, complex tasks with three different plugins within ChatGPT, which you can't do anywhere else. All right, I think we got to some questions. Didn't get to them all. I didn't want this to turn into a marathon episode. So, as a reminder, I'm going to try to get to more questions here after the show, but please, if this was helpful, let me know. Did you all learn anything? Hey, if this was helpful, go ahead, if you're listening live, just repost this. Repost this to your network.

Jordan Wilson [00:41:47]:

If you're listening on the podcast, share this episode with a friend. Right? If you're listening on Spotify or Apple, you can click that little share button, send it to someone. I think this is a fundamental skill set for any working professional in the future. You have to understand how large language models work. You have to know the pros and the cons, and they're always changing. So to keep up, go to youreverydayai.com. Sign up for the free daily newsletter. We gave our website a little love, a little facelift there, so check that out as well.

Jordan Wilson [00:42:19]:

I hope to see you back not just for more large language models in the future, but I hope to see you back for another episode of Everyday AI. Thanks, y’all.
