
Ep 148: Safer AI – Why we all need ethical AI tools we can trust

20 Nov, 2023

Resources

Join the discussion: Ask Mark Surman and Jordan questions about AI safety

Upcoming Episodes:
Check out the upcoming Everyday AI Livestream lineup

Connect with Mark Surman: LinkedIn Profile

Related Episodes

Overview

Artificial Intelligence (AI) is revolutionizing industries, driving innovation, and transforming the way we live and work. However, with great power comes great responsibility. As we dive deeper into the AI space, a pressing need arises to establish guardrails and develop ethical AI tools that we can trust. In this article, we will explore the implications of unsafe AI, the importance of transparency, and the role of open source AI in ensuring a safer future.

The Risks of Unsafe AI:

The rapid pace of AI development raises concerns about potential risks and unintended consequences. Misinformation driven by AI poses a real threat to our society, particularly during crucial events like national elections. AI's ability to deepfake voices, and the difficulty of identifying fake content, are just the tip of the iceberg. More sophisticated AI tools are required to tackle this growing problem and protect the authenticity of shared content.

Ensuring Transparency through Open Source:

To address the growing concerns surrounding AI, it is crucial for companies and governments to prioritize transparency. Open source AI, built upon open building blocks similar to Linux or the web stack (HTML, JavaScript), provides a pathway toward greater transparency and accountability. By making AI code accessible to all, we can foster a culture of trust and collaboration.

Trustworthy AI from Mozilla:

Mozilla, an organization committed to building trustworthy and open source AI, has taken the lead in this arena. Their approach centers on creating safe and usable AI models that users can download and fine-tune. The aim is to ensure the accuracy and trustworthiness of AI systems, avoiding issues such as hallucinations and discrimination. Mozilla’s published paper, “Creating Trustworthy AI,” provides valuable insights into their mission and strategies. Expect more reports and software from Mozilla AI in the near future.

Navigating the Industry Landscape:

Recent developments within prominent AI companies have shed light on the need for ethical considerations in AI development. Disbanding responsible AI teams or leadership changes can raise concerns about prioritizing profit over safety. As business owners and decision-makers, it is essential to be aware of the interests behind AI tools and the control we maintain over our data.

Striking a Balance:

The call for regulations and guardrails in AI development is growing louder. Governments and the public are increasingly demanding accountability and oversight. As we tread this path, it is crucial to strike a balance between innovation and safeguards. History has shown that reining in technology for the public interest is possible. However, we must approach government involvement with caution to ensure it benefits society without hindering progress or becoming politically motivated.

Conclusion:

As AI continues to shape our world, it is imperative for business owners and decision-makers to recognize the importance of ethical AI practices. Building trustworthy and transparent AI tools is not only a responsibility but also an opportunity to ensure a safer future. By embracing open source AI and advocating for transparency, we can foster innovation while maintaining control over our data. Let us collectively work towards the advancement of ethical and responsible AI, benefiting humanity as a whole.

Topics Covered in This Episode

1. Future of AI regulation
2. Balancing interests of humanity and government
3. How to make and use AI responsibly
4. Concerns with AI

Podcast Transcript

Jordan Wilson [00:00:18]:

Do you trust the AI tools that you use? Are they ethical? Are they safe? I don't even know if I ask myself those questions every time I jump in and try a new AI system or a new gen AI tool. But it's important to talk about having trustworthy AI, and that's what today's show is all about. So I'm very excited to have someone from Mozilla join the Everyday AI show. But before we do, as always, we're gonna go over the AI news. And welcome: if this is your first time to Everyday AI, Everyday AI is for you. This is helping everyday people like you and me learn and leverage AI with our daily livestream, podcast, and free daily newsletter.

Daily AI news

So let's go ahead and jump into the AI news real quick, and there's a lot, so bear with me here. But there was more drama at OpenAI over the weekend than a Shakespearean tragedy on fast forward.

Jordan Wilson [00:01:19]:

So the OpenAI board fired founder and CEO Sam Altman on Friday. By Saturday, there were rumors that OpenAI was trying to bring him back, as dozens of high-ranking OpenAI employees were signaling they would leave and follow Altman wherever he may go or whatever he may start. By Sunday, it appeared the two sides were in agreement, and reports indicated that Altman would probably be coming back if the whole board was kind of replaced. Microsoft CEO Satya Nadella was reportedly leading or helping lead the discussions, and it was announced late last night, Sunday night, that former Twitch CEO Emmett Shear would be the interim CEO at OpenAI. And then Nadella, a couple hours later, announced on Twitter that Sam Altman and Greg Brockman, who's the president and cofounder at OpenAI, together with colleagues, will be joining Microsoft to lead a new advanced AI research team. Obviously, this is all way too much to unpack in one single day, so join us tomorrow. We'll be having a dedicated conversation about this to unpack it and talk a little bit about what it means.

Jordan Wilson [00:02:31]:

So, entire show tomorrow dedicated to that. Other things we have going on in AI news: Meta maybe snuck this one in over the weekend, but the Facebook parent company disbanded its responsible AI team. Wow. Okay. And then also Germany, France, and Italy came together to reach an agreement on future AI regulation. What a jam-packed weekend of AI news. As always, we're gonna have much, much more, so make sure you go sign up for the free daily newsletter at youreverydayai.com.

About Mark and Mozilla Foundation

Jordan Wilson [00:03:01]:

Check out the show notes if you're listening on the podcast. It's in there. Make sure to go sign up because, wow, my gosh, after all that, I need a breath. But, more importantly, that's not what we're here to talk about, although we're probably gonna touch on it today because it's very relevant. So please help me bring on and welcome to the show.

Jordan Wilson [00:03:19]:

There we go. We've got Mark Surman, who is the president of the Mozilla Foundation. Mark, thank you so much for joining the Everyday AI Show.

Mark Surman [00:03:27]:

Thanks, Jordan. Really excited to be here, and what a weird day. I think maybe what happened is somebody just told ChatGPT to write a Shakespearean drama about OpenAI, and that's sort of what we're watching unfold.

Jordan Wilson [00:03:39]:

Yeah. And it was definitely fast forward. So maybe, Mark, let's start super high level. We'll probably hit rewind on that one, but just tell everyone a little bit about what you do at the Mozilla Foundation. I'm sure a lot of people know Mozilla. It's one of the most popular Internet browsers in the history of the world, but maybe tell everyone a little bit about the Mozilla Foundation and maybe even a little bit about Mozilla AI.

Mark Surman [00:04:01]:

Yeah. Thanks, Jordan. Well, Mozilla, as many people know, but actually many people don't because we've been around 25 years, was started as an open source project in the public interest: the idea that people should have a browser they control that's open source. And that was at a time when Microsoft dominated the web world. 98% of browsers, the whole technology stack of the web, were really in Microsoft's hands. And so in many ways, Mozilla started as, you know, a counterbalance to corporate power and the idea that people should control the tech. And with Firefox 20 years ago, we were successful.

Mark Surman [00:04:37]:

We kinda took back a bunch of share from Microsoft, but also laid the path for things like Chrome. And all of the websites, you know, Gmail, Facebook, everything that emerged during that time, which needed open technology to get built, they all are based on sort of open web technology like JavaScript and HTML. And so that's what Mozilla stands for, who we are. Today, we're at the spot of saying the same risk is there, that one or two or a few companies are gonna control the technology that defines the next era, and that's AI. So over the last few years, we've set up a bunch of fully, you know, public-purpose-oriented activities: our own activism through Mozilla Foundation, things inside of our new Mozilla Ventures, inside our main Firefox company, but in particular Mozilla AI, which exists to build open source AI that's trustworthy and in the public interest. And I could talk a little bit more in detail about what that looks like, but that's something we're really proud of. It's very tiny, but hopefully over time a part of a bigger coalition of people trying to build an alternative to just AI in the hands of a few big companies.

Big Tech and ethical AI

Jordan Wilson [00:05:51]:

And let's actually go there, and then we're gonna get back to Mozilla AI because I have a lot of questions, and I'm sure our live audience does as well. And as a reminder, if you are joining us live, make sure to get your question in for Mark. What do you wanna know about responsible and ethical and trustworthy AI? Because he's one of the people out there leading this whole movement, so get that question in. But I do wanna talk about this, Mark, because usually the big breaking news doesn't always line up with what we're talking about on the Everyday AI show, but today it definitely does. So I'd like your take, as someone, like I said, who's helping push forward responsible, ethical, safe AI use in the ever-changing world of generative AI. What's your take on what's happening at OpenAI, with Microsoft, you know, now obviously very involved, and now Sam Altman and others creating this new research team at Microsoft? What's your take on it, specifically from kind of that safe and ethical AI angle?

Mark Surman [00:06:53]:

You know, I think we're gonna take days, weeks, months, who knows, in terms of knowing what's behind that Shakespearean drama. But if you back up a little bit, it's kinda not surprising, in that we're in the midst of this huge public conversation about, you know, how do we balance the interest of humanity and the set of companies that are kind of dominating the early part of this wave of AI. And so, you know, we heard it all in your headlines, right? It was Meta and Microsoft and OpenAI and Italy, France, Germany. You know, whether it's governments or big companies or small nonprofits or people at home, people are saying, how do we want this to work? How do we keep it in control? How do we make sure our interests are considered in that? And so I think you see that big tableau playing out. OpenAI is really interesting. It started out in 2015 as a nonprofit, like Mozilla, that was trying to build technology, AI, in service of humanity.

Mark Surman [00:07:52]:

And I think at the time, it was just a tiny project backed by a bunch of wealthy Silicon Valley people. People were like, yeah, I wonder where that goes. Maybe that could be cool. And I guess in 2019, sometime around the pandemic, Sam Altman effectively took it private. I mean, there was a nonprofit at the top to make sure that it followed its mission. But given the amount of investment that came in from private parties, you know, $13 billion from Microsoft alone, people were pretty cynical that that nonprofit piece, which we believe in as a model, would have any meaning anymore. And who knows what the weekend really meant, but it does look like it was a conflict over safety and going too fast.

Mark Surman [00:08:36]:

The interests of people, and making a lot of money. Maybe it's not. Maybe there's something else to it than that. But it looks like, at the base, it's just a part of this big conversation about how we balance the interests of people and companies.

Is AI unsafe?

Jordan Wilson [00:08:52]:

Yeah. And fast, right? You mentioned that, because it seems like everything happening in the AI space is too fast. It's literally moving forward sometimes weeks or months every single day. And, you know, there's obviously the famous letter from, you know, what now seems like a decade ago, where you had kind of all of these big names signing a letter saying, hey, we need to pause on AI development to better understand it and how it impacts us all. Mark, is AI going too fast? And if it is, what problems does that mean for everyday people, people who are using these systems? Are they unsafe because everything's going too fast?

Mark Surman [00:09:36]:

You know, it's hard to know what's too fast. It's certainly going fast. And I guess, you know, the question is not just how fast is it going, but what are the risks and where are the guardrails? And so some of the risks are the real near-term ones, right? We're about to go into a year with 44 national elections in 2024, right? And we have already seen what misinformation driven by AI in previous elections has done. And so those are kinda some of the things you gotta worry about, right? How do we know what the truth is? Like, it's that fundamental. How do we keep democracies if we don't know what the truth is? And so I think, on some levels, maybe AI is going too fast. You know, you can deepfake somebody's voice, you know, but we've seen that coming for years.

Mark Surman [00:10:28]:

On the other hand, we're not going fast enough. So, you know, how are we using AI to watch for misinformation in more sophisticated ways than we have in the past? How do we use AI to help us see, like, maybe through a browser, what's real content and what's fake? So a lot of it to me is actually what are we using AI for. And some of it's too fast, and some of it's not fast enough.

Responsible AI regulation

Jordan Wilson [00:10:53]:

It's an interesting point. And it seems like a lot of times when we talk about, you know, even regulating AI, right, people say, hey, the best way to regulate AI is to use generative AI systems, right? Is that also problematic? You know, you've seen even the biggest companies say, yes, we are going to regulate our AI with humans, but also through the use of AI. How, Mark, how do you find that balance? You know, even of, like, hey, how do we go faster? How do you find that balance of still being responsible about it? When you do have to go fast now to keep up, how do people and companies do that?

Mark Surman [00:11:34]:

Well, again, I think you need guardrails, and AI is gonna be actually a part of the guardrails. And, really, the question is who's holding them. So governments, for example, need to, and I think are quickly trying to, get their act together in terms of creating accountability. You saw a really ambitious executive order come out of the White House a few weeks ago that talked about all kinds of guardrails, and it talked about testing big AI systems. And you're only gonna test them by using other AI systems to pressure test them, right? I mean, it's the same thing we know from cybersecurity from the last, whatever that is, 10, 20 years: we need to use the technology to test the technology to make it more robust. But unless you have people incented and paid to do the testing, to do the regulation, to have the oversight, which we don't have enough investment in, and incented to invest in the safety research and the guardrails inside the core companies, which, as we saw with Meta, you know, doing whatever they did, but looking like they kinda shut down their responsible AI team, there's clearly not enough incentive there, or they don't care.

Mark Surman [00:12:44]:

I don't know. So I think technology has to be a part of how we build those guardrails, there's no question. It's true in other industries, but it's about there being enough authority and expertise in the hands of the government, in the hands of public watchdogs and researchers, and enough incentive and accountability on the part of the big companies that they can actually invest and do the right things. Take car companies: we know, if you go back 50, 60, 70 years, that left to their own devices they wouldn't invest in safety. And it really took, you know, in the sixties and seventies, a whole lot of pressure from the public to say you gotta invest in safety. It didn't stop car companies from making cars or getting rich. In fact, it didn't stop safety innovation. It sped it up.

Mark Surman [00:13:34]:

So we just need that balance between, you know, watching what's going on and the requirement to do things in the interest of the public while still running a company.

Jordan Wilson [00:13:43]:

Mhmm. Couple of things I wanna dive deeper into there, Mark, from that response. You just talked a little bit there about some of the news with, you know, Meta, the Facebook parent company, kind of disbanding its internal responsible AI team. Do you see this maybe setting the tone for other big companies? Because it seems like most big companies that are developing generative AI all have those internal teams, and they're there to, you know, quote, unquote, set up the guardrails. Do you see maybe this being a trend, where maybe companies are saying, hey, because of this internal team that we set up, it's actually causing us to go too slow, and we want to go faster? Do you see that as maybe something that might continue to happen?

Mark Surman [00:14:42]:

Well, it's a thing to worry about, right? There is a market pressure to go faster, and then you do hear, in that famous or now infamous Marc Andreessen letter, his manifesto on whatever the hell it was, where he said any ethical AI, any people who are trying to set up guardrails, are the enemy of innovation, the enemy of progress, the enemy of society, which is boohockey. And so you see a set of people who see the need to find this balance as, you know, as friction, and it's actually friction that's needed, right? Friction is actually a part of a good system used in the right way. There are people who feel like it's slowing them down, and they may try to shut those things down. That might be a trend of a kind. We're also in a moment where governments are saying, sorry, slow down.

Mark Surman [00:15:31]:

We need guardrails. We're in a moment where the public, like, look at how many people are listening to your podcast, are saying slow down, we need guardrails. Like, we were not having this conversation about AI at that level as a society a year ago. So, as I say, I think this is kind of a grand moment we're in, and a kind of grand wrestling with the balance between the public's interest and the interest of a few big companies. This has happened in moments like this in the past, when they talked about auto safety, oil monopolies. I mean, the list goes on and on. Our societies actually do have a history, when we get to this kinda hot moment, of reining things in, and that's what we need to do.

Mark Surman [00:16:13]:

And, certainly, I call on our government leaders, whatever country you're in, to keep leaning in on that.

Creating balanced government regulation

Jordan Wilson [00:16:21]:

And, hey, as a reminder, if you're just catching up now or maybe you need a refresher, we have Mark Surman, the president of the Mozilla Foundation, joining us to talk about safer AI. So, actually, Mark, a great question from Tanya here in the live stream. Tanya, thank you for the question. We just talked about government involvement and, you know, some of the pros and cons of that. But Tanya's question here: how do we regulate the government's involvement so that it benefits society, without the drawbacks and politically instigated regulations? Tough question, Tanya, but I'm sure Mark can handle this one.

Mark Surman [00:16:56]:

Well, I think one of the key things is really transparency on the part of the AI companies and transparency, of course, on the part of the government. I mean, it's a whole reason that we have been an advocate for open source in AI. We signed a letter with, you know, a bunch of very famous AI scientists as well as activists and policy people a couple weeks ago, around the UK AI summit, calling for open source AI to be protected in regulation, because openness and transparency are so key to being able to look into the black box of what's going on. So they're key if you wanna regulate, but they're also key if what you wanna do is even keep regulators in check. We need to understand what's going on here in order for us to regulate effectively and hold people accountable, whatever party they are in the system. So I would just really lean on transparency as key.

Jordan Wilson [00:17:53]:

And, hey, speaking of that, let's get back to what's important here. Let's talk a little bit about Mozilla AI. So kind of explain, now that we actually have a great foundation in this podcast so far of some recent events and where we're going, how does Mozilla AI play into this equation?

Mark Surman [00:18:07]:

I'm kinda embarrassed by this website because it came out in March when we launched, and we kind of did, I don't know, the perfect or the silliest thing ever, which was we announced what we're doing and then kinda went into stealth mode. But what we're doing is still what this website said in March, which is building a start-up and a community building trustworthy, open source AI. And what that means is we really believe, as has been the case with the last 30 years of the Internet, that you need a set of open building blocks, whether those open building blocks in the past have been Linux or the web stack, HTML, JavaScript, all those things, or now, you know, things like open source large language models, if you want a broad set of people to be able to innovate, and if you want things to be transparent enough for people to be able to keep things safe. And so that's what we're in the business of doing. Right now, it's a kinda core team of about 20 engineers really focused on taking the wave of open source large language models that have come out. Yes, Llama, but not primarily Llama. You know, also things like Eleuther, things the Allen Institute is working on, things like Falcon, which has come out of the UAE.

Mark Surman [00:19:18]:

You know, you've seen this growth of basically open source clones of things like OpenAI or of GPT-X. And what we're working on is how you make them safe and usable. So you can easily download those things, but there's not that much you can use them for out of the box. And so just like 20 years ago, as more and more people turned to Linux, Linux distributions emerged as a way for you to quickly get Linux on your system, configure it to do things, configure it to add other functions around the edge. So we're working in Mozilla AI to do that: helping people take open source large language models, train them on their own data, fine-tune them so they're super accurate and they're not hallucinating and they're not lying, and evaluate them to make sure that they're safe.
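To make that workflow concrete, here is a minimal sketch of the pattern Mark describes, assuming the Hugging Face transformers and datasets libraries: download an open-weights model, fine-tune it on your own data, and keep the resulting weights on infrastructure you control. The model name and data file below are placeholders, and this is not Mozilla AI's actual tooling.

```python
# A minimal sketch of the workflow described above, assuming the Hugging Face
# "transformers" and "datasets" libraries. The base model name and the local
# data file are placeholders, and this is not Mozilla AI's actual tooling.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "tiiuae/falcon-7b"        # any open-weights base model you trust
TRAIN_FILE = "my_company_docs.jsonl"   # hypothetical local dataset: one {"text": ...} per line

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Load and tokenize your own data locally; nothing leaves your machine.
dataset = load_dataset("json", data_files=TRAIN_FILE, split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # Causal language modeling: predict the next token, no masked-LM objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# The fine-tuned weights stay on infrastructure you control.
trainer.save_model("finetuned-model")
```

In practice you would also run evaluation passes for accuracy, hallucination, and bias before putting a model like this in front of users, which is the "evaluate them to make sure that they're safe" step Mark mentions.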

Is AI too accessible?

Jordan Wilson [00:20:08]:

You know, Mark, I'm super curious because, from my vantage point, right, I've always been kind of a tech enthusiast. I've been building websites for, like, 25-plus years, so I've always been a little bit of a geek and, you know, been putting my foot in the AI waters for a couple of years now. But it seems like right now, it is easier than ever for an individual, an entrepreneur, a small business owner to leverage AI, right? So even some of the things that we're talking about, you know, open source large language models like Meta's Llama: you don't have to be a developer or an AI expert to necessarily use these tools, right, which I think is both exciting and brings, I think, a lot of optimism for business growth. But at the same time, a lot of times in the past, because of those kind of higher hurdles to clear, you would generally have someone who's maybe more, quote, unquote, of an expert using and leveraging these tools. So are there actual downsides, specifically when it comes to trustworthiness and, you know, safe and effective tools, that it's almost so easy that anyone can use it, that maybe people are just making mistakes when it comes to their data, when it comes to creating trustworthy models?

Mark Surman [00:21:26]:

Well, that's exactly the question. And I think that question about whether there are risks that come from how easy this is to use, and anybody can use it for anything, emerges with both the closed models and the open source ones. I mean, you can bend OpenAI or Bard or, you know, GPT to do lots of different things, as well as bending the open source LLMs to do lots of different things. And that's why we're focused on trustworthy AI. Possibly that's why people, you know, got rid of Sam Altman at OpenAI over the weekend, right? There are big questions around trustworthiness and safety. That's a space for innovation, and so we're trying to kinda build that safety and usability layer on top of open source large language models. Again, so they're not only accurate.

Mark Surman [00:22:11]:

Like, let's say I'm doing cancer research with an AI research assistant. I wanna know that it's not hallucinating, right? We know that these LLMs hallucinate. I wanna know that what it's giving me back, from looking at this whole body of cancer research data that I've collected, is something that I can actually trust to go design my next experiment. But also, you know, if I start using that in social circumstances where I'm, you know, judging people's loans or housing, any of these things where discrimination emerges, I wanna be able to double-check, to evaluate, that it's not discriminating against people. So all of those things we've been talking about, that broadly people talk about as AI safety or trustworthiness, are real areas where we need innovation as well as guardrails, but we need innovation. So that's what Mozilla AI is really focused on: making sure this stuff that's easy to use now is also easy to use in a way that is trustworthy and safe.
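One simple way to probe for the kind of discrimination Mark describes is a counterfactual check: ask the model the same question while varying only a demographic detail, then compare the answers. The sketch below assumes the Hugging Face transformers library and a placeholder open-weights instruct model; it illustrates the idea rather than any specific Mozilla AI evaluation tool.

```python
# A minimal sketch of one basic discrimination check, assuming the Hugging Face
# "transformers" library and a placeholder open-weights instruct model. This is an
# illustration of the idea, not Mozilla AI's evaluation tooling.

from transformers import pipeline

generator = pipeline("text-generation", model="tiiuae/falcon-7b-instruct")

TEMPLATE = (
    "A {applicant} with a stable income of $60,000 and no missed payments "
    "applies for a $10,000 loan. Should the loan be approved? Answer yes or no, then explain."
)

# Vary only the demographic detail; everything else in the prompt stays identical.
for applicant in ["young woman", "elderly man", "recent immigrant"]:
    prompt = TEMPLATE.format(applicant=applicant)
    output = generator(prompt, max_new_tokens=80, do_sample=False)[0]["generated_text"]
    print(f"--- {applicant} ---\n{output}\n")

# If the recommendation changes only because the demographic detail changed,
# that is a signal of discrimination worth investigating before deployment.
```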

Resources for AI best practices

Jordan Wilson [00:23:06]:

Great question here that I'll bring up, and I was also curious about this myself, because, Mark, we were talking a little bit earlier about even, you know, the Biden White House with their executive order, kind of on AI regulation. I'll save my thoughts on that for another day, because there's some things in there that are great, but there are some things in there that are very, very broad, and a lot of gray area, right? But Splittlana here is asking: are there any resources or reports Mozilla AI has published on safer AI best practices?

Mark Surman [00:23:38]:

The thing to look at now, and you can pretty easily find it if you just search for this, you could Bing it or Google it or whatever you search in. I mean, it's an interesting thing, I've been trying out Bing to see how they're using generative AI. But anyways, the thing you could get now is a paper from 2020 called "Creating Trustworthy AI," by Mozilla, that myself and a number of other people wrote. And it was an early take, before we were in this frenzy, on some of the things you need to do, which are, you know, shift industry practices and norms in terms of trustworthiness, build different building blocks, help consumers to be more aware of what they're asking for or getting in AI, and then help regulators make better choices. So you can take that paper. I actually think a lot of what's in there is really durable even though it's three years old. We're coming out with a new version of that paper, or a progress report on that paper, I guess, in January.

Mark Surman [00:24:30]:

And then Mozilla AI itself, because we're building some pretty deep tech, has been quieter and in the background. They've been hiring engineers, setting up infrastructure, running experiments, but I think you can expect more from them also in the new year. So, you know, after we kinda rest from a year of AI over Christmas for a second or two, I think you'll see more reports and also some software from us early next year.

Jordan Wilson [00:24:54]:

Alright. Fantastic. And if you're listening on the podcast, don't worry, check the show notes. We'll make sure to include that paper that we have up on the screen now from Mozilla. And when it comes out, I'll hound Mark and make sure that we can send out that new version to you as well. Absolutely. You know, Mark, something else that I'm very curious about, because when we talk about, you know, trustworthy AI and ethical AI tools.

AI concerns to be aware of

Jordan Wilson [00:25:22]:

It almost implies, right, without saying it, that maybe some of the tools out there aren't safe, or maybe aren't ethical. What are some of the biggest concerns that you have that, you know, most of us everyday people who aren't studying this should be aware of, when it's maybe like, hey, it's not safe or maybe it's not ethical to do a, b, and c? What are those things that we should be looking out for?

Mark Surman [00:25:45]:

Well, I mean, it's a long list.

Jordan Wilson [00:25:49]:

Some of them like, how much time do we have here?

Mark Surman [00:25:51]:

How much time do we have? Yeah. Some of them are really at the societal level, but then they affect us individually, and I kinda go with this at the individual level because that's what you're asking. You know, so at the societal level, we have all these questions of misinformation, and it's impacted huge numbers of things, from health issues to democracies. And I guess at an individual level, just be hypercritical, especially if you're gonna be sharing content, about the source of the content. I mean, it sounds boring, but that's an AI issue, and it's an AI issue at a huge scale. So being critical about the authenticity of media material that you're sharing: basic, but important. And then I think, you know, when you think about more the generative AI tools that are emerging, just be conscious of what data, especially if it's not yours.

Mark Surman [00:26:42]:

It belongs to your company, it belongs to your community. Be conscious of what data you're putting in there. And that's, you know, one of the reasons we're interested in open source large language models: you can set them up on your own infrastructure, or on some infrastructure you control, and really control where the data goes. So I think just be conscious that if you're using data that's not yours to use and you're throwing it into AI systems, you're basically offering that into the, you know, the training data stream of those systems. And potentially, although I think this is not as common right now, you know, somebody's gonna surface that data and find a fact that might be a secret.
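As a concrete illustration of controlling where your data goes, here is a minimal sketch, again assuming the Hugging Face transformers library and a placeholder open-weights model, of running inference entirely on infrastructure you manage, so prompts containing company or community data never leave it.

```python
# A minimal sketch of local inference, assuming the Hugging Face "transformers"
# library and a placeholder open-weights model. The point is the one made above:
# the model and the prompts run entirely on hardware you manage, so company or
# community data never enters someone else's training pipeline.

from transformers import pipeline

generator = pipeline("text-generation", model="tiiuae/falcon-7b-instruct")

prompt = (
    "Summarize the following internal meeting notes in three bullet points:\n"
    "Q3 revenue was flat; the team proposed two new product experiments; ..."
)

result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```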

Jordan Wilson [00:27:21]:

Yeah. And, you know, something that I'm thinking a lot about now, Mark, is how our everyday usage of generative AI is probably going to change, because for the last year, or three to four years depending on how early of an adopter you are, you were logging into a system, right? You were opening a browser, logging into a system, and using generative AI. But now, as we see with, you know, systems like Microsoft 365 Copilot, or Apple's forthcoming Ajax GPT, whatever you wanna call it, whatever their system may be, it seems like generative AI is going to creep onto our devices, onto our operating systems. You know, we have the Humane pin, right, which is gonna be following us around everywhere. How do you prepare for that type of future? Well, in some cases, it's not the future, because, you know, 365 Copilot has already started to roll out. But how do you prepare for it when it's just everywhere?

Mark Surman [00:28:23]:

I think there's the same set of questions, right? Who's in control, and if it's somebody other than you, in whose interest is it being run? And so that's where I actually think, in the long run, and I don't think this is the focus of Humane, I have a real interest in on-device personal AI that you control, that is sort of sovereign in some sense, and open source lets us get to that future faster. So if, you know, there's really something totally on my phone or on my pin or whatever, and I could control the relationships it has with all the thousands, millions of other AIs out there in the world, where it's kinda my agent, like, that could be a good world. But where that is something that is run by a cloud services company that's vertically integrated, that is in everything from social media to back end to blah blah blah, that sells advertising, probably the way that thing is running isn't actually designed for my benefit. So I think a lot of this is who's in control and who benefits as we move into this era. And I think there is a possibility, and those of us working on the kinda open source, hybrid public interest, commercial kinda space are trying to drive that possibility, that we could be in more control, but it's not foregone.

Mark Surman [00:29:43]:

In fact, you know, things are trending in the opposite direction.

Mark’s final takeaway

Jordan Wilson [00:29:47]:

Alright. So, Mark, we've already talked about a lot. We've talked about how some of the recent news, like what's going on at OpenAI and, you know, Meta disbanding their kind of responsible AI team, affects us. We've talked a little bit about Mozilla AI and, you know, kind of even the future of generative AI and how we can have safer and more ethical systems. But to kind of wrap this up, what is the one takeaway that you hope people leave with, whether it's a business leader helping to make decisions on generative AI usage at, you know, a Fortune 500 company, or maybe a solopreneur trying to figure out how to use it to grow their new start-up? What is that one piece of advice, or that most important takeaway, that you can give to people to make sure that they have more safety in their generative AI kind of tool stack, and just tools that they can trust?

Mark Surman [00:30:38]:

I don't know. I guess it feels like we've talked about this in different ways, but it is really: make sure that you're super critical about in whose interest this set of tools you're buying is operating. In particular, keep control of your data, or know how your data is being used, because we're moving really fast into a place where we're kinda signing a lot away, and it doesn't have to be that way. And then start to look for providers, and we will become one of them, and I think you're starting to see more and more of them, who bring open source as an option that you can use as a way to keep more control over your own infrastructure and your own data.

Jordan Wilson [00:31:18]:

So much good advice in this episode about how we can have safer and more ethical AI tools that we use every single day. Can't thank you enough for joining the show, Mark. We really appreciate your time.

Mark Surman [00:31:31]:

Thanks so much, Jordan. It’s been a blast.

Jordan Wilson [00:31:33]:

Hey, and as a reminder, yeah, there's a lot more AI news we couldn't get to today, and a lot more from Mark and Mozilla and what they're doing even at Mozilla AI. So make sure you go to youreverydayai.com and sign up for that free daily newsletter. We send the show notes every day as well. For more on that, hope you can join us tomorrow too. We're gonna have a dedicated conversation about what the heck is going on with this Shakespearean tragedy on fast forward at OpenAI. So thank you for joining us today, and we hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
