Ep 117: Making AI Design User Friendly
Join the discussion: Ask Yaddy and Jordan questions about AI design
Check out the upcoming Everyday AI Livestream lineup
Connect with Yaddy Arroyo: LinkedIn Profile
In the rapidly evolving world of artificial intelligence (AI), businesses are finding new and innovative ways to enhance user experiences and accessibility. The latest episode of the “Everyday AI” podcast explores the crucial topic of making AI design user-friendly, highlighting its impact on various industries and the potential for transformative change. Let’s delve deeper into this subject matter, focusing on how user-centric AI design can unlock business success.
Enhancing Accessibility and Inclusivity:
The podcast episode emphasizes the importance of accessibility in AI design, showcasing how assistive devices can empower individuals with diverse needs. From pointer devices controlled by forehead movements to voice commands and sign language recognition, the broad spectrum of accessibility options allows businesses to cater to a wider range of users.
Driving Transformation with Multimodal AI:
Generative AI is primarily being used to enhance associate productivity within enterprise organizations. The podcast underlines how the technology is being used to streamline tasks, improve content quality, and sharpen search functions. For instance, natural language processing in chatbots enables accurate comprehension of user intent, leading to better customer experiences.
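The intent-comprehension idea above can be sketched in code. This is a deliberately minimal illustration, not how production chatbot NLU works: real systems use trained language models, while this toy version only scores keyword overlap. The intent names and keyword sets are invented for the example.

```python
# Toy illustration of chatbot intent matching via keyword overlap.
# Real systems use trained NLU models; this is only a sketch,
# and the intents and keywords below are made up.

INTENTS = {
    "check_balance": {"balance", "account", "much", "money"},
    "transfer_funds": {"transfer", "send", "move", "pay"},
    "report_fraud": {"fraud", "stolen", "unauthorized", "suspicious"},
}

def classify_intent(utterance: str) -> str:
    """Return the intent whose keyword set best overlaps the utterance."""
    words = set(utterance.lower().replace("?", "").split())
    scores = {name: len(words & kws) for name, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(classify_intent("How much money is in my account?"))      # check_balance
print(classify_intent("I think there's a suspicious charge"))   # report_fraud
```

Even in this toy form, the design point survives: the user types a plain sentence, and the interface, not the user, carries the burden of mapping it to an action.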
Empowering Education through AI:
One of the significant benefits discussed in the episode is the potential of AI to democratize education. By leveraging AI-driven technologies, businesses can create more equitable learning opportunities and bridge the gaps in educational access. Adaptive learning, personalized experiences, and intuitive interfaces can empower learners of all ages and backgrounds, fostering a future where knowledge is accessible to everyone.
Designing Human-Centered AI Systems:
The episode emphasizes the significance of placing humans at the heart of AI design. User experience plays a pivotal role in determining the success of AI applications and solutions. As businesses develop conversational chatbots, AI-driven banking systems, and more, the importance of intuitive interfaces cannot be overstated. User-centric design enables businesses to deliver seamless experiences, gain customer loyalty, and ultimately drive growth and success.
In the dynamic landscape of AI, user-centric design is a catalyst for driving business success. Leveraging the principles of accessibility, multimodality, and generative AI, businesses can unlock transformative potential and create a more inclusive and equitable world. By placing humans at the center of AI design and embracing innovative interfaces, businesses can forge stronger connections with customers, enhance education, and shape the future of industries. Let’s embrace the power of user-friendly AI design and pave the way for a brighter future.
Topics Covered in This Episode
1. Multimodal Input Methods and AI
2. Accessibility and Assistive Devices
3. Transforming the Internet and AI Capabilities
4. Generative AI Tools and User Experience
Jordan Wilson [00:00:17]:
How can we make design more accessible for everyone? And what does that even mean for generative AI? Well, it means actually a lot more than you might think, and I'm very excited to have an Everyday AI regular and fantastic contributor on the show to help walk us through and help us better understand what it means to have more user-friendly design in generative AI. I can't wait to talk about it. Welcome. My name's Jordan Wilson. This is Everyday AI. It's a daily livestream, podcast, and free daily newsletter.
Jordan Wilson [00:00:53]:
Y'all not reading the free daily newsletter? You gotta start doing that. But we're here to help you better learn and better leverage generative AI because, whether you know it or not, we're all gonna need it. We're all gonna need it. If you're not already using generative AI in your role, it's only a matter of time, probably. But before we talk design, UI, UX, making design more accessible, I can't wait. Get your questions in. What do you wanna know about design's role in generative AI and how it can be more accessible?
Daily AI news
Jordan Wilson [00:01:26]:
Get your comments in now. I can't wait. But first, as always, let's start off with what's going on in the world of AI news. Here we go. Let's talk about it. This one's interesting, y'all, because there could be a new chipmaker in town. Alright? We're not talking potato chips. Obviously, we're talking the chips, the GPUs, that power generative AI.
Jordan Wilson [00:01:48]:
So according to new reports, very new reports, less than a couple minutes old, OpenAI is actually exploring the possibility of creating its own chips to be used for generative AI and has already been considering acquisition targets to do so. According to this new report, OpenAI has considered different paths, to the point where it has already performed due diligence on potential acquisition targets. This is extremely interesting because right now, especially here in the US, we've already seen that generative AI is really powering our economy, but we all need these powerful GPU chips that power all of these different systems, these large language models like ChatGPT and Google Bard. We're running short on chips, so keep an eye on that to see if OpenAI is gonna be a player in that space. Next, DALL-E 3 released inside of ChatGPT. Well, not for everyone, but for many users already who are on the ChatGPT Plus plan, $20 a month. Open it up. Go refresh your browser. Look.
Jordan Wilson [00:02:54]:
You might have DALL-E 3, like I did, in your modes. So go in. Go to ChatGPT. Check your modes. See if DALL-E 3 is there. Pretty big news. So, DALL-E 2, OpenAI's previous model, has been out for a very long time. And other AI image-generation companies, you know, such as Stable Diffusion and Midjourney, have made great improvements.
Jordan Wilson [00:03:20]:
A lot of people were saying, alright, well, hey, this DALL-E 2 thing, not so good anymore. But DALL-E 3, at least inside ChatGPT, just released, just started getting rolled out yesterday, so, exciting news to see what happens there. And last but not least, speaking of AI images, one thing a lot of the big companies have been talking about is creating AI watermarks, but a new study from some US universities says those AI watermarks probably aren't even going to work. So this new study said it is not possible to create, in quotes, reliable watermarks for AI content. So, multiple American universities have found that it's not possible. Alright. Alright.
About Yaddy and being a VUI designer
Jordan Wilson [00:04:09]:
That was a mouthful. That was a lot. That was a lot. But I'm excited, y'all, to talk AI design. It's actually so important. You know? There's a big difference between when you go to experience a new generative AI tool and it's disastrous versus when it just feels like it just works, like it gets you. That is what UI, UX, and accessible design is all about, and I'm excited to bring on our guest for today. So please help me welcome to the show.
Jordan Wilson [00:04:40]:
We have Yaddy Arroyo, who is the principal VUI designer at Truist. Yaddy, thank you for joining
Yaddy Arroyo [00:04:48]:
us. Hi there, guys. Thank you for having me.
Jordan Wilson [00:04:50]:
Alright. I'm excited. I'm excited for this. But, Yaddy, just real quick, tell us a little bit about, you know, what you even do as a VUI designer. You know, I even butchered it up. Like, I thought it was "Vue." So I'm not even in the know, so let everyone know what a VUI designer even does.
Yaddy Arroyo [00:05:07]:
Yeah. VUI designer usually means, like, conversational design, which is a specific subset of AI. Now it's kind of a misnomer, because that's what my company named me, but it's really multimodal designer. Right. So designing for, like, multiple interfaces, which could include voice. So VUI stands for voice user interface. So that's kinda like the acronym. But yeah.
Yaddy Arroyo [00:05:28]:
And, basically, what I do right now currently is different than what I've done in the past. Currently, I'm focusing on the conversation with the user in chatbots at a bank. But in the past, I've actually created generative AI, like social listening tools, and other types of telematics, which is working with cars and stuff and using voice commands. So I have some experience just all over, from, like, a little bit of health care to a lot of telematics to a lot of, like, AI on the enterprise side. So it doesn't see the light of day in terms of the consumer side. Right? These are power tools that people on the inside use to be able to create visualizations of what people are saying or what people are thinking or what people are mentioning. Yada yada. Right? Like, that was just one example, but yeah.
Yaddy Arroyo [00:06:08]:
No. That’s what I work on. You know, I got into it by accident just because I like design.
Jordan Wilson [00:06:12]:
You know, speaking of that, I do think in general, design is becoming more and more accessible to people of all ranges, backgrounds, and skill sets. And maybe it has something to do with generative AI as well. But can you talk a little bit about accessibility in design and even how it's been changing over the years?
Yaddy Arroyo [00:06:37]:
Yeah. No. So what I like about product design is you can actually create a product that most, if not all, people can use. Right? So I'm in banking. So that's, you have to understand my mindset. Banking is very, like, narrow in terms of regulations and data protection and accessibility. We have to meet a certain standard because everyone deserves money. Everyone needs money.
Yaddy Arroyo [00:06:57]:
Everyone needs to pay for stuff, and it doesn't matter if you're blind. So the reason I do what I do is because there's people that wouldn't be able to do it otherwise. Right? So if you think about voice command, that's super important. Voice, you know, being able to talk it through if you're blind. Right? Like, you need to be able to bank. So that's one example of how I kinda use that motivation of, like, alright, banking kinda sucks. But what's cool is we have to address issues that maybe other people push to the side, that they don't prioritize.
Yaddy Arroyo [00:07:26]:
Right? So with accessibility now with AI, we can do so much stuff. Right? I mean, Dr. Harvey Castro can tell us, you know, with medical stuff. Right? You can use a lot of information. You can automate it, make sure that you can kinda pick up patterns, stuff that we used to have to do by hand with metrics and analytics. We can now have a machine do it for us. So, like, there's a lot of different applications of AI, gen AI, and product design in general where we can actually make humanity better. Right? So that's where I kinda come in. Like, I'm always human first.
Yaddy Arroyo [00:07:56]:
You know, the machines are cool, but I'm human first. And if people are afraid of the takeover, I'm not so much, because I feel that, like, we're already going through a rough patch, and we're only coming out of it from now. Right. It could get worse, but I think we're all aware now of, like, how we could make it better. I hope I answered your question.
Designing in multimodality
Jordan Wilson [00:08:13]:
No. You did. And, like, now I have so many other questions that are good. And I think, you know, for people who are joining us live and listening on the podcast, here's what I think Yaddy just slipped in there that is so, so important: accessibility. For most people, I think we don't think about it.
Jordan Wilson [00:08:34]:
And we take the fact that we have, you know, sight and hearing and touch for granted, but not everyone has that. So, you know, when we talk about how impactful generative AI can be, I do think it is probably the most transformational technology I've experienced in my lifetime, more so than the Internet. But, you know, when we speak about multimodality, so, you know, when it comes to text and images and voice recognition. Yaddy, I mean, how do you even go about it? You know, walk us through. Like, how does that process even work? Because it sounds, you know, extremely difficult, but so important.
Yaddy Arroyo [00:09:16]:
Well, okay. So I'd like to kinda, like, put it out there: you don't have to have a PhD to do what I do. Right? I'm one of the few people that don't, I feel. A PhD helps, but the thing is, I came up through product design. And, like, as you guys know, if you get in it early, then you have an advantage. Right? So that was my advantage, like, just wanting to learn. So right now, whoever is listening already has an advantage.
Yaddy Arroyo [00:09:38]:
They wanna know AI. They're on top of it. They're getting the source, you know, directly from you, Jordan, and directly from other people who listen. Right? So that's important, because education and curiosity are super important in our field. If you don't stay curious, if you stay kinda, like, mundane, you're not gonna be up on trends. You're not gonna be fun to work with. So I would say you don't require an education, but you do require curiosity. Right? So that's number one.
Yaddy Arroyo [00:10:03]:
And, yeah, I would say, you know, Dr. Alham Mahmoud mentioned something about, like, AI breaking both language and accessibility barriers. I would also say that design is one of the few fields where you can democratize education and, like, curiosity. Right? Because you don't have to be a PhD. You can just do it. You can just look at products. You can have critical thinking skills. The biggest thing that's important for creation in the AI field is basically abstract thinking. So it's the stuff that's currently not being done now.
Yaddy Arroyo [00:10:33]:
You can be a great graphic designer and be a UI person. You can be a great, you know, strategist and be a UX architect. But to be a good AI person, there's two skill sets. You have to get along with people. Right? And you have to be able to think abstractly without having to draw. You have to think in real time as people are talking: solutions, algorithms, or whatever. So I would say, like, people that like math and also people that like people could do this. And those could be two different people.
Yaddy Arroyo [00:10:59]:
Right? Yeah. You need those two different people to be able to create, but the biggest key is you have to be able to listen and advocate for the user. Right? Because the minute you forget your user, you forget what you're designing for. So I would say if you're a product designer, you could easily carry this over into AI design. Just focus less on UI because, like, AI is more a zero-UI interface versus having an actual interface. It's the brain. Right. So if you can create the brain and figure out how to work that out. There's a lot of, like, non-design in my work.
Yaddy Arroyo [00:11:28]:
There's a lot of negotiation. There's a lot of, like, not even manipulation, because that's horrible, but definitely, like, making a case for something and putting the user at the forefront.
Will chatbots become multimodal?
Jordan Wilson [00:11:39]:
Yeah. Absolutely. You know? And everyone joining us live, like Mike saying yes, be curious, and Bronwyn saying absolutely, you know, thinking out of the box. Yep. You know, Yaddy, it is very curious, or I'm very curious, because I've seen this trend specifically over the last month or so. Right.
Jordan Wilson [00:11:58]:
So with, you know, ChatGPT, you know, I think people are calling it GPT-4V for visual, with being able to upload photos and work directly, you know, being able to upload your voice and to be able to hear back. You know? But those are all features that have been on, you know, Google Bard and Microsoft Bing for a while. But is that just the future, then? Are we seeing that most, you know, generative AI systems are going to go multimodal because maybe it's
Yaddy Arroyo [00:12:28]:
Mhmm. Better? I think so. I think you hit on something. I think there’s a lot of stuff we don’t see. So I don’t have access to all of that. I have access to some of that. Right? And, yeah, I think you’re right. I think everybody’s going more towards the multimodal.
Yaddy Arroyo [00:12:41]:
When you launch something huge, sometimes you just gotta, like. It's like the iceberg. Everything's at the bottom. Right? But you just see the tip of it. I think that's how it is right now. Like, they're doing what they can with the resources they have. But behind the scenes, they know, like, well, we need to be able to do this and upload pictures. So if you think about multimodal, it's both multimodal input and multimodal output. Right? So someone who maybe is blind, or let's say has a hidden disability.
Yaddy Arroyo [00:13:08]:
Right? Let's say cognitive. You can't look at someone and know if they're cognitively disabled or not, but, you know, we need to make things so easy that anyone can use them. So that's another dimension of accessibility. Right? Like, you can have physical disabilities. You can have temporal ones, like seasonal ones, like, I'm pregnant, I can't move, I can't carry heavy stuff. Or you can have stuff that's hidden. Right? So when you design for stuff, I mean, I almost start with the outliers first.
Yaddy Arroyo [00:13:34]:
Like, can this person do this? And if they can't, how can we make it so they can? Right? So, like, I think you're on to something. Multimodality, I think, is a given. We may just not see what it looks like yet. I will tell you that the way the back end of OpenAI helps other products do that is awesome. Right. Because maybe OpenAI hasn't gone to it, but I've seen tools like Figma. Everybody knows Figma. They have this plugin called Magician, and that's generative AI.
Yaddy Arroyo [00:14:02]:
It gives you utterances. It gives you all this stuff that I use on a daily basis. So I'm like, it gives you icons. It gives you images. It gives you copy. Right. And it's in a very easy-to-use interface. Right? But it's not OpenAI 100%.
Yaddy Arroyo [00:14:15]:
Right? It's on the back end, and the front end is different. So, yeah, I would say that multimodality is definitely in our future, especially if we think about AI being the brain. The brain could be in anything. Right? Like, literally, it could be in a kiosk. It could be at an ATM. It could be in a computer. It could be on a laptop. It could be on a phone.
Yaddy Arroyo [00:14:34]:
It could even be on your watch. Right? So, like, yeah, it’s everywhere. Yeah.
Accessibility: improving input methods for assistive devices
Jordan Wilson [00:14:37]:
That's true. Yeah. I guess there is, you know, because on the watch, you can type in, you can talk. So, yeah, I didn't even think about that, you know, and how this has probably already been in our lives for a very long time. A listener says: love the topic, spend a lot of time thinking about UI and the user experience with AI in the future. My question would be, what would you want your UI experience to be, ideally, on the Internet going forward? Great
Yaddy Arroyo [00:15:12]:
question. Yeah. Accessible. You know? There's so many things that aren't accessible. How do blind people use ChatGPT now? That's what I wanna know. And I bet anything there are people figuring it out, because I have a lot of friends that are, like, in tech. They're not gonna stay behind. They'll figure it out.
Yaddy Arroyo [00:15:26]:
But what if they didn't have to? Right? So I guess, what would it be? I think I would want it to be equitable. I would want something that can be easily shared with any type of person, with any type of anything. Right? Right now, it's one-dimensional. I feel like that's the Internet. It's one-dimensional. But, really, with AI, you're kind of flipping it on its head, and it's like, well, what if the Internet was just different? Right? Like, the Internet can exist anywhere as long as you have Internet connectivity. Right? So I kind of envision it everywhere. That's my vision for the future.
Yaddy Arroyo [00:16:00]:
It would be everywhere, like IoT, the Internet of Things, and all that. But at the same time, safeguarding information, because that's what's important to me. What a lot of people miss when they product design is they think, oh, let's add all this information, we've got all this user data. It's like, no. No. We're not Meta. Like, let's, like, keep our, like. Yeah.
Yaddy Arroyo [00:16:16]:
Let's be respectful to the user. So I think that's the only caveat I would have. It's like, I'd want my privacy respected and to be allowed to not have everything tracked. Because with AI, you can have everything tracked, and I'd rather not. Right? So that's where
Jordan Wilson [00:16:31]:
a good point. Yeah. It's almost like the more AI is involved in our lives and everywhere, the less sensitive or the less cognizant we become with our data. Right? Like, it's like before. It's like, alright, you know, Google Search, do I allow it to have my location? Yes or no. But what about when I'm using four, five, eighteen different AI, you know, software tools and systems? It's like you lose track.
Yaddy Arroyo [00:16:59]:
Mhmm. You kinda become immune. So that's another problem we have to be cognizant of, because I think if you grew up in the age where you always had a computer, it's not an issue. You're like, that's life. Like, I'm used to being tracked. I did not. I grew up with a Tandy. Right? So I'm like, you know what? No.
Yaddy Arroyo [00:17:15]:
No. No. No. I remember when I used to be private. That's part of the reason I hide my face: I don't want people to know me unless you know me. Right? And the same thing with my data. I want a company to know me through my actions and me buying products, not through, like, marketing and, like, tracking me through all these different websites. Right.
Yaddy Arroyo [00:17:31]:
It's okay to track me if I know it, but just don't track me if I don't, and there's a lot of gray area around that. So I would say data is a huge consideration with AI.
Jordan Wilson [00:17:40]:
Yeah. It is strange, the polarity of that, because it's one of the most important things, and it's the thing that the end user probably thinks the least about. Another good question here from Dr. Mathana. Thank you for this. He said, wanting to get your feedback on this: the chat interfaces of ChatGPT, Bing, Bard, Claude, Meta, all of these, he said, all have the same text-based chat interface. And then he's asking, is that the end-all for generative AI, or what other interfaces can we imagine for interacting with generative AI applications?
Yaddy Arroyo [00:18:18]:
Such a good question. We can get crazy here. Like, I have a couple of patents on the way, right, that don't involve any type of interface. It's zero UI. So what does that imply? It implies either hand motion, or facial recognition and analysis. Right? I'm not gonna say what the patent's on. I will tell you that it's zero UI. Right? So imagine being able to communicate with the system.
Yaddy Arroyo [00:18:43]:
And this is where accessibility comes in. Right? Currently, if I have an assistive device, like, there's people that can't move their arms and can't do whatever, they have, like, a pointer on their forehead, and they have to go one by one and use their forehead to point at stuff. Right? What if I made that input easier? What if it's, like, based on blinks? Or, like, you could use Morse code nowadays. Right? There's different ways of inputting. So I would say that they're probably limited by, like, the immediacy of having to put something out there, and the easiest thing to put out there is text. Right? That doesn't mean they're not working behind the scenes to be multimodal. I imagine a lot of them are.
Yaddy Arroyo [00:19:21]:
Right? I'm imagining, like, Bard for sure. ChatGPT looks like they're working on it. You know, and then Meta, I mean, I'm surprised they haven't had something already, but I'm pretty sure they have to. Right. And multimodal could be as simple as, like, sign language. Right? Like, nowadays, you know, AI can read sign language almost effortlessly because of all the, you know, information that's out there that it can gather in real time about how people sign and the different variations of it. So, yeah, I would say that I think the future is multimodal. It's like we just haven't seen under the hood yet.
Yaddy Arroyo [00:19:53]:
Right? Because it takes forever to launch something like that.
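The Morse-code input idea Yaddy describes is concrete enough to sketch. Assuming an assistive device that reports short and long blinks as dots and dashes, a decoder could map blink sequences to text. This is only an illustrative toy; the timing detection, digits, and punctuation a real device would need are omitted:

```python
# Sketch of Morse-code input for an assistive device: the device
# reports short/long blinks as "." and "-", and we decode them into
# text. Timing detection, digits, and punctuation are omitted.

MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode_blinks(sequence: str) -> str:
    """Decode space-separated Morse letters; '/' marks a word break."""
    words = []
    for word in sequence.split("/"):
        letters = [MORSE.get(code, "?") for code in word.split()]
        words.append("".join(letters))
    return " ".join(words)

print(decode_blinks(".... . .-.. .--."))  # HELP
```

The design insight is that the AI "brain" stays the same; only the input channel changes, which is exactly the multimodal-input point made above.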
Jordan Wilson [00:19:57]:
Yeah. You know, you've had so many great insights, but I'm curious. And we kind of just started to talk about the future of generative AI and kind of the user interface. But what are you personally most excited about? You know, when it comes to, you know, UI, UX, designing these generative tools, or maybe something you're excited about on the back end as a user. But, you know, with all these developments and all of these new and more accessible ways that generative AI can happen in our daily lives, what gets you excited about it?
Yaddy Arroyo [00:20:35]:
So much. I would tell you, the news you just announced made me excited, because we need to have more competition in the chip space. That's keeping prices up. Right? Right now, I think NVIDIA got raided. No. I think I heard it from you, actually. That, like,
Jordan Wilson [00:20:48]:
big help. In in France?
Yaddy Arroyo [00:20:50]:
France? Yeah. Yeah. So that was interesting because, like, I'm actually a big NVIDIA fan. I have stock in it. Same. So I'm like, yay. But at the same time, though, I'm like, well, we do need competition. Right? Like, Apple already kind of figured it out.
Yaddy Arroyo [00:21:03]:
Like, we have to make our own chips in order to compete, or else we're gonna be laggards. And if priority is given to certain companies for certain chips, you're already not democratizing the creation. Right. So I think I'm excited actually for the back-end stuff, right, more than the front end, because the front end, I feel, always has to catch up to what the back end can do. Right. We're limited. Like, as a designer, I'm limited to what my devs say they can do. Right? The cool thing is, when you work with really awesome developers, they don't care.
Yaddy Arroyo [00:21:28]:
They're like, I'll try. You know? And that's how you get innovation. Right? But there's a lot of negotiations. So I would say I'm excited about the back end. I'm excited about, like, bringing this technology to the common folk. I think that's why it's super important to start being curious now, because if you wanna get into product design, you can do it. Just get into it. Just start doing it. Start volunteering.
Yaddy Arroyo [00:21:46]:
Do something where you get, like, practical experience. There's a lot of theoretical, you know, non-applied people that are like, oh, this is what AI is. But, like, start doing AI. Like, start either doing gen AI, like, doing prompts and becoming a prompt engineer. Or, like, start thinking about how you can simplify those prompts, which, by the way, that's part of my job, Jordan. Right? Like, in my world, I wouldn't need a sophisticated prompt engineer. It should be easy for anyone to use. Mhmm. But these tools are so complex that you need to specialize in prompting just to be able to get, like, the output you want.
Yaddy Arroyo [00:22:20]:
But in theory, if you have a good designer, you could, like, tune for it. You can make sure to customize it. Because that's the thing. I do copy all day. I do machine learning training all day. Like, part of my job is designing, but another part is negotiating. The other part is actually writing and saying, okay, this is how the machine learning is gonna be trained.
Yaddy Arroyo [00:22:37]:
So I do a lot of the machine learning training with copy and different utterances. And, I mean, I could go on and on about that, but I don't wanna bore you guys. I just wanna say that I'm excited about the back-end stuff. Like, super excited, because the more we can, like, bring that up to the surface, bring it up to the common folk, then all of a sudden, anybody could be anything. Right? Like, that's why I love AI, technology, computers, the Internet: because it's like the first field where you don't have to go to school to do it. You just have to, like, pay attention and take notes. Yeah. You know? And be willing to learn.
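Yaddy's point about designers simplifying prompts, so users never have to be prompt engineers, can be sketched as a thin template layer: the user supplies one plain question, and the designer-maintained template carries the tone, format, and audience constraints. The template wording and function name below are invented for illustration; they are not any real product's prompts.

```python
# Sketch of hiding prompt engineering behind a simple interface.
# The user types one plain question; the template (tone, reading
# level, length constraints) is owned by the designer, not the user.
# Template wording here is purely illustrative.

TEMPLATE = (
    "You are a helpful banking assistant.\n"
    "Answer in plain language at a 6th-grade reading level.\n"
    "Keep the answer under 100 words.\n"
    "Question: {question}\n"
)

def build_prompt(question: str) -> str:
    """Wrap a raw user question in the designer-maintained template."""
    return TEMPLATE.format(question=question.strip())

print(build_prompt("How do I dispute a charge?"))
```

The design choice is that all the "awesome prompt engineering" lives in one maintained template, so accessibility improvements (tone, reading level) happen in one place rather than in every user's head.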
Limitations as a designer
Jordan Wilson [00:23:04]:
Right? That's another big one, always. Great question here from Fatima, saying: how are you limited as a designer? And whether you wanna take it personally or just for designers in general, what are those limitations?
Yaddy Arroyo [00:23:19]:
I'll take it personally. I'm limited by the people on the team, right, in terms of, like, okay, what can we do collectively? And that's not a bad thing. I mean, the thing is, you have to be able to, like, create together. So whatever you create is the best version that we could possibly create together. That's what I've learned through creation. Right? There are some people that I can create really cool stuff with. So when I don't work with them, that's a limitation. But I would say, like, in general, you're only as strong as your weakest link.
Yaddy Arroyo [00:23:47]:
Right? And I don't mean that in a bad way. I mean that in a, alright, if as a group you can't succeed, then that tells you something about the group. But if you can create and just do stuff, it's fine. Like, I never take things personally that I launch. I always see them as version one or an MVP. It's always gonna get better. Right? So I'm not limited by, like, oh, yeah.
Yaddy Arroyo [00:24:08]:
It sucks right now. Yeah. It sucks right now, but that's fine. It'll get better. Right? I'm more limited by, like, how far those iterations can happen. Because if we put something out and it's crap and we're not intending on changing it, that's a huge limitation on me. I'm like, oh, no. Like, we need to update. We need to make this better.
Yaddy Arroyo [00:24:24]:
We need to integrate user feedback and optimize. Right? Yeah. I don't know. I mean, also, I'm limited by regulation. I think that's another point I'd like to make: sometimes we'd like to do really cool stuff, but, right, like, really, some rules are smart, some rules are not smart. So having to deal with the non-smart rules of regulation sometimes hurts the users more than it helps.
Yaddy Arroyo [00:24:46]:
So it’s almost like how do I circumvent that and actually provide value. Right? So that’s how I’m limited.
Gen AI tools and UI/UX
Jordan Wilson [00:24:52]:
Yeah. It's funny, because sometimes, you know, hearing your response to that, it's almost like the limitations are actually the things that in theory are supposed to be safeguarding or supporting you. But I guess that makes sense. Right? But you know what? I actually want your personal take on this, because I was thinking about this the other day: it seems like, at least for me, if we're talking large language models, the best ones in terms of quality have the worst user interface or the worst user experience. So I'm curious, of all the, you know, kind of, quote, unquote, popular generative AI tools out there, what are some of your favorites in terms of, you know, the user experience or the design, and maybe which ones need a little love?
Yaddy Arroyo [00:25:45]:
You know what? I think they all need a little love, but only because they require, like, awesome prompt engineering. Mhmm. So that would be my one thing. Like, don’t make it so hard that you have all these parameters or whatever. Like, you should be able to use slots and be able to, like, do it in a different way. Right? Now, this is the thing: the LLMs are so powerful that, right now, that’s the only way you can do it.
Yaddy Arroyo [00:26:07]:
Right? So I’m not gonna judge anyone. I’m gonna be like, okay, in general, we need to make it more accessible to humans. Right? Because that’s the thing: you have to be a certain type of person to be a prompt engineer. That’s what I mean. I’m not, and I’m, like, in the field. Right? So that tells you a lot.
Yaddy Arroyo [00:26:23]:
I have ADHD, which means I have a short attention span, which means I have to be quick. I have to do this, I have to do that. Right? So, like, being able to address that type of cognitive issue upfront in, like, design and prompting, and being able to control the machine and what it outputs, I think that’s number one. And that’s just in general, not a diss on anybody. Just, like, in general, if we can make it easier so that I’m not doing all these parameters or doing this just to get this specific result, let’s make it easy. That’s where multimodal interfaces come in. Give people ideas.
Yaddy Arroyo [00:26:52]:
Give people, like, different ways of inputting. The other thing is, I think I’d like to give credit to Midjourney, because it looks like they have really cool ways of, like, making slight edits to an image that other people don’t have yet, like the lasso tool or whatever. And keep in mind, I wouldn’t know about this if it wasn’t for your show, because I’m, like, too busy creating a tool to look at other tools. So, like, this type of show is helpful for me, because I’m, like, oh, yeah, like, I’m a noob. Like, I don’t know this, but this is cool. Right? So, like, I will tell you, there’s a lot of cool stuff. I mean, I have to give props. It’s almost like what you mentioned.
Yaddy Arroyo [00:27:24]:
The bigger the LLM, the crappier, or more, like, condensed, or more, like, whatever, the UI is. But that’s fine. That’s fine, because we know where their focus is. The focus should be on the LLM because, by the way, that’s the back end. Right? Anybody can do the front end. So I’m having friends actually do amazing front ends. Like, there’s this one, Peter, I would probably tag him afterwards. He has Fluid Memory.
Yaddy Arroyo [00:27:45]:
He’s a founder for one of... I think I might have told you. Yeah. Yeah. We told him. Yeah. You know Peter? So you know that Fluid Memory is using, like, you know, GPT-4, I believe, as the back end. Right? But the face, like, the interface is beautiful. You’re able to tag stuff.
Yaddy Arroyo [00:27:58]:
You’re able to, like, snippet stuff. You’re able to organize information in such a way that, like, you know, it brings it up organically. So, yeah. No, I would say that, like, what the big tools are missing, the little guys in the industry are compensating for. Right? So that’s what I’m excited for. Like, seeing what the little guys, the non-OpenAI people, are looking at, because they’re using that back-end technology to really create awesome interfaces.
Yaddy’s advice for accessibility and AI
Jordan Wilson [00:28:24]:
You know, Yaddy, we’ve been literally all over the place. We’ve covered so much ground in this episode, from, you know, what UI/UX designers even do, to some of the implications and outside factors limiting the future of design, accessibility, so many things. But I wanna end with this. You know, if you’ve caught someone’s attention, whether they are an aspiring UI/UX designer, or maybe even someone who is now understanding for the first time what goes into all of these, you know, systems that they use, and all of the care and the work that goes in. What is your one takeaway message, or maybe piece of advice, for people just about accessibility and making better experiences within Gen AI?
Yaddy Arroyo [00:29:13]:
Care. Right? Just care. Just care about people, care about yourself. I mean, selfishly, I started in this field so I could create products for when I grew old. I was a cat lady, Jordan. I was like, I’m gonna grow old alone. I need to make sure I have, like, robotics taking care of me.
Yaddy Arroyo [00:29:29]:
And then it shifted once I had a son, to, like, oh, a son with special needs that may or may not need help all his life, to, like, oh, shoot, I need to create something that he can use. Right? So it went from a selfish endeavor to caring, to, like, oh, I wanna create something that doesn’t exist that should exist, because we need a more equitable world. And I think that’s what people are missing with AI. We can actually level the field in so many different ways. We can democratize education. Right? Before, we didn’t have education available. Like, the Internet made that feasible, but imagine AI. Right? So I think that’s what I see. Right? Like, just care and put humans first. Mhmm.
Yaddy Arroyo [00:30:10]:
And everything else will fall into place once you figure that out.
Jordan Wilson [00:30:13]:
Wow. You know, I didn’t know this morning when we started this show that I was going to leave feeling inspired about the future of, you know, user interface and user experience. Right? Because when you just look at it, it’s not something you think, alright, I’m gonna walk away from this inspired. But, Yaddy, I think you did that today. I think you opened our eyes to the importance of accessibility in UI/UX design and AI systems. Wow. Fantastic.
Jordan Wilson [00:30:45]:
Yaddy, thank you so much for joining the Everyday AI Show. Super appreciate it.
Yaddy Arroyo [00:30:49]:
Oh, man. I’m so honored. Thank you so much. Yeah. This is, I hate to say it, a dream. Dreams come true. No, you’re an awesome guy, Jordan.
Yaddy Arroyo [00:30:57]:
Thank you for having me.
Jordan Wilson [00:30:58]:
No. Absolutely. Absolutely. And this was, and I don’t say this often, this one was a masterclass. We covered so much. So if you didn’t catch it all, maybe you’re driving or walking your dog.
Jordan Wilson [00:31:10]:
Some people say they listen to the show when they’re, like, on the Peloton or bike. Good on you. But, you know, make sure you go sign up for the free daily newsletter. We recap it all. So, you know, there’s so many great insights in there. We’re gonna break down what it all means and how you can put it to use for you. So, Yaddy, thank you again for joining us. Thank you all for joining us, and we hope to see you back for another episode of Everyday AI. Thanks, y’all.
Yaddy Arroyo [00:31:33]: