Ep 129: AI Image Generators – The Good, The Bad, and The Awesome
Join the discussion: Ask Leonard and Jordan questions about AI image generators
Check out the upcoming Everyday AI Livestream lineup
Connect with Leonard Rodman: LinkedIn Profile
As technology continues to advance at an unprecedented rate, businesses must adapt and harness new tools to stay competitive in the ever-evolving landscape. One such tool that is revolutionizing the world of marketing and advertising is AI image generators. In a recent episode of the Everyday AI podcast, the hosts delved into the incredible potential of these generators, highlighting their impact on various industries and shedding light on their capabilities in enhancing creative visuals.
Advancements in AI Image Generators
Gone are the days when AI image generators were notorious for producing bizarre and unrealistic images. Thanks to significant improvements in machine learning algorithms, the quality of AI-generated images has skyrocketed. Today’s generators are capable of producing visually stunning and highly realistic images, often indistinguishable from real photographs. The hosts expressed their excitement about the close-to-reality results and usability of these cutting-edge technologies.
Recognizing the Importance of Subtle Details
One key aspect of AI image generators highlighted in the podcast is their ability to capture intricate details that make an image come alive. Whether it's the glint in an eye or the play of light, these generators have evolved to convey the nuances that bring realism and depth to visual content. This newfound precision allows businesses to communicate their brand narratives more effectively and engage their target audiences on a deeper level.
Unlocking Creativity: Exploring Style Expressive and Different Subsets
To ensure optimal image generation, the hosts suggested utilizing style parameters, such as the expressive style, and different model subsets within AI generators such as MidJourney. By experimenting with various prompts and exploring different image generation models, businesses can discover the perfect visual style that resonates with their audience. The hosts emphasized the significance of conveying desired outcomes to the AI system, incorporating elements like mood, lighting, and medium to create compelling visuals.
The Future of AI Image Generators in Marketing and Advertising
Reflecting on their extensive experience in photography, the hosts consider whether AI-generated images will remain distinguishable from real photographs. Drawing a parallel to the digital revolution in photography, where the advent of digital cameras expanded creative possibilities, the hosts suggest that anyone, regardless of expertise, will eventually become adept at identifying AI-generated visuals.
AI image generators are paving the way for a new era of creativity and innovation in marketing and advertising. As businesses strive to captivate audiences, AI-generated visuals offer a wealth of possibilities to communicate brand messages effectively. With advancements in AI technology, increased accessibility, and healthy competition, the potential for these generators to reshape the marketing landscape is immense. By embracing these tools, businesses can stay ahead of the curve and unlock unprecedented creative potential.
Topics Covered in This Episode
1. Introduction to AI Image Generators
2. The Evolution of AI Image Generators
3. Techniques and Tips for Effective AI Image Generation
4. Identifying AI-Generated Images and Legal Considerations
Jordan Wilson [00:00:18]:
There are so many AI image generators, and they're extremely capable. You know, you've probably heard of ones like MidJourney and DALL-E, but there's, I swear, new amazing AI image generators coming out almost every week, and maybe you're wondering, which one's for me? If so, today's episode is for you. Welcome. My name is Jordan Wilson, and this is Everyday AI. It is your daily livestream, podcast, and free daily newsletter, helping everyday people like you and I not just make sense of what's going on in the world of AI, but how we can make sense with it, or make dollars. Right? Make money, grow our businesses, grow our careers. So I'm extremely excited today for our guest, who is, I will say, bar none.
Daily AI news
Jordan Wilson [00:01:07]:
I'd say one of the best out there in sharing and creating AI art. So I'm extremely excited for that. But before we get into that, let's go over what's going on in the world of AI news. So, speaking of AI image generators, MidJourney, one of my personal favorites, is coming to the browser. So a big move from what I would say is the leader in the AI image generating field. So, everything was previously in Discord, which, if you are new to Discord or AI images, is essentially a separate program that you had to log into. So MidJourney, with this pretty big announcement, is saying that they're moving some AI image generating features inside of their actual website, so you don't have to go into Discord. Alright.
Jordan Wilson [00:01:53]:
Our second piece of news, also image related. So, poisoning your artwork, is that a thing of the future? So a new tool called Nightshade allows artists to poison their own artwork in order to disrupt and damage AI models trained on their work without their permission. So, Nightshade can cause AI models to malfunction by injecting little poison images into their training data. This to me is extremely interesting, but it could be, in my opinion, too little too late. So many of these big AI image generating platforms have already been trained on countless images. It might be too little too late, but I like it for artists trying to protect their future work. Alright. Last but not least.
Jordan Wilson [00:02:40]:
So some big news for the US becoming an AI destination worldwide. So the White House announced 31 designated tech hubs, with a focus on AI, to improve American competitiveness, and will provide grants for each hub. So, the Biden White House today announced that these hubs will receive grants between $40 million and $75 million each, and they are kind of scattered throughout the country. And the focus of these tech hubs will include everything from quantum computing and clean energy to, obviously, artificial intelligence. So, I'm excited to see what's gonna come out of that. And hey, I'm from Chicago. It looks like there's one coming to Illinois, so I'm gonna keep my eyes out on what's coming here.
About Leonard and Rodman.ai
Jordan Wilson [00:03:29]:
But you probably joined today not to hear about the AI news. Maybe you did. Thank you if you did. But you probably wanna know a little bit about AI image generators. I do. I spend a lot of time in these. But my guest today, Leonard Rodman, what's going on? Thank you for joining the show.
Leonard Rodman [00:03:55]:
Hey. So much of a pleasure to be here, man. I really appreciate it. Yeah. That's definitely interesting, like you mentioned, about the image poisoning. I mean, I would say the biggest takeaway from that, before we jump into the show, is that I don't think it's gonna last forever. Right? Like, as you mentioned, they've already trained those, you know, image datasets, and I don't expect that this image poisoning is gonna last and keep up with every technological advance, in the same way people are claiming they can detect AI. Google actually also said that they have a new project that can detect generative images with 100% accuracy or something like that.
Leonard Rodman [00:04:35]:
And I hear things like that, I’m like, that’s never gonna hold. Right? That’s never gonna
Jordan Wilson [00:04:40]:
Same. Same. You know, Leonard, it's kinda like when OpenAI came out with their, you know, quote, unquote, text generation detection, and they said, oh, we can help you detect if this text was AI generated, and then they quickly found out, like, no, they can't, so they shut it down. But, anyways, I digress. So let's actually quick tell everyone a little bit about what you do, you know, with Rodman.ai and in the AI space.
Leonard Rodman [00:05:07]:
Yeah. So I work for a tech company here in Chicago. We're both local. Jordan and I hung out the other day and had drinks, which is cool. In my spare time, my copious free time, I've been working on my personal website and portfolio, Rodman.ai. I initially started it to provide free diverse clip art, because as an instructional designer and L&D executive, I've always found that it's really hard to find diverse clip art. So that was like a pet project of mine. Banged out 10,000 free clip art pieces, and I still get probably a hundred visitors a day coming by to grab free clip art from me.
Leonard Rodman [00:05:44]:
And then I went from that into ChatGPT prompts and MidJourney prompts. And from there, I've been doing other types of image generators. So ChatGPT, as many of you are familiar with, is a text generator. So I'm creating prompts for different fields, different jobs, sort of trying to show how it can apply to different people doing different things, because I think that's really important, to show everyone how they can benefit from AI, how it's not just a them thing. It's a me too thing. Right? That I also can benefit, whoever you are. With MidJourney, I've been trying to sort of notice what I'm calling styles within the image generation, meaning that the algorithm is trained on certain images, certain words, and I'm trying to figure out which they are and what they produce. It doesn't always work out, and the idea is to basically provide people with keywords that they can use in prompting.
Leonard Rodman [00:06:41]:
So I just hit guide number 100 for MidJourney. I'll take a little break on that until they come out with MidJourney 6. So I'm focusing on Leonardo, which is a big one; they have a cool name, right? Just like mine. SD and SDXL, which is a Stable Diffusion and Stable Diffusion XL based image generator. There's also another new one called Musavir, which is coming from Dubai, which is very exciting. I believe it's also SD based, Stable Diffusion based. And then also looking at DALL-E, and Firefly with Adobe, which has also just come out with a new update. So Firefly 2, Leonardo 2, and really just trying to compare and contrast these image generators.
Leonard Rodman [00:07:27]:
Show what you can do in each of them, what you can’t do in each of them, and really just trying to educate people about how they can best use them and get really cool results.
Jordan Wilson [00:07:36]:
And, you know, hey, as a reminder to everyone, Leonard's gonna be dropping, I think, a ton of knowledge. I can't wait. I'm, like, ready to learn along with y'all. But, as a reminder, if you're joining us on the podcast, check out the show notes. Come and join us live, and thank you to our live audience. Maybritt saying, yes, this will be a great one.
Jordan Wilson [00:07:57]:
Been experimenting a lot. Looking forward to everyone's insights. Alar saying big on MidJourney. You know, Doug's just saying good morning. Doug, thank you for joining us. So get your questions in now, because, just FYI, I would have loved to have known Leonard, like, 9 months ago. Like, when I was first, you know, or, you know, when DALL-E 2
Good beginner AI image generator
Jordan Wilson [00:08:20]:
He spends so much time and creates such great guides. So get your questions in now. But maybe let's start with this, Leonard. We said, hey, we're gonna go over the good, the bad, and the awesome. So maybe let's start kind of in the middle. Like, what would you say if we're talking about, hey, what's a good AI image generator? What's maybe a good one for people to start with, to learn, if they're new to the space?
Leonard Rodman [00:08:45]:
Yeah. I would say what I’ve really been recommending to people the most has been, and am I frozen?
Jordan Wilson [00:08:51]:
No. I think you’re good.
Leonard Rodman [00:08:52]:
Oh, good. Gotcha. Alright. So, you know, I think people should really be starting with DALL-E. That’s my recommendation to them. DALL-E is really good at following instructions. You don’t need to sign up. You can use it on the web.
Leonard Rodman [00:09:05]:
So if you go to bing.com slash create, that'll take you to the DALL-E image generator, and let's see if I can share that briefly and get it to work.
Jordan Wilson [00:09:17]:
I got it. I got it pulled up right here.
Leonard Rodman [00:09:18]:
Bam. Perfect. Alright. So let's try generating something. I think what's most exciting about DALL-E is that it follows instructions. So if you tell it, like, I want a dog standing on a truck, holding a sign that says Happy Birthday. Okay.
Jordan Wilson [00:09:43]:
There we go. And if you're listening on the podcast, I'm literally typing in, and this is how easy it is, I'm typing in what Leonard is saying live. So, you know, we went to bing.com/, what was it, slash images or slash create?
Leonard Rodman [00:09:56]:
Slash create.
Jordan Wilson [00:09:57]:
And we typed in "dog standing on a truck holding a sign that says happy birthday." And I'll zoom in a little bit here, and it's pretty impressive. Right, Leonard?
Leonard Rodman [00:10:11]:
Yeah. That's not bad. Right? I mean, the dog can't really hold things because it doesn't have hands, but pretty good. Right?
Jordan Wilson [00:10:21]:
Yeah. And maybe let's talk about this a little bit more, because I love that you picked DALL-E, because I love it. It is easy. But, you know, one thing that's probably worth talking about, because it's something that a lot of people run into issues with right away, is the text. Right? So maybe can you just kind of quickly explain, you know, kind of the intricacies or, you know, the pros and cons of working with text, because not all of them handle this like DALL-E. You know, if we're looking at the results here, it looks like DALL-E got the text about, like, 90% right in 3 of the 4 images, which is actually pretty impressive.
Leonard Rodman [00:11:00]:
Yeah. So people who haven't seen AI art before are probably like, what's the big deal with happy birthday? Right? But this is a big step for us if we look at how things used to come out of, or still come out of, things like MidJourney. Basically, image generators were never really taught to read and write. So just like a kid who was never taught to read and write, if you ask them to draw someone holding a sign, they know that, you know, letters are like glyphs, and they're gonna draw some glyph looking things on that sign, but they don't really understand how to read and write. They don't know cursive. So that's been a really big thing that DALL-E has been bringing us, whereas other image generators haven't had that before.
Leonard Rodman [00:11:41]:
There's still some guess and check. Right? So, like, we did it, and it's like, okay, not all of them came out perfect. But considering we gave them, like, a pretty complex prompt, that was really good output, I think. Right? A dog standing on a truck, holding a sign, a bunch of different things there. So if you gave that to MidJourney, you're just gonna end up with an image with a dog, a truck, and a sign in it, and the sign's gonna say some, like, gobbledygook that doesn't make sense. But I think, you know, that's what makes DALL-E exciting. I'd say that image quality is not quite as good in DALL-E.
Leonard Rodman [00:12:17]:
It does photos pretty good, mhmm, but not as good as MidJourney. When you look at a dog, it's a little easier. If you look at a person, for example, their skin texture is more likely to be like a porcelain doll or plasticky looking, and I'd say it's really the defects that make people look realistic. Right? Like, normal, real human beings have skin tone. They have skin imperfections, and you're used to seeing that. And when you see someone who looks perfect, you're like, oh, they look like a Greek statue. Maybe in a good way.
Leonard Rodman [00:12:52]:
Right? But they don't look real to you, and your eye and your brain recognize that, even if you're not perfectly trained on image recognition and photography. Right? We're all just used to what people look like, and we have a really good idea of that, and we just, you know, know that innately. Yeah. And that's that uncanny valley that people have talked about in the past, that's kinda gone away because, like, all the image generators are so good. They're past uncanny valley, but the idea is, like, it looks like a human, but not quite. So it sets off kinda this alien part of your brain where you're like, that's an imposter.
Jordan Wilson [00:13:29]:
And hey, just as a reminder, unfortunately, Leonard's screen is frozen. Don't worry about it. We've got his audio. We've got the screen share going, so don't worry about that. We can still tap into all of his insights. So, Leonard, this is a great example, and I love that we started with DALL-E, because, aside from what Leonard said about going to bing.com and accessing it that way, if you have the paid version of ChatGPT, you can access it there as well. Alright.
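As a quick aside for anyone following along with code: the same DALL-E 3 model behind bing.com/create is also exposed through the OpenAI Images API. Below is a minimal sketch, assuming the official `openai` Python SDK and an `OPENAI_API_KEY` in your environment; the `build_request` helper is just an illustrative wrapper, not part of any SDK.

```python
# Sketch: calling DALL-E 3 programmatically via the OpenAI Images API.
# build_request is an illustrative helper, not part of the SDK itself.

def build_request(prompt: str, size: str = "1024x1024") -> dict:
    """Assemble keyword arguments for an images.generate call."""
    return {
        "model": "dall-e-3",  # model name used by the OpenAI Images API
        "prompt": prompt,
        "n": 1,               # DALL-E 3 generates one image per request
        "size": size,
    }

params = build_request("a dog standing on a truck, holding a sign that says Happy Birthday")
print(params["model"])  # dall-e-3

# To actually generate (requires network access and an API key):
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   result = client.images.generate(**params)
#   print(result.data[0].url)
```

For one-off images, the no-signup web route Leonard describes remains the easier path; the API matters mainly if you want to generate in bulk or wire images into an app.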
Ideogram – one of the originals
Jordan Wilson [00:14:01]:
So let's maybe transition, Leonard. So the good, we said, is DALL-E 3, and it handled text really, really well. Maybe what's one that's maybe not bad, but kinda still needs to be improved a little bit?
Leonard Rodman [00:14:18]:
Okay. Well, before DALL-E came out, everyone was really excited about Ideogram, and I haven't checked them out in a while. But Ideogram was kind of the first to do text well, and people got really excited about it, myself included. I started including robots holding signs that say this, that, and the other for, like, weeks, just because I was so excited about the idea to communicate in that way, and it's still pretty good. I think that the press that they got from being first to do text, mhmm, really helped them. They're still getting there. I've checked on them periodically, and let's see what we get out of this.
Jordan Wilson [00:14:56]:
Yeah. Let's yeah. Well, we're gonna try the same. So if you're listening on the podcast, you know, Ideogram, I believe, and I could be wrong, I think it was started by former Google folks, but it has some really heavy hitters on the founding team. So it's not just, you know, one of these that kinda started in someone's basement. So I did the exact same prompt that Leonard suggested for Bing. We did the same thing. Dog standing on a truck holding a sign that says happy birthday.
Jordan Wilson [00:15:31]:
So the results here, not as good, but, you know, again, I wouldn't say terrible.
Leonard Rodman [00:15:36]:
No. So, like, totally usable. I would say if you wanted bad bad, you could look at, like, DALL-E 2, for example, which was one of the first image generators I used, like, a year ago. And I was using it at work, and I was like, hey, can we use this to make assets that we could actually use? Like, could we show this to a human? And the answer is, like, no. Like, everything that comes out of DALL-E 2 has, like, 2 heads and 20 fingers.
Leonard Rodman [00:16:03]:
I can talk a little bit about that as well. That's essentially the same thing as signs and text. Right? Fingers are really tricky and tough. They do a lot of different things. It can be complicated. Artists have trouble with them, and, you know, basically, you just need training to do it well. So, you know, just like a human artist would need training to do fingers well, an AI needs training. And until we got that training, we had a lot of AIs putting stuff out with, like, 20 fingers, which is, like, obviously wrong.
Leonard Rodman [00:16:35]:
Now we're down to the point where it's just like the skin texture is imperfect, and occasionally, you might get, like, one extra finger.
Leonard Rodman [00:16:43]:
Sometimes we still get extra hands in MidJourney or in DALL-E, which surprises me. But, you know, that problem is mostly in the past, and now we're just down to, like, really refining it. So, you know, we're getting close to the point where you can't even tell the difference between a photograph and something generated by AI, and that's crazy.
Recognizing AI-generated photos
Jordan Wilson [00:17:01]:
Yeah. And, Leonard, I think you bring up such a good point, because, you know, when these AI image generators first debuted, you know, specifically if we're talking about DALL-E 2, that actually predated ChatGPT, which a lot of people don't realize. So some of these early AI image generators, people shared them, and they went viral, maybe for not the right reasons. Maybe for, hey, this person has 3 hands, or this person has 9 fingers. But, you know, the quality now is fantastic. Even myself, I've taken more than 500,000 photos in my life. Right? I was a photographer, ish, in my former life. And now, can you even tell the difference? Like, or do you really have to, like, stare at an image for a long time to see if it's real or if it's AI generated?
Leonard Rodman [00:17:54]:
Well, each AI has certain specific things that it does wrong that you can kind of learn to recognize. So I would say that in, like, a split second, I can still tell if something's AI generated or not. But I'm also in the same boat where I've done photography for, like, 30 years. You used to take fewer photos back when it was on film. Once it switched to being on, like, a 2 gig or a bigger 2 terabyte flash card, you look at the number of photos you can take, and it literally runs off the edge of the screen on your digital camera, just because you can take, like, a billion photos. But, you know, you sort of learn to recognize stuff like that. I think everyone can still see it. I think someone who's less expert might be fooled initially for longer, but I think that really anyone could look at one of these AI generated images and tell that it's not real, and not even necessarily just because of what I would call miscalculations. So, for example, in the image we have on screen, the dog is holding a sign that says happy birthday.
Leonard Rodman [00:18:58]:
He's holding it with, like, a stick that kinda, like, passes through his chin instead of his mouth, and the truck he's standing on only has one side, the right side, not the left side. So you can see things like that, but you would also recognize it in, like, you know, the glint in an eye, for example, or the specular things involving lighting can be a really big clue. You know, I think really what's more important, though, is just getting all these different image generators so close to reality that they're really usable. Yeah. So for me, what's most exciting recently is that Leonardo came out with a new model based on SDXL, Stable Diffusion XL. And I'd say this is really the first true competitor I've seen to MidJourney, where I pulled up images from Leonardo and MidJourney, and occasionally, Leonardo actually does it better, where in the past, MidJourney just won every single battle, every single heads-up. So that's really exciting, just seeing, like, real competitors in the space.
Jordan Wilson [00:20:04]:
Yeah. Absolutely. And, you know, I feel the same way. It seems like, you know, for many months, at least in my opinion, MidJourney was kind of running away with it. And then we got DALL-E 3, and, yeah, some of these new models are fantastic. So, actually, here, before we get into the awesome, before we get to kind of the final phase, there's a couple great questions here, and please continue to get your questions in. So I'm gonna let you handle this one here, Leonard. So, Monica asking: Leonard, are there any restrictions using these images for commercial purposes? Great question.
Jordan Wilson [00:20:38]:
Probably should've gotten to that sooner, maybe, but, yeah, Leonard, what is kind of the general school of thought? Because there is technically no, quote, unquote, law out there. But what is the school of thought, or best practices, for using AI-generated images for commercial purposes?
Leonard Rodman [00:20:56]:
Yeah. So I'm using it currently at work for commercial purposes. I know lots of other people who are, and nobody's gotten sued yet. I'm sure someone will get sued eventually. I imagine it's gonna be some big firm, because, generally, you wanna sue people who have money and not people who don't have money. And any lawyer will tell you that: let's find someone with money to sue, not someone who doesn't have it. I think really the biggest place you might get yourself into trouble, potentially, is if you were, like, going out of your way to try to rip off someone else's work.
Leonard Rodman [00:21:26]:
Mhmm. Or if you, like, released a soda and it looked exactly like Coca-Cola, but it wasn't Coca-Cola, or something like that. Then they might come after you and be pissed. Other than that, I really think that it's pretty much fair game and safe to use AI-generated images for commercial purposes. I would just advise people to, like, count your fingers and toes and check your quality, but I wouldn't be super concerned about the legality. Of course, be, you know, aware that I'm not a lawyer. But I have talked to a bunch of lawyers about this, and so has Jordan, and I don't think anyone's really going after individual creators yet. You know, it's really just things that are obviously illegal that I would shy away from.
Jordan Wilson [00:22:05]:
Yeah. It is important to talk about. Right? Because that was the first hesitation. Well, I believe the first wave was like, okay, this isn't good enough to use for commercial purposes. But then, probably once MidJourney got to, you know, version 5 point whatever, and now DALL-E, and now, you know, we're talking about Leonardo with some new models, now it's like, okay. Yeah.
Jordan Wilson [00:22:30]:
Now they're definitely good enough for commercial purposes. So it is. Yeah. But there have been plenty of lawsuits already, mainly geared, you know, at the makers themselves. So, you know, we talked about Stable Diffusion, which is a model. They've taken on a bunch of lawsuits. But, yeah, as far as I know, there haven't been any individuals kind of targeted, because, yeah, it's a gray area right now, which does make it a little tricky. But I will say that it is very widespread and commonplace to be using AI-generated images for commercial purposes.
Jordan Wilson [00:23:10]:
Here, we'll do one more before we get to the awesome, Leonard. So Mayward asking: what is your favorite generator to generate high quality pictures for ads or website content? And maybe this is the transition to the awesome. Right?
Leonard Rodman [00:23:23]:
Yeah. I would say definitely go with MidJourney for the best quality. Like I mentioned, Leonardo can be a really close second sometimes. So they basically both cost about the same, $30 or $40 a month, for their sort of medium level plan that most creators can make good use of.
Leonard Rodman [00:23:41]:
So I would say, yeah, try out MidJourney, give Leonardo a shot. I would probably say you'll end up with MidJourney most likely, but see what sort of suits your taste. I would say Leonardo gives you more shots that, like, a real person would have taken, and MidJourney gives you more dramatic and dynamic and professional photographer type ones. But yeah.
Leonard Rodman [00:24:08]:
Alright. So let’s look at our dog standing on a truck here that Jordan’s rendering for us.
Leonard Rodman [00:24:13]:
We kinda got lucky. So MidJourney doesn't really understand things like dogs standing on a truck, but usually, if you have a picture of a dog and a truck, the dog's gonna be on the truck or sitting in the truck bed or something like that.
Leonard Rodman [00:24:27]:
As you can see, the text did not come out. It gave us some random letters, but those are actually an improvement. So clearly, they've been working on this, and must be working on this for the new one, because you used to get, like, a bunch of letters that didn't even look like English alphabet letters.
Leonard Rodman [00:24:41]:
And now at least all the letters look like English alphabet letters, which is interesting. So MidJourney and these other generators will, like, sneak little mini updates in. They actually work on the live version of their product instead of doing releases. So, like, one day, all of a sudden, you get, like, a new feature that you don't even know about, which might be something they announce, or might be something like, you know, better looking letters as they work on research about letters.
Jordan Wilson [00:25:06]:
Yeah. And, you know, if you're listening on the podcast, yeah, Leonard was kinda giving us the breakdown, but we put the exact same prompt that we've been using in some of these different image generators into MidJourney. And, yeah, MidJourney right now obviously struggles with text. But in terms of, you know, photo quality, now that I'm looking at it, I would say there's maybe fewer errors in MidJourney, aside from the text. But, overall, I don't know. If I had to use one today, because of the text, I might use, in this very specific use case, the one that we generated from Bing, you know, using DALL-E 3, just because it came out with a nice sign. But, you know, there were some errors.
Leonard Rodman [00:25:55]:
And that's sort of where I stand, too, where if I really need it to follow my instructions precisely, or especially if I need letters, I would say go with DALL-E. And it's getting closer and closer, so that's exciting for everyone. I'm sure it makes MidJourney nervous, but I'm sure they have something up their sleeve.
Jordan Wilson [00:26:12]:
Oh, yeah. Absolutely. So, alright. Here, we have another question, and I'm also gonna throw on some tips. But, Leonard, so, Dr. Harvey Kasser asking: what are some top suggestions for prompts, tips, etcetera? And I'm gonna go ahead, if you're listening on the podcast: Leonard, I kid you not, he has 100 guides already. But, Leonard, maybe, as I scroll through, what are some of those suggestions for prompts and tips? Maybe just for DALL-E, but we can also talk in general.
Leonard Rodman [00:26:48]:
Yeah. So in general, once Jordan gets to the, there we go, fourth slide here. Yeah. I put on all my prompt guides this sort of short and sweet summary of some ways to, what I call, dress your prompts, so to dress them up a little bit with descriptive words. And here I've got kinda 5 hot ones that I recommend to people. So, mood. And that isn't just the mood of the characters, but it's the mood of the whole scene. Right? So do you want a happy and joyous scene photographed? You know, what kind of mood is it supposed to evoke in the person who's viewing it? So you can throw some words about mood in.
Leonard Rodman [00:27:31]:
I think that lighting can be helpful. So throwing in the type of lighting you want, and it doesn't have to be a fancy word or whatever. It can just be bright, dim. Right? So just trying to communicate what you want. Same with what kind of medium you want. So do you want a photograph? If you do, tell it. Right? Every single time, and then you're gonna get a lot more photographs, whereas you saw that we got a mixture of photographs and illustrations and other things.
Leonard Rodman [00:27:59]:
Throwing a camera in there can be helpful, and people go nuts with this one with, like, every little detail about the camera. And that’s not really how the image generator works, but if you do put a professional camera name in there, you’re more likely to get a professional-looking photo, because people uploaded a bunch of photos tagged Canon EOS 5D. And if you own a $10,000 camera, you’re usually pretty good at photography. So that gives you, like, a little bit of a gate around what kind of images are gonna go into your training set. You can also do things like iPhone if you want a less professional photo, GoPro if you want something action oriented, Polaroid or other old-school cameras if you want something more old-school looking. Composition can also be helpful. So if you want something more action oriented, you might want skewed or off-center. If you want everything centered, you might wanna put that in. If you want the character on the left or the right, you might wanna put that in.
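The five prompt descriptors Leonard lists here can be sketched as a small helper that assembles them into one prompt string. This is an illustrative sketch only: `dress_prompt` and its parameter names are made up for this example and are not part of any image generator’s API.

```python
# Sketch of the "dress your prompt" tips above: mood, lighting, medium,
# camera, and composition are optional descriptors appended to a base subject.
# All names here are illustrative, not part of any real image-generator API.

def dress_prompt(subject, mood=None, lighting=None, medium=None,
                 camera=None, composition=None):
    """Build an image-generation prompt from a subject plus style descriptors."""
    parts = [subject]
    if mood:
        parts.append(f"{mood} mood")
    if lighting:
        parts.append(f"{lighting} lighting")
    if medium:
        parts.append(medium)
    if camera:
        parts.append(f"shot on {camera}")
    if composition:
        parts.append(f"{composition} composition")
    return ", ".join(parts)

prompt = dress_prompt(
    "a golden retriever in a park",
    mood="joyful",
    lighting="bright",
    medium="photograph",
    camera="Canon EOS 5D",
    composition="off-center",
)
print(prompt)
# a golden retriever in a park, joyful mood, bright lighting, photograph,
# shot on Canon EOS 5D, off-center composition
```

The point, as Leonard says, is simply that each descriptor you add narrows the kind of images the generator draws on; leaving one out just leaves that choice to the model.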
Leonard Rodman [00:28:57]:
I frequently find that if I’m not using text, you know, you can flip an image left or right, but if you’re using text, then you’re limited. And it’s definitely helpful just in terms of graphic design. If you have a character, they should be looking at the rest of your page. Right? So, like, if I were to put someone in the bottom right of this page here, they should be looking up into the left at the text, because that directs the viewer’s eyes. Midjourney has a bunch of parameters that don’t necessarily apply to other things. So aspect ratio, that’s coming to other image generators, but not there yet, I believe, for DALL-E. And then there’s its own specific anime style, and style raw, which are super specific to Midjourney.
Leonard Rodman [00:29:40]:
We could also use things like style expressive for some, excuse me, different subsets within Midjourney, and they also have weird. But really, the first things up top, mood, lighting, medium, those are the things that are gonna work in any image generator, and they’re really gonna improve your prompts just by communicating to the computer what you want. And then the other thing is just to iterate, to try over and over. So if you don’t get it right the first time, you try it again with some new keywords. You might even use a thesaurus, or you might even use ChatGPT. I wouldn’t use it to write your prompts, but you could use ChatGPT to give you suggestions for a word to throw in or a word swap. So, for example, for a long time, I was trying to get clear plastic tubing, and I couldn’t get it.
Leonard Rodman [00:30:28]:
So I eventually learned that, like, you have to try synonyms. So I tried, like, glass, ice, crystal, leaded crystal, and translucent. Right? And you just try all these alternative keywords, and that’s how you eventually get at what you’re looking for. So you might not describe it exactly how it is. You wanna find a way to describe it that communicates your need to the computer.
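The iterate-with-synonyms workflow Leonard describes can be sketched as a loop that generates one prompt variant per candidate keyword. This is a sketch under assumptions: `prompt_variants` is a made-up helper, and the commented-out `generate_image()` call is a hypothetical stand-in for whichever generator you actually use.

```python
# Sketch of the synonym-iteration workflow: swap alternative keywords into a
# prompt template and try each variant until one communicates the idea.

def prompt_variants(template, synonyms):
    """Yield one prompt per candidate synonym for the {material} placeholder."""
    for word in synonyms:
        yield template.format(material=word)

template = "close-up of {material} tubing coiled on a workbench, photograph"
materials = ["clear plastic", "glass", "ice", "crystal", "translucent"]

for prompt in prompt_variants(template, materials):
    print(prompt)                      # try each variant in your generator
    # image = generate_image(prompt)   # hypothetical API call, not real
```

The design choice here mirrors the advice: you vary one keyword at a time, so when a variant finally works you know which word did the communicating.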
Jordan Wilson [00:30:52]:
Yeah. It’s so, so good there. Like, y’all, we’ll share a lot of this. And, you know, Leonard, if you’re not already following him on LinkedIn, I suggest you do so, especially if you’re interested in AI image generators, because he has now a hundred of these guides, and they are so, so good. So alright. We have a lot of questions here. Leonard, I don’t wanna keep you forever, but let’s go just quick rapid fire and see if we can get a couple questions answered here. So, Ben asking, any tips for how to get images of people or animals to look less CGI-like?
Leonard Rodman [00:31:30]:
So I’d say number one, realistic photograph. Throw that keyword in there. If you’re using something other than, honestly, Midjourney or the latest Leonardo, you’re probably gonna get things that look kinda CGI-like. That’s that uncanny valley in skin texture I was talking about before. So really, Musivir is also one that’s pretty good at this point, but they’re in beta, so that doesn’t help you. So really, you just gotta go with Midjourney or Leonardo and pay for it, I’d say. Use the keyword, you know, realistic photograph. You can use things like cinematography, or people use 8K, which doesn’t really make it 8K, but they think it does.
Leonard Rodman [00:32:09]:
I don’t know. People like those outputs. You can also, like I mentioned, mention a fancy, expensive camera, and I think that can also be helpful. Great question, though.
Jordan Wilson [00:32:21]:
Great tips. Great tips. Alright. Here, we got another one from Brian. Brian, thanks for the question. So he said, what about copyrighting the images that you actually generate? Is that a thing? Does it work? Some people
Leonard Rodman [00:32:32]:
slap their logos on, and that does nothing except make it harder for someone to steal your work, which, that’s fine. I don’t object to that. If you really wanna copyright one of these images, you have to put in 51% of the work, which no court has really determined what that means yet. So if you took, like, two different works, and then you photoshopped the heck out of it for an hour or two hours, and videoed yourself photoshopping it, and then sent that in, you could probably copyright it. But honestly, I would probably just wait a little bit longer. I think that more likely you know, not even more likely. Definitely, you know, you own the copyright on the design you put on top of an image in Photoshop or whatever.
Leonard Rodman [00:33:16]:
But right now, it’s probably not looking good if you wanna copyright something that’s a direct output, until you put all that extra work in. And even then, it’s not really settled.
Jordan Wilson [00:33:26]:
Yeah. Yeah. Brian, this is one that’s continuing to go through the courts. So far, I don’t think anyone has successfully been able to copyright something they produced strictly with an AI image generator, but like Leonard said, it is kind of ongoing. Alright. We got another one here. Mike, is there a way to upload an image and have it modified?
Leonard Rodman [00:33:46]:
Yeah. And the next person, Anibal, also asked something pretty similar: can you correct an image? So can you upload an image and modify it? You can, depending on what you mean by modified. So in Midjourney, you can upload an image and use it to inspire your next work, but I found that it’s really just turning that image into text and the text back into an image, and maybe following some of your character placement a little bit. If you really wanna modify it, I would say bring it into Leonardo, which will let you upload an image and then directly modify, like, a section, for example. That’s something you can do in Midjourney with an image you created in Midjourney, but you can’t do with an upload. So you can modify a piece of an image, like, for detail editing, or to take fingers out or add fingers, in both Leonardo and Midjourney with images you’ve made with them, but only in Leonardo can you do it with an upload.
Jordan Wilson [00:34:49]:
Mhmm. Yeah. Great, great question. Alright. And we got to our last one here. So, Monica, since you got the two-for-one there answering Anibal’s question. So, Monica asking, do you have any data or case studies on performance for ads using AI-generated images versus real photographs?
Leonard Rodman [00:35:07]:
I don’t have any data. I’ve definitely seen, though, and you probably have too, I believe Jordan talked about it, the study where they found that venture capital pitches that were generated by AI got, like, 50% more funding or something like that. And it’s really just about ticking the boxes. Right? Like, those venture capital pitches probably did the more standard thing and didn’t leave anything out, whereas a lot of people applying for money probably are not great at writing pitches. When it comes to generated images versus real photographs, you know, something I’ve had trouble with as a web designer, doing advertising design and learning design, is you can never find the picture that’s exactly right for your purpose unless you literally go out and take it yourself. So AI gives you the ability to have an image that suits your storytelling. And not only that, we didn’t really talk about this, but I think it’s really gonna change storytelling, because in the past, you would write your copy in just words and then try to find images later.
Leonard Rodman [00:36:11]:
And now I think people are going to be coming up with images as part of their flow, because they know they can make the perfect image, and it’s gonna become part of storytelling in a way that it wasn’t before. So no case studies, but I think that AI-generated images for advertising are gonna blow everyone away.
Jordan Wilson [00:36:29]:
What? Like, I wish I had something to, like, pound a bunch of emojis on what Leonard just said, but I think what he just said there is a gem of where we’re going in marketing and advertising, and I know this even from personal experience. A lot of times, you would really have to massage the copy or other parts of a campaign to fit the image that you had, because maybe you had very limited imagery. But now with AI image generators, it’s vice versa, and that’s very exciting for the future of marketing and advertising. So, Leonard, we’ve kept you for very long, but I wanna give you the chance. If someone now is very interested, we’re gonna share all of your work and your website, but maybe give everyone just that one last piece of advice. If they wanna get into, you know, being better at creating AI-generated images, if they wanna go from zero to five or from five to ten, what is your best piece of advice for people to really up their game, now that we’ve seen kind of the good, the bad, and the awesome?
Leonard Rodman [00:37:29]:
Yeah. I mean, depending on how you wanna look at it, you can call it homework, you can call it play, but really just get in there for 10, 15 minutes a day. Pick an image generator. It could be the same thing for ChatGPT. Just write prompts for 10 to 15 minutes a day. Make sure you’re iterating. So you take your output and you say, how can I make that better? And you try it again and again. But really, just get in there and play.
Leonard Rodman [00:37:51]:
I think another great way is taking what you get out of your play and posting it on the Internet and seeing what people say about it. People are generally surprisingly helpful, actually, about this particular topic area. Everything else on the Internet, they’re pretty mean, but people about AI seem to be pretty nice, and they’re trying to help people. So that’s cool.
Jordan Wilson [00:38:08]:
So true. And, hey, don’t worry if you didn’t catch the whole thing or weren’t able to take notes fast enough. We’re gonna recap everything that Leonard and I talked about, so go to youreverydayai.com and sign up for that free daily newsletter. Leonard, thank you so much for joining us. I’m kinda bummed your video froze, but your insights were on point. Thank you so much for coming on the show.
Leonard Rodman [00:38:33]:
Yeah. It was my pleasure, and thanks so much for having me. Everyone, hope to keep in touch. Feel free to shoot me a message anytime with any questions.
Jordan Wilson [00:38:39]:
Absolutely. Go check today’s newsletter. We’re gonna have a ton more information, you know, from Leonard’s website, and more ways that you can connect and engage with him, because that’s a great way to grow, to connect with him as well. So, we hope you enjoyed this, and we hope to see you back. Actually, I gotta quickly plug: tomorrow, Thursday, we’re gonna be building a brand live from scratch with AI, throughout multiple parts. People have always asked for a show like this, so I’m excited. If you wanna know how to actually use AI from ideation to publication, join us tomorrow and the day after and every other day at Everyday AI. Thanks, y’all.
Leonard Rodman [00:39:18]: