AI Feed

Ep 89: AI’s Role in Responsible Research

By Admin
  • 28 Aug, 2023

Resources

Join the discussion: Ask Avi and Jordan questions about AI and research

Upcoming Episodes:
Check out the upcoming Everyday AI Livestream lineup

Connect with Avi Staiman: LinkedIn Profile

Video Insights

Overview

AI is transforming the landscape of research and knowledge dissemination, presenting both immense opportunities and profound challenges. In a recent episode of the podcast “Everyday AI,” experts delved into the role of AI in responsible research. This article aims to highlight the crucial takeaways from the episode, shedding light on how businesses and decision-makers can leverage AI tools responsibly to drive innovation, foster accuracy, and improve research processes.

Accelerating Research Publication:

The process of research publication has long been perceived as clunky and time-consuming. However, emerging AI tools such as SciWriter are now working to change this landscape. By significantly reducing the time it takes to publish research, SciWriter aims to improve researchers’ experiences and ensure timely knowledge dissemination. As a business owner or decision-maker, keeping abreast of such transformative tools is paramount for staying ahead of the curve in your industry.

Enhancing Writing and Review Processes:

Writing plays a pivotal role in research, but it often becomes a bottleneck due to time constraints faced by researchers. AI-powered language models, like GPT, are being embraced as powerful editing tools that can streamline the writing process and help researchers organize references more efficiently. While these tools hold tremendous potential, it is crucial to utilize them responsibly, understanding their limitations as language tools rather than sources of unequivocal truth.
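
To make that concrete, here is a minimal sketch of what “editing tool, not truth source” can look like in practice. It assumes the openai Python package and an API key in the environment; the model name and prompt wording are illustrative, not a prescribed workflow.

```python
# A minimal sketch of "LLM as copy editor, not fact source".
# Assumes the openai Python package (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

EDIT_INSTRUCTIONS = (
    "You are a copy editor for academic prose. Improve grammar, clarity, "
    "and flow only. Do not add, remove, or alter any factual claim, "
    "number, or citation. Return only the edited text."
)

def edit_passage(draft: str) -> str:
    """Polish the language of a draft while leaving its content untouched."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        temperature=0.2,      # low temperature favors conservative edits
        messages=[
            {"role": "system", "content": EDIT_INSTRUCTIONS},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(edit_passage("The results was significant (p < 0.05) across all three cohort."))
```

The constraint lives entirely in the system prompt: the model is confined to language-level edits, which keeps it in the editor’s seat rather than the author’s. The output still needs a human read, since prompts are instructions, not guarantees.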

Responsible AI Use in Academic Environments:

The podcast episode also highlighted potential concerns regarding the use of generative AI tools for academic purposes. It is essential for universities and educational institutions to foster an open dialogue on the responsible use of AI to avoid unintended consequences. Educational programs that equip students with critical thinking skills regarding AI technologies are vital to help them navigate the evolving landscape of AI-driven research.

Transforming Information Accessibility:

Access to reliable and verified information is critical, not only for researchers but also for decision-makers in various fields. The limitations imposed by paywalls and licensing restrictions often hinder the free flow of knowledge. However, AI models can be harnessed to address this challenge. Collaborations between academic publishers and AI companies offer the possibility of making reputable and important information accessible to a wider audience, fueling innovation and knowledge sharing.

Building a Responsible AI Ethos:

As decision-makers, nurturing an ethos of responsibility is pivotal when leveraging AI tools in research. Educating yourself and your team about the proper and responsible use of AI, while also understanding the limitations and potential risks, is essential. Open dialogue, collaboration, and partnerships between academia, industry, and AI developers can help strengthen research integrity while harnessing the transformative potential of AI in a responsible manner.

Conclusion:

The paradigm-shifting potential of AI in research is undeniable. However, deploying AI tools responsibly is equally paramount to maximize their benefits while mitigating unintended consequences. By embracing responsible AI practices, decision-makers can expedite the research process, enhance critical thinking skills, improve information accessibility, and ultimately drive innovation across industries. Together, let us foster a culture that harnesses AI’s potential while upholding the standards of rigorous research and responsible knowledge dissemination.

Topics Covered

– How AI is transforming research publication and knowledge dissemination
– Daily AI news: AI for classifying cardiac function, underpaid overseas data workers, and an AI sports-writing mishap
– Why peer review and publication can take two to three years
– How SciWriter aims to cut writing time and bring hallucinations down to nearly zero
– Oral defenses versus the written record, and teaching students to think critically about generative AI
– Layman summaries that make research accessible to non-experts
– What happens when publishers block large language models from their content
– Opportunities for collaboration between academic publishers and AI companies, including small, hyper-specific language models
– Cautionary tales: failing students based on ChatGPT’s false claims, and “regenerate response” appearing in published papers
– Treating LLMs as language tools rather than sources of truth
– AI as a game changer for non-native English speakers: editing and reference formatting
– AI Tool Up Tuesdays, Avi’s free course on mature AI tools for researchers
Podcast Transcript


Jordan Wilson [00:00:18]:

How can we use AI for research without it either taking forever or just lying? Right? You know, there are so many breakthroughs happening that require research papers, but sometimes they can take months or longer. So that's one of the things that we're gonna be talking about today on Everyday AI. I'm very excited to join you. My name's Jordan Wilson. I'm your host, and we do this every single day. Everyday AI is for everyday people, and we're all trying to learn and leverage AI. So that's what we're talking about today. Excited. If you're joining us live, make sure to leave us a comment when we bring our guest on: what do you wanna know about using AI for responsible research? Is this something you've seen? Is it something you've experienced? Let us know. If you're joining on the podcast, make sure to check out the show notes. We will have links to join the conversation, and you can ask our guest questions, as well as other very important tools, tips, and things that we're gonna be talking about. It's all in the show notes.

Daily AI news

So let's get started first with AI news. A major AI breakthrough in heart health: researchers at Osaka Metropolitan University just released some groundbreaking and accurate AI-based methods for classifying cardiac function. They're using new AI technology and chest X-rays to better and more accurately diagnose heart conditions. So exciting, exciting news there. Make sure to check that out in the newsletter.

Not so exciting news: apparently, the AI boom is causing an epidemic of underpaid overseas workers. A new Washington Post exposé kind of lifts the veil on this one. There is a company called Remotasks, which uses remote workers in the Philippines for different tasks. This Washington Post report showed that a lot of the workers interviewed for the story were making less than the minimum wage in the Philippines, which is $6 to $10 a day. Scale AI is reportedly the owner of Remotasks. So something to keep an eye on. This isn't new, but so many of these different AI models need a lot of human training, and this piece goes into the uglier side of that.

Alright. Last but not least: journalists, you can keep your jobs for now. Right? A lot of companies have been experimenting with having AI be their writers. Sometimes it works; a lot of times it doesn't. There was a recent mishap at the Columbus Dispatch using their AI sports writing tool. This was a recap of a football game at Westerville, and it faced a lot of criticism on social media for some of its action-packed phrases, like "close encounter of the athletic kind." Yes, that is what their AI writing tool described this football game as: a close encounter of the athletic kind. Don't worry, though. There are people out there working on more responsible AI tools that don't hallucinate or make things up like close encounters of the athletic kind. So let's actually bring our guest on for today and talk about how we can use more responsible AI tools to get better results. Welcome to the show, Avi Staiman, the founder of SciWriter.ai. Avi, thank you for joining us.

Avi Staiman [00:04:03]:

Thanks so much, Jordan. It's a pleasure being here. And from that headline, it sounded like maybe a UFO kind of landed in the middle of the football game. So, you know, who knows? Maybe that was an accurate description.

Jordan Wilson [00:04:14]:

Right. And I guess that could be athletic. You know, people had to run and jump out of the way. That's good. So, hey, as a reminder, if you're joining us live, like Mercy, who's saying hi, everyone, make sure to get your question in for Avi. What do you wanna know about responsible AI research? Val's saying good morning. Good morning, Dr. Muthanna; we'll get to your question there in a second. But thank you, everyone else, for joining us live this morning. Avi, what's going on right now with AI and research tools? And kind of what led you here, because I'm sure you saw things that were going wrong or things that just weren't right out there in the field. So what led you to create SciWriter, and why is it needed?

About Avi and SciWriter


Avi Staiman [00:05:03]:

Yeah, so that's a great question, Jordan. First of all, I would say that anyone who's played around with ChatGPT or any of the generative AI tools has come across a hallucination. A hallucination, otherwise known as, you know, crap that's made up that isn't accurate. And I think sometimes we forget that the second L of LLM stands for language. We kind of treat it as, I don't know, a Wikipedia Lite sort of tool and try to get all our information from there, when it's actually purporting to be a language tool. So the issue in my specific field, which is scientific research and the publication of that research, is that we can't afford a high degree, or even a low degree, of hallucinations. If we're just writing marketing copy, maybe we can get a good first draft, then we can play around with it, fix up the mistakes, and send it off. If we're talking about doctors who in real time are looking to the scientific record, looking to research to answer critical questions on the fly, well, then they can't afford to have those mistakes baked in and built in. So in general, there's this sort of small sector, which maybe isn't known as much, of research tools that are using AI to harness the good of generative AI. SciWriter specifically, which is the project that I'm working on with a buddy of mine, is an attempt to ask ourselves: can we take the power of, let's say, a writing tutor? Anyone who's ever worked with a writing tutor before knows it can be so powerful to have that question-and-answer, that dialogue, someone to help you actually tease out what you wanna say. Can we use generative AI and ask the researcher: tell us what methods you used, give us the results from your lab or from the library where you were doing your research, feed that to us, and then we can take that and turn it into an output that actually resembles what a typical article is like, and bring the hallucinations down to nearly zero? And if we can do that, then what we're actually doing is saving researchers a lot of time. And that translates into more time for them to be doing their cancer research, for them to be tutoring their students so that they're the next generation of researchers, for them to be explaining their research to the public. So that's why we see our tool as really critical for, you know, the next generation of research and science.

Jordan Wilson [00:07:36]:

Yeah. So talk, Avi, a little bit about what this process looks like now for researchers. Because it sounds like if you want your research out quickly, you either run the risk of maybe using some other tool out there, like a ChatGPT-type tool, and getting hallucinations, or it just might take forever to get a new scientific breakthrough out to the masses. So what is it like now for researchers? Is there just too much gray area on, hey, what's the right way to maybe tap into AI to expedite that process?

Traditional research publishing takes too long


Avi Staiman [00:08:09]:

Yeah, the answer is it's a big mess. First, I mean, I think we can all relate back to a couple years ago when we were in the height of the pandemic, and we were all waiting for these labs to come out and say, okay, here's the latest study, here's the vaccine, here's what we're proposing. And I think everyone was kind of frustrated by the pace. Even when it was expedited and even when the drugs were pushed through, it's really critical to go through what's called the peer review process. The peer review process is essentially what takes a study that someone has made based on their research and turns it into actually verifiable scientific literature that we all rely on for all sorts of decisions on a daily basis. So now the issue becomes: that process can take two to three years. And I think in the pandemic, we realized that, holy crap, we don't have two to three years to actually do this. So what generally happens, and I can tell you a typical scenario, is that the researchers do the study. The study itself could take anywhere from six months to two or three years, depending on what you're actually studying. Then comes the writing stage. And during the writing period, oftentimes the researchers who are running these labs are running a bunch of experiments and trials in parallel, and they don't really have the time to write up the research. So they'll give it off to a student, a master's student or PhD student, and they'll say, you know, go and write this up for me. We've already done the research; how hard could it be? It's really not easy. They break their teeth. They struggle. They get super frustrated. In fact, half of the students and researchers that publish never publish again because it's such a frustrating experience. It's a really big problem. Then they send it back to the professor who's running the lab. The professor's like, oh my goodness, this is total rubbish. This is crap. I need to throw it out and start over. And then they rewrite it, and only then does it get sent to the publisher. And I won't bore everyone with the gory details, but even at the publisher, there can be endless back and forth about what's accepted: does it meet their formatting standards, can it be reproduced? So this whole process is very clunky, and I think part of it is important. We don't want someone to submit an article and the next day it's published, because then we're like, whoa, did anyone actually look at this? Are we okay with that? That's a problem. But if it takes two years, it's also a problem. So my goal, first of all, is to bring the time down considerably. But second of all, it's to turn that experience into an enjoyable one. This should be the climax: you finished your research, you want to tell the world about what you've discovered, and then you just have this downer experience. So that's what we're trying to do at SciWriter, to really turn that around.

Jordan Wilson [00:10:45]:

Yeah. And we actually have a couple of people in the medical field here joining us in the comments. So Dr. Harvey Castro, thanks for joining us. A good question here, Avi. This one might go over my head, but Dr. Rossepaggett is asking about oral defenses. So I'm guessing that's part of the process of the research, you know, getting something out. And he's kind of asking, is there a good way or a bad way to maybe use different AI tools to help in that oral defense, which I'm guessing is part of the process to get this out there? I'm not sure, but walk us through, whether it's SciWriter or other AI tools, what maybe is or isn't a good idea in that step of the process.

Using AI in research


Avi Staiman [00:11:31]:

Yeah. So I'm not sure that I fully understand exactly what he's referring to. But I can tell you, when it comes to that, I have seen a number of researchers and teachers at universities and colleges who have said that they are going to actually phase out written works because they're afraid of AI-generated works, and they're going to rely quite heavily on oral presentations and defenses. I don't think we should be running to do that, because I think there's a power to the written word: it can be shared afterwards and then critiqued and followed up on and revised, whereas an oral presentation doesn't exactly give that. I also think, why are we trying to look away from generative AI? Can we ask the students to think critically about generative AI, and actually teach them how to prompt in a way that can get them the best results? So I'm not against it. I think it's actually really critical that researchers learn how to speak their research, and not just how to write their research or how to understand it themselves, and a lot of researchers are terrible at that. But I'm not sure that I would use that as a replacement for the written record. I think it's important that anyone, anywhere, at any time can actually tap in and be able to look things up. You know, if, God forbid, a grandparent of yours has some illness, and you're not a doctor, you're not a researcher, but you want to understand what's going on, there are amazing tools being built now to create layman summaries, right? So that you and I, as non-subject experts, can go in and understand: what condition do they have? What are the treatment possibilities? What doctor should I be looking to, and how do I ask those questions? So that's where I'm excited about some of the specific research tools in the AI space. Yeah.

Jordan Wilson [00:13:14]:

You know, a couple questions and comments here. I wanna get to one more. Bronwyn is just saying this would have been helpful when she had to research stem cell therapy. Kind of like what you said, even for people who maybe aren't trying to publish, this is something that can help people just understand topics, for sure. Brian is just saying accuracy is paramount in social science research too; hallucinations definitely make you look incompetent. But here's a question, Avi, I'd like your take on. Dr. Muthana is asking: we were just talking about how a lot of different publishers out there are now blocking large language models from seeing the information on their websites. So, you know, let's say there are large companies that normally put out great scientific research. What happens for the large language models out there, and maybe not specifically SciWriter, but maybe so? What happens when all these publishers start blocking access to these large language models to get this really needed information that would, in theory, help make that model smarter on whatever scientific research someone is using it for?

Researching with AI after publishers block access


Avi Staiman [00:14:21]:

Yeah, this is a really good question. So I wanna break it down into a few parts. First of all, it's not clear what's been used already and what hasn't, what's been scraped and what hasn't been scraped, right? And what's interesting about the academic publishing industry is that they have a lot of things behind a paywall, and they've been doing this for many years. So imagine, I don't know, you want access to a New York Times article, and all of a sudden you get stuck. Well, actually, that content is super valuable, and it may not be so simple for OpenAI to use it. Even things that are available might not be under a license that can be used by these AI companies. So it's unclear exactly what they have, what they don't have, and what they're missing. My claim, and I recently wrote about this in a magazine called The Scholarly Kitchen, where I'm also a member of the editorial board, is that there's actually a tremendous opportunity here. Because if we think about after all the hype dies down, right, after we all kind of buckle up and say, okay, what can we actually do with generative AI? I think that for the majority of use cases, we're gonna want it to be relying on reputable, verified, important information. Okay? And with all due credit to Reddit, I don't think regurgitating Reddit does that much good for society. It may help with understanding how people talk and maybe replicating that, but no one's gonna take that and put it into their doctor's office. So I think the real question here is: how can academic publishers, who own and basically can license this content, and the large language models join forces to actually cover an entire field, an entire space? Not just what's available through Wikipedia or secondary sources, but actually going back to the original research and then building either large language models or, what I've seen more of, small language models that are very hyper-specific, based on certain content inputs, and actually turning that into lifesaving applications. That's where I get really excited about it. It's a challenge, because first of all, different publishers have different content. You mentioned cardiology at the beginning of the show. Well, you might have some of that content with one publisher and some with another. So you need to get these publishers to work together, and then you need them to trust the AI company they're working with. So I think it's beginning and starting, and I think there's probably a lot going on behind the scenes, but it's still early days. So we'll have to see how this develops.
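
For readers curious what a publisher-AI collaboration might look like mechanically, here is a minimal sketch of the grounding idea Avi describes: retrieve passages from a licensed corpus and instruct the model to answer only from them. Everything here, the toy corpus, the keyword retrieval, and the prompt, is an illustrative stand-in (a real system would use embeddings over an actual licensed archive), assuming the openai Python package.

```python
# A toy sketch of the grounding idea: answer only from a licensed corpus.
# The corpus, keyword retrieval, and prompt are illustrative stand-ins;
# a real system would use embeddings over an actual licensed archive.
# Assumes the openai Python package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Stand-in for licensed publisher content, keyed by article title.
CORPUS = {
    "Cardiac function from chest X-rays": (
        "AI models classified cardiac function from chest radiographs "
        "with accuracy comparable to echocardiography in the study cohort."
    ),
    "Valve disease screening": (
        "A deep learning model flagged valvular heart disease on routine "
        "imaging with high sensitivity."
    ),
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval over the toy corpus."""
    terms = set(query.lower().split())
    ranked = sorted(
        CORPUS.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return [f"{title}: {text}" for title, text in ranked[:k]]

def grounded_answer(query: str) -> str:
    """Ask the model to answer strictly from the retrieved sources."""
    sources = "\n".join(retrieve(query))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer only from the sources provided, citing titles. "
                    "If the sources do not answer the question, say so."
                ),
            },
            {"role": "user", "content": f"Sources:\n{sources}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

print(grounded_answer("Can chest X-rays be used to assess cardiac function?"))
```

The design point matches Avi's argument: the value is in the verified content being retrieved, not in what the model happens to remember, which is why licensing and publisher cooperation matter so much.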

Jordan Wilson [00:16:45]:

Yeah. And speaking of developing, I think there are a lot of developments in general, right? Because now you also have bigger companies, like Google, trying to be a player in this space, with their new PaLM 2 large language model being applied to the medical field. So in general, Avi, and maybe not just specifically talking about PaLM 2, is it a good thing or a bad thing when you have large companies like Google creating models specifically for the medical space? That's obviously the category, so to speak, of a lot of scientific research articles. So is that a good or a bad thing? And does that make it easier or maybe more difficult for researchers to have something like Google PaLM as a resource?

Issues when you don’t use AI properly in research


Avi Staiman [00:17:43]:

Yeah. So I think that to use it as a resource is perfectly fine, and I think even recommended. But I want to warn of two kinds of issues that I've already seen that can be really damaging if you don't know how to use it properly. Both are on the one hand funny, but on the other hand kind of sad and ironic. The first example was a professor who got a bunch of student assignments back at the end of last semester. He wanted to be very vigilant and decided to check whether they were written by generative AI. He actually put the papers into ChatGPT and asked, well, did you write these papers? And ChatGPT said, yes, of course I did, not realizing that that's not a great question to ask ChatGPT. And he then went on to fail all of his students, and the students basically had a mini rebellion, because they're like, we didn't use ChatGPT. So that's one example, which, again, is comical but also quite sad. The second example, which is potentially even more troubling: Google Scholar is kind of a way to find research articles through Google, and if you go into Google Scholar and search academic articles specifically for the words "regenerate response," you will find there are already published articles that have the words "regenerate response" in there. Now, I will give them the benefit of the doubt; that phrase may exist in certain academic articles. But most likely it's a sign that these researchers not only copied and pasted directly from ChatGPT, they may not have even gone over the output and may have just published it. Even worse, a lot of this stuff is actually published. It's not just that someone threw it up on a blog; anyone can do that. It actually made it through that peer review process, which is supposed to catch those issues and errors before they happen. So I'm always in two minds. The entrepreneur-innovator in me, who embraces the good of technology, says, yeah, let's use it. For example, if you're not a native English speaker, and you need to publish your article, and you're competing against your American and British and Australian colleagues, and you find writing English to be really hard, it could be your third or fourth language, and maybe you never learned it in school, using GPT to edit your work can be totally game changing. Or to help you organize your references according to a specific style guide. So that part of me says, yeah, let's embrace it. And then the other part of me, when I hear these stories, goes, oh, that's cringeworthy. This is really problematic. And if these are the stories we know about, what about the stories we don't even know about? So I think the answer is education, education: doing podcasts like these, getting out and explaining how to use it right and how not to use it, and making sure you're part of that conversation and dialogue, so that you're not afraid of it on the one hand, but you're also not just using it blindly on the other.
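
As an aside, the "regenerate response" tell that Avi mentions is easy to screen for mechanically. Here is a toy sketch in Python that flags literal chat-interface artifacts in a manuscript; the phrase list is illustrative, and this is a sanity check for copy-paste debris, not a detector of AI authorship, which, as the professor's story shows, is a much harder problem.

```python
# A toy screening pass for leftover chat-interface debris in a manuscript.
# The phrase list is illustrative. Note what this is NOT: a detector of AI
# authorship. It only catches literal copy-paste artifacts, which is exactly
# why asking ChatGPT "did you write this?" is the wrong tool for that job.
import re

ARTIFACT_PATTERNS = [
    r"regenerate response",
    r"as an ai language model",
    r"as of my last knowledge update",
]

def flag_chat_artifacts(manuscript: str) -> list[str]:
    """Return any artifact phrases found verbatim in the manuscript."""
    lowered = manuscript.lower()
    return [p for p in ARTIFACT_PATTERNS if re.search(p, lowered)]

sample = "In conclusion, outcomes improved across cohorts. Regenerate response"
print(flag_chat_artifacts(sample))  # ['regenerate response']
```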

Jordan Wilson [00:20:52]:

Yeah. And I think, Avi, that's a good point, because so many people will just blindly use something like ChatGPT or Google's Bard and kind of think of it as fact. Right? They say, alright, well, hey, if Google Bard or Microsoft Bing Chat gives me this response, it must be ready to go. It must be well researched. It must be ready for whatever I'm going to be using it for. So kind of with that in mind, I know we've talked about a lot, top to bottom, on the show so far. But what's your one piece of advice for people out there, whether they're reading recent scientific studies or maybe writing them? What is the one piece of advice you have for people to responsibly use AI, kind of regardless of where their output may be?

How to responsibly use AI


Avi Staiman [00:21:48]:

So the way I describe it is in three words: Wordle on steroids. Okay? That's what I call ChatGPT. For those who aren't familiar, Wordle was this game that got really popular about a year ago where you have to guess five letters and make up the word. And why do I think that ChatGPT is Wordle on steroids? First of all, what I said before: it is not Wikipedia Lite. Let's get that into our minds. Now, there are certain interesting research applications where you can actually add on references. Scholar AI is a really cool add-on to ChatGPT that gives you actually referenced answers. Really cool. But just GPT on its own, and some of the other generative models, are not referenced, and as we know, references are often made up, which is a big problem. So we need to not think of it as an information machine. That doesn't mean that it never has accurate information; it can. But if our expectation when we go into Wikipedia is that we're getting accurate facts, that should not be our baseline assumption when we're using a generative AI model. We should be using it for exactly what it says: language, large language model. So language can be used in all sorts of ways. It can be used to take a very long text and shrink it into a shorter one. It can be used to take a text and translate it into a different language, which is an application that I'm working on now in my original business, Academic Language Experts. It can even be used to generate, you know, a social post about a certain piece of research that was published. Those all take ideas, take words, and process them, or even generate words in a very creative way. But when we think about it as a language tool and not as a source of information or a creator of information, then I think we can be much better empowered, right? And that professor that I mentioned before would never have asked GPT if it had written those papers, because it's not a source of information. Had he asked GPT, is this written well according to the average standards for a student at a university, well, actually GPT might have been able to give him a semi-intelligible answer. I think part of the blame, by the way, rests with OpenAI. I think they kind of came out and were like, here, take this and figure out what it does. And I haven't seen enough educational materials on their end. And I know it's not just OpenAI; there's Google and Facebook and Anthropic and, you know, Hugging Face. From all these companies, I haven't seen enough educational resources and material around: here's what it can do, here's what it can't do, here's responsible use and irresponsible use. So I'd love to see more of that. But in the meantime, let's educate ourselves so that we're using it in the best way possible.
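
Avi's "language tool, not information tool" framing maps naturally onto code: every call supplies the text to be transformed, and the model is told to work only with what it's given. Below is a minimal sketch, again assuming the openai Python package; the prompts and model name are illustrative.

```python
# A minimal sketch of the "language tool" framing: every call transforms
# text the caller supplies, rather than asking the model to recall facts.
# Assumes the openai Python package; prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()

def transform(task: str, text: str) -> str:
    """Apply a language transformation (summarize, translate, ...) to given text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {
                "role": "system",
                "content": "Work only with the text provided. Do not add outside facts.",
            },
            {"role": "user", "content": f"{task}\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

abstract = "We measured cardiac function from chest radiographs in 500 patients..."
print(transform("Summarize in two plain-language sentences:", abstract))
print(transform("Translate into Spanish, preserving technical terms:", abstract))
```

Shrinking, translating, and reformatting all pass the same test: the facts come in with the input, and the model only reshapes the language around them.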

Jordan Wilson [00:24:29]:

Yeah. And I think that's a good point, because, you know, Avi, you kind of mentioned there are almost too many new tools, almost too much new software out there claiming to help. It seems like there's either a new large language model popping up weekly, or at least a large update. So, actually, real quick, just to give a plug: we're gonna share it in the newsletter today, so make sure to go to youreverydayai.com and sign up, because Avi's been dropping a lot of good names. You know, he mentioned Scholar AI, which is a great ChatGPT plugin for research. So make sure to go sign up for the newsletter, and we'll share it with you. But just real quick, tell people about AI Tool Up Tuesdays, a great free resource that you've created, which we're big on here at Everyday AI: a great resource for people to help write better research papers and know what tools are out there to help them.

A free resource for better research with AI


Avi Staiman [00:25:28]:

Yeah. So I put together AI Tool Up Tuesdays on a whim. I was inspired by a colleague from EO, the Entrepreneurs' Organization, who had done this in the marketing space. And basically the idea is, if we try to look at all the AI tools in the world, we get overwhelmed very quickly. But in my specific industry, academic research, there are a couple dozen AI tools which I think are mature enough that they can actually be used. Some of them are in the research discovery world. Some of them are in research processing or image production, like creating new images from scratch specifically for science. And the writing section is where SciWriter lies. So there are all sorts of different areas where AI can actually be really, really important and helpful. So what I did was put together AI Tool Up Tuesdays, which has, I guess I'd say, gone viral. We already have over 3,500 researchers from around the world who are registered for the course. It's eight sessions. Each session features three entrepreneurs, most of them academics or former academics who have built tools to address real, big problems in the academic workflow and in research. So, like this month, as you're showing on the screen, we're gonna be talking about research veracity and integrity, which is super critical: how do we make sure that research can actually be relied upon in a time when there's so much rubbish being floated out there? Like I said, it's entirely free. Each entrepreneur-researcher only has ten minutes to present, so it's really straight to the point, hard hitting, exactly what the capabilities of their tool are. And I think for anyone who's involved in research in any way, in their business or personal life, this is gonna be a can't-miss, because this will save you hundreds of hours, and I'm not exaggerating. The old-fashioned way of writing a doctoral thesis over five, six, seven years can be a super frustrating process; we want this course to really supercharge research. And like I said, it's free, so there's nothing to lose.

Jordan Wilson [00:27:38]:

Awesome. Cool. And, yeah, as a reminder, make sure to check out the newsletter. There was a lot of great information that Avi shared today. And like Dr. Muthana just said in the comments, the newsletter does go beyond just recapping the show. We have a lot of great insights and information in that newsletter, and we will include what Avi just talked about, AI Tool Up Tuesdays. So, Avi, thank you so much for joining us on today's show. We went all over the place, top to bottom, but I think it was important to talk about responsible AI and research. So thanks again for joining us.

Avi Staiman [00:28:10]:

Awesome. Thanks, Jordan. If anyone wants to be in touch or follow along, LinkedIn's a great place: Avi Staiman, just my name. Feel free to send an invite. I'm quite active on there, sharing my latest thoughts about AI in research. And, Jordan, thanks to you for this awesome podcast and for really making the best of AI available and educating us about what's possible.

Jordan Wilson [00:28:33]:

Absolutely. So what else is possible? Find out this week. We've got a great lineup; I think we have five speakers this week, every single day, Monday through Friday. So join us again, 7:30 AM Central Standard Time, live. Ask questions, just like y'all did today, with the experts we bring on the show, like Avi. Avi, thank you again so much, and we'll see you back again tomorrow and every day with Everyday AI. Thanks, y'all.
