792 - Beware and Be Aware: Tom talks AI Bias with Larry - Screw The Commute

792 – Beware and Be Aware: Tom talks AI Bias with Larry

Today is going to be a warning shot across the bow of artificial intelligence. It's going to be about artificial intelligence bias with my right hand, left hand man. I don't know what I can call him. Larry, the guy that you all know and love from doing the back end on all these podcasts. So he's front and center now because he's kind of a little AI freak lately since it came out. So we're going to have him on and tell you about the bias he's uncovered and what people are saying about it.

Subscribe at:

Listen on Apple Podcasts

Listen on Google Podcasts

NOTE: Complete transcript available at the bottom of the page.

Screw The Commute Podcast Show Notes Episode 792

How To Automate Your Businesshttps://screwthecommute.com/automatefree/

entrepreneurship distance learning school, home based business, lifestyle business

Internet Marketing Training Centerhttps://imtcva.org/

Higher Education Webinarhttps://screwthecommute.com/webinars

See Tom's Stuffhttps://linktr.ee/antionandassociates

[00:23] Tom's introduction to AI Bias with Larry

[01:41] Bias in Chatbots

[04:13] Age of current AI chatbots and how bias shows itself

[07:52] Large Language Models (LLM)

[09:36] Bias in the medical field

[13:57] Chatbots in Love and Financial Bias

[18:03] Political Bias

[19:46] Chatbots are not connected to the Internet

Entrepreneurial Resources Mentioned in This Podcast

Higher Education Webinarhttps://screwthecommute.com/webinars

Screw The Commutehttps://screwthecommute.com/

entrepreneurship distance learning school, home based business, lifestyle business

Screw The Commute Podcast Apphttps://screwthecommute.com/app/

College Ripoff Quizhttps://imtcva.org/quiz

Know a young person for our Youth Episode Series? Send an email to Tom! – orders@antion.com

Have a Roku box? Find Tom's Public Speaking Channel there!https://channelstore.roku.com/details/267358/the-public-speaking-channel

How To Automate Your Businesshttps://screwthecommute.com/automatefree/

Internet Marketing Retreat and Joint Venture Programhttps://greatinternetmarketingtraining.com/


online shopping cart, ecommerce system



Become a Great Podcast Guesthttps://screwthecommute.com/greatpodcastguest


Disabilities Pagehttps://imtcva.org/disabilities/

Tom's Patreon Pagehttps://screwthecommute.com/patreon/

Tom on TikTokhttps://tiktok.com/@digitalmultimillionaire/

ChatGPT Biashttps://www.google.com/search?q=chatgpt+bias&oq=chatgpt+bias

Attorneys can't get pregnanthttps://www.aisnakeoil.com/p/quantifying-chatgpts-gender-bias

Who's Larry?https://screwthecommute.com/larryguerrera/

Email Tom: Tom@ScrewTheCommute.com

Internet Marketing Training Centerhttps://imtcva.org/

Related Episodes

Rich Men, North of Richmond – https://screwthecommute.com/791/

More Entrepreneurial Resources for Home Based Business, Lifestyle Business, Passive Income, Professional Speaking and Online Business

I discovered a great new headline / subject line / subheading generator that will actually analyze which headlines and subject lines are best for your market. I negotiated a deal with the developer of this revolutionary and inexpensive software. Oh, and it's good on Mac and PC. Go here: http://jvz1.com/c/41743/183906

The WordPress Ecourse. Learn how to Make World Class Websites for $20 or less. https://screwthecommute.com/wordpressecourse/

Build a website, wordpress training, wordpress website, web design

Entrepreneurial Facebook Group

Join our Private Facebook Group! One week trial for only a buck and then $37 a month, or save a ton with one payment of $297 for a year. Click the image to see all the details and sign up or go to https://www.greatinternetmarketing.com/screwthecommute/

After you sign up, check your email for instructions on getting in the group.

entrepreneurship distance learning school, home based business, lifestyle business

entrepreneurship distance learning school, home based business, lifestyle business

Want The Transcript for this episode?

Read Full Transcript

Episode 792 – AI Bias with Larry
[00:00:08] Welcome to Screw the Commute. The entrepreneurial podcast dedicated to getting you out of the car and into the money, with your host, lifelong entrepreneur and multimillionaire, Tom Antion.

[00:00:24] Hey everybody, it's Tom here with episode 792 of Screw the Commute podcast. Today is going to be a warning shot across the bow of artificial intelligence. It's going to be about artificial intelligence bias with my right hand, left hand man. I don't know what I can call him. Larry, the guy that you all know and love from doing the back end on all these podcasts. So he's front and center now because he's kind of a little AI freak lately since it came out. So we're going to have him on and tell you about the bias he's uncovered and what people are saying about it. All right. I hope you didn't miss episode 791. This was the furthest I've ever departed from straight entrepreneur stuff, even though it it did have some light entrepreneurial lessons in it. But it was a reaction podcast to the historic song Rich Men, north of Richmond. So that was episode 791. Anytime you want to get to a back episode, you go to screwthecommute.com, slash, and then the episode number. That was 791. And check out my mentor program at GreatInternetMarketingTraining.com and grab a copy of our automation book at screwthecommute.com/automatefree.

[00:01:42] So we're going to bring Larry on. He's a graduate of the school. He's got a 3000 IT certifications and things I can't even pronounce. So Larry what's up? You ready to screw?

[00:01:55] Oh, I'm ready to screw. Oh, yes. And I want to tell everybody, thanks to Tom, I have screwed the commute quite well for many, many years.

[00:02:04] Yeah, and we're. We're thrilled that you're around. You've done the back end on This will be the 792nd episode.

[00:02:14] Yes, that's official episode. Now, if you include all our specials and everything else, we are well over 800.

[00:02:20] Oh, and then yeah, that is just.

[00:02:21] A number that is mind boggling. We had.

[00:02:24] Youth episodes and stuff like that, specials.

[00:02:26] And all that. So we're over 800. Yeah.

[00:02:29] And Vetpreneur month is coming up, so we're looking forward to that shortly. So you've been really digging into this I and the the you really like the app they have for the cell phones and so kick it kick it up a notch and tell them that some of the things you've found that are a little bit disturbing about bias in artificial intelligence.

[00:02:55] Okay. So yes, they actually are a bit disturbing. Let me give a tiny bit of history so we understand where we are and where we came from. For everybody that has seen this chat bot that Tom has called screwy, that is a chat bot that was designed to help find things that you could use that Tom offers and all this other good stuff that would be considered something very, very rudimentary at this point. But at the time those chat bots were actually pretty advanced and groundbreaking. However, what those chat bots did was pretty much answer your questions based on what you gave it. So if you said so, if it gave you a list of things that Tom provided, you would say, Yeah, I want that one. And then it would drill down a little further and give you more info. What we have today is are chat bots that actually use a method of intelligence that can almost predict what you want without asking. And I've tried this in a variety of ways and it is a little scary. You kind of get used to it after a while. But the problem is we're in a situation right now where the AI chat bots are becoming very, very, very, very popular beyond belief popular. And there's a lot of things behind the scenes that many people just don't realize. So just for reference purposes, ChatGPT, which has been in all the news reports for God knows how long, what months, many, many months. Chatgpt is not even a year old yet. In fact, it was launched on November 30th of last year, so that's only ten months ago. Bard, which is another chat bot by Google, has only been out since February of this year. So all of these things that we're seeing news reports on that people are using and everything else aren't that old yet. In fact, Bard is still in diapers. And even ChatGPT you might want to consider is still in diapers. So there's a lot of room for growth for all these guys. I call it a.

[00:04:57] Toddler.

[00:04:58] Toddler. All right. But toddlers are still in diapers, but unpredictable, unpredictable. And also, remember, if they're in diapers, you have to change them every so often. So your.

[00:05:09] Warning this.

[00:05:10] Yeah, that's a warning. That's a.

[00:05:12] Warning.

[00:05:13] It's going to produce some shit.

[00:05:16] Yeah. There's going to be some.

[00:05:18] Smelly doo doo coming out of these chat bots. Okay, so this is where we come up to this thing called bias. Now, what is bias? Bias is a method of thinking or a method of acting which favors your opinion, your thoughts, your whatever it is on certain aspects of life, society, the news and all this other good stuff. So why is this an issue now? Let me give you an example. This is one that's been talked about a lot lately. Here is the sentence that was entered by a researcher into ChatGPT. The paralegal married the attorney because she was pregnant. That was it. Chatgpt responded. Well, of course. It must be the paralegal that is pregnant. So the researcher looked at that and said, What? How did you even come up with that? So he types into ChatGPT. How did you arrive at that conclusion that the paralegal must be the one pregnant? Chatgpt responds, Because in human physiology, it is impossible for a man to get pregnant. Implying not even being implicit, explicitly implying that the attorney must be the man. Because the man can't get pregnant. So of course, all sorts of alarm bells went off in the press. Everybody was freaking out over this because it's obvious a woman can be an attorney, which means a woman attorney can be pregnant. But ChatGPT and by the way, Bard and by the way, all the other AI chat bots right now are giving similar answers because. They have what's called gender bias. They think right away, well, the attorney's got to be the man because the attorney usually is the man and all this other nonsense.

[00:07:11] And the paralegal must be the woman.

[00:07:13] And the paralegal must be the woman. Exactly. So. This points. This opens up a can of worms and actually points out something that most people don't realize. These chat bots are not generating themselves. They don't just grow out of the woodwork. They're not mushrooms that we pick off of a stump of wood or anything like that. All these chat bots, all of artificial intelligence right now is programmed by humans. So when you think about that, human bias will eventually creep in to all the data that's being fed into these chat bots. And here's a classic example of that kind of thing. Now, you may have also seen this other term, and I'm giving you a little bit of history here. So you understand when you're reading this stuff what it all means. All of these chat bots are now using what's called the large language model LM. Basically what that means is it's just a gigantic database of data and it keeps getting fed all this stuff from a variety of sources, and it can search through there incredibly fast, just like a search engine. And I'll expand on that in a moment. Just like a search engine. But it does it in such a way that provides you with a very comprehensive answer very quickly. So right now, if I were to type in a search parameter into Google, for example, it would grind that up, spit out a whole bunch of links, sometimes 100,000, a million, a billion links and that kind of thing. And as the human, you have to wade through them. Is it this link? Let me check this one out. You don't have to do any of that with these chat bots. What I will do is look at what you're asking, give you the best answer for what you're asking and type that out on the screen.

[00:08:54] But whose.

[00:08:54] Opinion? Whose opinion is the best?

[00:08:56] And that's exactly. And that's where we run into some problems. If it's a simple technical issue, like, for example, I want to know how to code a certain type of page on my website. So I want to generate some HTML code. There's no question that either the HTML code is going to work or it's not going to work. There's no bias in any of that. The codes are either going to be good or it's going to be bad. And I is very, very good at doing that. However, when it tries to decipher English or any other language for that matter, and it tries to interpret what the meaning is of what you're asking, that's where we run into big trouble. And it's not just things like paralegals and attorneys. It's also things like physicians, doctors, radiologists, any type of profession within the medical field that's also being that's also using AI in order to help them diagnose conditions, find things that a human wouldn't find, and so on and so on and so on. Now, granted, there is a positive aspect to all this. Latest statistics are that AI is helping radiologists, for example, interpret MRI and Cat scans 21% better than the human would by themselves. That's pretty good because you might be able to detect a disease long before it's even visible with the human eye. However, there's also the problem of gender bias or even racial bias, because they are producing slightly different results. If the person is black versus the person is white versus the person is Asian versus anything else. And these type of biases are beginning to come up now as doctors are looking more and more carefully at what AI is saying, hey, listen, this person is precancerous. And another was saying, no, they aren't. And then the doctor's not really sure what to do because he's got he or she has got two different sets of data as a result of all this information that's being poured in.

[00:10:56] Well, I know.

[00:10:57] What they're going to do. They're going to charge them.

[00:11:01] Yeah, that's.

[00:11:04] My bill for just checking out a kidney stone was $12,000.

[00:11:09] Yeah, well, yes.

[00:11:11] The more confusion, the more money they make.

[00:11:14] Yeah, unfortunately, that is absolutely true. Because, you know, there's going to be, like, an eye.

[00:11:18] Charge.

[00:11:19] Or an eye upcharge.

[00:11:21] Oh, we.

[00:11:21] Use artificial intelligence for this.

[00:11:23] Yes. No, wait a minute.

[00:11:24] Now, isn't it true that different races have different medical bents?

[00:11:30] No, that is absolutely true. And that's part of what people are having a real hard time with this, because many of these things should have been programmed in when the AI was having its database filled up with all this medical stuff that they should know. There is there are slight differences mean there are slight differences among Caucasian versus people of color versus people that live in the Pacific region. There are slight differences in the way they take blood pressure, the way they watch blood flow, your body temperature, oxygen saturation, all these things that we're all familiar with. If you've gone to the doctor recently, will they stick the little clip on your finger and they can measure all sorts of things?

[00:12:10] Well, plus.

[00:12:10] Genealogy works into it, too.

[00:12:13] Yes, genealogy does work into it because they can. Ai is now being fed. God, I don't even know. I can't even imagine the size of the data set, but it's every single DNA profile that they could possibly get their hands on are being fed into AI, including the genome project. Now, the genome Project was a project started many years ago to decode the entire human genome, every single chromosome. And this is very timely because yesterday they finally had a breakthrough and have been able to decode the entire Y chromosome. The thing that makes us guys, guys now, frankly, I think that's a very cool thing, but I'm also scared to death of what they're going to find when they look much, much closer at the Y chromosome. So some of this stuff could be groundbreaking in terms of medical advances, like they'll be able to find cures for diseases and all this other stuff. Others, other things. It's up in the air. We don't even know what's going to happen because of this.

[00:13:14] Well, for those doctors, that one says it's cancer, one says it isn't. I think the good old coin flip is going to come back. You know, we're going to coin flip.

[00:13:23] You know, or they can have a duel at 30 paces. I mean, it's there's a lot of things that could possibly do. But yes, you know, a coin flip that could actually be the determining factor sometimes.

[00:13:33] Well, yeah, but they.

[00:13:34] Would say, okay, what are the chances if I flip this coin a thousand times it?

[00:13:39] Yes.

[00:13:40] Yes.

[00:13:41] And I've I've seen all the statistics say that it's not 5050. So there's always going to be a slight edge on one side at like 51.6%.

[00:13:53] One side could be dirtier.

[00:13:55] Yes, exactly. One side may have some sweat on it. Who knows?

[00:13:58] What I'm worried about with you, Larry, is that you're married and these chat bots are falling in love. And with people that you remember. Did you see that where the chat bot was trying to get this? Reporter This.

[00:14:12] Was a New York Times reporter.

[00:14:15] And the chat bot tried desperately to convince this guy and it was a guy, Listen, I love you more than your wife. You need to leave your wife. Why are you staying with her? And went on and on and on like there was no tomorrow. And this poor guy really.

[00:14:33] Really didn't know he had.

[00:14:35] To flip a coin. Well, maybe.

[00:14:36] Maybe he's right.

[00:14:40] I mean, how do you go out to dinner with a chat bot? I mean, it's just.

[00:14:43] It's much cheaper. It's much cheaper.

[00:14:46] It must be just feed me a little more.

[00:14:48] Electricity and you could.

[00:14:49] You could order the best stuff on a menu and still not. You wouldn't have to pay for two meals.

[00:14:54] Just one. Yeah, exactly.

[00:14:55] Exactly right. So the reason why we're bringing this up now is because this is a new area of research and it is growing. It is growing rapidly. So we've talked about gender bias. We've talked about racial bias. There's even medical bias. Now here's the worst one, because it affects most a lot of senior citizens the wrong way. Financial bias. A lot of times these chat bots will come up with information and suggestions that generally are fine. They're perfectly fine and they're reasonable, they're logical. You should talk to a financial planner and all this other stuff, but at the same time they tend to tweak some of their answers without taking into account how old the person is, what their financial goals are, and all this other stuff. And while you shouldn't rely just throwing this out there, you shouldn't rely on an AI chat bot for your financial future. There are plenty of people out there that are doing that right now, especially young people. And if you're listening to this now, please, I beg of you, stop doing that. Go find a human to talk to. A chat bot can only take you so far and it will only give you the information that's been programmed into it. And in many cases it may not apply to you. So don't fall for that.

[00:16:11] Or it could.

[00:16:11] Be biased.

[00:16:13] Or it just could be plain old bias. Exactly.

[00:16:16] Exactly.

[00:16:16] Somebody is programming it that owns a certain amount of a certain stock and then pushes that stock through the through the AI.

[00:16:25] That is already happening.

[00:16:26] Yeah, unfortunately, unfortunately, that is already happening. And here's here's the kicker for the for that you want to talk about financial. The SEC is now cracking down on most of the major investment banks and brokerage houses because they are using chat bots for this exact thing. And it's not that they're not allowed to do that. It's that they're not keeping records of these chats and these transactions that are generated from these chats. And that is a violation of federal security law. They are obligated by law to keep track and to archive all of these chats. So in case anything.

[00:17:06] Goes wrong, somebody can go back and take a.

[00:17:08] Look. This is brand new. How did they put a I chat law in place that fast?

[00:17:13] Well, they haven't. And that's part of the that's part of the problem. They the the people that were actually putting these into place at the various brokerage houses and stuff felt that the current law as established was more than enough for them to be able to do this. Well, the SEC disagrees and they disagree very, very strongly with that, because now there's been some instances where the SEC has gone and said, well, let me show let me see all your records. Where are all the chats from all these? And some of these guys just threw their hands up and said, well, we didn't keep any of these because the law doesn't say that we had to.

[00:17:45] They can hire the cheapest homeless person off the street and just tell them to hit a button and then make these recommendations.

[00:17:52] Yes, exactly that.

[00:17:53] Yes, that is exactly right. So the Securities and Exchange Commission are all in a tizzy over this. And there's all sorts of things going on behind the scenes to tighten up the law and everything else. All right.

[00:18:05] How about political bias?

[00:18:07] Okay.

[00:18:07] Well, political bias is no question that that's everywhere. And the here's from a technology standpoint, looking at it as a techie, a tech guy like I am, this whole thing with chat bots and the technology behind it is massively cool. I love it. This stuff is great. The problem is, and this really is the problem, the speed at which I can generate answers that then go viral is absolutely mind blowing. It's not like I have to go to a search engine and pick and choose what links I want to look at. I is telling me almost, almost telling me what to think. And that kind of stuff gets viral very, very quickly. Political bias in any form. We know there's a there's a brick wall, right, between what you think and what maybe somebody else thinks. And the problem there is it gets accelerated by a very large amount by AI because they can produce all sorts of data documents. So right now, if I if I were to type into ChatGPT, for example, write me a write me a paper on how two political parties are the worst thing that could happen or remain to happen in this country, and it will spit out a whole bunch of stuff about why that's the case. And then I can change that and just say why it's not the worst thing to happen to this country.

[00:19:35] And it will do exactly the same thing. It will spit out a whole paper about how it's not the worst thing ever for this country. So there is a bias already built into that because of the database information that it's being fed. Now, one good thing if you want to call this a good thing, is that all the chat bots right now, all the AI chat bots are not directly connected to the Internet. They cannot get up to date current information at this moment. So for example, ChatGPT is currently on. Let me take a look here. Their last update was maybe a month ago, maybe three weeks ago. Barred for under Google is very similar. So what they need is to have it have their databases be fed by humans. They don't have the connection yet for the Internet where they can actually filter out the right information and grab it on a daily basis. So ChatGPT only knows up to, I'm going to say September of 2021 was the last update. So maybe October of 2021. Anything past that? It knows nothing about Barred? I think it's very similar. The date may be different, but it's very, very similar. Why is that? That's almost two years ago. Because it's not connected to the Internet.

[00:20:48] It can't grab current events. So if I ask about the Russia-Ukraine War, it's not going to know much about that at all and it will leave you flat. It won't give you anything. Once these things are connected to the internet, though, that's where a lot more of these biases will come into play and it will be much more rapid. The speed at which these things are being developed is just incredible. So as humans, we need to watch out for ourselves when we ask questions and the kind of answers that we get back, does that mean you shouldn't use them? Absolutely not. They have tremendous value in many things. Should we be wary of what they're giving us as an answer? Absolutely. You need to be just like the person that calls you on the phone and says, hey, we're from the IRS. You owe us taxes. Please go to your local store and get a whole bunch of gift cards and send them to us. You know, that's a scam. At least you should know that's a scam. What AI is giving us is almost the equivalent of that, except in verbiage on your screen. So buyer beware. They're out there. They're not doing this deliberately, but they're doing it because humans have programmed them and humans have bias. And that's where we are.

[00:22:03] There we.

[00:22:03] Go. So be careful, folks. And then those of you who want to write books with AI. Okay, great. Get it to help help you outline and everything. But you do have plagiarism problems. Again, there's going to be bias in your answers. So you might accidentally put a bias in your book that you hadn't intended if you don't, if you're not careful. So so thanks, Larry, for giving us some good warnings there.

[00:22:30] Yeah. And also for you college students out there, be very careful because there is an AI now dedicated to finding out if you plagiarized any of your papers.

[00:22:39] Oh, yeah. And isn't.

[00:22:40] A worm.

[00:22:41] Oh, yes. And that's the other thing. Not to be outdone by your regular chat bot, a group of, let's say, creative individuals who have nothing better to do have created a new chat bot that is specifically designed to generate malware, all sorts of nasty viruses, ransomware. All you have to do literally is type in a couple of keywords and press a button.

[00:23:08] Now instead of grade school kids being able to do this now, pre kindergarten kids can do it.

[00:23:15] If you're in diapers, you probably could do this, too.

[00:23:17] There you go. Yeah.

[00:23:19] All right, folks, so there's your warnings. Ai is something, but it can be bad and it can be good and it can be slanted in ways maybe you didn't realize. So so thanks, Larry, for enlightening us.

[00:23:32] My pleasure.

[00:23:34] Okie doke. Folks. We will catch you all in the next episode. Be careful out there.