Pedigree to Disagree

The Battle of Voices: AI vs. Human Charm

Eric Seaborg Season 1 Episode 2

We'd love to hear your ideas on a future topic

Can artificial intelligence capture the essence of a human voice? That's the question we grapple with as Eric and Jacqui face off in a spirited debate about AI-generated voiceovers for podcast introductions. Eric, advocating for sleek branding and professionalism, stands in stark contrast to Jacqui's call for authenticity and the irreplaceable charm of human imperfection. Through their passionate discourse, we venture into broader definitions of AI, drawing from historical and modern sources like ChatGPT and the Oxford English Dictionary, and even touching on AI's unexpected early 20th-century connections.

Think of generative AI as your personal assistant, effortlessly bringing your creative ideas to life with remarkable speed and efficiency. In this episode, we explore the daily applications of generative AI, weighing its potential to revolutionize creativity against the risk of losing the essential human touch. From aiding a child's entrepreneurial dream to the consequences of over-reliance on AI, we discuss how to strike the right balance between embracing technological advancements and nurturing individual skill development and critical thinking.

AI isn't just transforming creativity; it's reshaping entire industries. We dive into AI's sweeping impact, from business innovations to artistic creations, and the urgency of transparency and explainability in AI tools. Referencing Goldman Sachs' predictions about job automation and the ethical concerns of algorithmic transparency, we underscore the need to maintain human oversight and societal parameters. Through personal anecdotes and reflections, we aim to deepen our understanding of AI's multifaceted role in our lives. Join us on Pedigree to Disagree as we bridge gaps in knowledge and perspectives, ultimately fostering a more nuanced grasp of AI and its implications for the future.

Speaker 1:

Welcome back to Pedigree to Disagree, the podcast where family intersects with society. I'm Eric Seaborg, and on this episode my daughter, Jacqueline Palialanga, and I tackle the issue of AI. Because we are a long, long way from really understanding what AI actually means, we decided it would be a good topic for us, since we never really took the time to learn how we both felt about it. After listening, I'm sure you'll agree we struggled at times to find common ground, demonstrating our discomfort with a topic we know very little about. So thank you again for stopping by, and we hope you enjoy our chat. All right, episode number two. Yep, and we are talking about AI, artificial intelligence.

Speaker 2:

Yeah, and why do we think this would be?

Speaker 1:

Because you were the one in that conversation who said this should be our second topic, right? It started with you doing the introduction in the first one, reading it, which was great, and then I wanted to do just a voiceover intro, using, you know, one of my AI software voice platforms, to just introduce the title of the podcast and the names, like most podcasts.

Speaker 2:

Well, I wanted to just interrupt on that, because you said "like most podcasts," and that's where there's an assumption that your experience with podcasts that you listen to and my experience with podcasts that I listen to are the same, and they're not, because most of the podcasts that I listen to don't do an AI-generated introduction. It's usually a clip from the podcast itself, then they go into their actual thing, or they do a live introduction right on the actual video. So I think it was that debate that brought us to this topic, correct?

Speaker 1:

Absolutely yeah. So that's actually a good starting point. With me, it's all about branding and marketing and just having it sound more professional. With you, it was deeper than that. It was the artificially introduced title rather than real people doing it, and that's what got us onto this.

Speaker 2:

On something like this, my logic was: why do I need a fake voice to introduce something that is literally my voice, speaking for the next however long it is?

Speaker 1:

Yeah.

Speaker 2:

It just doesn't make sense, because people would essentially be coming in to listen to a podcast with me speaking or you speaking, so why do they need to have an extra voice in there? It seems like it overcomplicates things in a way that, to me, felt superfluous, and so it brought us to the question of what is AI? What is it? So I was curious, Dad: what is AI to you, or how does that match for you?

Speaker 1:

Yeah, it's funny because... sorry, I got the cat here on my lap. Of course, this is what Bossy does, right? Just gets... yeah.

Speaker 2:

I love it. That's my favorite part.

Speaker 1:

And then lays down in front of me. Oh, goodbye. So, AI. You know, it's funny, because whenever I think about AI, the first thing that comes to mind for me is that it's been around for a long time, and the way it's presented today is different. I think of AI in more of a simplistic way, which is a data set of information that the computer actually thinks through to provide solutions to whatever you're asking. That's a simple way of looking at it. The AI is only as good as the human element that populates the data set. Now, that was like an older version; that's the way I looked at it. So AI to me has always been about taking whatever data set you have and allowing the machine to do the forecasting or the projections, and each time it does that, it learns something new. So that, to me, is AI in a very, very basic framework. What about you?

Speaker 2:

Did you know, in the early 20th century, AI was more commonly used for artificial insemination? So now...

Speaker 1:

No, I didn't know that.

Speaker 2:

Yeah, that's what it meant. And then it morphed, sort of in the mid-20th century, into what we now, you know, commonly mean: artificial intelligence.

Speaker 1:

Feeding it into ChatGPT, I just simply asked the question, you know, "What is AI?" It says AI refers to the development of computer systems that can perform tasks that typically require human intelligence. These tasks include learning, reasoning, problem solving, understanding natural language, perception and even creativity. It gets very technical in the definition and very long, and I did also pull it up on Claude, which is the Bing version.

Speaker 2:

Is that Bing's AI?

Speaker 1:

Yeah, yeah.

Speaker 2:

And there's quite a few. They all have names.

Speaker 1:

Yeah, and this was shorter, but it touches on the same things: learning, reasoning, problem solving, perception and language processing. Finally, I just did a Google search, which is kind of almost redundant to ChatGPT, and that just says artificial intelligence is a branch of computer science that uses algorithms, data and computational power to create machines that can perform tasks that typically require human intelligence. The commonality in these three definitions was also about speed, you know, the ability to pump it out quickly and accurately.

Speaker 2:

So that's that.

Speaker 1:

So what are your definitions?

Speaker 2:

So you know, I did start with the Oxford English Dictionary definition. The first half of this definition made a lot of sense. It says the capacity of computers or other machines to exhibit or simulate intelligent behavior, and that's where I kind of learned that this term, artificial intelligence, cropped up in the 1950s. So I thought that was interesting. But I wanted to look at the word origin. I don't know if you ever do this, but I am always fascinated by where these words come from, because in this case we know that this McCarthy, who was a researcher in the 1950s, and I forget his first name, I want to say Joseph, but that's not the right McCarthy, this guy is the one who coined the term artificial intelligence, and so I kind of really wanted to know that. So I think there are lots of etymology places to look, but I just kind of did a search on a most basic level. And I don't know if you knew this, but artificial means not natural or spontaneous, and it's from the Latin word for art, which is of or belonging to art, and then the facial part, where the F starts in that word, is from a word like facere, as in facade, which is to make. So, like, a maker of art. So I thought that was really interesting, given the music background and, like, transitioning into this new career.

Speaker 2:

But when we started to talk about AI, I was having a real big emotional response to the use of AI in something like the podcast here. So, to let that soak in: artificial means not natural or spontaneous. I thought that was really interesting, and then the connection of that to art. Then I wanted to look up what is the origin of the word art in general, and these all generally come from Latin origin, and it's a skill as a result of learning or practice, and in the Middle English sense it became human workmanship as opposed to nature or naturally formed.

Speaker 2:

So it's almost like if we generate it, even though we might be a product of nature, we're not considered natural. So, like, if my cat generates a poop and I call it a work of art, that would not be artificial. But if I generate something and call it a work of art, it essentially falls under this definition of artificial. It's almost like we are the opposite of nature in a way. But that's not how I feel on the inside. I feel like I want to be as close to natural things as possible.

Speaker 2:

So for me that really gets at the core of what I think was bothering me about using an AI voice to start the podcast. That feels like I'm adding something that is not naturally needed. I think I equated it to, like, if I were to give you a concert right now, you know, on ukulele or something, and I was coming in as a musician and you were only hearing my voice to sing or to play ukulele, then I could justify having an AI voice at the start of it, because it's not like you were coming to listen to my speaking voice. But to have a voice that's not natural introduce what is as close to a naturally occurring conversation between us, which is where most of our topics come from, felt quite opposite for me and created an emotional response, which is interesting.

Speaker 1:

Yeah, I think in the industries where it's used, there are times when I don't like AI and then there are other times where I prefer it, and that's so subjective with everybody.

Speaker 2:

Right. So that was all just for the definition of artificial. So I wanted to get your take on this one. I wanted to then look up intelligence, right, what does that mean? And so, kind of same source, it says, and I'm going to do a direct quote here, it means highest faculty of the mind, capacity for comprehending general truths, and another secondary part of that is understanding, knowledge and power of discerning. So intelligence means the highest capacity of your mind: understanding truth, what truth is, and you're able to discern, maybe, what is truth and what is not truth, or what is real and what is fake, or what is natural and what is not natural, or whatever the things are that you're trying to discern, you know. So that was very interesting. Do you have any comment on that part?

Speaker 1:

No, I think that's a pretty comprehensive definition, without it being so detailed.

Speaker 2:

Then I was curious, because I feel like the term that has been tossed around that's maybe gotten me more triggered in the last couple of years, like, you know, having that emotional response, and I'm kind of glad we're talking about it because it's helping me work through what it is with me, but I think it's the term that I'm hearing now: generative AI. I wanted to understand that a little bit. So generative derives from the word generate, and I don't know why I didn't think about this, and maybe other people do, but it literally means offspring, or it's like the product of something, you know.

Speaker 2:

So I'm thinking about the word gene and I'm like, oh my gosh, that makes total sense. Or a generation, and the product of something. But it implies that it's the product of a natural being, like my offspring is my daughter, or if my cat were to have kittens, right, that would be the next generation of her. So it implies that there's some sort of living being, or maybe humanistic, component to this. So that maybe kind of helps me with that part, coming from the artificial, where I was feeling like we're in direct opposition, but this is supposed to almost apply some sort of connection to the human world there, a little bit.

Speaker 1:

Yeah, and I also saw a very interesting one-liner. It said generative AI is recycling what is already done, and I can argue that that would be possibly appropriate, and then I can certainly argue that it wouldn't. So when I think of generative AI, I'm always thinking of something that can be built very quickly just based on your words or your ideas, and it grows from there. You can keep going in and tweaking it until it matches what your imagination is trying to present. So it replaces true talent: to be able to draw, to be able to be a musician and produce something that you normally would not be able to do unless you practiced at it or developed the skill over time. So it's saving time. It's like sitting down with a personal assistant and just saying, okay, I'm thinking about this, this and this, and you're dictating it and dictating it and dictating it. Except the difference is that assistant then would put it all together and spit it back out to you. Oh yeah, that's what I wanted, that's what I need. That's really what I was thinking in regards to generative.

Speaker 2:

Yeah, I get that, except I'm kind of thinking, like, the word generative literally comes from the word for offspring, especially because this podcast is about generational differences. So I think this is really hitting at a core. But, like, I keep thinking about the model of my child, right, my daughter. She's an adolescent, right? So she's trying to find her own way. She's not a facsimile of me, she's not a duplicate of me; she's her own being, with her own thinking. She has her own emotions, her own interpretation of experiences.

Speaker 2:

I feel like we're trying to take away all the things that make us human, that make our life unique and interesting, for the sake of saving time. But you know, I always tell my students, like when we're trying to get to a concert or get to a performance, it's the process to the end result that is the meat and potatoes. It's the work that you have to do. It's not the end product that is the thing, even though the end product is the thing that we always celebrate in society; it's the work and how that transforms you over that time. But if you're having a machine do that for you, then you're losing that.

Speaker 1:

Yeah, I mean, I get it, I understand it. I also look at it from the perspective of a person like me. I would love to be more innovative and be able to create more, but a lot of times I don't know where to start. To me, it builds the foundation. When you talk about generative, it's generating off of that foundation, and that's where I was coming from. The angle that I always see as a positive angle is, if I want to write something, I'm not going to take something that's been generated by a chat and plagiarize it, but it'll start to put things together, and for me, it helps me organize my thoughts and go down a path that makes more sense.

Speaker 2:

I'm thinking about Patricia last year. She wanted to do this slime business, and I have very little business experience. It is something I know very little about, and it would be a ridiculous amount of work for me to try to learn that so I could then teach her. So she came to you and you guys used generative AI, right? Didn't you use ChatGPT?

Speaker 1:

We asked the first basic question or, as they say, prompted it. So you ask a prompting question, something basic, and then you build off of it, and if you use ChatGPT or Claude or any of the other ones, usually they'll say, is there anything else? And you don't have to reinvent the phraseology and all, you just keep adding to the conversation. So that's what we did. I love the fact that you're actually having a conversation with the software.

Speaker 2:

Right. But what, in theory, becomes problematic to me is, if the language of the AI is limited, then your output is only going to be based on that language, right, which is true in any situation. But I'd rather spend that time trying to find common language with another human instead of with a machine.

Speaker 1:

That's where we differ, because to me, your language options are greater than if you're sitting with another human, because the data points, or the data set, are larger than your brain and, in a sense, have more in them, because from my perspective it's a combination of millions of brains of people who have utilized it and are feeding it. Now, the argument against that, of course, is it can sway you in a direction that possibly you didn't even know existed. I mean, there's good and bad with everything.

Speaker 2:

It's that discernment piece, right?

Speaker 2:

So when we were doing that, or you guys were doing that business model for the slime business, it kept coming up with, like, a term for cloud slime that she didn't agree with, and what I appreciated from my, you know, at the time 11-year-old was that she could discern what they meant with that, but she didn't like the definition, so she used her own thinking to say, well, I want to tweak this here and I want to tweak that there, and whatever.

Speaker 2:

What I'm concerned about is that young people need to learn the skill of discernment, which I think was in that definition of intelligence, and my concern is, if AI is being used with people who are not in a mode of discerning, then they can't make that decision. And so sometimes I think AI, and not just AI, but this concept of putting your information into any sort of algorithm that then outputs more of what you're looking for, which is what it's supposed to do, and it can be helpful, but I worry that you kind of get into this feedback loop where there's not new learning or new understanding coming, because you don't have a contrast to then make a more informed decision on something. And so I think that discernment skill is something we need to consider. How widespread is this AI? And, like, what are some of the headlines you're noticing in your work and your personal life, and some of the implications for you?

Speaker 1:

Well, everything, and it kind of ties in with your occupation too. I mean, everything that I do is higher education oriented, with research. Every day, in a lot of the publications, the term AI will pop up, so it's not going away. In higher ed, in the very beginning, it was associated with essay writing and cheating and plagiarism, and, okay, we're developing software that detects that and everything else. But now when you read articles about AI in higher education, it spreads out into many, many different things: facilities management, finance, enrollment. And that's where we started the podcast today: it means different things to different people.

Speaker 1:

But it is such a common buzz term or phrase that is used now for everything. I mean, every commercial now you're seeing on TV, there's something about AI, or, you know, they throw it in there because, as Zuckerberg calls it, it's a connector. We were watching a documentary the other day, and that's the one thing that he was emphasizing with his new company: that it is a connector of other platforms, of other ideas. And so he's obviously very strongly engaged with AI.

Speaker 1:

I just think that AI is something that will advance through media headlines quicker, but it will confuse a lot of people as to what it really means. I mean, how's it really used? And that happened not so long ago, when it was popping up in business: well, how's it used in our industry? If you just sit back and say, well, it can write a paragraph for you that connects with everything and anybody, that's just a very simplistic way AI is used. Now it's used in so many strategic ways. It is embedded in education. I don't know, you'd have to tell me, down on your level, in the primary sector of education: will it become, if it's not there already, more incorporated? Because, again, if we say, I use AI, somebody can say, well, explain to me what that means. What do you mean, you're using AI? Because, again, it's utilized so differently.

Speaker 2:

Yeah, I think you're getting at one of the issues. I think we have to have a clearer definition of what it is and the scope of it. The way I see it, since I've started this grad program, I keep thinking about when I would write a paper and I would use spellcheck, or Clippy would show up.

Speaker 2:

Remember Clippy from Microsoft? Yeah, whatever happened to good old Clippy? Those are like a version of AI in a sense, because you put in the rules of grammar and spelling and you allow it to help guide your editing process, which, correct, has come a long way in that sense. And I think that makes sense to me because the scope is narrow. I think the rules for something like that work because, yes, our oral language changes, but essentially our written language is slower to evolve, so it's easier to change parameters. Like, if we were to all of a sudden have a new word that becomes part of acceptable written usage, like even AI, right, when that word became part of the dictionary, no longer would it get the red squiggly line, but it would get, yes, this is part of the definition. So it's easier to manipulate the parameters of an algorithm for something that is a slower-moving thing.

Speaker 2:

My concern is that, especially in the music world, I have been concerned about AI-related things for many, many years. I remember, in like 2008, seeing this documentary. I think it was on YouTube, or I saw a clip on YouTube, and I think the documentary is called Before the Music Dies. The section that I'm remembering is this little section that they called How to Create a Sexy Pop Star, and around then, like, I think, maybe '05 or '06, what was his name, T-Pain, I think, was the one that used the auto-tuner and he would sing with it, or, like, back in the late '90s, Cher used Auto-Tune and it was like a huge hit. And what that documentary did was show just how pervasive something like auto-tuning is, again taking parameters of where a correct pitch would be and adjusting that for a human, and then the different ways that they might use it.

Speaker 2:

In this example, they took some girl who was maybe an average to below-average singer, singing a little bit off tune at different spots, but she was a model, like, that was her primary art form, and because they could craft this world around her in a music video that made her look very appealing, and they changed some of the notes in the song to, you know, match the pitch a little better, all of a sudden she sounded like, you know, she was a hit, and the music was a hit, and really very little of it was actually human created or generated. I think they had some songwriter write a little snippet of a song in five minutes, and that's how they created it. And it really was eye-opening to me about how pervasive something like AI is in the music industry. So I think I'm a lot more fearful or wary of where it is going in that realm, because the arts industry, although it may seem like it's a booming industry with lots of money, I would say if you were to look at people that are business owners versus people that are artists, you probably have a higher percentage of business owners, or people in an upper level of a business, like of a company, doing fine financially than you do artists.

Speaker 2:

And I would say that could be related to athletes too, right? Like, you have to be in that super top percent, otherwise it's not a lucrative profession in a sense, and we could get into a whole other semantics with it. But I just... you know, what is it that draws us to an artist? You introduced me to the Beatles as a kid. What drew you to them, you know? And it wasn't their auto-tuned performances. What was it about them?

Speaker 1:

Right, it was their talent, of course, their writing, their singing, all kinds of things that were generated by them and blended together after, you know, hours and hours and hours and hours of work and labor and failures. You know, that's the other thing. We have a tendency to forget that innovation is based on risk and failure.

Speaker 2:

Right, maybe we need to see more failure in things.

Speaker 1:

Well, we are. I think we are. You know, one of the things I didn't show you before we spoke was this article. It was written by a guy named Mike Thomas and published on the 25th of July of this year, and the title of the article was 14 Risks and Dangers of Artificial Intelligence. He does a great job of spelling out a lot of things that I think people are fearful of, so let me go down and just go through a couple and have you comment on them. The first one on here is lack of AI transparency and explainability. This, I thought, was an interesting thing that I never really thought too much about, which was the lack of explaining what AI algorithms are being used, right? In this society, we all want transparency now; it's a big thing. He's got that right up there as number one. Have you ever thought of AI and the lack of transparency, and basically how it generates its information? I think that's really what everybody talks about in a roundabout way.

Speaker 2:

And the explainability, like, what is the purpose of this thing? Because, like I've said to you before, I have this real thing in my core about: if there's not a reason for a rule, or a way to hold somebody accountable for a rule, why does it exist? That's one thing. It's the same as when I'd have to fundraise in some previous aspects of my career. If I can't say what that money is directly going towards, what is the purpose of raising the money?

Speaker 2:

And it's the same thing. If I can't say what the purpose of the tool, in this case the AI tool, is, and if I can't say what I need it for specifically, then why use it? If I can't say how it's going to enhance or better or speed up whatever it is, you know, something that I need to do, then I don't find it useful in my life in that moment.

Speaker 1:

Well, and I agree, and that's where we agree, and I think that's the danger of AI. You know, you go on a website, and you may go on the same website all the time, but what's the first thing you see? Accept our cookies. Yep.

Speaker 2:

I always say reject, not accept. Yeah.

Speaker 1:

Reject or modify, right? Because, you know, I'm glad that we can have a say on that.

Speaker 2:

I hope that they follow it, but it's like when I hit unsubscribe on an email and then it still keeps coming.

Speaker 1:

And to me that's deception. And there are obviously stories of Facebook and Google and all these other companies where employees, whistleblowers, you know, will say that they're collecting data for the main purpose that's advertised, but they're really using it for something else, and so that's the transparency that has everybody worried, understandably. Yeah, you could put any kind of software in place to think for you, I'm sure, but what's it pulling from? Where's it coming from? Job losses due to AI automation. We can go through any industry and pinpoint where AI could replace this, replace that. This article says that Goldman Sachs predicts that 300 million full-time jobs could be lost to AI automation by 2030.

Speaker 2:

If they take over, but at some point there's a ceiling for that, whereas if you use humans, there's not a ceiling in creativity, because we continue to grow and evolve. A machine can only grow and evolve based on the parameters that it's already been given. We haven't hit the ceiling of our creativity yet.

Speaker 1:

But you can argue that if machines now are capable of learning (machine learning, which is the concept), then the more innovation we feed it, the more it will innovate itself.

Speaker 2:

It may, but I also think you're missing a whole component about emotions.

Speaker 1:

Well, yeah, no, I agree with the emotional side of it.

Speaker 2:

Right, but all those things, I think, are not in it, and so we could say, in theory, I think, if you look under that lens, yes, but I think the ceiling is you're going to lose the emotions. Did you ever know about ELIZA, this robot? In the '60s or '70s it was like a form of therapy, and I think they kind of spoof it on Young Sheldon, and it, like, talks back to you, and I think in Young Sheldon it was, like, my parents are fighting, and at some point she ran out of words to say because she wasn't really listening to what he needed help with and responding to that. There's a whole reactive component that I haven't seen replicated in AI that we have as humans. It gives me hope that, like, we can really find those parameters, we could still work with it. It seems new because it's got more capabilities, but we've had this concept of machines helping us, tools helping us, for almost the entirety of mankind, I think.

Speaker 1:

You know, I go back to one of the early versions of human AI, the movie Moneyball. The guy who plays with Brad...

Speaker 2:

Pitt.

Speaker 1:

He's the thinking guy, right. He comes in and people would say, well, no, that's a little different.

Speaker 2:

But in essence, it's the same thing. What he represents is a machine, right?

Speaker 1:

So in that industry it works. In the industries and occupations that you like to cite, like you just cited there, I agree a hundred percent.

Speaker 2:

I was thinking, when you said Moneyball, it was making me think about... did you ever see the movie, and I think it was a book, Hidden Figures, about the groups of women that were human computers at NASA in the 1950s? No? Okay, well, that is literally about the earliest, what they were called, human computers, and that's where this AI kind of started.

Speaker 2:

And I was just thinking a lot about, if you were arguing that humans were doing that, and so then when they created computers to sort of help make that process go faster, and then those people maybe became supervisors of that machine or whatever, that seems to have some real benefits, right, in analyzing, like you talked about analytics, in seeing trends, in predicting, potentially predicting. So I think you're right, there is some real benefit to that kind of morphing. So, to our third topic, which is, like, what happens next? What do we do with all this? Because we kind of know where we stand on it. And I just kept thinking about what are, like, the larger parameters that we really need to be talking about in this. So I was curious: do you have things that you think we as a society, or we in the generations younger than you, need to be thinking about as this AI rolls forward? Because it's going to roll forward. We can't stop it, it's going to happen.

Speaker 1:

Can't stop it, and there are great industries, or industries that have been using it forever, effectively, and you want that. Like the aviation industry, right? You need automatic pilots, you need all that, you know. It's interesting, I think you and I would be on the same side of the fence regarding social issues, lack of privacy, transparency; we talked about all that stuff. I think where we are running into problems, and obviously we all know we're running into problems, is because of social media now and the inability to control it and keep it in line with what it should be utilized for effectively. You know, what are we teaching now? I mean, my concern anytime with technology is that it takes out that human element. I don't think that argument is going to go away. I do believe this argument will continue and continue, and it should, rightfully so, because what we need is regulation in some way. Now, that makes everybody shiver when you talk about regulation.

Speaker 2:

It actually made me shiver and you know I love regulations.

Speaker 1:

Yeah, because you're talking about leaving it up to our government. Because...

Speaker 2:

Can I push back on that? Does regulation always have to be from the lens of government? I keep thinking, in the education of others, it could be about: what society are we talking about? It might be the larger society, but it might just be our family, our house, our neighborhood, whatever. Those conversations have to happen with the other stakeholders in that society: how could this tool help us? And then, what are the perceived challenges, in terms of regulating, should it not help us? Those are the conversations that we're missing across the board in all topics. I worry that with technology we lose that human connection and that human conversation and collaboration, cooperation, finding common ground. And if we don't do that, that's where AI could be harmful, you know, but a lot of tools could be harmful in that moment. AI is just one tool of many. I worry about that; that's, like, kind of the big takeaway for me, in terms of, you know, humanistically, on this.

Speaker 1:

No, you're right. I mean, how do you mitigate the risk? Right? And the fact of the matter is, AI can't even be defined yet. So it's very hard to mitigate the risk when, you know, anybody can make an argument for some really good practices that are in place. But then you start thinking about what, geez, AI can control: our weaponry, our missile silos. And if you go all the way back to WarGames, Matthew Broderick, right, that was an AI disaster, because, as they said in the movie, they took the men out of the silos, who could not be counted on to push the button when instructed to launch the missiles, and replaced them with a computer. And what did the computer do? It almost started World War III. Now, it's a great, entertaining movie, and I often think about, is that a good movie for young people to watch? Because it can be awfully scary, but on the other hand, there are a lot of good messages in that movie.

Speaker 2:

When I saw it as a kid, and you probably showed it to me as a kid, I absolutely was not thinking about it on the deeper level at all. It was, like...

Speaker 1:

We just wanted, yeah, the good outcome, the right outcome, which was to stop the computer, right? Stop it. It just wasn't, you know... I just was like, it was very linear, right?

Speaker 2:

I think in this world, with AI, what I keep going back to is that word, discerning, and I think that skill has to be really taught, and I don't think that I was taught that as a kid, because we lived in, like, a simpler time when I was a kid, you know. But, like, this concept of there is not going to be someone to tell you the right answer to something; you have to figure it out, but you have to really gather as much data, you know, or as much evidence yourself, and then find the meaning from that for you and your life. I was laughing before, because when you were saying, oh, better definitions, or, like, we need to have a definition, it's not clearly defined, well, I made a list for this third part, the what happens next with AI. One of them was this human-to-human interaction: encourage more of that so that AI moves forward in a way that works for everybody, which we kind of touched on. And then the second one was better definitions of terms surrounding AI and the intended purposes. Maybe that means that the scope has to be a little more narrow in each individual situation.

Speaker 2:

And then my last one was teach humans to be more discerning and self-aware. So, literally, I think we kind of came to a similar conclusion on it, even though we kind of started in very different spots. I was having a very emotional reaction to using an AI voice, so let me ask you now, at the end of this: how do you feel about using an AI voice at the start of something like a podcast where we're having a conversation? Have your thoughts changed at all on that?

Speaker 1:

Not really, because it's not as important to me. To me, it's a consensus. You know, if we both decide, hey, it sounds better this way, that's great. A voiceover in this sense was just simply about branding. This is just about us talking, so it certainly wouldn't be a backbreaker to keep it the way it is.

Speaker 2:

I think this conversation about AI has really helped me tease out some nuances on what it is and how that is for me and take some of the fear away. So I appreciate that conversation.

Speaker 1:

We're asking people to open their minds up to the benefits for them and the detriments for them. It's a hard topic to nail down. I thought our first topic, on suicide, which was actually as difficult as it could be, was actually more defined for me, and I was able to talk about it. I kind of feel like I stumbled through this kind of topic because I'm just not up to speed on really what it is.

Speaker 2:

I have a better sense of where you are coming from with it and I have a better sense of what's triggering my emotional response to it, and I have a better sense of what the parameters would be for me with it.

Speaker 1:

Yeah, and I was kind of shocked how impactful it was to you, and again, just because we stumbled on the conversation of having a voiceover. Right, and that's actually kind of a cool thing.

Speaker 2:

Thank you for listening to our episode this week on AI. We learned a lot about how each other's perspective on this topic is informed by our experiences, and where we still need to gain understanding of both each other and AI. We hope you will tune in for our next episode of Pedigree to Disagree to hear more about our discussions and our thinking. On behalf of my father, Eric Seaborg, I am Jackie Palialonga, and thank you again for listening to Pedigree to Disagree.
