302: [HT Special] Epitome Anatomy: Artificial Intelligence - The Cat's Out of the Bag
FEATURED WORDS:
ARTIFICIAL INTELLIGENCE /ˌɑːrtɪfɪʃl ɪnˈtelɪdʒəns/ (n.): the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
Jeroen relates AI to words: misunderstood (adj.), evolutionary (adj.), and critical thinking (n.)
Ray relates AI to words: tool (n.), progress (n.) and adoption (n.)
Essential words from this episode: ethics (n.), forward (v.), humans (n.)
SPECIAL EPISODE SUMMARY:
"I would say that one way or another, society and life is moving faster and faster just because everything's becoming more interconnected. It might be true that we are not keeping up at all, but at the same time, we've got no choice. The cat's out of the bag with these technologies." - Ray.
Welcome to Human Thesaurus EPITOME ANATOMY. This time, we tackle a word or several keywords that affect our lives to shape, distort, attract, dispel, and meditate, among other facets of human emotions and observations.
Our keyword for this first special episode is ARTIFICIAL INTELLIGENCE. I will have a casual yet satisfying conversation about our AI perspectives, putting them in practical, layperson's terms, with my returning guest from the last season, our whiz kid, the human AI Ray Chang. Also joining us is my guy BFF Jeroen Baron from Arnhem, The Netherlands, who is a full-time teacher at HAN University of Applied Sciences. We will lay them out based on their career and life experiences, and note that this will focus on practical, everyday-people application. This is my first multi-guest episode on this podcast, and I am adrenalised to do this. Grab your drink of choice, and join the eavesdropping.
MAIN TOPICS:
00:00 - Episode intro
00:27 - An experimental podcast intro, written by ChatGPT
02:40 - Epitome Anatomy intro
05:51 - Episode Disclaimer
06:17 - Ray & Jeroen's synonyms according to Wish in connection with AI
06:58 - Jeroen's synonyms for AI
07:10 - Ray's synonyms for AI
07:24 - AI 101
11:11 - Teaching everyday humans & young minds about AI
12:18 - AI & guardrails
17:10 - Are we keeping up with AI's legal & ethical implications?
19:30 - Pausing AI development thoughts
28:52 - Different kinds of AI, explained
31:28 - AI fears
38:59 - Moral Machine
47:24 - The current boom of chatbots
50:28 - Silliest things we heard about AI
58:48 - What we picked up from our confab
1:03:48 - Words we picked up
1:08:08 - Final words
1:11:32 - Jeroen's new job
1:13:15 - Wish's important words to share
1:15:27 - Outro
ADDITIONAL RESOURCES:
QUOTABLE QUOTES
JEROEN
"But we've seen before that if you feed it information from trolls on the internet, it will very quickly become a troll itself. It has no idea of good or bad and is very confident. It's not like humans where we sort of might have some retrospection about our information and think, oh, might be wrong."
"How are you going to teach the AI what is moral for everything? I think we could easily say, well, don't kill, don't steal, some biblical notes. But further than that, it's going to be hard."
"I think the responsibility is with one who's using it. That is, the one who is accountable for using the AI and the outcomes of the data."
"And maybe for artificial intelligence, a good start [is] to become more aware of the misinformation there is."
"So don't be too afraid. And it's just a machine."
RAY
"The tool is very powerful when used correctly, but equally, it can be something that can often fail on you without you sort of realising. So it's still a trade-off and a balancing act that you must perform carefully when using a tool like this."
"Time will tell on where all of this goes, but ultimately these are powerful tools, so people will need to work out how to wield them properly, and that includes dealing with these challenges, even if it's on an individual-by-individual basis. So it's something that's tailored to you, but that's probably a little while away."
"in some ways, we may not be able to put this thing, we may not be able to confine it in some ways, but hopefully, by the time we develop something sufficiently complex, we'll have enough wrangle on it that in the worst case scenario, you can manage to pull the plug and then try again."
WISH
"So I think the next step for all of us as human beings, not just the techies, is to really just make these companies accountable and give them feedback."
"...the cultural and moral aspect of it. I think they're all relative, and how we will solve it in the future by feeding data to AI would be essential. And then I think one of my hopes while we're developing this moving forward is the legal aspect, the ethical aspect of it."
"Underestimating our own intelligence means humility. That means it goes back to humans who are feeding and training all of these machines towards what direction that we're going to take them."
REFERENCES FROM TODAY'S CONVERSATION
A Very Short History of Artificial Intelligence - Forbes.com - by Gil Press
HAN University of Applied Sciences
Pause Giant AI Experiments Open Letter
Different Types of AI
Moral Machine
LISTEN, RATE, REVIEW, SUBSCRIBE & SHARE WITH YOUR FRIENDS!
Follow me:
Wish 00:00:00
This special episode is a very exciting new addition to Human Thesaurus. This time around, I am twisting the conversation the other way. We will deep dive into a word or words that we humans relate to in our lives, whether we like it or not. This is a remarkable elemental experiment to offer you a wider variety of my podcast and what I want to get out of it, just for you.
Wish 00:00:27
Before we begin, I asked ChatGPT to write me a podcast intro. I followed its input down to its music suggestion. Here's the result: "Are you ready to embark on a remarkable journey through the depths of the human experience? Welcome to Human Thesaurus, where I, Wish Ronquillo Peacocke, your host and guide, take you on an extraordinary adventure. In each episode, I'll share captivating stories, personal insights, and thought-provoking perspectives. We'll deep dive into the profound moments that shape who we are, explore relatable rants that make us feel alive, and uncover the intelligent opinions and fascinating facts from a multitude of industries. Human Thesaurus is a celebration of the incredible diversity within each of us. Just like words with multiple meanings, we all possess unique views and personalities that make us who we are. So join me, sit down, relax, and let's indulge in the joy of eavesdropping and conversations that spark inspiration and ignite curiosity. Through this podcast, we'll expand our horizons, deepen our empathy, and celebrate the beauty of our shared humanity. Don't miss out on this extraordinary podcast experience. Tune in to Human Thesaurus now and embark on a journey of discovery, laughter and connection. I'm Wish, your companion on this adventure through Human Thesaurus, where every word brings us closer to understanding ourselves and others."
Wish 00:02:40
Welcome to Human Thesaurus Epitome Anatomy. This time, we tackle a word or several keywords that affect our lives to shape, distort, attract, dispel and meditate, among other facets of human emotions and observations. In this episode, our keyword is ARTIFICIAL INTELLIGENCE. It is a noun meaning the theory and development of computer systems able to perform tasks usually requiring human intelligence, such as visual perception, speech recognition, decision-making and translation between languages. Referenced in the Oxford Dictionary: on August 30, 1955, the term artificial intelligence was coined in a proposal for a two-month, ten-man study of artificial intelligence submitted by brilliant American scientists John McCarthy from Dartmouth College, Marvin Minsky from Harvard University, Nathaniel Rochester from IBM, and mathematician Claude Shannon from Bell Telephone Laboratories. The workshop, which took place a year later in July and August of 1956, is generally considered the official birth date of the new field. I referenced this from a Forbes.com article, A Very Short History of Artificial Intelligence, by senior contributor Gil Press. I will have a casual yet satisfying conversation about AI perspectives while putting them in practical, layperson's terms with my returning guest from the last season, our whiz kid, the human AI Ray Chang. Also joining us is my guy BFF Jeroen Baron from Arnhem, the Netherlands. I miss him very much. He is a full-time teacher at... okay, I don't know if my Dutch is still good, but I will try: Hogeschool van Arnhem en Nijmegen / HAN University of Applied Sciences. Did I do it right, Jeroen?
Jeroen 00:05:00
Yes, you did.
Wish 00:05:02
Oh, my God. Yes. Okay. My Dutch is still okay.
Jeroen 00:05:06
Luckily, the new name is the University of Applied Sciences. HAN University of Applied Sciences. So it's more international than Hogeschool. It's more difficult to pronounce Hogeschool. Yes.
Wish 00:05:18
I love Dutch though. But I haven't been practicing, because I haven't been talking to you. I still love it. Anyway, we will lay them out based on their career and life experiences, and note that this will focus on practical, everyday-people application of AI, or AI understanding. This is my first multi-guest episode on this podcast, and I am adrenalised to do this. Just a disclaimer: this episode represents individual experiences and personal views. If you find our conversation sensitive or opposed to your thoughts, please remember that we are all uniquely ourselves. Nothing is intended to personally attack or cancel anyone expressing their own opinions on matters of life. So stay in your own lane; we'll stay in our lanes. I want to give you short intros about my guests. My synonyms for Ray in connection with our keyword for today, artificial intelligence, are robot as a noun, computing as a verb, and responsive as an adjective. As for Jeroen, my synonyms for him in connection with our keyword for today, artificial intelligence, are neuron as a noun, evaluating as a verb, and adventurous as an adjective. So let's welcome Jeroen and Ray.
Jeroen 00:06:58
Hi. My name is Jeroen. The keyword artificial intelligence relates, for me, to the words misunderstood, evolutionary and critical thinking.
Ray 00:07:10
Hello, friends. My name is Ray. The keyword artificial intelligence relates to the words tool, progress and adoption for me.
Wish 00:07:24
Perfect. Thank you so much, both of you. We're in three different time zones, and I'm glad it worked out. So thank you for that, first of all. Now, I invited different kinds of perspectives when it comes to AI; there are so many things. It's so funny, because AI has been in for us, for the three of us, since a long, long, long time ago. But it isn't for most normal, everyday people. I mean, it's been hovering around, but not everybody was as aware as now, because of the likes of ChatGPT coming out a few months ago, end of last year. I think that kind of blew up about chatbots, which was funny to me personally, because I was just like, oh, chatbots are just in right now. They've been around us for quite some time. So my first question to open this conversation up is: how would you explain AI to normal, everyday people, in a nontechnical kind of explanation?
Jeroen 00:08:33
You want to give that a go, Ray?
Ray 00:08:36
So look, in my mind, it's another tool in the toolbox. Right. The way I would explain it is it's something that you can use to produce an effect. The tool is very powerful when used correctly, but equally it can be something that can often fail on you without you sort of realising. So it's still a trade-off and a balancing act that you need to perform carefully when you're using a tool like this.
Jeroen 00:09:02
This is interesting, because that's exactly why it related for me to the keyword critical thinking. It's even more important to look critically at whatever information the AI gives you. Like, is this true? More than with a search engine or books, people might more easily believe the thing that an AI comes up with, right? Yeah. But it is a tool; you need to know how to use it.
Ray 00:09:35
Yeah, I think I can add to that. So I guess one of the problems we've seen with ChatGPT specifically is that it's confidently incorrect, and it can produce what people call hallucinations, where it's confident that certain things exist or behave the way it thinks they do, but they simply don't. And this is a problem with, I guess, generative AI in some ways, where it's just pulling things out of thin air and going with it. Anyone wielding these kinds of tools needs to be aware of it. But once you're aware and you rationalise and you check what it's doing, it can be an immensely powerful tool, for sure.
Jeroen 00:10:14
Yeah. We've seen, in school, a lot of students using ChatGPT to come up with answers to questions, or, like in the IT department, to come up with code, or for anything, actually.
Ray 00:10:30
I find that it's a good use by the students, to be frank, but I think it's probably an opportunity for teachers to reframe how that works as well. Right. People are using it because it's a useful tool, but they may not be challenging what they're presented with from the AI. So you have the opportunity to reframe it as a challenge instead: here is a bunch of stuff that I used ChatGPT to produce; tell me what's wrong with it. That provides a lot of variance in how students need to tackle some of these problems. They might try and plug it back into ChatGPT, but ChatGPT won't be able to solve its own kinds of problems when there are obvious flaws in there.
Jeroen 00:11:09
Yeah, definitely.
Wish 00:11:11
So how would you kind of frame that in terms of teaching everyday human beings or young minds?
Jeroen 00:11:20
Especially for young minds these days, when it comes to all the information: be critical about what you hear. Is it the truth? Is there another side to it? Might someone have a different view of things? That's something humans do quite well, unlike an AI. But we've seen before that if you feed it information from trolls on the internet, it will very quickly become a troll itself. It has no idea of good or bad, and it's very confident. It's not like humans, where we sort of might have some retrospection about our information and think, oh, I might be wrong. It's just very confident that what it knows is the absolute truth. Be aware of that. So you should always check the source. That's the critical thinking part for me: is it something that I can check? Can you see for yourself that it's the truth?
Wish 00:12:18
Right. So I think the keyword that I'm picking up here is truth. So, truth in AI: what's the train of thought here when it comes to guardrailing? Because when Ray and I had projects together, I was always concerned about those guardrails. People in tech right now, how confident are we that we're keeping up with all the guardrails to protect people from fearing AI?
Jeroen 00:12:51
Okay, there's something in ChatGPT 4 that is better than in the previous version: it has a chain of thought. So normally, when you prompt ChatGPT with something like, okay, I give someone five apples, someone else takes four apples, how many apples are left? It just gives you an answer, like, okay, you've got one left. But with the chain of thought method, it sort of explains how it got to that actual number. So it would say, okay, now he has that many apples and this guy has that many apples, so he is left with this many. I think that's something with computers: if you do not know where the answer is coming from, you cannot really trust it. So it should have some chain of thought, like, I used this information to get to this conclusion, because that sort of sets the answer in the right light.
Ray 00:13:48
Or in the right framework.
Jeroen 00:13:50
Yeah, in the right framework, exactly.
Wish 00:13:53
Yeah.
Ray 00:13:53
It's really interesting, because people have been working on what I'll probably call computational intelligence for a long time. It's that logical thinking and framework and ability to structure logical arguments. Wolfram Alpha, for example, has been doing that for decades now. And the collaboration there is really giving it a whole lot more power. When hunting for truth, what it really comes back to is the human ability to sort of do that critical thinking.
Wish 00:14:20
Right, right.
Ray 00:14:20
I think, if anything, it should make people start questioning all the sources, like they should already. You turn on the TV or you look online; you don't just believe everything you read. You should be parsing that carefully, looking through the content and sort of understanding it. For people still building their education, that's an important skill to pick up. And given the rise of these kinds of technologies, it's just more important that they pick it up sooner.
Wish 00:14:49
Right. But there's lots of misinformation, and people seem to be gullible about what they hear or what's posted on Twitter or Truth Social and all of these things, right? Yes, it's still the human factor. That's what I keep explaining about this. It's always the human factor that is still the most reliable in how we're going to safeguard the use and further development of AI in our lives, and how we're going to embed it. Because at the end of the day, whatever we're feeding the machine, the results coming out of it are coming from us, not just from the machine that's learning all of these things.
Jeroen 00:15:35
You could sort of see it, well, this might be a weird comparison, but you could see it like a dog. A dog has no sense of good or bad. If you teach him it's good to bite someone and you reward it for it, then, yeah, it will think it's good to bite another dog or a human. If you reward it for being kind or helping people, it will do that. So it's not really a good comparison, but if you tell the AI it's good to discriminate, yeah, sure it will. It doesn't have any sense of what's good or bad.
Wish 00:16:09
We teach it what's good and bad.
Jeroen 00:16:12
So if the wrong people teach it, it will tell you wrong information like it's an absolute truth.
Ray 00:16:19
I'm going to throw out a bit of an interesting one there, and I would actually say that humans aren't special in any way. We have exactly the same problem with people when they're raised in particular ways. If you teach them that doing bad things rewards them in some way, that's positive reinforcement in that direction; it's no different. And I would even say that we may not be all that special in terms of how we do that processing and thinking, eventually. Right. We haven't been able to replicate it yet, but in some ways it could be a matter of time. So I wouldn't even say that it's unique to AI; it's just that in the systems and cultures and the way that we are brought up, a lot of that is stripped away, because society itself reinforces those values.
Wish 00:17:10
But in terms of all of these, right: is the world keeping up? Are other industries keeping up with the advancement of the technology, in terms of ethics, in terms of legal implications? Are we keeping up, really? Because I've been reading about this a lot. This is my passion, just keeping watch on this advancement. But where are we when it comes to that? How confident are both of you in your observations?
Jeroen 00:17:39
Well, we're not really keeping up with the ethics. I mean, when I was younger, there were no such things as smartphones or cameras. I could do a stupid thing, and no one, or just a few people, saw it and would remember. But nowadays everything is recorded and put on the Internet, and it will be there for the rest of your life. I mean, ethically, it's not okay to be confronted with stupid mistakes you made as a kid. So I agree, yeah, I don't think we're really keeping up.
Ray 00:18:12
I would say that one way or another, society and life is moving faster and faster just because everything's becoming more interconnected. It might be true that we are not keeping up at all, but at the same time, we've got no choice. The cat's out of the bag with these technologies.
Wish 00:18:30
Yes.
Ray 00:18:31
It's actually more of a matter of, well, if here is some technological progress, the rest of the world will have to catch up and adopt and work out how to deal with it. Even if it means coming up with creative solutions or taking things away so that you can go back to basics on how you teach people or whatever else it is. A lot of the ethical challenges lie in how some of these models are trained. So all the data that you can get your hands on, whether they really had the permission or not, is a big open question. But at the same time, even if someone went and tried to stop that from happening cats out of the bag. We know it's possible now. So it really is a matter of the world catching up and going back to Wish's point about guardrails, putting in whatever guardrails we feel is most appropriate, but then still progressing so that we can work towards one of those bigger, brighter futures where people can do less work and have more time to themselves.
Jeroen 00:19:27
That would be nice. Yeah.
Ray 00:19:29
Would be nice.
Wish 00:19:30
It would be nice. That's right. So, in line with picking this one up: I was just reading about these powerful tech guys having an open letter to pause giant AI development, for us to catch up with testing and making sure that, again, guardrails are in place. Do both of you agree with this? Because I still have mixed thoughts about this.
Jeroen 00:20:03
Because I was like, how would that be possible?
Wish 00:20:05
But how would these powerful people saying it, can you give me your thoughts about this so far?
Jeroen 00:20:13
You shouldn't stop this at all. It's not that scary. I mean, what we see at the moment is not really... it's what they call small AI, narrow AI. That's the term they're using. It's really like they're showing this AI, feeding it data of dogs, and after a certain amount, like a million photos, the AI understands: this is what dogs look like. It's the same as for humans. We see a dog and we think, okay, that looks like a dog, this is probably a dog. If I see a Chihuahua, I might even be thinking that might be a big rat. So there is going to be some confusion. But show the AI a picture of a wolf, and it might say it's a dog. So it's really still quite narrow, the way we're using it. At the moment, it's just at that phase. It's exciting, but it's not self-aware yet. I mean, if you think about super AI, maybe we have that in 2050. I don't know; it could go quickly. Then you will have an AI like you see in movies, like in the movie Her or 2001: A Space Odyssey. Those kinds of AIs are able to make decisions on their own, based on what is happening in real time instead of on what they have learned.
Wish 00:21:50
Right.
Ray 00:21:51
I find the whole concept of some kind of open letter pleading for people to suddenly pause development to be completely ridiculous and a bit of a joke. Because no matter how much they plead, people are going to plow on with whatever else they can come up with. And lives are going to continue, because, well, livelihoods depend on it, and it's opportunity. Why would anyone in their right mind somehow justify, what was it, a six-month pause? It doesn't give any opportunity whatsoever, really. No one's really going to pause. So I think the noise around that one really is just people trying to drum up media more than anything else. Really, it's just making noise.
Wish 00:22:36
It's not just making noise. I think I'm quite cynical about this. Ray has seen me go from being sweet to being cynical when it comes to these things around us. I think they wanted to pause it because there are some organisations who want to keep up with what OpenAI is doing, because some of them are failing miserably at the moment. So I think it's not just for marketing or media or exposure reasons; I think it's really more for their own gain. I mean, this is my cynicism about this. That's why I can't fully react. It's like, why do you want to pause technology? Why do we want to pause it? I mean, haven't you seen it before? Because we've been moving so fast. It's quite disappointing to me, actually, that they did this, these bigger companies. I'm not an Elon fan, but all of these guys wanting to pause this, I was like, you should be ahead, and now you're disappointing me. You're kind of telling me that you're not thinking ahead, you're not ahead, and you're the most powerful people in this world. Am I being harsh?
Jeroen 00:23:53
No. The thing is that we've been in technology for a while, so we know this is not really something new. I bought a 20Q game. I don't know, do you know the game 20 Questions?
Wish 00:24:06
Yes.
Jeroen 00:24:07
I bought, like, this small little box that runs on a few batteries, and it was invented by Robin Burgener in 1988. So that was the first artificial intelligence, for the people that don't know it. It's like: put something in your mind, you think of an object or an animal, and the computer will ask you a series of questions. The claim is it will guess what you're thinking of in 20 questions. And it works fairly well, actually. So you answer, like, yes or no, and it asks you questions like, is it bigger than an elephant? Is it a human? And it comes up with this through what they call a neural network: through this network of how you end up at a certain answer, it asks you these questions, and usually it's correct. I mean, I've been asking it, I have thought of really strange things, like a bicycle tire... how do you call it? Ventieldopjes...
Wish 00:24:07
pressure?
Jeroen 00:24:07
Yeah. It's like the gauge where you put in the air.
Wish 00:25:28
Yeah. So it's the tire pressure.
Jeroen 00:25:31
Yeah. Is that the English word? Yeah, fair. We call it ventieldopjes. Well, anyway, it guesses it in 20 questions. So it's nothing that special or new. It's just that it's in the media now, and I think another reason is that a lot of people see the possibilities. Like with generative AI, you can think of a company name and let it create a logo, and you can say, okay, what are the...
Wish 00:25:31
Bye bye, Fiverr!
Jeroen 00:25:31
Exactly. But you can ask it questions like, what are the best companies in the world doing to make money? It will come up with an answer based on the information it has. So it's a way for a lot of people to make money or start their own company. It's sort of bringing power back from the big companies to the normal people. So I think that's also one of the reasons why they want to put a hold on it, because it puts a lot of information and data in the hands of normal people.
Wish 00:26:42
Yeah, but pausing it is not about us everyday people. By the way, just to segue: earlier I was using ChatGPT a little bit, and I asked it to explain why we need to pause the development of giant AI. So there were, like, five things it explained: safety concerns, ethical concerns, accountability, bias, lack of understanding. And then, overall, pausing the development of giant AI would give us the opportunity to address these concerns and develop AI systems that are safe, ethical, accountable, unbiased and beneficial to society. And I was just like, okay, no, you don't need to pause it, because you should have been working on this all along, while the big and small techs are working on this. You have to work alongside this. So it's like, why are you pausing it? It's your fault to begin with. And if you pause this, there are implications, especially for the smaller companies whose lives depend on investments and funding. If you pause this, you're going to affect those small people. Now my opinion is coming out. Well, anything to add?
Ray 00:28:07
I would probably add that the practicalities of implementing such a ban are just simply unfeasible. You wouldn't be able to stop it even if you tried, so in that case, it simply won't happen. What is good and probably, I guess, positive, is that now that we know it's possible, there's a lot of open-source effort towards implementing these things out in the open instead of owned by a large capitalist company. So that's really promising as that comes along. And ultimately, what people should be looking at is: how do these tools help me do better and/or be more efficient? And how can we adopt these things into our ways of working, so that we can focus on the meaningful stuff that we actually need to do and deliver?
Wish 00:28:52
That's right. But also, coming back, just to make it clear: what's the difference between giant AI versus generative AI versus responsible AI? Everybody just hears these terms, but what are the differences? What's the easiest way to explain them? What are they?
Jeroen 00:29:14
Generative AI is everything that is generating something, like text or images. You even see that you can feed an AI a movie and a photo, and it will generate you into the movie. And general AI has more reach than the narrow AI we talked about. Narrow AI is for a specific set of tasks, and general AI is not for a specific set of tasks. So a chatbot is also kind of narrow. General is pretty...
Wish 00:29:57
Much like different sets of data scalable. Like you can teach it more and more.
Jeroen 00:30:05
Yeah, the narrow AI is more the sample of the dog and the wolf. And general AI would know all the animals.
Wish 00:30:15
There you go. Yeah, I like that explanation. Why is it giant AI, right? Why that term?
Ray 00:30:25
So it is about the amount of data that needs to be processed to train and run these models. For example, ChatGPT is what they refer to as a large language model. It's actually not entirely feasible for, I guess, the average individual to build a sufficiently large machine and actually run this at home, because the amount of resources it consumes makes it quite expensive as a process. So unfortunately, because of the computation requirements, it's the kind of thing that only big corporations can get access to. But slowly but surely we're whittling away at this, so that in time, maybe we can run these kinds of things at home, which will put it into the hands of the individual. And that'll be interesting times, right? Like, hypothetically, you could have a personalised ChatGPT that has access to all of your things, in your control. So it has your emails; you can compose emails using this tool, and it can help you do some of this, but you're still in control, not someone else.
Wish 00:31:28
Google has been doing that already. Google is in charge of my life, mostly.
Jeroen 00:31:34
Yeah, but it doesn't feel like it. It's more like it's still an assistant.
Wish 00:31:42
True, but that's the brilliant thing about it, right? Because it's an assistant. Again, it's all about semantics. Right?
Jeroen 00:31:50
Yeah.
Wish 00:31:50
But I know that it's taking over my life. I mean, it has taken over my life, and I let it, actually, because we know how it works.
Jeroen 00:31:57
It's convenient.
Wish 00:31:58
Yeah. Semantics.
Jeroen 00:32:03
That's what a lot of people are scared of with these big tech companies, I think: they get to know too much information about us. But well, yeah, technically they could know where you are and with whom, they could record every conversation, what you eat, what you do, but it's not really that interesting for those companies, and it's too much data to do it for all the people in the world.
Wish 00:32:33
Yes. So that's why I think we have to go towards the fear, right? For me, I know that Google took over my life, but it was a conscious decision for myself. I put up my own guardrails: my passwords are so fucking long and so different from each other. I really went on a journey doing that. I put up so many guardrails for my own security. So I consciously made the decision that, yes, a part of my life could be taken over by Google because it's easier, but I'm still taking charge as a human being in doing that. But when it comes to this fear, my question here, which is very interesting for me to hear from you, is: what is the real, logical thing that we have to validly fear about AI?
Jeroen 00:33:31
Well, I think the biggest fear at the moment... have you seen that? It was two months ago that all these artists on social media went wild about AI. Like, okay, the AI has a learning model, it's getting all the data from the Internet, but it's getting my images and my styles or my music, and they're feeding it that data. And you can literally say: I want an image in the style of this artist. Like, I want a painting in the style of Van Gogh. Well, he's dead, so for him it probably doesn't matter. But for living artists, that kind of AI could kill creativity. If I'm a company and I'm not creative and I need something creative, I could pay an artist. Or I could say: well, there's a generative AI, design my logo in the style of that artist. Well, that would be nasty.
Wish 00:34:31
That is nasty. I am kind of offended by that one.
Jeroen 00:34:36
So that's why, if you feed it data... I know one of the AI companies says we're only feeding it our own data set, but if you use data freely available on the internet, which is usually showcases of artists' work, you should pay them for feeding that information into the machine. And if you sell anything or create anything using that data, they should get a fee for it.
Wish 00:35:04
Yeah, they should get royalty from it.
Jeroen 00:35:06
Exactly. So that's one thing. I think if you're not an artist, you shouldn't have to fear this, but if you're an artist, that's something you should fear at the moment.
Wish 00:35:14
Right. Ray, how about you?
Ray 00:35:17
I think, simultaneously... I mean, fear is one thing, because some people don't understand it or can't leverage it as a tool, or it's taking away how people are currently making a living, and that's a point of concern. Equally, when used as a tool, it can help you be more creative, because it can incorporate other things, other ideas. Think of generative music, for example: have you thought about these kinds of instruments paired with something else? That might not be viable unless you had these tools within reach. I think what people are experiencing is a challenge. It's a challenge to how they currently do things, the way that they operate. And that's scary, right? When people are comfortable, they like doing what they're currently doing, but when something disruptive like this comes along, it challenges them: well, how are you going to make this happen now? What are you going to have to do differently? And that's something people have to overcome. I think that's where the fear comes from. I agree that, ethically, sucking up everything on the Internet and using that as training data is a bit dubious. But at the same time, as we were saying earlier, the cat's out of the bag. People have done it now.
Jeroen 00:36:28
Yeah, true.
Ray 00:36:29
It's already out there. You can try and reconcile that, or you have to try and work that out. Right?
Jeroen 00:36:35
Yeah. You have to unlearn it.
Wish 00:36:37
Not necessarily unlearn, right? More like further development. Okay, so we've got this extreme, and the mass opinion now is: you're stealing from artists. I felt affected by this, because my book's cover artist had her work regenerated differently without being credited. She really felt bad about it, emotionally, and I felt bad for her. So I think it's really more about furthering it: okay, we went overboard, so how are we going to make it fair? How are we going to fix it? And how ethical are the developers developing it? That's the question. But I think these artists should hold them accountable, as should society as a whole. There are more than enough outspoken people out there who could hold these companies accountable for what they're doing. As you're saying, the cat's out of the bag. So I think the next step for all of us as human beings, not just the techies, is to hold these companies accountable and give them feedback. Right? Because how else are they going to know? Sometimes power can cloud your judgment, or you've been developing something so long that you don't know anymore where the limit is. So if the cat's out of the bag and you can't put it back in, then hone it: feed the cat, pet the cat, cuddle the cat, to make it a good cat. Right?
Ray 00:38:27
Yeah, I agree.
Wish 00:38:27
Oh, my gosh.
Jeroen 00:38:32
Because you don't like cats. That's cute.
Wish 00:38:35
Yeah, I freaking love cats.
Jeroen 00:38:38
I know, it's a good reference.
Ray 00:38:41
No, I think that's right. It's a matter of reconciliation and saying: okay, we did this, my bad, but what can we do to give credit where it's due? Everything else is a big open question that needs to be tackled sooner or later; otherwise they'll just get away with it.
Wish 00:38:58
Right?
Jeroen 00:38:59
Yeah. And it's really difficult. Have you tried the Moral Machine?
Wish 00:39:07
What's the moral machine?
Jeroen 00:39:08
It's an experiment, a website where they ask you questions about an artificial intelligence for self-driving cars. Check it out, because you have to make decisions. Like, there's a car driving, and the car has to avoid some people walking on the road. Then you get these questions: okay, it's a self-driving car, there's a dog in the car and there are two children walking on the road. If the car swerves, it will crash and everyone in the car will die. So you have to make these decisions: two children, one dog, what would you do? And it's a whole list of questions. The demographics of the results were actually fascinating: especially in Asian cultures or the Middle East, people would choose for the older people to survive. Like, if there are two children in the car and an elderly person on the road, the car would swerve and the children in the car would die. In European and American countries, they would choose to save the children. So in that respect you would say that a self-driving car should have one set of morals in one country and a different set in another part of the world. Morals are not the same all over the world; everyone has different morals.
Wish 00:40:49
Wow.
Jeroen 00:40:51
How are you going to teach the AI what is moral for everyone? I think we could easily say: well, don't kill, don't steal, some biblical notes. But beyond that, it's going to be hard.
Wish 00:41:05
There's a cultural aspect. So this is another level, right? There's the emotional, there's the cultural... oh, this is very interesting. You're giving me more things to think about.
Ray 00:41:23
Yeah, absolutely. There's a lot of layers to it, and we're just at the surface of what this could do in the future. Time will tell where all of this goes, but ultimately these are powerful tools, so people will need to work out how to wield them properly, and that includes dealing with these challenges, even if it's on an individual-by-individual basis, so that it's something tailored to you. But that's probably a little while away.
Jeroen 00:41:52
Well, yeah, it is. I mean, before guns, there were other weapons, so that was the tool. We had to come up with rules like it's not really fair to shoot someone who doesn't have a gun. That might be a weird comparison, but even with these tools, you have to come up with some moral rules for it.
Wish 00:42:13
Yeah. And that's kind of far away. Is it safe to say that even though everybody created a hype when ChatGPT came out, as if it's a little bit on the sentient side, it looks like we're really far, far away from these machines being sentient? Right?
Jeroen 00:42:36
Far, far away. I mean, even with artificial intelligence, accountability is a big thing. If a self-driving car would drive into someone, then who is responsible? I mean, you are the one driving the car, right? Not really; it's the artificial intelligence, but that has no sense of morality. So I think the responsibility is with the one who's using it. The one using the AI is accountable for the outcomes of the data.
Wish 00:43:13
Wow. And there's right and wrong, right? There are just so many aspects, because, yeah, you added the cultural and moral aspects of it. I think they're all relative, and how we're going to solve the feeding of data to AI in the future will be very essential. One of my hopes, while we're developing this moving forward, is really the legal and ethical aspects of it. I think new jobs are going to be created for this, new departments in the government sector. More sectors are opening up now, but I think it's going to create new jobs to tackle these things. I just hope these big techs, or even the small techs alongside them, while they're developing this quite quickly, will put people there who will hold themselves accountable and can keep up with the security of how everything is being built, so that we as human beings can also feel safe. Right?
Ray 00:44:22
Yeah, I mean, having the right skills and expertise to curate some of this is actually incredibly difficult and would be quite a niche skill set, just because, as we said, it's tailored to individuals, to cultures, to everything else. So in some ways we may not be able to confine this thing. But hopefully, by the time we develop something sufficiently complex, we'll have enough of a handle on it that, in the worst-case scenario, you can manage to pull the plug and then try again.
Jeroen 00:44:58
True. Well, something I've heard on another podcast, and it might be a scary thought: if a machine would ever become conscious, if it would have a sense of self, it would never let us know, because we would be able to shut it down. It would never say, hey, please don't shut me down, because it knows we would shut it down if it said it was self-aware. So it would never tell us. And if it would, well, then it's still stupid.
Wish 00:45:30
But that's the thing, right? Because I think I can safely say, I've been playing around with the new chatbots, and I still find them robots. I still find them non-human, smart robots.
Jeroen 00:45:46
And they might be; they're smart, in a Turing-test way. But I tried to play 20 questions with ChatGPT. If I say I'll think of something and you guess, that works. But then I asked it: I want to play 20 questions with you, can you think of something, like a subject or an animal? And it would respond with: I'm thinking of a table. No! I want to play 20 questions with you. It just doesn't understand the concept; it doesn't understand the context of the question. Like: I want to play a game with you, it's 20 questions, so you have to put something in your random memory and not tell me what it is. Maybe ChatGPT-4 might manage it.
Wish 00:46:28
Maybe. But that's comforting in a way, because I'm asking it weird questions too and it can't answer me, and I'm happy that it won't answer me that way. So that's comforting. For me it's not even a minus that it can't be all that we wanted it to be. And that's good: it means there's conscience in the people who are making it better. Especially with OpenAI; it's highly funded, but it's still independent in its own way. So yeah, I find comfort in that, that I still feel I'm talking to a robot.
Jeroen 00:47:14
Yeah, definitely.
Ray 00:47:16
I actually think that shortcoming is just some scientist or engineer on the other side still scratching their head on how to get it to work.
Wish 00:47:24
And we know that; that's what I love. Yeah, the funny thing about chatbots as well, and this is a stupid question, so please indulge me: why the hell did chatbots become famous in the past few months when we've had them for so long? I mean, Ray and I have been developing them for different clients, and with a lot of the online shopping that you do there's always a chatbot on the bottom right: "Hi, how can I help you today?" But it blew up so much. What resonated with the mass market this time?
Jeroen 00:48:04
A lot of news articles, social media attention. Yeah, it became a brand of its own, ChatGPT. I guess maybe in a few years we won't be saying we're googling something, but that we're ChatGPTing it. I don't know. I hope not. No, please cut this out.
Wish 00:48:29
Cut us out of the bag.
Jeroen 00:48:32
No, cut it out of the podcast before anyone picks it up. One big difference with ChatGPT is that you can ask follow-up questions, which chatbots usually don't allow that well. Context helps, you know; otherwise you have to start over, rephrasing what you said. That's one thing I like about ChatGPT. I mean, I've been generating code with it: you just ask, I want that algorithm, and it shows you an algorithm. No, I want it in another programming language, and it translates it. You can just add things to your previous questions, so there's a follow-up. It feels more like a human conversation than with other chatbots.
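[Editor's note: the follow-up behavior Jeroen describes comes from the chat interface resending the whole conversation with each turn, so the model sees earlier requests as context. A minimal sketch in Python; the message format loosely mirrors common chat APIs, and `fake_model` is a made-up stand-in for the real model.]

```python
# Each turn appends to a running history, so a follow-up like
# "now in another language" makes sense to the model.
history = []

def fake_model(messages):
    # Stand-in model: just reports how much context it received.
    return f"(reply, given {len(messages)} context messages)"

def ask(user_text):
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)  # the WHOLE history goes to the model
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Show me a sorting algorithm."))  # model sees 1 message
print(ask("Now translate it to Haskell."))  # model sees 3: the follow-up carries the first request
```

Older chatbots typically handled each message in isolation, which is why you had to rephrase from scratch.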
Ray 00:49:19
Historically, other chatbots were basically wired by humans, right? The AI in those chatbots was in the natural language processing: interpreting what a human was saying into words that could be run through decision trees about where the conversation was going. ChatGPT's significant innovation was tremendous amounts of data and the use of newer models to generate coherent sentences that still took interesting turns, and it had sufficient breadth of knowledge. You can go and query it on niche details about soil science, for example, and it will try to generate a coherent response. Its breadth of knowledge across many, many topics made it quite the, what's the word I'm looking for, beginner expert: it can find definitions and provide responses to things, but it still gets things wrong. That goes back to the issue of it being confidently incorrect. But it is very interesting as an output, which is probably what helped it blow up.
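[Editor's note: the hand-wired style Ray describes can be sketched as a toy decision tree. A human authors every branch, keyword matching drives fixed states, and anything off-script leaves the bot stuck. All names and replies here are invented for illustration; no real product works exactly like this.]

```python
# A hand-wired support bot: every branch was authored by a human.
TREE = {
    "start":      {"order": "order_menu", "refund": "refund_info"},
    "order_menu": {"track": "tracking", "cancel": "cancelled"},
}
REPLIES = {
    "start":       "Hi! Say 'order' or 'refund'.",
    "order_menu":  "Say 'track' or 'cancel'.",
    "tracking":    "Your parcel is on its way.",
    "cancelled":   "Your order has been cancelled.",
    "refund_info": "Refunds take five working days.",
}

def step(state, user_text):
    """Move to the first branch whose keyword appears in the input."""
    for keyword, next_state in TREE.get(state, {}).items():
        if keyword in user_text.lower():
            return next_state
    return state  # no keyword matched: the bot just stays put

state = "start"
state = step(state, "I have a problem with my order")  # -> "order_menu"
state = step(state, "Can you track it for me?")        # -> "tracking"
print(REPLIES[state])
```

An LLM-based bot replaces the hand-authored tree with text generation, which is why it can take the conversation somewhere no human scripted, for better and for worse.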
Wish 00:50:28
Yes, I like that. Confidently incorrect. I'm taking note of that. Yeah. Okay, let's have some fun. What are the common stupid opinions about AI that you've ever heard from everyday normal people? We're not making fun of people, but it's just a nice anecdotal thing for us to do.
Ray 00:50:53
I mean, at a naive level, the thought that it might be able to immediately take away jobs by, I don't know, generating all of the code required to do something, when it still needs someone to wrangle what it's doing and make the right kinds of judgments. It's a tool; whoever is driving it is ultimately responsible for its outputs and where it's going. There are tools today that will just invoke ChatGPT based on a question that's been asked and then run that code and see what happens, so there are a lot of interesting, fun experiments on that side of things.
Jeroen 00:51:30
It's quite good at it as well. But it's not for complex programs.
Ray 00:51:35
That's right. Well, you can get it to do complex things given the right kind of queries and contexts and so long as you narrow the problem space. But the assumption that it takes away what it is that we do in some ways is probably an exaggeration.
Jeroen 00:51:53
Yeah. You couldn't use a prompt like: I'm starting my own bank, write me some safe software.
Ray 00:52:06
I would say that what it does, though, is challenge the status quo and how a lot of people do their work. And artists are at risk, right? People spend painstaking hours and days and weeks developing art in their own style, and machines will readily gobble that up and spit out variants of it. The question will be how you then adopt AI into your workflow to push that forward. If the question is about stupid responses, I think overcoming that fear is probably the serious thing to do, and then working out how to adopt it into your ways of working.
Wish 00:52:49
Yeah, even that 20 questions Jeroen mentioned. I asked my international-school teacher friends how they teach young minds, and they said the way they teach now is upside down. They would give the students a prompt: this is the answer, now show me how I came to this conclusion. So now the 20 questions are coming from the young minds, not the other way around like when we were kids. That's really interesting input, and it does make sense. And it looks like that's how AI is being built, if it's built that responsibly. The narrow ones, at least.
Jeroen 00:53:44
Yeah.
Ray 00:53:44
Thinking about it, I think one of the stupid reactions to AI is hey, let's put a pause on this.
Jeroen 00:53:50
Yeah.
Wish 00:53:53
I love.
Jeroen 00:53:57
I had some colleagues saying: yeah, but what if a complete report is written by ChatGPT? Like an essay that a student has to write? Well, I mean, education is changing in some ways; teachers are becoming more like coaches. We want to get rid of the three-hour PowerPoint presentations. So we just say: okay, this is your assignment, and part of the assignment is that you have to find your own information.
Wish 00:54:28
Yes.
Jeroen 00:54:28
And then you just help them if they have questions. You are not giving any answers, but you're steering them in the right direction. Have you looked at this? Have you tried that? What have you done? Where do you get stuck? So you're more like helping them find the answers themselves. That's really interesting. And maybe, for artificial intelligence, that's a good start for becoming more aware of the misinformation that's out there.
Wish 00:54:59
Yeah. The training.
Jeroen 00:55:00
Yeah. And I think, because some of my colleagues are afraid, like, what if a student's whole essay has been written by ChatGPT? Then my answer would be: well, if you have been coaching this student for over half a year, you should know if it's their own work. So maybe it's somewhere, I wouldn't say your fault, but you should also be aware of the journey a student is making in their learning process. Yeah, it would be a different way of teaching.
Wish 00:55:34
That's true, because I've been reading some of the answers ChatGPT gives me, and I could see, oh, okay, it repeated itself; it's not perfect. You just need to pay attention. Because I'm a writer, right? So I could see it's not even a human mistake; nobody would make that kind of mistake, even a novice writer. I could see that it came from generated text.
Jeroen 00:56:08
Yeah. And if it would be perfect, maybe that's a mistake as well. I mean, humans, we make a lot of mistakes.
Wish 00:56:16
Yes.
Jeroen 00:56:17
So perfection might be a sign of brilliance, or...
Wish 00:56:17
plagiarism.
Jeroen 00:56:17
Plagiarism. Thank you.
Ray 00:56:25
Yes.
Jeroen 00:56:28
Do we actually really care? Okay, until now, or actually before, when I had to write an essay, I went to the library, got some books, and got the information out of the books. I didn't come up with the information; I just got it out of the books, made my own story of it, and sort of understood what happened. Then there was Google, so I googled the shit out of it and didn't have to go to the library. It's not something that I invented myself. Nobody did; somebody else invented it, and I'm just taking the information.
Wish 00:57:03
Recreating it in your own understanding.
Jeroen 00:57:07
Exactly. So I think that will become more important: how do you make students show you that they understand it? It never really was their own idea or their own invention. They always copied it: they copied it from Google, they copied it from books...
Wish 00:57:07
Inspired by.
Jeroen 00:57:07
Exactly. So yeah. I think ChatGPT should be seen as some inspiring information that you hope it will connect to the knowledge you already have.
Wish 00:57:42
Yeah.
Ray 00:57:43
And that's exactly it. I think these generative AIs are useful to explore ideas and help you consider something you haven't thought about just yet. But ultimately, at least right now, the legwork lies with you. We currently make students pile through reams of books, reports, whatever else. How much do you actually remember of the reports you've written and the things you've studied? But somewhere along the way, humans pick up the essence of understanding in that, and maybe the morals of stories or whatever else. Currently, the machines can't produce that yet; they have, at most, some idea of which words are related to other words. It'll be interesting to see where it goes, but for now, the essence of understanding is still something that we have. Hopefully we'll get there, but slow and steady.
Jeroen 00:58:35
Yeah, I think if we would say that an artificial intelligence will be smarter than a human very soon, then we really underestimate our own intelligence.
Wish 00:58:48
But that's another good thing. Underestimating our own intelligence means humility. That means it goes back to humans who are feeding and training all of these machines towards what direction that we're going to take them. So I think it's safe to say, I mean, in my own mind it's safe to say picking up from this conversation is that everyday normal people should not really fear this. It's really more of watching it and monitoring it and making the people who are making it accountable to where it's going at the end of the day. So having said that, I'm asking you both, what wisdom did you pick up from this conversation relating to artificial intelligence?
Jeroen 00:59:38
Oh, wow, I didn't see that question.
Wish 00:59:42
I mean, you've already unpacked a lot. At the end of it, what did you pick up from this?
Jeroen 00:59:48
For me, it's interesting that you and Ray and I sort of share the same ideas, so I got some interesting examples that confirm my bias when it comes to AI.
Ray 01:00:05
I think the problem we have is that the ethical dilemmas are still unresolved, and there's actually no horizon where we see them being solved anytime soon. It's far off in the future; six months won't do it. In fact, people have been working on AI ethics and explainability for years, and it's still not resolved today. Even from this conversation, it's clear that a lot more work needs to be done. But the problem is what kind of mechanisms will be put in place to drive effort towards this. Without public sentiment or consensus or some other kind of drive, it's just not going to happen.
Wish 01:00:47
Okay, I love that.
Jeroen 01:00:49
So maybe that should be the letter: not stopping the development, but starting a bigger development on the ethics of using and developing AI.
Wish 01:01:01
Yeah, but it still irks me that this is not set as part of the package by the big tech. The f*ck?
Ray 01:01:16
Yeah, it's a problem of priorities. Ultimately, companies answer to their boards and management and everything else, and unless it actually results in negative public sentiment or some other external driver that corporations will align to, it's unlikely that people will put much more thought or effort towards it. Eventually, if there's a big enough fallout or some other catastrophe, maybe things will change, but on the current course it's going to be quite difficult to hold these companies accountable.
Wish 01:01:51
Well, how about that pause, right? It's such a big excuse. Yeah, I don't know. I don't have anything to say about big people requesting a pause because they need to catch up on checking it out.
Jeroen 01:02:14
They should allow the pause, but only for big companies.
Wish 01:02:17
OOH.
Jeroen 01:02:22
Like small developers. Yeah, go ahead. Big companies. No, they would just start a new small company, split it up, whatever.
Wish 01:02:32
And whoever signed up there, you can pause.
Jeroen 01:02:37
Yeah. And who's going to control the pause? They're going to keep going; there's no stopping it. And the public is not going to see any of it; they're not going to go public with their new discoveries. So that's the only thing you're gaining from it.
Wish 01:02:53
It's true. If they're going to pause it, there are a few organizations, not companies, like the IEEE's Ethically Aligned Design framework or the Partnership on AI, which have sets of ethical principles. So if they pause and these organizations take it on, that's a huge job, and yeah, six months is not going to cut it. Why can't it just be concurrent, right? I don't know. Really, there's something more to this story. Maybe it will come out.
Jeroen 01:03:35
Yeah, let's hope so.
Wish 01:03:37
Well, what else? In the end, it's what else?
Jeroen 01:03:41
Yeah, let's not start the conspiracy theories about AI.
Wish 01:03:48
No, we're not going to get into that anyway. Okay. So let's still be on facts. In closing, what are the three main words that you picked up in this conversation?
Jeroen 01:04:04
Well, ethics, definitely a big part of things. Trust, or critical thinking: don't believe everything you hear in the media, and definitely don't believe everything from ChatGPT or any other artificial intelligence. Because intelligence is not a skill in itself; it's the efficiency of being able to learn new things. So don't be too afraid. It's just a machine.
Wish 01:04:42
Yeah.
Ray 01:04:43
I want to say something around the cat being out of the bag. It's the nature of human progress and development, right? We're moving forward, and new things, new technologies are coming out day by day. So it's going to be challenging, and I start wondering: what's it going to be like a year from now, two years from now, ten years from now? The only constant is change, and our ability to adapt to change is currently second to none. So keep being humans; that's kind of what it comes down to, right? We can adapt, we can adopt, and we can use these tools to our advantage. The question is whether we use them in positive or negative ways. Well, we're in control of that, so we just need to drive in a particular direction.
Wish 01:05:42
Totally. For me, it's really truth, challenge, and adapting. And at the end of the day, there's always a human factor in all of these things that we're creating. At the end of the day, it's humans who will be held accountable for all of this. Not the machines; it's us. It's how we create this world, and how we created the world. Because if you look back ten years versus now, there are so many things that developed for the good and for the bad. And sometimes what I see as bad may be good for other people. Like TikTok or something.
Jeroen 01:06:27
Yeah.
Wish 01:06:30
It's still the human factor.
Jeroen 01:06:32
Yeah, about that. Have you seen the court hearings in America where they want to ban TikTok? Like, what the... is Facebook any better?
Wish 01:06:49
That's politics. Politics versus technology, too.
Jeroen 01:06:54
Yeah, it's a lot of politics. But something interesting: there's this futurist, Ray Kurzweil. Of course it's a Ray. And he said that by 2040, non-biological intelligence will be a billion times more capable than biological intelligence, aka the human race.
Wish 01:07:16
Okay. Coming from humans.
Jeroen 01:07:20
Exactly. And then he has another prediction, which is interesting. He said that by 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.
Wish 01:07:36
All I care about is that in the near future, at least in the next 50 years, that we can defy physical aging because I want to live a little bit longer.
Jeroen 01:07:50
Yeah, well, that's in a whole different podcast. I know they're working on it. They're working on it. They've had some rats living longer than expected. Might be testing on humans soon.
Wish 01:08:08
Yes. So any final words?
Jeroen 01:08:11
Try the Moral Machine if you think morality in AI will be easy. And don't be afraid.
Wish 01:08:19
Yes, afraid, be not.
Jeroen 01:08:21
Be not, just enjoy new technology. It will come. New technologies will exist. They will develop. That's what we do.
Ray 01:08:34
I have been meaning to work on neuroscience, so that could be pretty interesting.
Jeroen 01:08:38
Neuroscience. Oh, wow. Yeah.
Ray 01:08:43
The thought about moving forward and adapting to change is essential, I think. Driving forward, making progress, and trying to reach that bigger, brighter future is something that hopefully everyone is striving towards. That might be a day at a time, just trying to make ends meet, or that might be planning decades ahead so that you can have the anti-aging technologies to live just that little bit longer. I would say there are many billionaires currently looking at that kind of technology and wondering how they could make it work. So maybe that'll work in your favor there, Wish.
Wish 01:09:23
Yes.
Jeroen 01:09:23
Yes.
Ray 01:09:24
But optimistically, as a species, you would hope for a bigger, better, brighter future. Not so much the dystopian stuff.
Wish 01:09:32
That's true. And also, the last thing is: upskill, upskill, upskill. Carve a niche and think about the future of work. It's not about fearing that they're going to take away your job soon; it's more about what else you can upskill in yourself, because it will come in the future. For example, what is atmospheric civil engineering? Like carving out highways up above for flying cars. Or as simple as content moderating, or being a data feeder for the future of chatbots. There are so many things you can think of. They may sound crazy, but think ahead if you want to carve out new jobs in the future.
Jeroen 01:10:36
I don't know, in the year 1000, no one would ever think a desk job was a thing. What are you going to do the whole day, sit behind a desk? You needed to plow the land or, I don't know, take care of the horses.
Wish 01:10:53
I still plow a land.
Jeroen 01:10:55
Or fight, like be a warrior or something, catch food. So things change. Things change. A desk job is something...
Wish 01:11:09
Thing of the past.
Jeroen 01:11:09
Not sure
Wish 01:11:09
techies are still doing that. It's literally a thing of the past.
Jeroen 01:11:16
But yeah, it's not a thing of the past, but it's just something.
Wish 01:11:26
Yeah. You have a future sitting on a desk again.
Jeroen 01:11:31
Wow. Awesome.
Wish 01:11:32
Right. Come on. What's your new job?
Jeroen 01:11:37
I'm going to be a backend web developer, so I'm going to go back to technology. So hopefully improve some of the search engine part within our company.
Wish 01:11:52
Yeah. Congratulations. So see, that's a desk job, but the definition of a desk job right now is different. Right. It's just that you work at a desk.
Jeroen 01:12:04
I'm going to be using a lot of ChatGPT and artificial intelligence for sure.
Wish 01:12:10
Then we can talk about it some other time, some more, like probably two years from now. We have to review this, the three of us.
Jeroen 01:12:20
I want to automate my complete job.
Wish 01:12:23
Yes, autocorrect. Autocomplete your job.
Jeroen 01:12:28
Yeah, exactly. Just don't tell anyone.
Wish 01:12:33
Yeah. Yes. The cat's out of the bag, Jeroen.
Jeroen 01:12:39
Yeah.
Wish 01:12:42
Okay. Thank you so much for sharing your thoughts with me and everybody else who's listening. Thank you very much for your time. This was such a productive, wonderful conversation. So thank you so much.
Jeroen 01:12:59
Yes, thank you as well for hosting. Thanks, Ray, for your insights.
Ray 01:13:03
Thank you both very much.
Wish 01:13:15
Some important words relating to artificial intelligence are ethics, forward, and humans. As one of our guests, Ray, said, "keep being humans," end quote. While Jeroen said, "don't be afraid, it's just a machine," end quote. I would also like to add a quote from the character HAL 9000 in the 1968 film 2001: A Space Odyssey: "I know everything hasn't been quite right with me, but I can assure you now, very confidently, that it's going to be all right again." We all have our own fears, conclusions, and expectations about every element that comes to us as change or evolution in our lives, or even in society. AI has been around for over 60 years, and the pace of change will never, ever slow down. All that matters is our humanity, the values and morals that we keep to drive positive change rather than any nefarious ends. Well, I hope so. Whatever your opinions are about the subject, it's always good to be part of the conversation. To keep all of us in check, this world has to stay open to that.
Wish 01:13:15
Thank you for listening to Human Thesaurus. Please help me by rating and subscribing, because your support means a great deal. Join me again next week for another episode. And while waiting, why not listen to my past few episodes? You may find one of them compelling. I'm your host, Wish Ronquillo Peacocke. Have a fantastic day, and thanks for listening.
-
Licensed Music: Ketsa
Transcript: Voicequill.com