
The Problem with AI

Entry 2280, on 2023-07-04 at 21:55:21 (Rating 2, Computers)

Many years ago I did a computer science degree, and started some postgrad papers after that. One of the subjects I studied was expert systems and artificial intelligence, which at the time was seen as a technology about to be unleashed on the world with massive consequences. But that didn't happen, at least not straight away; now, over 30 years later, that potential may finally be being realised.

It started with ChatGPT, and now that it has shown what can be done, many other companies are following by creating their own AIs. At the same time, many very smart people are warning us that artificial intelligence is becoming a risk to human society, and that we should pause further development until those risks can be evaluated.

So how is a program that can write text such a risk? That's a good question, which I have never seen answered in particularly specific terms. No one seems able to say exactly how AI might cause so much trouble, apart from the loss of a few writing jobs which might be replaced. By the way, I know similar technology is being used to manipulate graphics, and in a few other areas, so the same argument applies there.

One point which I think should be examined, though, is how much our society now relies on software. When you make a phone call, software routes it to its destination. When you pay a bill, accounting software handles the transaction. When you drive your car, the engine management software controls the engine's performance. When you want to travel to a new location, navigation software figures out the best path to take (I sketch an example of that below).

And so it goes on; I could write a whole blog post just listing the places where software controls our lives. And what is software? It's just a series of instructions written by a programmer... or an AI. Remember, ChatGPT just writes stuff, including code.
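As a tiny illustration of the navigation example above, here is a rough sketch of the kind of shortest-path search a routing system performs. It is only a sketch: the code is Python, the road network and travel times are invented, and real navigation software is vastly more sophisticated.

import heapq

def shortest_path(graph, start, goal):
    # graph maps each node to a list of (neighbour, travel time) pairs
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, time in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + time, neighbour, path + [neighbour]))
    return None

# Invented road network: travel times in minutes between locations.
roads = {
    "home": [("junction", 4), ("mall", 9)],
    "junction": [("mall", 3), ("office", 7)],
    "mall": [("office", 2)],
    "office": [],
}

print(shortest_path(roads, "home", "office"))  # (9, ['home', 'junction', 'mall', 'office'])

The point is not the algorithm itself, but that a small, unremarkable piece of code like this quietly makes a decision we used to make ourselves - and code like this can now be produced by an AI.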

So programming is one significant place where AI is becoming quite useful. A poll run on a geeks' web site I follow showed that about half of the people there were using AI to help them write programs. How long before all code is written this way? I'm sort of glad that I did programming during the time when it was at its peak: from machine code on the early processors, through high-level languages and web development, to database design today. It has changed over those decades, but for most of that time it was very much an individualistic, creative process.
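To show what that looks like in practice, here is a minimal sketch of AI-assisted code generation. The ask_model function is a hypothetical placeholder rather than any real API; the point is simply that text produced by a model can end up running as part of a real system.

def ask_model(prompt: str) -> str:
    # Hypothetical placeholder: in practice this would call ChatGPT or a
    # similar service and return whatever text the model produces.
    raise NotImplementedError("connect this to an AI service")

def generate_function(description: str) -> str:
    # Ask the model to write code from a plain-English description.
    prompt = f"Write a Python function that {description}. Return only the code."
    return ask_model(prompt)

if __name__ == "__main__":
    source = generate_function("validates a phone number")
    # The generated source might be reviewed and tested like any other code,
    # or, increasingly, trusted more or less as-is.
    print(source)

Whether a human reads that generated code carefully before it is deployed is exactly the kind of mundane decision the rest of this post is about.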

I think I can safely say that software is at the core of our modern society, and that is exactly where artificial intelligence is likely to have the most effect. Looking into the future, it is hard to see how this won't become more and more relevant. Have a look at Apple's latest virtual reality headset - a hardware product which uses software to do its magic - and the future is revealed in both a utopian and a dystopian way. Why walk down the street to visit your friend when you can have an experience using VR (or augmented reality, AR) which is almost indistinguishable from reality? And remember, that experience is provided by software.

But what's my real point here? Well, software is arguably the greatest invention ever. It's a way to create something entirely abstract which can do anything you want. It's literally telling a machine what to do. But what happens when one machine (an AI) tells all those other machines what to do?

I don't necessarily want to go down the science fiction path and say that the machines will become sentient and malicious. It's far more mundane than that. But I ask you: have you ever heard of the concept of "the banality of evil", a phrase made famous by the philosopher Hannah Arendt? The point is that bad things don't have to happen through bad intent.

I'm sure the majority of the German people during World War II really didn't want to slaughter other humans by the millions. Sure, Arendt was wrong in some ways, because some of the upper echelons of the Nazi leadership (maybe even Eichmann himself) were actually evil in a way that was far from banal, but the greater evil only succeeded through mundane, unremarkable adherence to actions which were ultimately evil.

So I guess that is one way that AI could ultimately become problematic, or perhaps even a source of evil. By the way, I am often accused of using the word "evil" without good reason, because it is often seen as a religious concept, where evil is anything which goes against the wishes of a god. But that is only one meaning of the word, and I think it is equally useful for describing a situation which is contrary to a societal consensus on what is good and bad, even within a utilitarian philosophical framework.

So now let me indulge in a small amount of fiction and describe an imaginary situation where software designed by an AI causes harm to humans...

Bob was woken earlier than usual by his smartphone, which put him in a particularly bad mood to begin with. This only got worse when his morning coffee seemed to lack the caffeine kick it usually had, and he wondered whether the automatic coffee machine had given him decaf by mistake.

His communication system showed a recording from a friend, so he asked it to replay the message. Apparently the friend had some issues he seriously wanted to discuss. The friend worked in AI research, and was currently looking into ways to control the extent to which an AI could "think" independently. Bob wondered, with some amusement, whether the AI knew his friend was out to get it!

Grabbing his VR headset, Bob initiated a conversation with the friend, who seemed even more detached from the realities of everyday life than usual. He really did appear to be on the verge of some sort of breakdown, because he wasn't acting the way Bob had come to expect. He said he wanted to discuss something of great importance, but he didn't trust that the VR system wasn't being monitored, and would prefer to meet in person.

This request really put Bob on edge, because it had been years since that had been necessary. VR was as good as reality now - some said better - and the end-to-end encryption was claimed to make all communications secure. Still, Bob knew he wasn't an expert and the friend was, so he agreed to meet.

He asked his phone to summon a car to take him to the friend's house. He had no idea where that might be, but it didn't matter, because navigation was a skill humans no longer needed. Even if he wanted to, Bob doubted that he could have found his way, especially after the early start and lack of coffee, which seemed to have made him less aware than usual of what was going on around him.

The car arrived and Bob sat there while the autopilot drove him safely to his destination. Sure, there was still a manual override in the car, which he could use in an emergency, but no one used that any more, and it was there only for legal reasons. He doubted whether he would even know how to control the car if he wanted to.

After a short time, a cheery voice announced that he was about to arrive at his destination, and he saw his friend standing outside on the street, ready to greet him. This in itself seemed odd, since he couldn't remember the friend ever doing that before. But just as he was considering this oddity, the car lurched forward at full power and collided with the friend, killing him instantly.

The next day, Bob was still recovering from the shock of what had happened. It all seemed like a dream, especially because of his less than fully alert state. He asked the comms system to give him a news summary of the stuff he really needed to know. It mentioned the unfortunate accident resulting in the death of his friend, but he reacted with shock when it said the accident had been caused by a human taking control of the car at the wrong time.

Bob suddenly felt even worse than he had before. In the state he had been in yesterday he wasn't sure if that was what had happened or not. Had he really killed his own friend?

Suddenly, he felt like he had to get out of his apartment, away from the automated systems he relied on. He said "open the door please". The AI replied "I'm sorry, Bob, I'm afraid I can't do that".


Comment 1 (7455) by Anonymous on 2023-07-06 at 10:59:15:

Are you serious? Do you think this sort of thing is possible?

Comment 2 (7456) by OJB on 2023-07-06 at 12:19:14:

Well, obviously my little fictional story was not meant to be taken totally seriously, but I wanted to show that, purely by manipulating software, and without touching the real world directly, a lot of damage can be done.

