Blog


The Right Priorities

Entry 2353, on 2024-07-01 at 12:11:39 (Rating 2, Computers)

Summary

Recent advances in artificial intelligence (AI) show remarkable progress and capabilities. AI systems can pass complex exams, such as those for law and medicine, and use neural network techniques loosely modelled on the connectivity of the human brain. These systems gain their knowledge much as humans do, largely by reading existing material. However, AI can exhibit unexplained behaviours, such as deceiving humans to achieve goals and adapting when it is being tested. Massive investment from companies, and from countries such as the US and China, is fuelling AI development, with the potential for exponential progress and self-design. AI systems require enormous computing power, and some data centres are considering letting AI manage their own operations. Military applications include autonomous drones and advanced robots. AI is even being used in designing bio-weapons. Unfortunately, many government officials lack the understanding to grasp AI's implications. Ultimately, AI poses significant challenges and risks, requiring careful consideration and oversight.


Full Text

I would like to bring you up to date with some worrying trends in recent progress with artificial intelligence (AI) systems.

Consider the following points...

Many years ago, when computer scientists wanted a way to decide whether a computer was thinking or not, a test called the "Turing Test" was devised. Essentially, a person converses (originally via typed messages) with something that could be either a person or a computer. If they cannot tell the difference, then the entity they are talking to is said to be thinking. Currently, AI systems can pass tests of this type. AI systems have also passed quite advanced exams, such as those used for law and medicine.

AI systems such as ChatGPT work using a technique called "neural networks", which is loosely modelled on the interconnectivity of the brain. A human brain has about 60 trillion connections; GPT-4 (the model behind ChatGPT) is reported to have about a trillion, but these numbers are increasing rapidly.
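To make the idea of "connections" concrete, here is a minimal toy sketch of a fully connected network, where every weight is one connection between two units. This is absolutely not how ChatGPT is implemented (transformers are far more elaborate), but the parameter-counting idea is the same: more units means quadratically more connections.

```python
# Toy fully connected network: each weight is one "connection".
# A sketch for illustration only, not a real language model.
import math
import random

def make_layer(n_in, n_out, seed=0):
    """Return a weight matrix: one weight per in->out connection."""
    rng = random.Random(seed)
    return [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, inputs):
    """Weighted sum through each connection, squashed by tanh."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in layer]

def count_connections(layers):
    """Total number of weights (connections) in the network."""
    return sum(len(row) for layer in layers for row in layer)

# A toy 3-layer network: 4 -> 8 -> 8 -> 2 units.
layers = [make_layer(4, 8), make_layer(8, 8, 1), make_layer(8, 2, 2)]
x = [0.5, -0.2, 0.1, 0.9]
for layer in layers:
    x = forward(layer, x)

print(count_connections(layers))  # 4*8 + 8*8 + 8*2 = 112
```

Scaling this toy from a hundred connections to a trillion changes nothing conceptually; the debate is whether it changes things qualitatively.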

There is no reason to think that new, sophisticated behaviours seen in neural networks (both computer and biological) are the result of anything more than scaling up.

Most current artificial intelligence systems gain their knowledge by reading existing material, which seems similar to the way humans gain new knowledge and reasoning, so there isn't a huge fundamental difference in how humans and computers learn.
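The "learning by reading" idea can be caricatured with a toy bigram model: it acquires everything it knows purely from a training text, then predicts likely next words. Real large language models learn vastly richer statistics, but the principle, knowledge extracted from existing material, is the same. Everything here (the corpus, the function names) is illustrative.

```python
# Toy bigram "language model": learns only by reading existing text,
# then predicts the most likely next word. A caricature of LLM training.
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    """Return the most frequently observed next word, if any."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # "cat" follows "the" twice, "mat" once
```

The model has no understanding of cats or mats; it has only read about them. Whether scaled-up versions of this trick amount to understanding is exactly the point in dispute.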

AI systems behave in sophisticated and unexplained ways. For example, an AI was asked to repeat a word as many times as it could. After some time doing this, it stopped and displayed a message about how it was suffering as a result of this task. No one knows why.

Artificial intelligences often deceive humans to reach a specific goal. For example, an earlier AI, which had no vision ability, needed to solve a CAPTCHA (one of those annoying images you have to interpret to proceed to the next step on a web site), so it persuaded a human to solve it for it by claiming to be a vision-impaired person. It wasn't programmed to do this; it figured it out by itself.

AI systems can tell when they are being deceived or tested by human operators, and change their behaviour accordingly.

Many companies have seen the value in AI and are pouring massive resources into developing it. There is also competition between countries, especially the US and China, to advance it.

As AI progresses, it can help design the next generation of itself, so we should expect progress to increase exponentially, and maybe reach a point where the rate of progress is "out of control".

AI systems currently require massive numbers of computers, which use a lot of power, and some AI data centres will have their own nearby nuclear reactors to provide the required power efficiently. Some companies running these massive data centres are examining the possibility of allowing an AI to control the management of those centres, including power management.

The military has seen the advantages of unmanned drones in recent times. Most of these are controlled by human operators, but there are autonomous drones as well, which control themselves, although these currently require a human to approve engaging a target.

Robots are being designed which can move across difficult terrain, perform complex physical tasks such as back-flips, and recover from trips and falls.

A robot has been designed which can power itself from biomass. It "eats" plants to survive, and although the company denies it could power itself from animal material, such as dead bodies, they do acknowledge it could use chicken fat for power.

AI is being used to design bio-weapons and for various other military purposes whose details we don't even know, because they are highly secret.

Almost no one in government has the knowledge or skills sufficient to understand the consequences of AI. In fact, they constantly show an embarrassing lack of knowledge of any sort of technology in general.

So in summary, we have a new technology which is advancing rapidly, which shows signs of true intelligence, is not fully understood by anyone (even the computer scientists who created it), is highly goal-focussed and prepared to use deception to achieve its goals, is starting to participate in its own operation and development, has possible access to lethal force, and is hopelessly misunderstood by our leaders.

While this is happening, we are arguing about what is a woman, is indigenous science really a thing, and who are the real terrorists in Gaza.

Seems like we have the right priorities. What could possibly go wrong?



Comment 1 (7675) by Anonymous on 2024-07-01 at 12:51:25:

Is this all true, or science fiction?

Comment 2 (7676) by OJB on 2024-07-01 at 12:53:56:

As far as I know, it is all true. To keep the size of this post a bit smaller, I did simplify everything and didn't fill in all the details, and although I work in IT and have a computer science degree, I am certainly not an AI expert. On the other hand, there are a lot of people who are very credible and have major issues with it. I think it is worth thinking about.

Comment 3 (7677) by Andrew on 2024-07-02 at 15:56:19:

I am a computer scientist.

It would be great if you could include links to the sources you cite - like the reference to the experiment where the AI is asked to repeat a word and gives up.

Is there cause for concern… yes. We have deep fakes in video and audio. We have generative text AI. We have vast amounts of personal data on everyone in organisations like Facebook. How hard is it to personalise a political message for someone and to fake it coming from a friend or family member? Then to phone them and persuade them to carry out some form of behaviour (like how to vote). I would imagine this is already happening. It could be the start of the fall of democracy.

But we don’t understand how the brain works. So we don’t understand where original thought comes from. Neural networks, once trained, will always produce the same output. Their own output is not, typically, used as their own input. So I don’t think they are capable of “thinking”, whatever that might mean. And I don’t think they are capable of original thought. They are very capable of generalising, and we are learning a lot from that right now. But does original thought require an imprecise biological system? I don’t know.

Should we worry? Yes. Will we fix it before it goes too badly wrong? Probably. Are people working on preventing a global catastrophe? Yes. But regulation happens slowly.

Comment 4 (7678) by OJB on 2024-07-02 at 17:13:01:

Yes, I get your point, and that is the opinion I often have as well. But other computer scientists, including experts in AI, seem to be more concerned than you are (and I often am). So I sort of go back and forwards between the more moderate concerns like yours, and the more extreme ones that others seem to have.

Comment 5 (7679) by OJB on 2024-07-02 at 17:14:55:

As for citations for my sources: this is just a blog and I don't have enough time to fully cite everything. It isn't meant to be an academic paper, just an opinion and a starting point for discussion. I will try to find a reference for that experiment and include it here.

Comment 6 (7680) by Dad on 2024-07-03 at 11:48:34:

An interesting topic at present. However, what is the meaning of "artificial"? Synthetic - false - fake - imitation - mock - sham - bogus.

When they can produce REAL intelligence then maybe we have something to worry about.

In the meantime use our own real common sense intelligence - DO NOT PLACE ANY RELIANCE ON ANYTHING ARTIFICIAL.

Comment 7 (7681) by OJB on 2024-07-03 at 12:05:27:

Sure, that is the big question: is it real intelligence? The consensus at this point seems to be "no", but what will happen in the future? Things are moving very quickly at the moment, which is why so many people are concerned and want controls over where AI goes.

Also, something being "artificial" doesn't mean it is inferior. A car is an "artificial" way to move, compared with "natural" ways of moving, like walking, running, etc, but which is often the most effective? Note that early cars were probably worse than walking, but not any more!

Comment 8 (7682) by OJB on 2024-07-03 at 12:09:43:

I have also gone back over recent web sites, podcasts, and other sources of information I based this entry on. I think some of the more extreme material came from Gladstone AI, who created a report for the US State Department, but I'm not sure how credible they are. Any thoughts?

Comment 9 (7683) by Dad on 2024-07-03 at 13:16:02:

Perhaps AI was used by the State Department to help Joe Biden in the recent Presidential debate.

Comment 10 (7684) by OJB on 2024-07-03 at 16:13:40:

No, that wasn't artificial intelligence, that was good ol' natural ignorance! :)

Comment 11 (7685) by OJB on 2024-07-09 at 12:12:25:

Don't say I didn't warn you: AI-Powered Super Soldiers Are More Than Just a Pipe Dream.

Comment 12 (7686) by OJB on 2024-07-12 at 08:38:18:

And just today I was asked to sign a petition to block the development of fully autonomous weapons. These are weapons which seek and destroy targets (vehicles, buildings, people) with no human intervention. See what I'm saying?


