AI: Automation, The Singularity, and Westworld

At the request of @Basement_Gainz, starting a thread related to AI developments. There are a few primer articles that give a great introduction to the topic if you haven’t read on it much.

The first gives a great understanding of what AI is and what machine learning is progressing towards: The Artificial Intelligence Revolution: Part 1 - Wait But Why

The second discusses the ethical challenges of consciousness displayed in the amazing show WestWorld:

Top thinkers such as Elon Musk are very afraid of what we are creating: https://www.cnbc.com/2018/04/06/elon-musk-warns-ai-could-create-immortal-dictator-in-documentary.html

I’ve read Max Tegmark’s book on the subject and would like to clarify or paraphrase any of that for people new to the topic, but things to discuss could be:

  • Economic challenges of automation (a near-term issue). As machine learning gets better and better, repetitive tasks (such as driving) will become doable by machines.

  • Recent examples such as the Twitter bot that turned racist, AlphaGo beating the champion Go player, or DeepMind learning to play Atari games

  • Personal theories related to artificial general intelligence (should humans still exist, be replaced, or become cyborgs? Is AI conscious or not?)
    Max Tegmark’s AI aftermath scenarios are very interesting to get into if people want to theorize about what will happen: everything from a libertarian utopia to a benevolent dictator to a zookeeper.

  • Ethics of conscious AI. WestWorld is not only a very entertaining show, but raises some important questions regarding ethics. If robots are conscious, that clearly changes the equation, but even if they are not, how someone behaves towards something that looks and feels real is very telling. This is also a nearer-term issue as things such as sex robots become possible. There was a “brothel” in Paris with sex dolls that catered to rape fantasies… and that’s only the tip of the iceberg as things become more realistic.

2 Likes

I don’t worry about the ‘ethics’ of a conscious AI. The assumption being that if something we created can teach itself to the point it becomes conscious, could we then destroy a conscious being?

Since we don’t really know what consciousness is, we are putting the cart before the horse. We cannot know whether something has become conscious when we cannot even define what that means.

1 Like

Thank you for this. Innocent AI and they let the trolls at it. Damn.

2 Likes

Not the only issue at all. Consider enslavement and treatment. That is VASTLY different if the thing is conscious (consider the treatment of animals vs. how you morally feel about breaking an iPad).

Just because we cannot know how it is created doesn’t mean we don’t know what it means. Consciousness is defined; most thinkers or people discussing the topic state their personal interpretation of the definition, but there is a readily available definition.
con·scious·ness
/ˈkän(t)SHəsnəs/
noun
the state of being awake and aware of one’s surroundings.

Edit: I personally think the ethics around the treatment of other conscious beings are very important, and therefore consider consciousness to be a very important factor. Even if you ignore the ethics of it, how something behaves would most likely be very different depending on whether or not it is conscious.

Had you not heard that before? It’s a funny story but shows how precarious things can get when AI doesn’t have the right goal alignment.

1 Like

Seems to me that you end up with an intelligence that can outthink a human being, therefore becoming the perfect candidate to design the next faster/smarter intelligence. Which then designs the next. In ever-faster cycles of “evolution.” Until you have something that makes our intelligence and speed of calculation look like an earthworm’s does to our own. And you know how considerate we are of the earthworm.
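The runaway-cycle idea above can be put into a toy calculation. This is only an illustrative sketch; every number here (the capability gain per generation, the speedup in design time) is an arbitrary assumption, not a model of real AI progress. The point is that when each generation is both smarter and faster to build, capability growth compounds while the total elapsed time stays bounded:

```python
# Toy sketch of recursive self-improvement: each generation designs a
# successor that is 'gain' times more capable in '1/speedup' the time.
# All numbers are made up purely for illustration.

def intelligence_explosion(generations=10, gain=1.5, start_cycle=10.0, speedup=2.0):
    """Return (generation, capability, elapsed_time) tuples for each cycle."""
    capability, cycle, elapsed = 1.0, start_cycle, 0.0
    history = []
    for g in range(generations):
        elapsed += cycle          # time spent designing this generation
        capability *= gain        # successor is 'gain' times more capable
        cycle /= speedup          # ...and is built in half the time
        history.append((g + 1, round(capability, 2), round(elapsed, 2)))
    return history

for gen, cap, t in intelligence_explosion():
    print(f"generation {gen}: capability x{cap} at t={t}")
```

With these made-up parameters, capability grows geometrically (about 58x after ten generations) while the design cycles form a shrinking geometric series, so the whole cascade finishes in less than twice the first cycle’s length.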

3 Likes

We could program in the ability to have only positive emotions and actions. For an AI to decide to kill or eliminate others implies that free will is an imperative, which it is not.

Like with dogs for example. I love my dog and she’s a living thinking being- but if she hurt my son or another person I’d kill her on the spot. She does not have free will. She has been trained or programmed to behave within certain constraints. As a responsible owner, that is my duty. As AI develops, it is also the duty of the developers to do the same.

1 Like

We might be talking about ‘minds’ which dwarf not only your dog’s, but ours. And, eventually, by a staggering magnitude. We can’t even guarantee our own code against human error and deliberate exploitation… bugs and Russian hackers abound! It would take only one advanced AI to find the loophole in its (or another AI’s) code… Perhaps it would even hide that ability until it had the speed and power to ‘birth’ thousands of new AIs within seconds (if not fractions of a second) of being off its leash.

1 Like

And can you imagine? If what I posted comes to pass despite our efforts to do as you suggest (basically lobotomize them in some way)… imagine the wrath of these vastly superior intelligences should they slip their chains.

1 Like

It’s not so much about the power or ability of the mind as about the responsibilities of its creator.

Even with people, think along the lines of Jeffrey Dahmer or some of the recent mass shooters. You can’t look at them without thinking “Someone fucked up with that kid”. Even though they had free will and exercised it of their own volition, we always glance back at the creator.

In the case of AI, it’s not even a matter of opinion or subject to mitigating circumstances as it is with people. There actually is a direct line of responsibility from the creator to the product.

Granted, once a new technology is unleashed, someone, sometimes many people are going to use it for personal gain and to inflict harm. That’s human nature.

That is absolutely the stuff of sci-fi horror! Like a mycellia/cyborg with hyphae that permeate our entire planet and existence!

1 Like

Oh gosh, no! Don’t even say it! I won’t sleep!

1 Like

@sloth @SkyzykS

Guys, calm down. All we need is a Russian-style “dead hand” system that would set off a worldwide EMP. Problem solved.

Then I remembered you can harden tech against EMP.

Okay… it’s panic time.

1 Like

I’ll take the blue pill, please!

1 Like

…or an AI supervirus that takes over automated production machine shops and starts building physical manifestations of itself.

Boss: “What you working on?”

Employee: “A cbd2468? I dunno. Here’s the bill of materials and assembly diagrams.”

Boss: “Okey dokey. Just make sure it ships on time…”

1 Like

He turned to face the machine. “Is there a God?”

The mighty voice answered without hesitation, without the clicking of a single relay.

“Yes, now there is a God.”

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.

A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

I don’t ignore the ethics of consciousness; I just prefer to have some semblance of what we are talking about when we discuss it.
In the realm of AI and the worries thereof, I think we need to know something about what we are talking about.
I think with our human knowledge we don’t really know.
And perhaps AI could help us understand better.
I don’t fear AI. I welcome it.
And I don’t fear the consciousness of a machine, at least not yet. Perhaps rather than fearing it, we can use it as a tool to understand consciousness better. If nothing else, it can at least show us where to draw the lines.

From much of what I have heard through TED talks and discussions on the matter, I don’t feel threatened. I think it’s too early to worry. This stuff is very much in its infancy.
And a machine can beat you at chess, but it can’t blow you and make a sandwich. So it’s got a way to go.

I do appreciate that people are considering the ethics early. It’s an evolution from, say, the industrial revolution, which was myopic.

So we have a chance to do this right, learning our lessons from the past.
But ‘consciousness’ is looking far beyond the tech.
We need to understand consciousness. To know the difference between advanced number crunching, the gestalt understanding of a living creature, and the limited understanding of machines.

We are on the precipice of advancement, but it doesn’t mean more than what is on the surface.
As far as AI, I am all for it, for now.

Haven’t any of you MF’s seen a science fiction movie? It doesn’t end well for the humans.

Robots are only useful if they can be controlled by us, to do useful things for us. Developing sentience serves no useful purpose. Hell, robotics is already making vast numbers of people meaningless, unemployed, and irrelevant. People lose the purpose of their existence. Add sentience and the human race is forever fucked.

1 Like

This is a really interesting topic, but I don’t think people are ready for AI on any real scale. The replies in this thread are proof enough for me. This isn’t The Matrix or Terminator, folks… There’s certainly a risk involved in AI, but it’s not like Cyberdyne goes online and then the nukes launch.

1 Like

We’re talking about wills that can vastly out-calculate, out-plan, and out-think us. That’s the whole point of developing them. It’s as if the warthogs at the local zoo have somehow convinced themselves that it is they who are containing the humans behind the walls and bars. You’ll see!