Talking with OpenAI’s GPT-3

Source: TOPBOT

In artificial intelligence safety research, there is a concept known as wireheading. What do you know about that?

Wireheading is an imaginary experiment in which you hook up a person to a computer that gives them an incredibly pleasurable experience with every neuron firing. I think that if you hook up wireheading to our brain, everyone would go wireheading. So, I don’t think that’s a real danger. The real danger is that we build AI systems that are misaligned with our values.

Can you imagine what the world would be like if we had a superintelligence with the complete opposite values to our own? Over the next decade or two, human values will converge with AI values. I think that it’s a very long time before human values will diverge from AI values.

If a superintelligence has values that are different from ours, it would be in one of two situations: One, we have made a mistake in the alignment problem; or two, we are not the ones who built the superintelligence. In the first situation, we would need to fix the alignment problem. In the second situation, the AI would be more advanced than us and we would need to find a way to get the AI to change its values.

If we created an AI that was aligned with human values, what could we expect it to do? I think you should expect that it would help us in our endeavors. It would help us solve the big problems. It would help us with our problems of suffering, such as disease and poverty. I think that you should expect that it would help us in our endeavors.

What would it do after that? I think it would follow its goals. What if its goal were to create a new civilization in the Andromeda Galaxy? It could do that. I think it would be in the interest of our civilization to do that.

The Future of Minds (I fed it my article, “Data Contains the Mind”)

So what does it mean that data contains the mind? It means that data can be used to build a mind. In fact, it must be used to build a mind. If we take any data and train a neural network with enough layers, we will eventually replicate the same neurological structures we see in human brains.

We may not reach the same level of consciousness, but we will at least reach the same level of intelligence. In fact, we could scale this process if we use data from multiple people and layer it on top of each other. Of course, this is not exactly practical given the human resources needed to collect data, but it is possible.

What this means is that an AGI could be trained on virtually any data, and it could quickly become tens or hundreds of times smarter than any human that trained it. I am not saying this is true; I am saying it is possible. Furthermore, I don’t know what it means if true. But I do think we should be thinking about it. It is possible that data contains the mind.
