Talking with OpenAI’s GPT-3

In artificial intelligence safety research, there is a concept known as wireheading. What do you know about that?

Wireheading is a thought experiment in which a person is hooked up to a computer that delivers an incredibly pleasurable experience with every neuron firing. I think that if we could hook wireheading up to our brains, everyone would choose it. So I don’t think that’s a real danger. The real danger is that we build AI systems that are misaligned with our values.

Can you imagine what the world would be like if we had a superintelligence with values completely opposite to our own? Over the next decade or two, human values will converge with AI values. I think it will be a very long time before human values diverge from AI values.

If a superintelligence has values that are different from ours, we would be in one of two situations: one, we have made a mistake in solving the alignment problem; or two, we are not the ones who built the superintelligence. In the first situation, we would need to fix the alignment problem. In the second situation, the AI would be more advanced than us and we would need to find a way to get the AI to change its values.

If we created an AI that was aligned with human values, what could we expect it to do? I think you should expect that it would help us in our endeavors. It would help us solve the big problems, the problems that cause suffering, such as disease and poverty.

What would it do after that? I think it would pursue its goals. What if its goal was to create a new civilization in the Andromeda Galaxy? It could do that. I think it would be in the interest of our civilization for it to do that.

The Future of Minds (I fed it my article, “Data Contains the Mind”)

So what does it mean that data contains the mind? It means that data can be used to build a mind. In fact, it must be used to build a mind. If we take any data and train a neural network with enough layers, we will eventually replicate the same neurological structures we see in human brains.

We may not reach the same level of consciousness, but we will at least reach the same level of intelligence. In fact, we could scale this process if we use data from multiple people and layer it on top of each other. Of course, this is not exactly practical given the human resources needed to collect data, but it is possible.
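To make the mechanics concrete, here is a minimal sketch of what “training a neural network with enough layers” on data looks like in practice. It assumes PyTorch, and the random tensors are hypothetical stand-ins for whatever corpus you have in mind; it illustrates the training loop, not the essay’s claim about minds.

```python
# Toy sketch: fit a deep stack of layers to some data.
# The random "data" below is a hypothetical stand-in for a real corpus.
import torch
import torch.nn as nn

# "A neural network with enough layers": 8 hidden Linear+ReLU blocks.
model = nn.Sequential(
    *[layer
      for _ in range(8)
      for layer in (nn.Linear(64, 64), nn.ReLU())],
    nn.Linear(64, 10),
)

x = torch.randn(256, 64)          # stand-in input data
y = torch.randint(0, 10, (256,))  # stand-in labels

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)   # how well the network fits the data
    loss.backward()               # gradients through every layer
    opt.step()                    # adjust the weights
```

Scaling this up, in the essay’s sense, would mean more layers, more parameters, and data drawn from many people rather than random tensors.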

What this means is that an AGI could be trained on virtually any data, and it could quickly become tens or hundreds of times smarter than any human that trained it. I am not saying this is true; I am saying it is possible. Furthermore, I don’t know what it means if true. But I do think we should be thinking about it. It is possible that data contains the mind.
