Geoffrey Hinton: ChatGPT, beyond the hype!

by Marco Derksen, April 6, 2023

Together with David Rumelhart and Teuvo Kohonen, Geoffrey Hinton was one of my heroes in the field of neural networks during my studies in the late 1980s. I wrote my graduation thesis on the application of artificial neural networks in analytical chemistry. Indeed, you can't make this stuff up 😉

David Rumelhart and Teuvo Kohonen have since passed away; Geoffrey Hinton is still active in the field. In fact, he is now regarded as the godfather of artificial intelligence because he championed machine learning decades before it became mainstream. Now that chatbots such as OpenAI's ChatGPT are bringing his work to a broad audience, Brook Silva-Braga of CBS Saturday Morning spoke with Geoffrey Hinton last month about the past, present, and future of AI.

It may well be the best interview about AI, large language models (LLMs), and ChatGPT that I have come across in recent months. Hinton explains the subject matter simply like no one else, is nuanced about the opportunities and risks, and steers clear of the hype we have lately seen in the media and among many 'tech experts'.

If there is one interview about ChatGPT you want to hear, let it be this conversation between Brook Silva-Braga and Geoffrey Hinton. Highly recommended.

And for those who would rather read the interview, I have written out the transcript of the video as faithfully as I could:

Full interview: “Godfather of artificial intelligence” talks impact and potential of AI

BROOK SILVA-BRAGA: How would you describe this current moment in AI, machine learning, or whatever we want to call it?

GEOFFREY HINTON: I think it’s a pivotal moment. ChatGPT has shown that these big language models can do amazing things, and the general public has suddenly caught on because Microsoft released something, and they’re suddenly aware of stuff that people at the big companies have been aware of for the last five years.

BROOK SILVA-BRAGA: What did you think the first time you used ChatGPT?

GEOFFREY HINTON: I’ve used lots of things that came before ChatGPT that were quite similar, so ChatGPT itself didn’t amaze me much. GPT-2, which was one of the earlier language models, amazed me, and a model at Google amazed me that could actually explain why a joke was funny.

BROOK SILVA-BRAGA: Oh really? In just natural language, it’ll tell you?

GEOFFREY HINTON: You tell it a joke – not for all jokes, but for quite a few of them – and it can tell you why it’s funny. It seems very hard to say it doesn’t understand when it can tell you why a joke’s funny.

BROOK SILVA-BRAGA: So if ChatGPT wasn’t all that surprising or impressive, were you surprised by the public’s reaction to it, because the reaction was big?

GEOFFREY HINTON: Yes, I think everybody was a bit surprised by how big the reaction was, that it was the sort of fastest-growing app ever. Maybe we shouldn’t have been surprised, but people – the researchers – had kind of gotten used to the fact that these things actually worked.

BROOK SILVA-BRAGA: You were like half a century ahead of the curve on this AI stuff.

GEOFFREY HINTON: Not really, because there were two schools of thought in AI. There was mainstream AI that thought it was all about reasoning and logic, and then there were Neural Nets, which weren’t called AI then, which thought that you’d better study biology because those were the only things that really worked. So, mainstream AI based its theories on reasoning and logic, and we based our theories on the idea that connections between neurons change and that’s how you learn. In the long run, we came up trumps, but in the short term, it looked kind of hopeless.

BROOK SILVA-BRAGA: Well, looking back, knowing what you know now, do you think there’s anything you could have said then that would have convinced people?

GEOFFREY HINTON: I could have said it then, but it wouldn’t have convinced people. What I could have said then is that the only reason neural networks weren’t working really well in the 1980s was because the computers weren’t fast enough and the datasets weren’t big enough. But back in the ’80s, the big issue was whether a large neural network with lots of neurons, compute nodes, and connections between them that learns by just changing the strengths of the connections could look at data and, with no kind of innate prior knowledge, learn how to do things. People in mainstream AI thought that was completely ridiculous.

BROOK SILVA-BRAGA: It sounds a little ridiculous.

GEOFFREY HINTON: It is a little ridiculous, but it works.

BROOK SILVA-BRAGA: And how did you know, or why did you intuit, that it would work?

GEOFFREY HINTON: Because the brain works. You have to explain how we can do things and how we can do things like reading, which we didn’t evolve for, since reading is much too recent for us to have had significant evolutionary input. But we can learn to do that and mathematics as well. So there must be a way to learn in these neural networks.

BROOK SILVA-BRAGA: Yesterday, Nick Frosst, who used to work with you, told us that you are not really that interested in creating AI; your core interest is just in understanding how the brain works.

GEOFFREY HINTON: Yes, I’d really like to understand how the brain works. Obviously, if your failed theories of how the brain works lead to good technology, you cash in on that and get grants and things. But I really would like to know how the brain works, and I think there’s currently a divergence between the artificial neural networks that are the basis of all this new AI and how the brain actually works. I think they’re going different routes now.

BROOK SILVA-BRAGA: So we’re still not going about it the right way?

GEOFFREY HINTON: That’s what I believe; this is my personal opinion. But all of the big models now use a technique called backpropagation.

BROOK SILVA-BRAGA: Which you helped popularize in the ’80s.

GEOFFREY HINTON: Very good. And I don’t think that’s what the brain is doing.

BROOK SILVA-BRAGA: Explain why.

GEOFFREY HINTON: Okay, there’s a fundamental difference. There are two different paths to intelligence. One path is the biological path where you have hardware that’s a bit flaky and analog, so what we have to do is communicate by using natural language, also by showing people how to do things through imitation and the like. But instead of being able to communicate a hundred trillion numbers, we can only communicate what you could say in a sentence, which is not that many bits per second, and so we’re really bad at communicating compared with these current computer models that run on digital computers.

BROOK SILVA-BRAGA: It’s almost infinite; they’re able to…

GEOFFREY HINTON: The communication bandwidth is huge because they’re exactly the same model – they’re clones of the same model running on different computers. And because of that, they can see huge amounts of data, since different computers can see different data, and then they can combine what they’ve learned.

BROOK SILVA-BRAGA: More than any person could ever comprehend.

GEOFFREY HINTON: Far more than any person could comprehend.

BROOK SILVA-BRAGA: And yet somehow, we’re still smarter than them.

GEOFFREY HINTON: Okay, so they’re like idiot savants, right? ChatGPT knows much more than any one person. If you had a competition about how much you know, it would just wipe out any one person.

BROOK SILVA-BRAGA: It’s amazing at bar trivia.

GEOFFREY HINTON: Yes, it would do amazingly. And it can write poems, but they’re not so good at reasoning. We’re better at reasoning because we have to extract our knowledge from much less data. So we’ve got a hundred trillion connections, most of which we learn, but we only live for a billion seconds, which isn’t very long. Whereas things like ChatGPT have run for much more time than that to absorb all this data, but on many different computers.

BROOK SILVA-BRAGA: In 1986, you published a thing in Nature that had the idea that we’re going to have a sentence of words, and it’ll predict the last word.

GEOFFREY HINTON: Yes, that was the first language model.

BROOK SILVA-BRAGA: That’s basically what we’re doing now.

GEOFFREY HINTON: Yes and no.

BROOK SILVA-BRAGA: 1986 was a long time ago; why still did people not say, “Oh, okay, I think he’s on to something?”

GEOFFREY HINTON: Oh, because back then, if you asked how much data I trained that model on, I had a little simple world of just family relationships. There were 112 possible sentences, and I trained it on 104 of them and checked whether it got the last eight right.

BROOK SILVA-BRAGA: Okay, and how did it do?

GEOFFREY HINTON: It got most of the last eight right. It did better than symbolic AI.
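
To make that setup concrete, here is a toy reconstruction in Python. This is my sketch, not Hinton's actual 1986 network or data: the sizes and the random stand-in "sentences" are assumptions for illustration, and random answers won't generalize the way his structured family-tree data did. What it shows is the protocol he describes: train on 104 of the 112 items, test on the held-out 8.

```python
# Toy version of the 1986 protocol (my reconstruction, not the original):
# encode (person, relation) pairs as one-hot features, train on 104 of the
# 112 "sentences", and check the 8 held-out completions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_people, n_relations = 24, 12  # assumed sizes, for illustration only

# Stand-in data: each "sentence" is (person, relation) -> answer person.
pairs = rng.integers([0, 0], [n_people, n_relations], size=(112, 2))
answers = rng.integers(0, n_people, size=112)  # random stand-in answers

def one_hot(pairs):
    """Concatenate one-hot codes for the person and the relation."""
    person = np.eye(n_people)[pairs[:, 0]]
    relation = np.eye(n_relations)[pairs[:, 1]]
    return np.hstack([person, relation])

X, y = one_hot(pairs), answers
model = LogisticRegression(max_iter=1000).fit(X[:104], y[:104])
print("held-out correct:", (model.predict(X[104:]) == y[104:]).sum(), "of 8")
```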

BROOK SILVA-BRAGA: So it’s just that the computers weren’t powerful enough at the time.

GEOFFREY HINTON: The computers we have now are millions of times faster; they’re parallel, and they can do millions of times more computation. So, I did a little computation: if I’d taken the computer I had back in 1986 and I started learning something on it, it would still be running now and not have got there. And that’s stuff that would now take a few seconds to learn.

BROOK SILVA-BRAGA: Did you know that’s what was holding you back?

GEOFFREY HINTON: I didn’t know it. I believed that might be what was holding us back, but people sort of made fun of the idea. The claim that, “Well, you know, if I just had a much bigger computer and much more data, everything would work, and the only reason it doesn’t work now is that we haven’t got enough data or enough compute” – that was seen as a sort of lame excuse for the fact that your thing doesn’t work.

BROOK SILVA-BRAGA: Was it hard in the ’90s doing this work?

GEOFFREY HINTON: In the ’90s, computers were improving, but yes, there were other learning techniques that, on small datasets, worked at least as well as neural networks and were easier to explain, and had much fancier mathematical theory behind them. So people within computer science lost interest in neural networks. Within psychology, they didn’t, because within psychology, they’re interested in how people might actually learn, and these other techniques looked even less plausible than backpropagation.

BROOK SILVA-BRAGA: Which is an interesting part of your background; you came to this not because you were interested in computers necessarily, but because you were interested in the brain.

GEOFFREY HINTON: Yes, I sort of decided I was interested in psychology originally. Then I decided we were never going to understand how people work without understanding the brain. The idea that you could do it without worrying about the brain was a sort of fashionable idea back in the ’70s, but I decided that wasn’t on; you had to understand how the brain worked.

BROOK SILVA-BRAGA: So, we fast forward now to the 2000s. Is there a key moment you think back to as a turning point when it’s like, “Okay, our side is going to prevail in this?”

GEOFFREY HINTON: Around 2006, we started doing what we call deep learning. Before then, it had been hard to get neural nets with many layers of representation to learn complicated things, and we found better ways of doing it – better ways of initializing the networks, called pre-training. And the “P” in ChatGPT stands for Pre-training.

BROOK SILVA-BRAGA: And the T is Transformer.

GEOFFREY HINTON: And G is Generative, and it was actually generative models that provided this better way of pre-training neural nets. So the seeds of it were there in 2006. By 2009, we’d already produced something that was better than the best speech recognizers at recognizing which phoneme you were saying.

BROOK SILVA-BRAGA: Using different technology than all the other speech recognizers were.

GEOFFREY HINTON: Than the standard approach, which had been tuned for 30 years. There were other people using neural nets, but they weren’t using deep neural nets.

BROOK SILVA-BRAGA: And then there’s a big thing that happens in 2012.

GEOFFREY HINTON: Yes, actually two big things. One is that the research we’d done in 2009, done by two of my students over a summer, led to better speech recognition. That got disseminated to all the big speech recognition labs at Microsoft, IBM, and Google. And in 2012, Google was the first to get it into a product, and suddenly speech recognition on Android became as good as Siri, if not better. So that was deep neural nets, developed three years earlier, being deployed for speech recognition. At the same time, within a few months of that happening, two other students of mine developed an object recognition system that would look at images and tell you what the object was, and it worked much better than previous systems.

BROOK SILVA-BRAGA: How did this system work?

GEOFFREY HINTON: There was someone called Fei-Fei Li and her collaborators who created a big database of images, like a million images of a thousand different categories. You’d have to look at an image and give your best guess about what the primary object was in the image. So the images would typically have one object in the middle, and you’d have to say things like “bullet train” or “husky”. The other systems were getting like 25% errors, and we were getting like 15% errors. Within a few years, that 15% went down to 3%, which was about human level.

BROOK SILVA-BRAGA: And can you explain, in a way people would understand, the difference between the way they were doing it and the way your team did it?

GEOFFREY HINTON: I can try.

BROOK SILVA-BRAGA: That’s all we can hope for.

GEOFFREY HINTON: Okay, so suppose you wanted to recognize a bird in an image. The image itself, let’s suppose it’s a 200 by 200 image that’s got 200 times 200 pixels, and each pixel has three values for the three colors, RGB. So you’ve got 200 by 200 by 3 numbers in the computer; it’s just numbers in the computer, right? And the job is to take those numbers in the computer and convert them to a string that says “bird”. So how would you go about doing that? For 50 years, people in standard AI tried to do that and couldn’t get a bunch of numbers into a label that says “bird”. So here’s a way you might go about it: at the first level of features, you might make feature detectors, things that take little combinations of pixels. So you might make a feature detector that says, “Look, if all these pixels are dark and all these pixels are bright, I’m going to turn on.” And so that feature detector would represent a vertical edge. You might have another one that said, “If all these pixels are bright and all these pixels are dark, I’ll turn on.” That would be a feature detector that represents a horizontal edge, and you can have others for edges of different orientations.
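
To make the "vertical edge detector" concrete, here is a minimal Python sketch (my illustration, not code from the interview): a small weight patch that turns on wherever the pixels on its left are dark and the pixels on its right are bright.

```python
# A hand-wired vertical-edge feature detector, as Hinton describes it.
import numpy as np

def detect_vertical_edges(gray, threshold=2.0):
    """Slide a dark-left/bright-right 3x3 patch over a 2-D grayscale image."""
    kernel = np.array([[-1.0, 0.0, 1.0],
                       [-1.0, 0.0, 1.0],
                       [-1.0, 0.0, 1.0]])  # dark on the left, bright on the right
    h, w = gray.shape
    response = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            response[i, j] = np.sum(gray[i:i + 3, j:j + 3] * kernel)
    return response > threshold  # the feature detector "turns on"

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(detect_vertical_edges(img).astype(int))  # 1s along the edge columns
```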

BROOK SILVA-BRAGA: We had a lot of work to do; all we’ve done is make a box.

GEOFFREY HINTON: Right, so we’ve got to have a whole lot of feature detectors like that, and that’s what you actually have in your brain. So if you look in a cat or monkey’s cortex, it’s got feature detectors like that. Then at the next level, you might say, if you were wiring it up by hand, you would create all these little feature detectors. At the next level, you would say, “Okay, suppose I have two edge detectors that join at a fine angle; that could just be a beak.” So the next level up will have a feature detector that detects two of the lower-level detectors joining at a fine angle. We might also notice a bunch of edges that form a circle; we might have a detector for that. Then the next level up, we might have a detector that says, “Hey, I found this beak-like thing, and I found a circular thing in roughly the right spatial relationship to make the eye and the beak of a bird.” And so at the next level up, you’d have a bird detector that says, “If I see those two there, I think it might be a bird.” And you could imagine wiring all that up by hand. And so the idea of backpropagation is just to put in random weights to begin with, and now the feature detectors would just be rubbish; they’d be garbage. But look to see what it predicts, and if it happened to predict “bird” (it wouldn’t), leave the weights alone.

BROOK SILVA-BRAGA: You got it right.

GEOFFREY HINTON: The connection strengths. But if it predicts “cat,” then what you do is you go backward through the network and you ask the following question, and you can ask this with a branch of mathematics called calculus, but you just need to think about the question, and the question is: how should I change this connection strength so it’s less likely to say “cat” and more likely to say “bird”? That’s called the error, the discrepancy, right? And you figure out for every connection strength how you should change it a little bit to make it more likely to say “bird” and less likely to say “cat.”

BROOK SILVA-BRAGA: A person’s figuring that out, or the algorithm is set to work?

GEOFFREY HINTON: A person has said, “This is a bird,” so a person looked at the image and said, “It’s a bird, it’s not a cat, it’s a bird.” So that’s a label supplied by a person. But then the algorithm, backpropagation, is just a way of figuring out how to change every connection strength to make it more likely to say “bird” and less likely to say “cat.”

BROOK SILVA-BRAGA: It just keeps trying; keeps learning?

GEOFFREY HINTON: It just keeps doing that, and now, if you show it enough birds and enough cats, when you show a bird, it’ll say “bird,” and when you show a cat, it’ll say “cat.” And it turns out that works much, much better than trying to wire everything by hand.
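
Here is a bare-bones Python sketch of that loop. It is my simplification, one layer only; real networks stack many feature-detector layers and use the chain rule to push the error signal back through all of them. The fake "bird" and "cat" images are assumptions, just to give the toy something learnable.

```python
# Start with random connection strengths, guess, then nudge every weight
# to make the right label more likely and the wrong one less likely.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_labels = 200 * 200 * 3, 2            # labels: 0 = "cat", 1 = "bird"
W = rng.normal(0.0, 0.01, (n_pixels, n_labels))  # random weights = garbage features

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(x, label, lr=0.1):
    """One backpropagation step for a single layer."""
    global W
    p = softmax(x @ W)     # current guesses: P(cat), P(bird)
    grad = np.outer(x, p)  # gradient of the error w.r.t. W, i.e.
    grad[:, label] -= x    # outer(x, p - one_hot(label))
    W -= lr * grad         # move each connection strength a little downhill

# Fake "images": birds drawn slightly brighter than cats on average,
# just so there is something learnable in this toy setup.
for step in range(200):
    label = step % 2
    x = rng.normal(0.1 * label, 1.0, n_pixels) / np.sqrt(n_pixels)
    train_step(x, label)

test = rng.normal(0.1, 1.0, n_pixels) / np.sqrt(n_pixels)  # a bird-like image
print("P(bird) on a bird-like image:", softmax(test @ W)[1])
```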

BROOK SILVA-BRAGA: And that’s what your students did on this image database?

GEOFFREY HINTON: That’s what they did on the image database, yes, and they got it to work really well. Now, they were very clever students; in fact, one of them, Ilya Sutskever, is also one of the main people behind ChatGPT. So that was a huge moment in AI, and ChatGPT was another huge moment, and he was actually involved in both of them.

BROOK SILVA-BRAGA: I don’t know, maybe it’s cold in the room, but you got to the end of the story, and I got shivers. The idea that you do this little dial thing and it says “bird” feels like just an amazing breakthrough.

GEOFFREY HINTON: Yes, it was mainly because the other people in computer vision thought, “Okay, so these neural nets, they work for simple things like recognizing a handwritten digit, but that’s not a real, complicated image with a natural background and stuff; it’s never going to work for these big, complicated images.” And then suddenly, it did. And to their credit, the people who had been really staunch critics of neural nets and said these things are never going to work, when they worked, they did something that scientists don’t normally do, which is they said, “Oh, it worked; we’ll do that.”

BROOK SILVA-BRAGA: People see it as a huge shift.

GEOFFREY HINTON: Yes, it was quite impressive that they flipped very fast because they saw that it worked better than what they were doing.

BROOK SILVA-BRAGA: You make this point that when people think about these machines, and about ourselves and the way we think, we assume it’s language in, language out, so it must be language in the middle, and that this is an important misunderstanding. Can you just explain that?

GEOFFREY HINTON: I think that’s complete rubbish. So, if that were true and it were just language in the middle, you’d have thought that approach, which is called symbolic AI, would have been really good at doing things like machine translation, which is just taking English in and producing French out or something. You’d have thought manipulating symbols was the right approach for that, but actually, neural networks work much better. When Google Translate switched from doing that kind of approach to using neural nets, it really improved. What I think you’ve got in the middle is you’ve got millions of neurons, and some of them are active and some of them aren’t, and that’s what’s in there. The only place you’ll find the symbols is at the input and at the output.

BROOK SILVA-BRAGA: We’re not exactly at the University of Toronto; we’re close to the University of Toronto. At universities here and around the world, we’re teaching a lot of people to code. Does this still make sense to be teaching so many people to code?

GEOFFREY HINTON: I don’t know the answer to that. In about 2015, I famously said it didn’t make sense to be teaching radiologists to recognize things in images, because within the next five years, computers would be better at it.

BROOK SILVA-BRAGA: Are we all about to be radiologists, though?

GEOFFREY HINTON: Well, the computers are not better yet. I was wrong; it’s going to take 10 years, not five. I wasn’t wrong in spirit; I was just off by a factor of two. Computers are now comparable with radiologists at a lot of medical images. They’re not way better at all of them yet.

BROOK SILVA-BRAGA: But they will get better?

GEOFFREY HINTON: So, I think there’ll be a while when it’s still worth having coders, and I don’t know how long that’ll be, but we’ll need fewer of them, maybe, or we’ll need the same number and they’ll be able to achieve a whole lot more.

BROOK SILVA-BRAGA: We were talking about Cohere.ai; we went over and visited them yesterday. You’re an investor in them. Maybe the question is, how did they convince you? What was the pitch that convinced you to invest in this?

GEOFFREY HINTON: So they’re good people, and I’ve worked with several of them. They were one of the first companies to realize that you need to take these big language models being developed at places like Google and OpenAI and make them available to companies. It’s going to be enormously valuable for companies to be able to use these big language models, and so that’s what they’ve been doing, and they’ve got a significant lead in that. That’s why I think they’re going to be successful.

BROOK SILVA-BRAGA: Another thing you’ve said that I just find fascinating, so I want to get you to talk about it, is the idea that there’ll be a kind of new computer that will be central to this problem. What is that idea?

GEOFFREY HINTON: So there’s the biological route to intelligence, where every brain is different and we have to communicate knowledge from one to another by using language. And there’s the current AI version of neural nets, where you have identical models running on different computers, and they can actually share the connection strengths, so they can share billions of numbers.

BROOK SILVA-BRAGA: This is how we make a bird?

GEOFFREY HINTON: So they can share all the connection strengths for recognizing a bird, and one can learn to recognize cats and the other can learn to recognize birds, and they can share their connection strengths, and now each of them can do both things. And that’s what’s happening in these big language models; they’re sharing. But that only works in digital computers because they have to be able to do identical things, and you can’t make different biological brains behave identically, so you can’t share the connections.
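
To see why identical digital copies can pool what they learn, here is a small Python sketch. It is my illustration, and real systems typically average gradients on every step (data-parallel training) rather than merging weights once at the end, but the principle is the same: two copies start from the same weights, each learns from different data, and sharing the connection strengths gives one model that can do both things.

```python
# Two identical copies, different data, then share the connection strengths.
import numpy as np

rng = np.random.default_rng(1)
dim = 100

def local_update(W, xs, ys, lr=0.1):
    """Stand-in for local learning: simple logistic-regression steps."""
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + np.exp(-x @ W))  # this copy's P("bird")
        W = W - lr * (p - y) * x
    return W

W0 = np.zeros(dim)                        # both copies start identical

cats = rng.normal(-0.5, 1.0, (50, dim))   # copy 1 sees only cats (label 0)
birds = rng.normal(+0.5, 1.0, (50, dim))  # copy 2 sees only birds (label 1)
W1 = local_update(W0, cats, np.zeros(50))
W2 = local_update(W0, birds, np.ones(50))

W_shared = (W1 + W2) / 2.0                # share all the numbers in one go
bird_score = rng.normal(+0.5, 1.0, dim) @ W_shared
cat_score = rng.normal(-0.5, 1.0, dim) @ W_shared
print("merged copy prefers 'bird' on bird-like input:", bird_score > cat_score)
```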

BROOK SILVA-BRAGA: But why wouldn’t we stick with digital computers?

GEOFFREY HINTON: Because of the power consumption, you need a lot of power. It’s getting less as chips get better, but you need a lot of power to do this. To run a digital computer, you have to run it at such high power that it behaves exactly in the right way, whereas if you’re willing to run at much lower power, like the brain, then you’ll allow a bit of noise and so on. But that particular system will adapt to the kind of noise in that particular system, and the whole thing will work even though you’re not running it at such high power that it behaves exactly as you intended. The difference is the brain runs on 30 watts; a big AI system needs like a megawatt. So we’re training on 30 watts, and these big AI systems are using, because they’ve got lots of copies of the same thing, like a megawatt. So, you know, you’re talking about a factor of the order of a thousand in the power requirements. And so I think there’s going to be a phase when we train on digital computers, but once something’s trained, we run it on very low-power systems. So, if you want your toaster to be able to have a conversation with you and you want a chip in it that only costs a couple of dollars but can do ChatGPT, that’d better be a low-power analog chip.

BROOK SILVA-BRAGA: What are the next things you think this technology will do that will impact people’s lives?

GEOFFREY HINTON: It’s hard to pick one thing. I think it’s going to be everywhere, right? It’s already sort of getting to be everywhere. ChatGPT has just made a lot of people realize it’s going to be everywhere, but it’s already, you know, when Google does search, it uses big neural nets to help decide what’s the best thing to show you. We’re at a transition point now where ChatGPT is this kind of idiot savant, and it also doesn’t really understand about truth, as it’s been trained on lots of inconsistent data. It’s trying to predict what someone will say next on the web, and people have different opinions, and it has to have a kind of blend of all these opinions so that it can model what anybody might say. It’s very different from a person who tries to have a consistent worldview, particularly if you want to act in the world, it’s good to have a consistent worldview. And I think one thing that’s going to happen is we’re going to move towards systems that can understand different worldviews and can understand that, okay, if you have this worldview, then this is the answer, and if you have this other worldview, then that’s the answer.

BROOK SILVA-BRAGA: We get our own truths.

GEOFFREY HINTON: Well, that’s the problem, right? Because what you and I probably believe, unless you’re an extreme relativist, is that there actually is a truth to the matter.

BROOK SILVA-BRAGA: Certainly on many topics, or even most topics.

GEOFFREY HINTON: On many topics, like the Earth is actually not flat; it just looks flat, right?

BROOK SILVA-BRAGA: So, do we really want a model that says, “Well, for some people…”? Like, we don’t know.

GEOFFREY HINTON: That’s going to be a big issue, and we don’t know. We don’t know how to deal with it at present, and I don’t think Microsoft knows how to deal with it either.

BROOK SILVA-BRAGA: They don’t, and it seems to be a huge governance challenge. Who makes these decisions?

GEOFFREY HINTON: It’s very tricky. You don’t want some big for-profit company deciding what’s true.

BROOK SILVA-BRAGA: But they’re controlling how we tune the neurons.

GEOFFREY HINTON: Google is very careful not to do that at present. What Google will do is refer you to relevant documents which will have all sorts of opinions in them.

BROOK SILVA-BRAGA: Well, they haven’t released their chat product, at least as we speak, but we’ve seen at least the people that have released chat products feel like there are certain things they don’t want to be said by their voice, right? So they go in there and meddle with it so it won’t say offensive things.

GEOFFREY HINTON: But there’s a limit to what you can do that way. There’s always going to be things you didn’t think of, right? So I think Google is going to be far more careful than Microsoft when it does release a chatbot, and it’ll probably come with lots of warnings: “This is just a chatbot, and don’t necessarily believe what it says.”

BROOK SILVA-BRAGA: Careful in the labeling or careful in the way they meddle with it so it doesn’t do lousy things?

GEOFFREY HINTON: All of those things. Careful in how they present it as a product and careful in how they train it, and do a lot of work to prevent it from saying bad things as well.

BROOK SILVA-BRAGA: Who gets to decide what a bad thing is?

GEOFFREY HINTON: Some bad things are fairly obvious.

BROOK SILVA-BRAGA: But many of the most important ones are not.

GEOFFREY HINTON: Yes, so that is a big open issue at present. I think Microsoft was extremely brave to release ChatGPT.

BROOK SILVA-BRAGA: Do you see this as like a larger, some people see this as a larger societal thing, we need either regulation or big public debates about how we handle these issues?

GEOFFREY HINTON: When it comes to the issue of what’s true, I mean, do you want the government to decide what’s true? That’s a big problem, right? You don’t want the government doing it either.

BROOK SILVA-BRAGA: I’m sure you’ve thought deeply on this question for a long time. How do we navigate the line between just sending it out into the world and finding ways to curate it?

GEOFFREY HINTON: Like I said, I don’t know the answer, and I don’t believe anybody really knows how to handle these issues. We’re going to have to learn quite fast how to handle these issues because it’s a big problem at present. But how it’s going to be done, I don’t know. But I suspect as a first step, at least, these big language models are going to have to understand that there are different points of view and that the completions it makes are relative to a point of view.

BROOK SILVA-BRAGA: Some people are worried that this could take off very quickly, and we just might not be ready for that. Does that concern you?

GEOFFREY HINTON: It does a bit. Until quite recently, I thought it was going to be like 20 to 50 years before we probably have general-purpose AI, and now I think it may be 20 years or less.

BROOK SILVA-BRAGA: Okay, some people think it could be like five. Is that silly?

GEOFFREY HINTON: I wouldn’t completely rule that possibility out now, whereas a few years ago, I would have said no way.

BROOK SILVA-BRAGA: And then some people say AGI could be massively dangerous to humanity because we just don’t know what a system that’s so much smarter than us will do. Do you share that concern?

GEOFFREY HINTON: I do a bit. I mean, obviously, what we need to do is make this synergistic, have it so it helps people. And I think the main issue here, well, one of the main issues, is the political systems we have. So I’m not confident that President Putin is going to use AI in ways to help people.

BROOK SILVA-BRAGA: Like even if, say, the US and Canada and a bunch of countries say, “Okay, we’re going to put these guardrails up?”

GEOFFREY HINTON: It’s tricky. For things like autonomous lethal weapons, we’d like to have something like Geneva conventions, like chemical weapons. People decided they were so nasty they weren’t going to use them, except just occasionally. But I mean, basically, they don’t use them. People would love to get a similar treaty for autonomous lethal weapons, but I don’t think there’s any way they’re going to get that. I think if Putin had autonomous lethal weapons, he would use them right away.

BROOK SILVA-BRAGA: This is like the most pointed version of the question, and you can just laugh it off or not answer it if you want, but what do you think the chances are of AI just wiping out humanity? Can we put a number on that?

GEOFFREY HINTON: It’s somewhere between 0 and 100 percent, I think. I think it’s not inconceivable, that’s all I’ll say. I think if we’re sensible, we’ll try and develop it so that it doesn’t, but what worries me is the political situation we’re in, where it needs everybody to be sensible.

BROOK SILVA-BRAGA: There’s a massive political challenge, it seems to me, and there’s a massive economic challenge in that you can have a whole lot of individuals who pursue the right course, and yet the profit motive of corporations may not be as cautious as the individuals who work for them.

GEOFFREY HINTON: Maybe. I mean, I only really know about Google; that’s the only corporation I’ve worked in.

BROOK SILVA-BRAGA: They’ve been among the most cautious.

GEOFFREY HINTON: They’re extremely cautious about AI because they’ve got this wonderful search engine that gives you the answers you want to see, and they can’t afford to risk that. Whereas Microsoft has Bing, and if Bing disappeared, Microsoft would hardly notice.

BROOK SILVA-BRAGA: But it was easy for Google to take it slow when there wasn’t someone nipping at their heels, and this seems to be exactly…

GEOFFREY HINTON: So Google has actually been in the lead. I mean, Transformers were invented at Google, and the early big language models were at Google, but…

BROOK SILVA-BRAGA: They kind of kept it in your lab.

GEOFFREY HINTON: They’re being much more conservative, and I think rightly so.

BROOK SILVA-BRAGA: But now they feel this pressure.

GEOFFREY HINTON: Yes, and so they’re trying to, they’re developing a system called Bard that they’re going to put out there, and they’re doing lots and lots of testing of it, but they’re going to be, I think, a lot more cautious than Microsoft.

BROOK SILVA-BRAGA: You mentioned autonomous weapons. Let me give you a chance just to tell the story. What’s the connection between that and how you ended up in Canada?

GEOFFREY HINTON: Okay, there were several reasons I came to Canada, but one of them was certainly not wanting to take money from the U.S. Defense Department. This was at the time of Reagan when they were mining the harbors in Nicaragua, and it was interesting. I was at a big university in Pittsburgh, and I was one of the few people there who thought that mining the harbors in Nicaragua was really wrong, so I felt like a fish out of water.

BROOK SILVA-BRAGA: And you saw that this was where the money was coming from for this kind of work.

GEOFFREY HINTON: Almost all of that department’s funding came from the Defense Department.

BROOK SILVA-BRAGA: You started to talk about the concerns that bringing this technology to warfare could present. What are your concerns?

GEOFFREY HINTON: Oh, that the Americans would like to replace their soldiers with autonomous AI soldiers, and they’re trying to work towards that.

BROOK SILVA-BRAGA: What evidence do you see of that?

GEOFFREY HINTON: I’m on a mailing list from the U.S. Defense Department. I’m not sure they know I’m on the mailing list.

BROOK SILVA-BRAGA: It’s a big list, they didn’t notice you’re there. You might be off tomorrow.

GEOFFREY HINTON: I might be off tomorrow.

BROOK SILVA-BRAGA: What’s on the list?

GEOFFREY HINTON: Oh, they just describe various things they’re going to do. There are some disgusting things on there.

BROOK SILVA-BRAGA: What disgusted you?

GEOFFREY HINTON: The thing that disgusted me most was a proposal for a self-healing minefield. So the idea is, look at it from the point of view of the minefield: when some silly civilian trespasses into the minefield, they get blown up, and that makes a hole in the poor minefield. So it’s got a gap in now, so it’s not fit for purpose. So the idea is maybe nearby mines could communicate or maybe they could move over a bit, and they call that healing. And it was just the idea of talking about healing for these things that blow the legs off children. I mean, the healing being about the minefield healing, that disgusted me.

BROOK SILVA-BRAGA: There is this argument that though the autonomous systems might play a role in helping the warfighter, it’s ultimately a human making the decision.

GEOFFREY HINTON: Here’s what worries me. If you wanted to make an effective autonomous soldier, you’d need to give it the ability to create subgoals. In other words, it has to realize things like, “Okay, I want to kill that person over there, but to get over there, how am I going to get over there?” And then it has to realize, “Well, if I could get to that road, I could get there more quickly.” So it has a subgoal of getting to the road. So as soon as you give it the ability to create its own subgoals, it’s going to become more effective, and so people like Putin are going to want robots like that. But as soon as it’s got the ability to create subgoals, you have what’s called the alignment problem, which is how do you ensure it’s not going to create subgoals that are going to be not good for people, not good for you.

BROOK SILVA-BRAGA: Who knows who’s on that road?

GEOFFREY HINTON: Who knows who’s on that road, and if these systems are being developed by the military, the idea of wiring in some rule that says never hurt a person… well, they’re being designed to target people.

BROOK SILVA-BRAGA: Do you see any way out of this? Is it a treaty or what is it?

GEOFFREY HINTON: I think the best bet is something like a Geneva Convention, but it’s going to be very difficult. I think if there was a lot of public outcry, that might persuade them; I can imagine the Biden Administration going for something like that. But then you have to deal with Putin.

BROOK SILVA-BRAGA: Okay, we’ve covered so much. I think I have like two more things.

GEOFFREY HINTON: There’s one more thing I want to say.

BROOK SILVA-BRAGA: Go for it.

GEOFFREY HINTON: You can ask me the question. Some people say that these big models are just autocomplete.

BROOK SILVA-BRAGA: Well, on some level, the models are autocomplete. We’re told that the large language models are just predicting the next word, or is that not so simple?

GEOFFREY HINTON: No, that’s true; they are just predicting the next word, and so they’re just autocomplete. But ask yourself the question: what do you need to understand about what’s being said so far in order to predict the next word accurately? And basically, you have to understand what’s being said to predict language. So you’re just autocomplete too in the sense that you can predict the next word, maybe not as well as ChatGPT, but to do that, you have to understand the sentence. So let me give you a little example from translation; it’s a very Canadian example. Suppose I take the sentence, “The trophy would not fit in the suitcase because it was too big,” and I want to translate that into French. When I say, “The trophy would not fit in the suitcase because it was too big,” you assume the “it” refers to the trophy, and in French, trophy has a particular gender, so you know what pronoun to use. But suppose I say, “The trophy would not fit in the suitcase because it was too small.” Now you think that “it” refers to the suitcase, and that has a different gender in French. So in order to translate that sentence to French, you have to know when it wouldn’t fit in because it was too big, it’s the trophy that’s too big, and when it wouldn’t fit in because it was too small, it’s the suitcase that’s too small. And that means you have to understand spatial relations and containment, and so on. So you have to understand just to do machine translation or to predict that pronoun. If you want to predict that pronoun, you’ve got to understand what’s being said; it’s not enough just to treat it as a string of words.
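
You can try the trophy/suitcase test yourself with a masked language model. The sketch below assumes the Hugging Face transformers library and the public bert-base-uncased checkpoint; those are my choices of tooling, not anything from the interview, and the model's answers are not guaranteed.

```python
# Does the model resolve the pronoun the way Hinton describes?
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for ending in ("big", "small"):
    sentence = ("The trophy would not fit in the suitcase "
                f"because the [MASK] was too {ending}.")
    best = fill(sentence)[0]["token_str"]  # top prediction for the blank
    print(f"too {ending}: model says '{best}'")

# Getting "trophy" for "big" and "suitcase" for "small" requires exactly the
# knowledge of spatial relations and containment Hinton is talking about.
```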

BROOK SILVA-BRAGA: I mean, this gets me to another thing you’ve pointed out, which is kind of an exciting or troubling idea that you, working intimately in this field for as long as anyone, describe the progress as, “Well, we had this idea, and we tried it, and it worked,” and so we get a couple of decades of backpropagation, we have this idea for a Transformer, now we’ll do some trick, but there are hundreds of other ideas that haven’t been tried out.

GEOFFREY HINTON: Yes, so I think even if we didn’t have any new ideas, just making computers go faster and getting more data will make all this stuff work better. We’ve seen that as they scale up ChatGPT, it’s not radically new ideas; I think it’s just more connections and more data to train it with. But in addition to that, there’s going to be new ideas like transformers, and they’re going to make it work much better.

BROOK SILVA-BRAGA: Are we close to the computers coming up with their own ideas for improving themselves?

GEOFFREY HINTON: Yes, we might be.

BROOK SILVA-BRAGA: And then it could just go fast.

GEOFFREY HINTON: That’s an issue, right? We have to think hard about how to control that.

BROOK SILVA-BRAGA: Can we?

GEOFFREY HINTON: We don’t know; we haven’t been there yet, but we can try.

BROOK SILVA-BRAGA: Okay, that seems kind of concerning.

GEOFFREY HINTON: Yes.

BROOK SILVA-BRAGA: Do you have any concerns, as you’re seen as like a godfather of this industry, about what you’ve wrought?

GEOFFREY HINTON: I do a bit. On the other hand, I think whatever’s going to happen is pretty much inevitable. That is, one person stopping doing research wouldn’t stop this from happening. If my impact is to make it happen a month earlier, that’s about the limit of what one person can do.

BROOK SILVA-BRAGA: There’s this idea of the short runway and the long takeoff. Maybe we need time to prepare, or maybe it’s better if it happens quickly because then people will have urgency around the issue rather than creep, creep, creep. Do you have any thoughts on this?

GEOFFREY HINTON: I think time to prepare would be good, and so I think it’s very reasonable for people to be worrying about those issues now, even though it’s not going to happen in the next year or two. People should be thinking about those issues.

BROOK SILVA-BRAGA: We haven’t even touched on job displacement, which is just my mistake for not bringing it up. Is this just going to eat up job after job after job after job?

GEOFFREY HINTON: I think it’s going to make jobs different. People are going to be doing the more creative end and less of the routine end.

BROOK SILVA-BRAGA: But what’s the creative if it can write the poem, make the movie, and all of that?

GEOFFREY HINTON: Well, if you go back in history and look at ATMs, these cash machines came along, and people said that’s the end of bank tellers. It wasn’t actually the end of bank tellers. The bank tellers now deal with more complicated things. And take coders, so people say, you know, these things can do simple coding and usually get it right. You just need to get it to write the program and then just check it, so you’ll be able to work ten times as fast. Well, either you could have 10% of the programmers, or you could have the same number of programmers producing ten times as much stuff. And I think there’s going to be a lot of trade-offs like that. Once these things start being creative, there’ll be hugely more stuff created.

BROOK SILVA-BRAGA: This is the biggest technological advancement since… is this another Industrial Revolution? What is this? How should people think of it?

GEOFFREY HINTON: I think it’s comparable in scale with the Industrial Revolution or electricity, or maybe the wheel.

BROOK SILVA-BRAGA: Okay, so buckle up.

GEOFFREY HINTON: One of the reasons Toronto got a big lead in AI is because of the policies of the granting agencies in Canada, which don’t have much money, but they use some of that money to support curiosity-driven basic research. In the States, the funding comes with strings: you have to say what products you’re going to produce with it and so on. In Canada, some of the government money, quite a lot of it, is given to professors to employ graduate students and other researchers to explore things they’re curious about. And if they seem to be good at that, then they get more money three years later. That’s what supported both Yoshua Bengio and me. It was money for curiosity-driven basic research.

BROOK SILVA-BRAGA: Even through decades of not being able to show much.

GEOFFREY HINTON: Yes, even through decades of not being able to show much. So that’s one thing that happened in Canada. Another thing was the Canadian Institute for Advanced Research (CIFAR), which provides extra money to professors in areas where Canada is good, and provides money for professors to interact with each other when they’re far apart, like in Vancouver and Toronto, but also to interact with researchers in other parts of the world, like America, Britain, Israel, and so on. CIFAR set up a program in AI; the original one, in the 1980s, was the one that brought me to Canada, and it was in symbolic AI. I was an oddball. I was kind of weird because I did this stuff everybody else thought was nonsense; they recognized that I was good at this kind of nonsense.

BROOK SILVA-BRAGA: And so if anyone’s going to do the nonsense, it might as well be him.

GEOFFREY HINTON: One of my letters of recommendation said that. It said, you know, I don’t believe in this stuff, but if you want somebody to do it, Geoffrey is the guy. And then after that program finished, I went back to Britain for a few years, and then when I came back to Canada, they decided to fund a program in deep learning, essentially.

BROOK SILVA-BRAGA: Sentience. I think you have complaints with even just how you define that, right?

GEOFFREY HINTON: When it comes to sentience, I’m amazed that people can confidently pronounce these things are not sentient, and when you ask them what they mean by sentient, they say, well, they don’t really know. So how can you be confident they’re not sentient if you don’t know what sentient means?

BROOK SILVA-BRAGA: So maybe they are already.

GEOFFREY HINTON: Who knows? I think whether they’re sentient or not depends on what you mean by sentient, so you better define what you mean by sentient before you try and answer the question: are they sentient?

BROOK SILVA-BRAGA: Does it matter what we think, or does it only matter whether it effectively acts as if it is sentient?

GEOFFREY HINTON: It’s a very good question, Matt.

BROOK SILVA-BRAGA: And what’s your answer?

GEOFFREY HINTON: I don’t have one.

BROOK SILVA-BRAGA: Because if it’s not sentient, but it decides for whatever reason that it believes it is and it needs to achieve some goal that is contrary to our interests, but it believes in its interests, does it really matter if in any human…

GEOFFREY HINTON: I think a good context to think of this in is an autonomous lethal weapon. Okay, so it’s all very well saying it’s not sentient, but when it’s hunting you down to shoot you, you’re going to start thinking it’s sentient.

BROOK SILVA-BRAGA: So whether it really is sentient isn’t an important standard anymore.

GEOFFREY HINTON: The kind of intelligence we’re developing is very different from our intelligence, so it’s this idiot savant kind of intelligence. Yes, so it’s quite possible that it is sentient, but essentially in a somewhat different way from us.

BROOK SILVA-BRAGA: But your goal is to make it more like us, and you think we’ll get there.

GEOFFREY HINTON: My goal is to understand us, and I think the way you understand us is by building things like us.

BROOK SILVA-BRAGA: Okay, so that’s, I mean…

GEOFFREY HINTON: The physicist Richard Feynman said, “You can’t understand things unless you can build them.” That’s the real test of whether you understand it.

BROOK SILVA-BRAGA: And so you’ve been building.

GEOFFREY HINTON: And so I’ve been building.

See also:

AI pioneer Geoffrey Hinton leaves Google to be able to speak freely about the risks of AI
https://www.nrc.nl/nieuws/2023/05/02/ai-pionier-geoffrey-hinton-maakt-zich-grote-zorgen-en-verlaat-google-a4163550

‘Godfather of artificial intelligence’ leaves Google and warns about AI
https://nos.nl/artikel/2473634-godfather-van-kunstmatige-intelligentie-verlaat-google-en-waarschuwt-voor-ai

Or, as Ineke van Kruining so aptly puts it on LinkedIn:
The inventors of the car sound the alarm: “There are no brakes on it yet!”
