In science, a single question can ignite game-changing discoveries. For British-born Canadian neuroscientist and Google executive Geoffrey Hinton, that pivotal moment came in the 1960s, when, as a high-school student, he was contemplating three-dimensional holograms. Known today as a “godfather of deep learning,” Hinton drew connections between the human brain and holographic images. Unlike photographs, which capture an object from only a single viewpoint, holograms record light from every point around the object and reconstruct it in three dimensions.
Hinton saw similarities with the way the brain stores memory. Rather than recording a memory at a single location, a complex network of neurons distributes it throughout the brain. All those decades ago, asking whether we could mimic that function of the brain gave rise to what is today commonly known as machine learning, which in turn has triggered exponential growth in the rate of new discoveries, from deep-space exploration to nanotech, to new drug formulations and automated, intelligent environments.
Hinton went on to study at Cambridge and then at the University of Edinburgh, where he pursued research on neural networks, and later taught at the University of Toronto, where he is Chief Scientific Advisor (and co-founder) of the Vector Institute for Artificial Intelligence. In 2013, Google acquired Hinton’s neural-networks startup, DNNresearch, and he joined Google as a vice president and engineering fellow; he now manages Brain Team Toronto, a new division of the Google Brain Team.
For Hinton, it’s all about connections. He explains, “About 60 years ago, at the beginning of AI, there were two ideas about how you make intelligent systems. There was a logic system idea, in which you process streams of symbols using rules of inference, and there was the biologically inspired idea that you try to mimic a big network of brain cells and learn the strengths of the connections.”
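The connectionist idea Hinton describes can be illustrated with a minimal sketch: a single artificial neuron whose connection strengths (weights) are adjusted from examples rather than programmed by hand. This is an illustrative perceptron, not Hinton’s own code; the function names, learning rate, and AND-gate data are assumptions chosen for clarity.

```python
# A minimal sketch of the connectionist idea: a single artificial
# neuron learns the strengths of its connections from examples.

def train_neuron(examples, epochs=20, lr=0.1):
    """Perceptron rule: nudge weights toward correct outputs."""
    w = [0.0, 0.0]  # connection strengths, learned rather than programmed
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # how wrong was the neuron?
            w[0] += lr * err * x1       # strengthen or weaken each
            w[1] += lr * err * x2       # connection in proportion
            b += lr * err
    return w, b

# Learn the logical AND function purely from data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Nothing in the code states what AND means; the behavior emerges entirely from adjusting connection strengths against examples, which is the contrast Hinton draws with rule-based symbol processing.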
He says that for a long time, the neural net paradigm based on mimicking the brain didn’t work very well. Scientists were unsure why. “In the end, it didn’t work very well because we hadn’t gotten enough data and computer power,” he explains, adding, “At the beginning of this century, with more and more computer power and data, suddenly systems that learned things, as opposed to systems that you programmed, became more effective. And that’s what has happened in the last 10 years. We’ve seen them become better at speech recognition, much better at recognizing things in images, much better at machine translation.”
Hinton was one of the researchers who introduced the backpropagation algorithm, and the first to use backpropagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts and deep belief nets. Hinton’s research group in Toronto made major breakthroughs in deep learning that revolutionized speech recognition and object classification.
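The core of backpropagation is the chain rule applied layer by layer, from the output error back toward the input. The sketch below, with an illustrative two-layer network (the shapes, names, and loss are assumptions, not Hinton’s original formulation), computes the gradient of a squared error with respect to both weight matrices.

```python
import numpy as np

def forward(x, w1, w2):
    """Two-layer network: y = w2 . tanh(w1 @ x)."""
    h = np.tanh(w1 @ x)   # hidden activations
    y = w2 @ h            # scalar output
    return h, y

def backprop(x, target, w1, w2):
    """Gradients of loss = 0.5 * (y - target)**2 via the chain rule."""
    h, y = forward(x, w1, w2)
    err = y - target                  # dLoss/dy at the output
    grad_w2 = err * h                 # chain rule through the output layer
    dh = err * w2 * (1 - h ** 2)      # propagate error back through tanh
    grad_w1 = np.outer(dh, x)         # gradient for the first layer's weights
    return grad_w1, grad_w2
```

Each layer reuses the error signal computed for the layer above it, which is what makes training deep networks tractable: the cost of computing all gradients is comparable to a single forward pass.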
Hinton is optimistic about the impact of deep learning on the future. “For example, to save the planet, we must make solar panels more efficient and, to do that, we need nanotechnology. Deep learning is now being applied to predict the properties of materials, so I think it may have a big impact there. If you can make solar panels 10 percent more efficient, that will have a huge effect.
“I think it is inevitable that driverless cars will come and they will save a lot of lives. There may be a transition in how we view transport. They will be socially owned and highly coordinated, so you can get a lot of them travelling very closely together, very fast, without problems.”
And to think, it all began with a simple question: What if machines could learn?