
A Nobel Physics Prize Breakthrough: Geoffrey Hinton’s “Deep Learning”


Written by Lava Naz Bagdu



In 2024, Geoffrey Hinton, a pioneering figure in Artificial Intelligence (AI), was awarded the Nobel Prize in Physics (shared with physicist John Hopfield) for his groundbreaking work on “deep learning.” This research not only helped us understand how the human brain stores and processes information but also played a crucial role in the development of Artificial Intelligence by enabling that knowledge to be applied to computer systems. As a result, AI systems are becoming progressively smarter, more effective, and better able to simulate human-like cognitive processes. This article examines this work in depth, discusses its connection with neuroscience and physics, and provides insights into Geoffrey Hinton’s life and other contributions.



Who is Geoffrey Hinton?


Geoffrey Hinton is a British-Canadian cognitive psychologist and computer scientist. He earned his BA in Experimental Psychology from the University of Cambridge in 1970 and received his PhD in Artificial Intelligence from the University of Edinburgh in 1978. He then held postdoctoral positions at the University of Sussex and the University of California, San Diego. Often referred to as the “Godfather of AI,” Hinton has made pioneering contributions to the field of Artificial Intelligence, most notably his work on backpropagation and deep learning, which has been influential in shaping modern AI systems. (McDonough, 2024; Vector Institute, n.d.)



What is Deep Learning?


Deep learning is a subset of machine learning and artificial intelligence modeled after the human brain. It uses multilayered networks, often referred to as deep neural networks, to mimic the brain’s complex decision-making process and its structure of interconnected neurons that pass signals to one another. Some form of deep learning powers most of the Artificial Intelligence applications we use today. (Holdsworth et al., 2024)
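
To make the idea of a multilayered network concrete, here is a minimal sketch in Python (assuming NumPy is installed; the layer sizes, random weights, and ReLU activation are illustrative choices, not Hinton’s specific models):

```python
import numpy as np

def relu(x):
    # A common activation function: keeps positive signals, zeroes out the rest.
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Illustrative layer widths: 4 inputs -> two hidden layers of 8 -> 1 output.
layer_sizes = [4, 8, 8, 1]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Each layer transforms its input and hands the result to the next layer,
    # loosely mirroring neurons passing signals to one another.
    for w, b in zip(weights, biases):
        x = relu(x @ w + b)
    return x

print(forward(rng.normal(size=4)))  # the network's output for one random input
```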



Traditional Machine Learning and Deep Learning


The main difference between deep learning and traditional (nondeep) machine learning is the structure of the underlying neural network architecture. While traditional machine learning uses simple networks of one or two computational layers, deep learning uses three or more, and often hundreds or thousands. Another difference is that traditional machine learning requires manual feature extraction from data, whereas deep learning extracts features automatically through its multiple layers of computation: each layer processes the data and passes it to the next, allowing the network to learn complex patterns on its own. (Holdsworth et al., 2024)
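
The contrast can be sketched in a few lines of Python (a toy illustration assuming NumPy; the “image,” the two hand-picked features, and the layer widths are all invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((8, 8))  # stand-in for one raw data sample

# Traditional (nondeep) machine learning: a human engineers the features,
# and a simple model with one or two computational layers consumes them.
def hand_crafted_features(img):
    # Illustrative hand-picked features: average brightness and contrast.
    return np.array([img.mean(), img.std()])

w = rng.normal(size=2)
shallow_prediction = hand_crafted_features(image) @ w  # a single linear layer

# Deep learning: the raw pixels go in unmodified; the stacked layers are
# expected to learn their own feature representations during training.
x = image.ravel()
for width in (32, 16, 1):  # several layers rather than hand-made features
    x = np.maximum(0.0, x @ rng.normal(size=(x.size, width)))
deep_prediction = x

print(shallow_prediction, deep_prediction)
```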



How Does Deep Learning Work?


To understand deep learning clearly, we need to look at three things: neural networks, how they are trained, and backpropagation.



Neural Networks


Neural networks teach machines to process data with limited human assistance, using interconnected nodes, or neurons, arranged in a layered structure that resembles the human brain. Each neuron takes an input, processes it (usually through a mathematical function called an activation function), and sends an output to the next layer. This creates an adaptive system that can learn from its mistakes and continually improve, allowing artificial neural networks to tackle complicated problems, such as recognizing faces or summarizing documents, with increasing accuracy. (AWS, n.d.)
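
Here is what a single neuron’s computation might look like in Python (a minimal sketch; the input values, weights, and sigmoid activation are arbitrary illustrative choices):

```python
import numpy as np

def neuron(inputs, weights, bias):
    # Weighted sum of the incoming signals, then a nonlinear activation
    # function (here a sigmoid, squashing the result into the range 0-1).
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values: three input signals arriving at a single neuron.
output = neuron(np.array([0.5, -1.2, 0.3]),
                np.array([0.8, 0.1, -0.4]),
                bias=0.2)
print(output)  # this value would be passed on to neurons in the next layer
```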



How to Train Neural Networks?


“Neural network training is teaching a neural network to perform a task. Neural networks learn by initially processing several large sets of labeled or unlabeled data. By using these examples, they process unknown inputs more accurately.” (AWS, n.d.)


In supervised learning, the most common approach to neural network training, data scientists provide artificial neural networks with large sets of labeled data. Each example in the dataset comes with a correct answer, which lets the network learn the patterns linking inputs to outputs. For instance, in voice recognition, the network is given a labeled dataset of spoken words and phrases with their corresponding transcripts. It then learns to relate properties of the audio, such as frequency, pitch, and rhythm, to the textual representations in the labels. After the training phase, the machine can analyze new audio inputs and transcribe them into text. (AWS, n.d.; Shinde, 2024)
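
A real speech recognizer is far too large to show here, but the same learning principle fits in a toy example (a sketch assuming NumPy; the dataset and the rule y = 2x + 1 it encodes are invented for illustration):

```python
import numpy as np

# A toy labeled dataset: each input comes with its correct answer (y = 2x + 1).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# One weight and one bias stand in for a network's learnable parameters.
w, b = 0.0, 0.0
learning_rate = 0.05

for _ in range(500):
    predictions = w * x + b
    errors = predictions - y
    # Nudge the parameters to shrink the gap between prediction and label.
    w -= learning_rate * np.mean(errors * x)
    b -= learning_rate * np.mean(errors)

print(w, b)  # converges toward the pattern (2, 1) hidden in the labels
```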


Backpropagation, short for “backward propagation of errors,” is an algorithm for supervised learning in artificial neural networks that reduces errors, learns from mistakes, and improves over time. Given an error function, the algorithm calculates how much each weight (the strength of a connection between neurons) contributed to the error, adjusts the weights to reduce future errors, and repeats this process, steadily improving performance and accuracy. (McGonagle et al., n.d.)
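
A bare-bones version of this loop can be written directly in Python (a minimal sketch assuming NumPy; the two-layer network, single training example, and learning rate of 0.01 are illustrative, not taken from the cited sources):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)  # one training input
target = 1.0            # the correct answer for this example

# A tiny two-layer network; the sizes are illustrative.
w1 = rng.normal(size=(3, 4))
w2 = rng.normal(size=(4, 1))

for _ in range(100):
    # Forward pass: compute the prediction and the error.
    hidden = np.maximum(0.0, x @ w1)   # hidden layer with ReLU activation
    prediction = (hidden @ w2)[0]
    error = prediction - target

    # Backward pass: the chain rule tells us how much each weight
    # contributed to the error.
    grad_w2 = np.outer(hidden, 2.0 * error)
    grad_hidden = (w2[:, 0] * 2.0 * error) * (hidden > 0)
    grad_w1 = np.outer(x, grad_hidden)

    # Adjust the weights to reduce future errors, then repeat.
    w1 -= 0.01 * grad_w1
    w2 -= 0.01 * grad_w2

print(error ** 2)  # the squared error shrinks as the loop repeats
```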


In conclusion, Hinton’s research in deep learning demonstrates a remarkable confluence of neuroscience, AI, and principles of physics. By clarifying how neural networks can process information through a layered structure modeled on the human brain, Hinton’s work not only advances the field of AI but also helps us understand the behavior of complex physical systems. The recognition of this research with the 2024 Nobel Prize in Physics highlights how insights from cognitive science can inform our understanding of fundamental physical concepts, emphasizing the interdisciplinary nature of modern scientific research. Hinton’s work will surely inspire future breakthroughs as we continue to explore these fields.



References:


  1. McDonough, M. (2024). Geoffrey Hinton. Britannica.

    https://www.britannica.com/biography/Geoffrey-Hinton

  2. Vector Institute. (n.d.). Geoffrey Hinton. Retrieved October 19, 2024, from

    https://vectorinstitute.ai/team/geoffrey-hinton/

  3. Holdsworth, J., & Scapicchio, M. (2024). What is deep learning? IBM.

    https://www.ibm.com/topics/deep-learning

  4. AWS. (n.d.). What is a neural network? Retrieved October 19, 2024, from

    https://aws.amazon.com/what-is/neural-network/

  5. Shinde, S. (2024). What is supervised learning in machine learning? A comprehensive guide. Emeritus.

    https://emeritus.org/blog/ai-and-ml-supervised-learning/

  6. McGonagle, J., Shaikouski, G., Williams, C., Hsu, A., Khim, J., & Miller, A. (n.d.). Backpropagation. Brilliant. Retrieved October 20, 2024, from

    https://brilliant.org/wiki/backpropagation/

  7. Turing. (n.d.). Deep learning vs machine learning: The ultimate battle. Retrieved October 20, 2024, from

    https://www.turing.com/kb/ultimate-battle-between-deep-learning-and-machine-learning
