
The Deep Learning Explosion: AlphaGo Defeats Lee Sedol

sun.ao
I’m sun.ao, a programmer passionate about technology, focusing on AI and digital transformation.
Computing Through the Ages - This article is part of a series.

March 9, 2016, Seoul, South Korea.

World Go champion Lee Sedol sat before the board, facing Google DeepMind’s AlphaGo.

Go was considered the fortress of human wisdom. Its possibilities exceed the number of atoms in the universe; computers couldn’t exhaust them.

Before the match, most experts predicted AlphaGo would lose: Go was simply too complex for a computer to beat a top human player.

But AlphaGo won the first game.

Lee Sedol was shocked. He later said: “I was surprised. I didn’t think AlphaGo could play this well.”

In the end, AlphaGo defeated Lee Sedol 4-1.

This result shocked the world. The AI era had officially arrived.

The Difficulty of Go

Why was Go considered the ultimate challenge for AI?

The Go board is 19×19 with 361 intersection points. Each position can hold a black stone, white stone, or be empty.

The number of legal board positions is about 10^170—more than the number of atoms in the observable universe (about 10^80).

By comparison, the game-tree complexity of chess is estimated at about 10^120 (the Shannon number); Go is far more complex still.

In 1997, IBM’s Deep Blue defeated world chess champion Garry Kasparov. But Deep Blue’s approach—brute-force search guided by a handcrafted evaluation function—doesn’t scale to Go’s vastly larger search space.

Go requires intuition and big-picture thinking, considered uniquely human abilities.

The Deep Learning Breakthrough

AlphaGo’s success was due to deep learning.

Deep learning uses neural networks—computing models loosely inspired by the structure of the human brain.

Neural networks consist of multiple layers of “neurons”:

  • Input layer: Receives data (like board state)
  • Hidden layers: Extract features, progressively abstract
  • Output layer: Gives results (like next move)

“Deep” in deep learning refers to the number of hidden layers. More layers mean more complex models that can learn more abstract features.
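The layered structure above can be sketched in a few lines. The sizes, weights, and board encoding below are purely illustrative—this is not AlphaGo's actual architecture, just a toy feed-forward network with one hidden layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common activation function: passes positive values, zeroes the rest
    return np.maximum(0.0, x)

# 361 inputs could encode a 19x19 Go board (one value per intersection).
W1 = rng.normal(size=(361, 64)) * 0.01   # input layer  -> hidden layer
W2 = rng.normal(size=(64, 361)) * 0.01   # hidden layer -> output layer

def forward(board):
    hidden = relu(board @ W1)            # hidden layer extracts features
    logits = hidden @ W2                 # one raw score per intersection
    # Softmax turns raw scores into a probability for each possible move
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

board = np.zeros(361)                    # an empty board
probs = forward(board)
print(probs.shape, round(float(probs.sum()), 6))
```

Training would adjust `W1` and `W2` so that the output probabilities match good moves; here they are just random.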

AlphaGo used two neural networks:

Policy Network: Predicts where the next move should be played

Value Network: Evaluates the win probability of the current position

Both networks were trained with deep learning: first on millions of positions from human games, then refined through self-play, continuously improving.
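As a rough illustration of the two roles—not AlphaGo's real networks, which were deep convolutional nets—the policy and value functions can be sketched with toy linear models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for AlphaGo's two networks. The real versions were deep
# convolutional networks; these linear models only illustrate the roles.
W_policy = rng.normal(size=(361, 361)) * 0.01   # board -> 361 move scores
w_value = rng.normal(size=361) * 0.01           # board -> single score

def policy_network(board):
    """Return a probability for each of the 361 possible moves."""
    logits = board @ W_policy
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def value_network(board):
    """Return the estimated win probability of the position, in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(board @ w_value)))  # sigmoid

# Encode a position: +1 black stone, -1 white stone, 0 empty.
board = rng.choice([-1.0, 0.0, 1.0], size=361)
move_probs = policy_network(board)
win_prob = value_network(board)
print(int(move_probs.argmax()), float(win_prob))
```

During a real game, AlphaGo combined both networks with Monte Carlo tree search: the policy network narrowed the candidate moves, and the value network judged the resulting positions.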

History of Deep Learning

Deep learning is not a new technology; it endured a long winter before its breakthrough.

1943: McCulloch and Pitts proposed the first neural network model.

1958: Rosenblatt invented the Perceptron—the simplest neural network.

1969: Minsky and Papert proved that single-layer perceptrons can’t solve some simple problems (like XOR). Neural network research entered a slump.

1986: Rumelhart, Hinton, and Williams popularized the backpropagation algorithm, which made it practical to train multi-layer neural networks. Neural networks revived.

1998: LeCun and others developed LeNet, a pioneering convolutional neural network (CNN), for handwritten digit recognition.

2006: Hinton proposed Deep Belief Networks; the term “deep learning” became popular.

2012: AlexNet won the ImageNet image-recognition competition by a large margin; the deep learning boom began.

2016: AlphaGo defeated Lee Sedol; deep learning shocked the world.

Why Did Deep Learning Succeed?

Deep learning exploded in the 2010s for three reasons:

First, data. The internet generated massive data; deep learning needs lots of data to train.

Second, computing power. GPUs (graphics processing units) compute in parallel, greatly accelerating neural network training.

Third, algorithms. New activation functions (ReLU), new optimization methods (Adam), and new network architectures (ResNet) solved problems with training deep networks.
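To make one of those algorithmic advances concrete: sigmoid activations were a key obstacle in early deep networks because their gradient vanishes for large inputs, stalling learning in lower layers; ReLU's gradient stays at 1 for any positive input. A minimal comparison:

```python
import numpy as np

# Why ReLU helped train deep networks: the sigmoid's gradient shrinks
# toward zero for large |x| (the "vanishing gradient" problem), while
# ReLU's gradient is exactly 1 for every positive input.

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

def relu_grad(x):
    return 1.0 if x > 0 else 0.0

for x in [0.0, 5.0, 10.0]:
    print(x, sigmoid_grad(x), relu_grad(x))
```

At x = 10 the sigmoid gradient is already below 0.0001; multiplied across many layers, such factors shrink the learning signal to nearly nothing.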

Deep learning’s key advantage: it learns features automatically.

Traditional machine learning requires human-designed features. For image recognition, you need to design features like edges, textures, and shapes.

Deep learning doesn’t need human-designed features. Neural networks can automatically learn features from raw data.

After AlphaGo

After AlphaGo, DeepMind continued improving:

AlphaGo Zero (2017): Used no human games at all; it learned entirely through self-play and surpassed the original AlphaGo within 3 days.

AlphaZero (2018): A general algorithm that can learn Go, chess, and shogi. No need to design specific algorithms for each game.

MuZero (2020): Doesn’t even need to be told the game rules; it learns a model of the game’s dynamics through play.

AlphaGo’s technology has also been applied to other areas: protein folding prediction (AlphaFold), nuclear fusion control, and chip design optimization.

Deep Learning Applications

Deep learning is already widely used:

Computer Vision

  • Image classification: Recognize objects in images
  • Object detection: Locate objects in images
  • Face recognition: Unlock phones, security monitoring
  • Medical imaging: Diagnose diseases

Speech Processing

  • Speech recognition: Siri, Xiao Ai
  • Speech synthesis: Virtual anchors, audiobooks
  • Speech translation: Real-time translation

Natural Language Processing

  • Machine translation: Google Translate, DeepL
  • Text generation: GPT series
  • Question answering: Intelligent customer service
  • Sentiment analysis: Analyze user reviews

Game AI

  • AlphaGo: Go
  • OpenAI Five: Dota 2
  • AlphaStar: StarCraft II

Autonomous Driving

  • Tesla Autopilot
  • Waymo

Scientific Discovery

  • AlphaFold: Protein structure prediction
  • Material discovery
  • Drug development

Limitations of Deep Learning

Deep learning also has limitations:

Data hungry: Needs large amounts of labeled data.

Compute intensive: Training large models requires many GPUs and is expensive.

Black box problem: Hard to explain why a model makes certain decisions.

Adversarial vulnerability: Carefully designed tiny perturbations can fool models.

Generalization problem: Models may perform poorly in new scenarios.
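The adversarial vulnerability above can be demonstrated even on a toy model. The sketch below uses a made-up linear classifier: a tiny, FGSM-style perturbation aligned with the model's weights flips its decision even though the input barely changes. This is purely illustrative, not an attack on a real network:

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy "model": classify an input x by the sign of w . x
w = rng.normal(size=100)

# An input the model labels negative, with very small magnitude
x = -0.01 * w / np.linalg.norm(w)

# FGSM-style perturbation: a tiny step along sign(w), bounded by eps
# per component. The change is imperceptibly small...
eps = 0.02
x_adv = x + eps * np.sign(w)

# ...yet it flips the model's decision from negative to positive.
print(np.dot(w, x) < 0, np.dot(w, x_adv) > 0)
```

The effect compounds with dimensionality: each component moves by at most `eps`, but all 100 moves push the dot product in the same direction.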

Next Step: The AI Wave

AlphaGo’s victory marked AI entering a new era.

Since 2016, AI technology has developed rapidly. Large language models, generative AI, multimodal AI… breakthroughs came one after another.

Tomorrow, we’ll discuss the AI wave after 2016.


Today’s Key Concepts

Deep Learning Machine learning using multi-layer neural networks. Deep learning can automatically learn features without human design. Achieved breakthroughs in image recognition, speech recognition, and natural language processing.

Neural Network A computing model loosely inspired by the brain’s structure, consisting of multiple layers of “neurons.” Input layer receives data, hidden layers extract features, output layer gives results. “Deep” in deep learning refers to the number of hidden layers.

Convolutional Neural Network (CNN) Neural network specialized for image processing. Uses convolution operations to extract image features, performing excellently in image recognition and object detection.
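The convolution operation at the heart of a CNN can be sketched directly. The vertical-edge filter below is hand-written for illustration; in a real CNN, filter values like these are learned from data rather than designed by hand:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over an image, summing elementwise products."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An image that is dark (0) on the left and bright (1) on the right.
image = np.array([[0, 0, 1, 1]] * 4, dtype=float)

# A filter that responds where brightness increases left-to-right.
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)

print(conv2d(image, edge_filter))  # strong response at the vertical edge
```

Stacking many such learned filters, layer after layer, is what lets a CNN progress from edges to textures to whole objects.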


Discussion Questions

  1. AlphaGo defeating Lee Sedol was considered a milestone in AI history. What do you think is the significance of this event?
  2. Deep learning needs lots of data and computing power. Does this mean only big companies can do AI?

Tomorrow’s Preview: The AI Wave—the technological changes after 2016, and how AI is reshaping every industry.
