|
|
{% extends "layout.html" %}
|
|
|
|
|
|
{% block content %}
|
|
|
<!DOCTYPE html>
|
|
|
<html lang="en">
|
|
|
<head>
|
|
|
<meta charset="UTF-8">
|
|
|
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
|
|
<title>Study Guide: Deep Reinforcement Learning</title>
|
|
|
|
|
|
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
|
|
|
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
|
|
|
<style>
|
|
|
|
|
|
body {
|
|
|
background-color: #ffffff;
|
|
|
color: #000000;
|
|
|
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
|
|
|
font-weight: normal;
|
|
|
line-height: 1.8;
|
|
|
margin: 0;
|
|
|
padding: 20px;
|
|
|
}
|
|
|
|
|
|
|
|
|
.container {
|
|
|
max-width: 800px;
|
|
|
margin: 0 auto;
|
|
|
padding: 20px;
|
|
|
}
|
|
|
|
|
|
|
|
|
h1, h2, h3 {
|
|
|
color: #000000;
|
|
|
border: none;
|
|
|
font-weight: bold;
|
|
|
}
|
|
|
|
|
|
h1 {
|
|
|
text-align: center;
|
|
|
border-bottom: 3px solid #000;
|
|
|
padding-bottom: 10px;
|
|
|
margin-bottom: 30px;
|
|
|
font-size: 2.5em;
|
|
|
}
|
|
|
|
|
|
h2 {
|
|
|
font-size: 1.8em;
|
|
|
margin-top: 40px;
|
|
|
border-bottom: 1px solid #ddd;
|
|
|
padding-bottom: 8px;
|
|
|
}
|
|
|
|
|
|
h3 {
|
|
|
font-size: 1.3em;
|
|
|
margin-top: 25px;
|
|
|
}
|
|
|
|
|
|
|
|
|
strong {
|
|
|
font-weight: 900;
|
|
|
}
|
|
|
|
|
|
|
|
|
p, li {
|
|
|
font-size: 1.1em;
|
|
|
border-bottom: 1px solid #e0e0e0;
|
|
|
padding-bottom: 10px;
|
|
|
margin-bottom: 10px;
|
|
|
}
|
|
|
|
|
|
|
|
|
li:last-child {
|
|
|
border-bottom: none;
|
|
|
}
|
|
|
|
|
|
|
|
|
ol {
|
|
|
list-style-type: decimal;
|
|
|
padding-left: 20px;
|
|
|
}
|
|
|
|
|
|
ol li {
|
|
|
padding-left: 10px;
|
|
|
}
|
|
|
|
|
|
|
|
|
ul {
|
|
|
list-style-type: none;
|
|
|
padding-left: 0;
|
|
|
}
|
|
|
|
|
|
ul li::before {
|
|
|
content: "โข";
|
|
|
color: #000;
|
|
|
font-weight: bold;
|
|
|
display: inline-block;
|
|
|
width: 1em;
|
|
|
margin-left: 0;
|
|
|
}
|
|
|
|
|
|
|
|
|
pre {
|
|
|
background-color: #f4f4f4;
|
|
|
border: 1px solid #ddd;
|
|
|
border-radius: 5px;
|
|
|
padding: 15px;
|
|
|
white-space: pre-wrap;
|
|
|
word-wrap: break-word;
|
|
|
font-family: "Courier New", Courier, monospace;
|
|
|
font-size: 0.95em;
|
|
|
font-weight: normal;
|
|
|
color: #333;
|
|
|
border-bottom: none;
|
|
|
}
|
|
|
|
|
|
|
|
|
.story-drl {
|
|
|
background-color: #f5f3ff;
|
|
|
border-left: 4px solid #4a00e0;
|
|
|
margin: 15px 0;
|
|
|
padding: 10px 15px;
|
|
|
font-style: italic;
|
|
|
color: #555;
|
|
|
font-weight: normal;
|
|
|
border-bottom: none;
|
|
|
}
|
|
|
|
|
|
.story-drl p, .story-drl li {
|
|
|
border-bottom: none;
|
|
|
}
|
|
|
|
|
|
.example-drl {
|
|
|
background-color: #f8f7ff;
|
|
|
padding: 15px;
|
|
|
margin: 15px 0;
|
|
|
border-radius: 5px;
|
|
|
border-left: 4px solid #8e2de2;
|
|
|
}
|
|
|
|
|
|
.example-drl p, .example-drl li {
|
|
|
border-bottom: none !important;
|
|
|
}
|
|
|
|
|
|
|
|
|
.quiz-section {
|
|
|
background-color: #fafafa;
|
|
|
border: 1px solid #ddd;
|
|
|
border-radius: 5px;
|
|
|
padding: 20px;
|
|
|
margin-top: 30px;
|
|
|
}
|
|
|
.quiz-answers {
|
|
|
background-color: #f8f7ff;
|
|
|
padding: 15px;
|
|
|
margin-top: 15px;
|
|
|
border-radius: 5px;
|
|
|
}
|
|
|
|
|
|
|
|
|
table {
|
|
|
width: 100%;
|
|
|
border-collapse: collapse;
|
|
|
margin: 25px 0;
|
|
|
}
|
|
|
th, td {
|
|
|
border: 1px solid #ddd;
|
|
|
padding: 12px;
|
|
|
text-align: left;
|
|
|
}
|
|
|
th {
|
|
|
background-color: #f2f2f2;
|
|
|
font-weight: bold;
|
|
|
}
|
|
|
|
|
|
|
|
|
@media (max-width: 768px) {
|
|
|
body, .container {
|
|
|
padding: 10px;
|
|
|
}
|
|
|
h1 { font-size: 2em; }
|
|
|
h2 { font-size: 1.5em; }
|
|
|
h3 { font-size: 1.2em; }
|
|
|
p, li { font-size: 1em; }
|
|
|
pre { font-size: 0.85em; }
|
|
|
table, th, td { font-size: 0.9em; }
|
|
|
}
|
|
|
</style>
|
|
|
</head>
|
|
|
<body>
|
|
|
|
|
|
<div class="container">
|
|
|
<h1>📘 Study Guide: Deep Reinforcement Learning (DRL)</h1>
|
|
|
|
|
|
<h2>🔹 1. Introduction</h2>
|
|
|
<div class="story-drl">
|
|
|
<p><strong>Story-style intuition: Upgrading the Critic's Brain</strong></p>
|
|
|
<p>Remember our food critic from the Q-Learning guide with their giant notebook (the Q-table)? That notebook worked fine for a small city with a few restaurants. But what if they move to a massive city with millions of restaurants, where the menu changes every night (a continuous state space)? Their notebook is useless! It's too big to create and too slow to look up.
|
|
|
<br>To solve this, the critic replaces their notebook with a powerful, creative brain: a <strong>Deep Neural Network</strong>. Now, instead of looking up an exact restaurant and dish, they can just describe the situation ("a fancy French restaurant, feeling adventurous") and their brain can <em>predict</em> a good Q-value for any potential dish on the spot. <strong>Deep Reinforcement Learning (DRL)</strong> is this powerful combination of RL's trial-and-error learning with the pattern-recognition power of deep learning.</p>
|
|
|
</div>
|
|
|
<p><strong>Deep Reinforcement Learning (DRL)</strong> is a subfield of machine learning that combines Reinforcement Learning (RL) with Deep Learning (DL). Instead of using tables to store values, DRL uses deep neural networks to approximate the optimal policy and/or value functions, allowing it to solve problems with vast, high-dimensional state and action spaces.</p>
|
|
|
|
|
|
<h2>🔹 2. Why Deep RL?</h2>
|
|
|
<p>Traditional RL methods like Q-Learning rely on tables (Q-tables) to store a value for every possible state-action pair. This approach fails spectacularly when the number of states or actions becomes very large or continuous.</p>
|
|
|
<div class="example-drl">
|
|
|
<p><strong>Example: An Atari Game</strong></p>
|
|
|
<ul>
|
|
|
<li><strong>The State:</strong> A single frame from the game is an image of, say, 84x84 pixels. Even with just 256 grayscale values per pixel, the number of possible states is \(256^{(84 \times 84)}\), far more than the number of atoms in the observable universe. Creating a Q-table is impossible.</li>
|
|
|
<li><strong>The DRL Solution:</strong> A deep neural network (specifically, a Convolutional Neural Network, or CNN) can take the raw pixels of the game screen as input and directly output the Q-values for each possible action (e.g., {Up, Down, Left, Right}). It learns to recognize patterns like the position of the ball and the paddle, just as a human would (a minimal code sketch of such a network follows this example).</li>
|
|
|
</ul>
|
|
|
</div>
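<p>To make the Atari example concrete, here is a minimal sketch, assuming PyTorch is available, of the kind of convolutional Q-network described above. The layer sizes mirror the classic DQN architecture, but all names and numbers here are illustrative rather than a definitive implementation.</p>
<pre>
import torch
import torch.nn as nn

class AtariQNetwork(nn.Module):
    """Maps a stack of 84x84 grayscale frames to one Q-value per action."""
    def __init__(self, num_actions: int, in_channels: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),  # one Q-value per action
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, in_channels, 84, 84), pixel values scaled to [0, 1]
        return self.head(self.features(frames))

q_net = AtariQNetwork(num_actions=4)
q_values = q_net(torch.rand(1, 4, 84, 84))  # shape (1, 4): one value per action
greedy_action = q_values.argmax(dim=1)      # "choose the best-looking action"
</pre>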
|
|
|
|
|
|
<h2>🔹 3. Core Components</h2>
|
|
|
<p>The core components are the same as in classic RL, but each is now powered by a neural network; a minimal interaction loop that ties them together is sketched after the list.</p>
|
|
|
<ul>
|
|
|
<li><strong>Agent:</strong> The decision-maker, whose "brain" is now a deep neural network.</li>
|
|
|
<li><strong>Environment:</strong> The world the agent interacts with.</li>
|
|
|
<li><strong>State Representation:</strong> Often high-dimensional raw data, like image pixels or sensor readings.</li>
|
|
|
<li><strong>Action Space:</strong> Can be discrete or continuous.</li>
|
|
|
<li><strong>Reward Signal:</strong> The feedback that guides the learning process.</li>
|
|
|
</ul>
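<p>The sketch below shows how these pieces fit together in code, assuming the <code>gymnasium</code> package and a placeholder random policy; a real DRL agent would replace the random choice with a neural-network policy.</p>
<pre>
import gymnasium as gym

env = gym.make("CartPole-v1")        # the Environment
state, info = env.reset(seed=0)      # the initial State representation
total_reward = 0.0
done = False

while not done:
    action = env.action_space.sample()   # the Agent acts (randomly, as a stand-in)
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward               # accumulate the Reward signal
    done = terminated or truncated

env.close()
print(f"Episode return: {total_reward}")
</pre>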
|
|
|
|
|
|
<h2>🔹 4. Types of Deep RL Algorithms</h2>
|
|
|
<div class="story-drl">
|
|
|
<p>DRL agents can learn in different ways, just like people. Some focus on judging the situation (value-based), some focus on learning a skill (policy-based), and the most advanced do both at the same time (Actor-Critic).</p>
|
|
|
</div>
|
|
|
<ul>
|
|
|
<li>
|
|
|
<strong>Value-Based Methods (e.g., DQN):</strong> The neural network learns to predict the Q-value for each action. The policy is simple: just choose the action with the highest predicted Q-value.
|
|
|
<div class="example-drl"><p><strong>Analogy:</strong> This is a "critic" agent. It doesn't have an innate skill, but it's an expert at evaluating the potential of every possible move.</p></div>
|
|
|
</li>
|
|
|
<li>
|
|
|
<strong>Policy-Based Methods (e.g., REINFORCE):</strong> The neural network learns the policy directly. It takes a state as input and outputs the probability of taking each action.
|
|
|
<div class="example-drl"><p><strong>Analogy:</strong> This is an "actor" agent. It develops a direct instinct or muscle memory for what to do in a situation, without necessarily calculating the long-term value of its actions.</p></div>
|
|
|
</li>
|
|
|
<li>
|
|
|
<strong>Actor-Critic Methods (e.g., A2C, PPO):</strong> This is the hybrid approach. Two neural networks are used: an <strong>Actor</strong> that controls the agent's behavior (the policy) and a <strong>Critic</strong> that evaluates how good those actions are (the value function). The Critic gives feedback to the Actor, helping it to improve.
|
|
|
<div class="example-drl"><p><strong>Analogy:</strong> This is like an actor on stage with a director. The actor performs, and the director (critic) provides feedback ("That was a great delivery!") to help the actor refine their performance.</p></div>
|
|
|
</li>
|
|
|
</ul>
|
|
|
|
|
|
<h2>🔹 5. Deep Q-Networks (DQN)</h2>
|
|
|
<p>DQN was a breakthrough algorithm that used a deep neural network to play many Atari games at or above human level directly from raw pixels. It introduced two key innovations to stabilize learning:</p>
|
|
|
|
|
|
<ul>
|
|
|
<li><strong>Experience Replay:</strong> The agent stores its past experiences <code>(state, action, reward, next_state)</code> in a large memory buffer. During training, it samples random mini-batches from this buffer to update its neural network. This breaks the correlation between consecutive experiences, making training more stable and efficient.</li>
|
|
|
<li><strong>Target Network:</strong> DQN uses a second, separate neural network (the "target network") to generate the target Q-values in the update rule. This target network is a clone of the main network but is updated only periodically. This provides a stable target for the Q-value updates, preventing the learning process from spiraling out of control (a minimal sketch of both ideas follows this list).</li>
|
|
|
</ul>
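<p>The following sketch illustrates both ideas in isolation, assuming PyTorch; the small fully connected network, buffer size, and sync schedule are illustrative placeholders rather than a complete DQN training loop.</p>
<pre>
import copy
import random
from collections import deque

import torch
import torch.nn as nn

# 1) Experience Replay: store transitions and train on random mini-batches,
#    which breaks the correlation between consecutive experiences.
replay_buffer = deque(maxlen=100_000)

def store(state, action, reward, next_state, done):
    replay_buffer.append((state, action, reward, next_state, done))

def sample_batch(batch_size=32):
    return random.sample(replay_buffer, batch_size)

# 2) Target Network: a periodically refreshed copy of the online Q-network
#    that supplies stable targets for the update.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = copy.deepcopy(q_net)
gamma = 0.99

def td_target(reward, next_state, done):
    # reward, done: (batch,) tensors; next_state: (batch, 4) tensor
    with torch.no_grad():
        best_next = target_net(next_state).max(dim=1).values
    return reward + gamma * (1.0 - done) * best_next

def sync_target():
    # called only every few thousand steps, keeping the targets stable
    target_net.load_state_dict(q_net.state_dict())
</pre>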
|
|
|
|
|
|
<h2>🔹 6. Policy Gradient Methods</h2>
|
|
|
<div class="story-drl">
|
|
|
<p><strong>The Archer's Analogy:</strong> An archer (the policy network) shoots an arrow. If the arrow hits close to the bullseye (high reward), they adjust their stance and aim (the network's weights) slightly in the same direction they just used. If the arrow misses badly (low reward), they adjust their aim in the opposite direction. Policy Gradient is this simple idea of "do more of what works and less of what doesn't," scaled up with calculus (gradient ascent).</p>
|
|
|
</div>
|
|
|
<p>These methods directly optimize the policy's parameters \( \theta \) to maximize the expected return \( J(\theta) \). The core idea is to update the policy in the direction that makes good actions more likely and bad actions less likely.</p>
|
|
|
<p>$$ \theta \leftarrow \theta + \alpha \nabla_\theta J(\theta) $$</p>
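<p>A minimal REINFORCE-style version of this update, assuming PyTorch, looks like the sketch below. The optimizer performs gradient descent, so the code minimizes the negative objective, which is equivalent to the gradient-ascent rule above; the network shape and learning rate are illustrative.</p>
<pre>
import torch
import torch.nn as nn

policy_net = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 2), nn.Softmax(dim=-1),   # outputs action probabilities
)
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

def reinforce_update(states, actions, returns):
    """states: (T, 4), actions: (T,) int64, returns: (T,) discounted returns."""
    probs = policy_net(states)                                    # (T, num_actions)
    log_probs = torch.log(probs.gather(1, actions.unsqueeze(1)).squeeze(1))
    loss = -(log_probs * returns).mean()      # "do more of what earned high returns"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
</pre>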
|
|
|
|
|
|
<h2>🔹 7. Actor-Critic Methods</h2>
|
|
|
<p>Actor-Critic methods are the state-of-the-art for many DRL problems, especially those with continuous action spaces. They combine the best of both worlds:</p>
|
|
|
<ul>
|
|
|
<li>The <strong>Actor</strong> (policy network) is responsible for taking actions.</li>
|
|
|
<li>The <strong>Critic</strong> (value network) provides feedback by evaluating the actions taken by the Actor.</li>
|
|
|
</ul>
|
|
|
<p>This setup is more stable and sample-efficient because the Critic's value estimate serves as a low-variance baseline against which the Actor's actions are judged, leading to better and faster learning (a minimal update sketch follows the example below).</p>
|
|
|
<div class="example-drl"><p><strong>Example Algorithms:</strong> PPO (Proximal Policy Optimization) and SAC (Soft Actor-Critic) are two of the most popular and robust DRL algorithms used today.</p></div>
|
|
|
|
|
|
<h2>🔹 8. Challenges in DRL</h2>
|
|
|
<ul>
|
|
|
<li><strong>High Sample Complexity:</strong> DRL agents often need millions or even billions of interactions with the environment to learn a good policy, making them very data-hungry.</li>
|
|
|
<li><strong>Training Instability:</strong> The learning process can be highly sensitive to hyperparameters and random seeds, and can sometimes diverge or collapse.</li>
|
|
|
<li><strong>Reward Design:</strong> Crafting a reward function that encourages the desired behavior without allowing for unintended "loopholes" or "reward hacking" is very difficult.</li>
|
|
|
</ul>
|
|
|
|
|
|
<div class="quiz-section">
|
|
|
<h2>📝 Quick Quiz: Test Your Knowledge</h2>
|
|
|
<ol>
|
|
|
<li><strong>What is the primary problem with using a Q-table that led to the development of Deep RL?</strong></li>
|
|
|
<li><strong>What is "Experience Replay" in DQN, and why is it important?</strong></li>
|
|
|
<li><strong>What are the two main components of an Actor-Critic agent?</strong></li>
|
|
|
<li><strong>Which type of DRL algorithm would be most suitable for controlling a robot arm with precise, continuous movements?</strong></li>
|
|
|
</ol>
|
|
|
<div class="quiz-answers">
|
|
|
<h3>Answers</h3>
|
|
|
<p><strong>1.</strong> Q-tables cannot handle very large or continuous state spaces. The number of states in problems like video games or robotics is often effectively infinite, making it impossible to create or store a table for them.</p>
|
|
|
<p><strong>2.</strong> Experience Replay is the technique of storing past transitions <code>(s, a, r, s')</code> in a memory buffer and then training the network on random samples from this buffer. It is important because it breaks the temporal correlation between consecutive samples, leading to more stable and efficient training.</p>
|
|
|
<p><strong>3.</strong> An <strong>Actor</strong> (which learns and executes the policy) and a <strong>Critic</strong> (which learns and provides feedback on the value of states or actions).</p>
|
|
|
<p><strong>4.</strong> An <strong>Actor-Critic</strong> method (like DDPG, PPO, or SAC) would be most suitable. Policy-based and Actor-Critic methods are naturally able to handle continuous action spaces, whereas value-based methods like DQN are designed for discrete actions.</p>
|
|
|
</div>
|
|
|
</div>
|
|
|
|
|
|
<h2>🔹 Key Terminology Explained</h2>
|
|
|
<div class="story-drl">
|
|
|
<p><strong>The Story: Decoding the DRL Agent's Brain</strong></p>
|
|
|
</div>
|
|
|
<ul>
|
|
|
<li>
|
|
|
<strong>Function Approximator:</strong>
|
|
|
<br>
|
|
|
<strong>What it is:</strong> A function that learns to estimate an unknown target function from examples and can generalize to inputs it has never seen. In DRL, a deep neural network is used as the function approximator.
|
|
|
<br>
|
|
|
<strong>Story Example:</strong> Instead of a giant phone book (a table) that lists every person's exact phone number, you have a smart assistant (a function approximator). You can just ask it for "John Smith's number," and it can predict the number even if it's not explicitly in its contact list.
|
|
|
</li>
|
|
|
<li>
|
|
|
<strong>Experience Replay:</strong>
|
|
|
<br>
|
|
|
<strong>What it is:</strong> A technique where the agent stores its past experiences and samples from them randomly to train.
|
|
|
<br>
|
|
|
<strong>Story Example:</strong> This is like a student who, instead of just studying the last problem they solved, keeps a stack of all their past homework problems. To study for a test, they randomly pull problems from this stack. This prevents them from only remembering how to solve the most recent type of problem and helps them remember everything they've learned.
|
|
|
</li>
|
|
|
<li>
|
|
|
<strong>Policy Gradient:</strong>
|
|
|
<br>
|
|
|
<strong>What it is:</strong> The mathematical gradient (or direction of steepest ascent) of the policy's performance. RL algorithms use this to "climb the hill" towards a better policy.
|
|
|
<br>
|
|
|
<strong>Story Example:</strong> This is the archer's learning process. The <strong>policy gradient</strong> is the exact direction they need to adjust their aim to get closer to the bullseye, based on where their last arrow landed.
|
|
|
</li>
|
|
|
</ul>
|
|
|
|
|
|
</div>
|
|
|
|
|
|
</body>
|
|
|
</html>
|
|
|
{% endblock %}
|
|
|
|