|
|
{% extends "layout.html" %}
|
|
|
|
|
|
{% block content %}
|
|
|
<!DOCTYPE html>
|
|
|
<html lang="en">
|
|
|
<head>
|
|
|
<meta charset="UTF-8">
|
|
|
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
|
|
<title>Study Guide: Q-Learning</title>
|
|
|
|
|
|
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
|
|
|
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
|
|
|
<style>
|
|
|
|
|
|
body {
|
|
|
background-color: #ffffff;
|
|
|
color: #000000;
|
|
|
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
|
|
|
font-weight: normal;
|
|
|
line-height: 1.8;
|
|
|
margin: 0;
|
|
|
padding: 20px;
|
|
|
}
|
|
|
|
|
|
|
|
|
.container {
|
|
|
max-width: 800px;
|
|
|
margin: 0 auto;
|
|
|
padding: 20px;
|
|
|
}
|
|
|
|
|
|
|
|
|
h1, h2, h3 {
|
|
|
color: #000000;
|
|
|
border: none;
|
|
|
font-weight: bold;
|
|
|
}
|
|
|
|
|
|
h1 {
|
|
|
text-align: center;
|
|
|
border-bottom: 3px solid #000;
|
|
|
padding-bottom: 10px;
|
|
|
margin-bottom: 30px;
|
|
|
font-size: 2.5em;
|
|
|
}
|
|
|
|
|
|
h2 {
|
|
|
font-size: 1.8em;
|
|
|
margin-top: 40px;
|
|
|
border-bottom: 1px solid #ddd;
|
|
|
padding-bottom: 8px;
|
|
|
}
|
|
|
|
|
|
h3 {
|
|
|
font-size: 1.3em;
|
|
|
margin-top: 25px;
|
|
|
}
|
|
|
|
|
|
|
|
|
strong {
|
|
|
font-weight: 900;
|
|
|
}
|
|
|
|
|
|
|
|
|
p, li {
|
|
|
font-size: 1.1em;
|
|
|
border-bottom: 1px solid #e0e0e0;
|
|
|
padding-bottom: 10px;
|
|
|
margin-bottom: 10px;
|
|
|
}
|
|
|
|
|
|
|
|
|
li:last-child {
|
|
|
border-bottom: none;
|
|
|
}
|
|
|
|
|
|
|
|
|
ol {
|
|
|
list-style-type: decimal;
|
|
|
padding-left: 20px;
|
|
|
}
|
|
|
|
|
|
ol li {
|
|
|
padding-left: 10px;
|
|
|
}
|
|
|
|
|
|
|
|
|
ul {
|
|
|
list-style-type: none;
|
|
|
padding-left: 0;
|
|
|
}
|
|
|
|
|
|
ul li::before {
|
|
|
content: "•";
|
|
|
color: #000;
|
|
|
font-weight: bold;
|
|
|
display: inline-block;
|
|
|
width: 1em;
|
|
|
margin-left: 0;
|
|
|
}
|
|
|
|
|
|
|
|
|
pre {
|
|
|
background-color: #f4f4f4;
|
|
|
border: 1px solid #ddd;
|
|
|
border-radius: 5px;
|
|
|
padding: 15px;
|
|
|
white-space: pre-wrap;
|
|
|
word-wrap: break-word;
|
|
|
font-family: "Courier New", Courier, monospace;
|
|
|
font-size: 0.95em;
|
|
|
font-weight: normal;
|
|
|
color: #333;
|
|
|
border-bottom: none;
|
|
|
}
|
|
|
|
|
|
|
|
|
.story-q {
|
|
|
background-color: #fff8f0;
|
|
|
border-left: 4px solid #fd7e14;
|
|
|
margin: 15px 0;
|
|
|
padding: 10px 15px;
|
|
|
font-style: italic;
|
|
|
color: #555;
|
|
|
font-weight: normal;
|
|
|
border-bottom: none;
|
|
|
}
|
|
|
|
|
|
.story-q p, .story-q li {
|
|
|
border-bottom: none;
|
|
|
}
|
|
|
|
|
|
.example-q {
|
|
|
background-color: #fff9f0;
|
|
|
padding: 15px;
|
|
|
margin: 15px 0;
|
|
|
border-radius: 5px;
|
|
|
border-left: 4px solid #ff9a3c;
|
|
|
}
|
|
|
|
|
|
.example-q p, .example-q li {
|
|
|
border-bottom: none !important;
|
|
|
}
|
|
|
|
|
|
|
|
|
.quiz-section {
|
|
|
background-color: #fafafa;
|
|
|
border: 1px solid #ddd;
|
|
|
border-radius: 5px;
|
|
|
padding: 20px;
|
|
|
margin-top: 30px;
|
|
|
}
|
|
|
.quiz-answers {
|
|
|
background-color: #fff9f0;
|
|
|
padding: 15px;
|
|
|
margin-top: 15px;
|
|
|
border-radius: 5px;
|
|
|
}
|
|
|
|
|
|
|
|
|
table {
|
|
|
width: 100%;
|
|
|
border-collapse: collapse;
|
|
|
margin: 25px 0;
|
|
|
}
|
|
|
th, td {
|
|
|
border: 1px solid #ddd;
|
|
|
padding: 12px;
|
|
|
text-align: left;
|
|
|
}
|
|
|
th {
|
|
|
background-color: #f2f2f2;
|
|
|
font-weight: bold;
|
|
|
}
|
|
|
|
|
|
|
|
|
@media (max-width: 768px) {
|
|
|
body, .container {
|
|
|
padding: 10px;
|
|
|
}
|
|
|
h1 { font-size: 2em; }
|
|
|
h2 { font-size: 1.5em; }
|
|
|
h3 { font-size: 1.2em; }
|
|
|
p, li { font-size: 1em; }
|
|
|
pre { font-size: 0.85em; }
|
|
|
table, th, td { font-size: 0.9em; }
|
|
|
}
|
|
|
</style>
|
|
|
</head>
|
|
|
<body>
|
|
|
|
|
|
<div class="container">
|
|
|
<h1>🧭 Study Guide: Q-Learning in Reinforcement Learning</h1>
|
|
|
|
|
|
<h2>🔹 1. Introduction</h2>
|
|
|
<div class="story-q">
|
|
|
<p><strong>Story-style intuition: The Restaurant Critic's Notebook</strong></p>
|
|
|
<p>Imagine a food critic exploring a new city. Their goal is to find the best possible multi-course meal. The critic creates a huge notebook with a page for every restaurant (<strong>state</strong>) in the city. On each page, they list every dish (<strong>action</strong>) available. They then go out and, through trial-and-error, start assigning a score to each dish. This score, the <strong>Q-value</strong>, isn't just about how good that one dish tastes (the immediate reward). It's a prediction of the total "dining satisfaction" for the entire evening if they eat that dish and then continue to choose the best dishes at all subsequent restaurants. After visiting many restaurants over many nights, their notebook becomes the ultimate guide to the perfect dining experience. <strong>Q-Learning</strong> is this process of an agent filling out its "notebook" (the Q-table) to learn the value of every action in every state.</p>
|
|
|
</div>
|
|
|
<p><strong>Q-Learning</strong> is a classic <strong>model-free</strong>, <strong>off-policy</strong> Reinforcement Learning algorithm. Its primary goal is to learn the optimal policy for making decisions by estimating the quality of state-action pairs, known as the Q-function.</p>
|
|
|
|
|
|
<h2>🔹 2. The Q-Function</h2>
|
|
|
<p>The Q-function, denoted \( Q(s, a) \), represents the total expected future reward (the Return) an agent can get by taking a specific <strong>action \(a\)</strong> in a specific <strong>state \(s\)</strong> and then following the optimal policy thereafter. It's a measure of how good a particular move is in a particular situation.</p>
|
|
|
<p>Q-Learning's objective is to find the optimal Q-function, \( Q^*(s, a) \):</p>
|
|
|
<p>$$ Q^*(s,a) = \max_\pi \mathbb{E} \Big[ G_t \mid S_t = s, A_t = a, \pi \Big] $$</p>
|
|
|
<p>In simple terms, this means \( Q^*(s, a) \) is the maximum possible return you can get if you start by taking action \(a\) in state \(s\).</p>
|
|
|
<div class="example-q">
|
|
|
<p><strong>Example:</strong> In a maze, \( Q(\text{crossroads}, \text{go left}) \) would be the predicted total reward if the agent chooses to go left at the crossroads and then plays perfectly for the rest of the maze. This value would be high if "left" is on the optimal path to the exit, and low if it leads to a dead end.</p>
|
|
|
</div>
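<p>For small, discrete problems the Q-function can be stored directly as a table. The snippet below is a minimal sketch in Python; the maze states, actions, and Q-values are hypothetical numbers chosen purely for illustration.</p>
<pre>
import numpy as np

# Hypothetical maze: a handful of named states and four actions.
states = ["start", "crossroads", "dead_end", "exit"]
actions = ["up", "down", "left", "right"]

# Q-table: one row per state, one column per action, initialised to zero.
Q = np.zeros((len(states), len(actions)))

# After training, Q[s, a] approximates the return from taking action a in
# state s and then acting optimally. Illustrative values:
Q[states.index("crossroads"), actions.index("left")] = 8.2    # on the optimal path
Q[states.index("crossroads"), actions.index("right")] = -3.5  # leads toward a dead end
</pre>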
|
|
|
|
|
|
<h2>🔹 3. The Q-Learning Update Rule</h2>
|
|
|
<div class="story-q">
|
|
|
<p><strong>The Critic's Update Rule:</strong> After trying a dish (action a) at a restaurant (state s), the critic gets an immediate taste score (Reward R). They then look at their notebook for the <em>next</em> restaurant (state s') and find the score of the <em>best possible</em> dish they could order there. They combine this information to create an updated "learned value." The final updated score in their notebook is a small step from the old score towards this new learned value. The size of that step is the learning rate (\( \alpha \)).</p>
|
|
|
</div>
|
|
|
<p>The core of Q-Learning is its update rule, which is applied after every step the agent takes. It updates the Q-value for the state-action pair it just experienced; a code sketch of the rule follows the term-by-term breakdown below.</p>
|
|
|
<p>$$ Q(s, a) \leftarrow Q(s, a) + \alpha \Big[ \underbrace{R + \gamma \max_{a'} Q(s', a')}_{\text{New Learned Value}} - Q(s, a) \Big] $$</p>
|
|
|
<ul>
|
|
|
<li>\( Q(s, a) \): The current, old Q-value.</li>
|
|
|
<li>\( \alpha \) (Alpha): The <strong>Learning Rate</strong>. How much we update our Q-value based on the new information: a high value means we learn fast, while a low value makes the updates more conservative.</li>
|
|
|
<li>\( R \): The immediate <strong>Reward</strong> received.</li>
|
|
|
<li>\( \gamma \) (Gamma): The <strong>Discount Factor</strong> (between 0 and 1). How much we value future rewards relative to immediate ones.</li>
|
|
|
<li>\( \max_{a'} Q(s', a') \): The agent's estimate of the best possible future value it can get from the next state \( s' \). This is the key to learning: it looks one step into the future to inform its current estimate.</li>
|
|
|
</ul>
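<p>As a minimal sketch, here is the same rule in Python for a tabular Q stored as a NumPy array. The state and action are assumed to be integer indices, and the hyperparameter defaults are illustrative, not prescriptions:</p>
<pre>
import numpy as np

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One Q-Learning update on a tabular Q of shape [n_states, n_actions]."""
    best_next = np.max(Q[next_state])          # max over a' of Q(s', a')
    td_target = reward + gamma * best_next     # the "new learned value"
    td_error = td_target - Q[state, action]    # how wrong the old estimate was
    Q[state, action] += alpha * td_error       # step a fraction alpha toward the target
    return Q
</pre>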
|
|
|
|
|
|
<h2>🔹 4. Step-by-Step Flow of Q-Learning</h2>
|
|
|
<p>The algorithm iteratively refines its Q-table until the values converge to the optimal ones; a code sketch of the complete loop follows the numbered steps below.</p>
|
|
|
|
|
|
<ol>
|
|
|
<li><strong>Initialize Q-Table:</strong> Create a table with a row for every state and a column for every action. Fill it with zeros or small random values.</li>
|
|
|
<li><strong>Loop for many episodes:</strong>
|
|
|
<ol type="a">
|
|
|
<li>Start in an initial state \( s \).</li>
|
|
|
<li><strong>Loop for each step of the episode:</strong>
|
|
|
<ol type="i">
|
|
|
<li>Choose an action \( a \) from state \( s \) using an exploration strategy (like epsilon-greedy).</li>
|
|
|
<li>Perform the action \( a \).</li>
|
|
|
<li>Observe the immediate reward \( R \) and the next state \( s' \).</li>
|
|
|
<li>Update the Q-value for the original state and action, \( Q(s, a) \), using the update rule.</li>
|
|
|
<li>Set the new state: \( s \leftarrow s' \).</li>
|
|
|
</ol>
|
|
|
</li>
|
|
|
<li>The episode ends when a terminal state is reached.</li>
|
|
|
</ol>
|
|
|
</li>
|
|
|
<li>After thousands of episodes, the Q-table will contain good approximations of the optimal action-values. The agent's optimal policy is then simply to choose the action with the highest Q-value in any given state.</li>
|
|
|
</ol>
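<p>The sketch below strings these steps together. It assumes a hypothetical environment object exposing reset(), which returns an integer state, and step(action), which returns (next_state, reward, done); the environment itself is not defined here, and the hyperparameters are illustrative.</p>
<pre>
import numpy as np

def train_q_learning(env, n_states, n_actions, episodes=5000,
                     alpha=0.1, gamma=0.99, epsilon=0.1, seed=0):
    """Tabular Q-Learning loop (steps 1-3 above)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))            # 1. initialise the Q-table

    for _ in range(episodes):                      # 2. loop over episodes
        state = env.reset()                        # 2a. start in an initial state
        done = False
        while not done:                            # 2b. loop over steps
            if epsilon > rng.random():             # i. epsilon-greedy choice
                action = int(rng.integers(n_actions))        # explore
            else:
                action = int(np.argmax(Q[state]))            # exploit
            next_state, reward, done = env.step(action)      # ii-iii. act, observe R and s'
            best_next = 0.0 if done else np.max(Q[next_state])  # no future value after a terminal state
            Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])  # iv. update
            state = next_state                     # v. move to the next state
    return Q

# 3. The learned greedy policy: in every state s, pick argmax over a of Q[s, a].
</pre>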
|
|
|
|
|
|
<h2>🔹 5. Exploration vs. Exploitation</h2>
|
|
|
<div class="story-q">
|
|
|
<p><strong>The Critic's Dilemma:</strong> On any given night, should the critic go to the restaurant and order the dish they already know has the highest score in their notebook (<strong>Exploitation</strong>)? Or should they try a random, new dish they've never had before to see if it's even better (<strong>Exploration</strong>)? If they only exploit, they might miss out on a hidden gem. If they only explore, they'll have a lot of bad meals.</p>
|
|
|
</div>
|
|
|
<p>This is a fundamental challenge in RL. The most common solution is the <strong>epsilon-greedy (\(\epsilon\)-greedy) strategy</strong>, sketched in code after the list below:</p>
|
|
|
<ul>
|
|
|
<li>With a small probability \( \epsilon \) (e.g., 10%), the agent takes a random action (explores).</li>
|
|
|
<li>With a large probability \( 1 - \epsilon \), the agent takes the action with the highest known Q-value (exploits).</li>
|
|
|
<li>Typically, \( \epsilon \) starts high and slowly decreases as the agent becomes more confident in its Q-values.</li>
|
|
|
</ul>
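<p>A minimal sketch of epsilon-greedy selection with a decaying epsilon, assuming a tabular Q stored as a NumPy array; the schedule numbers are illustrative, not a recommendation:</p>
<pre>
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(Q, state, epsilon):
    """Random action with probability epsilon, otherwise the greedy action."""
    if epsilon > rng.random():
        return int(rng.integers(Q.shape[1]))   # explore: any action
    return int(np.argmax(Q[state]))            # exploit: best known action

# Typical decay schedule: start exploratory, become mostly greedy over time.
epsilon, epsilon_min, decay = 1.0, 0.05, 0.995
for episode in range(2000):
    # ... run one episode, choosing actions with epsilon_greedy(Q, state, epsilon) ...
    epsilon = max(epsilon_min, epsilon * decay)
</pre>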
|
|
|
|
|
|
<h2>🔹 6. Example: Gridworld</h2>
|
|
|
<div class="example-q">
|
|
|
<p>Imagine a simple 3x3 grid world:</p>
|
|
|
<ul>
|
|
|
<li><strong>States:</strong> 9 cells, identified by their coordinates.</li>
|
|
|
<li><strong>Actions:</strong> {Up, Down, Left, Right}.</li>
|
|
|
<li><strong>Rewards:</strong> +10 for reaching the Goal cell, -100 for falling into a Lava cell, -1 for every other move (to encourage speed).</li>
|
|
|
</ul>
|
|
|
<p>Initially, the Q-table is all zeros. As the agent explores, it will eventually stumble into the Goal. When it does, the Q-value for the state-action pair that led to the goal gets a positive update. In the next episode, if the agent reaches a state next to that one, the \( \max_{a'} Q(s', a') \) term in the update rule will now be positive, causing the "goodness" of the goal to propagate backwards through the grid, one step at a time, until the agent has a complete map of the best path from any square.</p>
|
|
|
</div>
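<p>For concreteness, here is a minimal sketch of such a gridworld with the reset()/step() interface assumed by the training loop in section 4. The cell layout (start top-left, lava in the centre, goal bottom-right) is an arbitrary illustrative choice:</p>
<pre>
class GridWorld3x3:
    """3x3 grid. States are cell indices 0..8 (row-major); actions: 0=Up, 1=Down, 2=Left, 3=Right."""

    START, LAVA, GOAL = 0, 4, 8
    MOVES = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # (row, col) deltas

    def reset(self):
        self.state = self.START
        return self.state

    def step(self, action):
        row, col = divmod(self.state, 3)
        dr, dc = self.MOVES[action]
        row = min(max(row + dr, 0), 2)       # moves off the grid leave the agent in place
        col = min(max(col + dc, 0), 2)
        self.state = row * 3 + col
        if self.state == self.GOAL:
            return self.state, 10.0, True    # +10 for reaching the goal, episode ends
        if self.state == self.LAVA:
            return self.state, -100.0, True  # -100 for falling into lava, episode ends
        return self.state, -1.0, False       # -1 step cost to encourage short paths

# Hypothetical usage with the loop sketched in section 4:
# Q = train_q_learning(GridWorld3x3(), n_states=9, n_actions=4)
</pre>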
|
|
|
|
|
|
<h2>🔹 Advantages &amp; Disadvantages</h2>
|
|
|
<table>
|
|
|
<thead>
|
|
|
<tr>
|
|
|
<th>Advantages</th>
|
|
|
<th>Disadvantages</th>
|
|
|
</tr>
|
|
|
</thead>
|
|
|
<tbody>
|
|
|
<tr>
|
|
|
<td>✅ <strong>Simple and Intuitive:</strong> The core concept of updating a value table is very easy to understand and implement.</td>
|
|
|
<td>❌ <strong>The Curse of Dimensionality:</strong> It is only feasible for problems with small, discrete state and action spaces. The size of the Q-table explodes as states/actions increase. <br><strong>Example:</strong> The game tree of chess has roughly 10¹²⁰ possible games; a Q-table at that scale is impossible.</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td>✅ <strong>Model-Free:</strong> The agent doesn't need to know the rules of the environment (the transition probabilities P). It learns just by observing outcomes.</td>
|
|
|
<td>❌ <strong>Slow Convergence:</strong> It can take a very large number of episodes for the Q-values to propagate through the entire state space and converge.</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td>✅ <strong>Guaranteed Convergence:</strong> Under the right conditions (enough exploration, a learning rate that decays appropriately), Q-Learning is proven to converge to the optimal Q-values.</td>
|
|
|
<td>❌ <strong>Cannot handle continuous spaces</strong> without modification: continuous states or actions require combining it with a function approximator, which leads to Deep Q-Learning (DQN).</td>
|
|
|
</tr>
|
|
|
</tbody>
|
|
|
</table>
|
|
|
|
|
|
<h2>🔹 Key Terminology Explained</h2>
|
|
|
<div class="story-q">
|
|
|
<p><strong>The Story: Decoding the Critic's Jargon</strong></p>
|
|
|
</div>
|
|
|
<ul>
|
|
|
<li>
|
|
|
<strong>Model-Free:</strong>
|
|
|
<br>
|
|
|
<strong>What it is:</strong> An algorithm that learns a policy without building an explicit model of the environment's dynamics (the P and R functions).
|
|
|
<br>
|
|
|
<strong>Story Example:</strong> The critic doesn't need the restaurant's recipes or to know how the kitchen works (the model). They learn which dishes are best just by tasting them (trial and error).
|
|
|
</li>
|
|
|
<li>
|
|
|
<strong>Off-Policy:</strong>
|
|
|
<br>
|
|
|
<strong>What it is:</strong> An algorithm that can learn about the optimal policy even while it is following a different, exploratory policy.
|
|
|
<br>
|
|
|
<strong>Story Example:</strong> Even when the critic is exploring by trying a random dish, the Q-learning update rule still uses the \( \max_{a'} Q(s', a') \) term, which assumes they will take the <em>best</em> action in the next state. It learns about the perfect "exploitation" path while it is still on an "exploration" path.
|
|
|
</li>
|
|
|
<li>
|
|
|
<strong>Q-Table:</strong>
|
|
|
<br>
|
|
|
<strong>What it is:</strong> The data structure used to store and update the Q-values. It's a large matrix where rows represent states and columns represent actions.
|
|
|
<br>
|
|
|
<strong>Story Example:</strong> This is the critic's master notebook, with a page for every restaurant and a rating for every dish on the menu.
|
|
|
</li>
|
|
|
</ul>
|
|
|
|
|
|
</div>
|
|
|
|
|
|
</body>
|
|
|
</html>
|
|
|
{% endblock %}
|
|
|
|