{% extends "layout.html" %} {% block content %}

🛒 Study Guide: The Apriori Algorithm


🔹 Core Concepts

Story-style intuition: The Supermarket Detective

Imagine you're a detective hired by a supermarket. Your mission is to analyze thousands of shopping receipts (transactions) to find hidden patterns. You soon notice a classic pattern: "Customers who buy bread also tend to buy butter." This is a valuable clue! The store can place bread and butter closer together to increase sales. The Apriori Algorithm is the systematic method this detective uses to sift through all the receipts and find these "frequently bought together" item combinations and turn them into powerful rules. This whole process is called Market Basket Analysis.

The Apriori Algorithm is a classic algorithm used for association rule mining. Its main goal is to find relationships and patterns between items in large transactional datasets. It generates rules in the format "If A, then B," helping businesses understand customer behavior and make smarter decisions.

🔹 Key Definitions

To be a good supermarket detective, you need to know the lingo. The three most important metrics are Support, Confidence, and Lift.

  • Support: How popular an itemset is overall. Support({A}) = (transactions containing A) / (total transactions). It tells you how often a pattern appears at all.
  • Confidence: How reliable a rule is. Confidence({A} => {B}) = Support({A, B}) / Support({A}). Of the customers who bought A, what fraction also bought B?
  • Lift: How much more likely B is given A, compared to B's normal popularity. Lift({A} => {B}) = Confidence({A} => {B}) / Support({B}). Lift > 1 means a positive association, Lift = 1 means no relationship, and Lift < 1 means a negative association.

Example Scenario: Let's say we have 100 shopping receipts. Suppose 40 contain Bread, 30 contain Butter, and 20 contain both. Then Support({Bread, Butter}) = 20/100 = 20%, Confidence({Bread} => {Butter}) = 20/40 = 50%, and Lift = 50% / 30% ≈ 1.67, a positive association.
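
As a quick illustration, here is a minimal sketch of how these three metrics are computed, using the made-up counts from the scenario above (the 40/30/20 numbers are illustrative, not real data):

# Hypothetical counts from the 100-receipt scenario above.
total_receipts = 100
bread_count = 40          # receipts containing Bread
butter_count = 30         # receipts containing Butter
both_count = 20           # receipts containing both Bread and Butter

support_bread = bread_count / total_receipts      # 0.40
support_butter = butter_count / total_receipts    # 0.30
support_both = both_count / total_receipts        # 0.20 -> Support({Bread, Butter})

# Confidence({Bread} => {Butter}) = Support({Bread, Butter}) / Support({Bread})
confidence = support_both / support_bread         # 0.50

# Lift({Bread} => {Butter}) = Confidence / Support({Butter})
lift = confidence / support_butter                # ~1.67 (> 1 means positive association)

print(f"Support: {support_both:.2f}, Confidence: {confidence:.2f}, Lift: {lift:.2f}")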

🔹 The Apriori Principle

The Detective's Golden Rule: Our detective quickly realizes a simple but powerful truth: If customers rarely buy {Milk}, then they will *definitely* rarely buy the combination {Milk, Bread, Eggs}. Why waste time checking the records for a combination containing an already unpopular item? This is the Apriori Principle.

The principle states: "All non-empty subsets of a frequent itemset must also be frequent." This is the core idea that makes the Apriori algorithm efficient. It allows the algorithm to "prune" the search space by eliminating a huge number of candidate itemsets. If {Milk} is infrequent, any larger itemset containing {Milk} is guaranteed to be infrequent and can be ignored.
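
To see the principle in action, here is a small illustrative helper (a sketch, not from any library) that performs the pruning check: a candidate k-itemset is kept only if every one of its (k-1)-item subsets is already known to be frequent.

from itertools import combinations

def passes_apriori_prune(candidate, frequent_prev_level):
    """Return True if every (k-1)-subset of the candidate is frequent."""
    k = len(candidate)
    return all(frozenset(subset) in frequent_prev_level
               for subset in combinations(candidate, k - 1))

# Suppose these 2-itemsets were found to be frequent in the previous pass.
frequent_2_itemsets = {frozenset({'Bread', 'Eggs'}),
                       frozenset({'Bread', 'Butter'}),
                       frozenset({'Butter', 'Eggs'})}

print(passes_apriori_prune({'Bread', 'Butter', 'Eggs'}, frequent_2_itemsets))  # True: all 2-subsets are frequent
print(passes_apriori_prune({'Milk', 'Bread', 'Eggs'}, frequent_2_itemsets))    # False: subsets containing Milk are not frequent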

🔹 Algorithm Steps

The algorithm works iteratively, building up larger and larger frequent itemsets level by level.

  1. Set a Minimum Support Threshold: The detective decides they only care about itemsets that appear in at least, say, 50% of receipts.
  2. Find Frequent 1-Itemsets (L1): Scan all receipts and find every individual item that meets the minimum support. These are your "frequent items."
  3. Generate and Prune (Iterate):
    • Join: Take the frequent itemsets from the previous step (Lk-1) and combine them to create candidate k-itemsets (Ck). E.g., combine {Bread} and {Butter} to make {Bread, Butter}.
    • Prune: This is where the Apriori Principle comes in. Check every candidate. If any of its subsets is not in the frequent list (Lk-1), discard it immediately.
    • Scan: For the remaining candidates, scan the database to count their support. Keep only those that meet the minimum support threshold. This new list is Lk.
  4. Repeat Step 3 until no new frequent itemsets can be found.
  5. Generate Rules: Once you have all frequent itemsets, generate association rules (like {Bread} => {Butter}) from them that meet a minimum confidence threshold.
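
Putting these steps together, here is a minimal from-scratch sketch of the level-wise loop (purely illustrative; the function names and list-of-sets input format are our own, and the `mlxtend` example below is what you would use in practice):

from itertools import combinations

def apriori_sketch(transactions, min_support=0.5):
    """Return {itemset: support} for all frequent itemsets, level by level."""
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    # Step 2: frequent 1-itemsets (L1)
    items = {item for t in transactions for item in t}
    current = {frozenset({i}) for i in items if support(frozenset({i})) >= min_support}
    frequent = {s: support(s) for s in current}

    k = 2
    while current:
        # Join: combine frequent (k-1)-itemsets into candidate k-itemsets
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # Prune: drop candidates with any infrequent (k-1)-subset (Apriori Principle)
        candidates = {c for c in candidates
                      if all(frozenset(s) in current for s in combinations(c, k - 1))}
        # Scan: count support and keep only candidates that meet the threshold
        current = {c for c in candidates if support(c) >= min_support}
        frequent.update({c: support(c) for c in current})
        k += 1
    return frequent

def rules_from(frequent, min_confidence=0.7):
    """Step 5: generate rules that meet a minimum confidence threshold."""
    rules = []
    for itemset, sup in frequent.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(itemset, r)):
                confidence = sup / frequent[antecedent]
                if confidence >= min_confidence:
                    rules.append((set(antecedent), set(itemset - antecedent), confidence))
    return rules

transactions = [{'Bread', 'Butter', 'Milk'}, {'Bread', 'Butter'},
                {'Bread', 'Eggs'}, {'Butter', 'Milk'}]
frequent = apriori_sketch(transactions, min_support=0.5)
print(frequent)
print(rules_from(frequent, min_confidence=0.7))  # e.g. {Milk} => {Butter} with confidence 1.0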

🔹 Strengths & Weaknesses

Advantages:

  • Simple and intuitive: the "join, prune, scan" loop is easy to understand, explain, and implement.
  • The Apriori Principle prunes a huge share of the search space, making it far more efficient than brute-force enumeration of every item combination.
  • Produces human-readable "If A, then B" rules that translate directly into business actions like product placement or promotions.

Disadvantages:

  • Requires a full scan of the transaction database at every level, which becomes slow on large datasets.
  • Candidate generation can still explode when the minimum support threshold is low or there are many distinct items, consuming significant memory.
  • Performance degrades on dense data with long transactions, because large frequent itemsets require many iterations.

🔹 Python Implementation (Beginner Example with `mlxtend`)

Here, we'll be a supermarket detective with a small set of receipts. We need to prepare our data in a specific way (a one-hot encoded format) where each row is a transaction and each column is an item. Then, we'll use the `apriori` function to find frequent itemsets and `association_rules` to find the strong relationships.


import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# --- 1. Create a Sample Dataset ---
# This represents 5 shopping receipts.
dataset = [['Milk', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
           ['Dill', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
           ['Milk', 'Apple', 'Kidney Beans', 'Eggs'],
           ['Milk', 'Unicorn', 'Corn', 'Kidney Beans', 'Yogurt'],
           ['Corn', 'Onion', 'Onion', 'Kidney Beans', 'Ice cream', 'Eggs']]

# --- 2. Prepare Data in One-Hot Encoded Format ---
# mlxtend's apriori needs the data as a DataFrame of True/False values.
from mlxtend.preprocessing import TransactionEncoder
te = TransactionEncoder()
te_ary = te.fit(dataset).transform(dataset)
df = pd.DataFrame(te_ary, columns=te.columns_)

# --- 3. Find Frequent Itemsets with Apriori ---
# We set min_support to 0.6, meaning we only want itemsets
# that appear in at least 60% of the transactions (3 out of 5).
frequent_itemsets = apriori(df, min_support=0.6, use_colnames=True)
print("--- Frequent Itemsets (Support >= 60%) ---")
print(frequent_itemsets)

# --- 4. Generate Association Rules ---
# We generate rules that have a confidence of at least 70%.
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
# Let's sort the rules by their "lift" to see the strongest relationships.
sorted_rules = rules.sort_values(by='lift', ascending=False)
print("\n--- Strong Association Rules (Confidence >= 70%) ---")
print(sorted_rules[['antecedents', 'consequents', 'support', 'confidence', 'lift']])
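
Running this example on the five receipts, you should see that {Kidney Beans} appears in every transaction (support 1.0), while Eggs, Onion, Milk, and Yogurt clear the 60% threshold. Among the rules, {Onion} => {Eggs} comes out with confidence 1.0 (every receipt containing Onion also contains Eggs) and a lift of 1.25, exactly the kind of "frequently bought together" clue our detective is after.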

🔹 Best Practices

  • Start with a relatively high minimum support and lower it gradually; a threshold that is too low can bury you in itemsets.
  • Judge rules by lift (and confidence), not support alone: a rule can be frequent yet uninteresting if its lift is close to 1.
  • Make sure the data is in the one-hot encoded transaction format before running `apriori`.
  • For very large datasets, consider FP-Growth, which finds the same frequent itemsets without repeated candidate generation.

📝 Quick Quiz: Test Your Knowledge

  1. What is the Apriori Principle, and why is it important?
  2. If Support({A}) = 30%, Support({B}) = 40%, and Support({A, B}) = 20%, what is the Confidence of the rule {A} => {B}?
  3. A rule {Diapers} => {Beer} has a Lift of 3.0. What does this mean in plain English?
  4. What is the main performance bottleneck of the Apriori algorithm?

Answers

1. The Apriori Principle states that all subsets of a frequent itemset must also be frequent. It's important because it allows the algorithm to prune a massive number of candidate itemsets early on, making the process much more efficient.

2. Confidence({A} => {B}) = Support({A, B}) / Support({A}) = 20% / 30% ≈ 66.7%.

3. A Lift of 3.0 means that a customer who buys diapers is 3 times as likely to buy beer as the average customer; in other words, Confidence({Diapers} => {Beer}) is 3 times Support({Beer}). This indicates a strong positive association between the two items.

4. The main bottleneck is the candidate generation step. In each pass, it can create a very large number of potential itemsets that need to be checked against the entire database, which is slow and memory-intensive.

🔹 Key Terminology Explained (Apriori)

The Story: Decoding the Supermarket Detective's Notebook

{% endblock %}