2025 ML Learning Roadmap

🎯 Learning Goals

Above the Line: Understand Modern Recommender Systems

  • Sequential recommendation systems
  • Attention-based recommenders
  • Generative recommenders
  • Cold start problem solutions
  • Personalization techniques
  • Multi-objective optimization trade-offs

Below the Line:

Model Architectures:

  • Two-tower models (minimal sketch after this list)
  • DLRM (Deep Learning Recommendation Model)
  • DCN (Deep & Cross Network)
  • DIN/DIEN/TWIN (Deep Interest Network variants)
  • SASRec (Self-Attentive Sequential Recommendation)
  • HSTU (Hierarchical Sequential Transduction Units)
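
A minimal numpy sketch of the two-tower idea referenced above, just to make the retrieval setup concrete. The sizes and the single ReLU layer per tower are illustrative assumptions, not a production architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
EMB_DIM, N_USERS, N_ITEMS = 32, 1_000, 5_000

# Each tower is stubbed as an embedding lookup plus one ReLU layer;
# real towers stack several layers over many user/item features.
user_emb = rng.normal(size=(N_USERS, EMB_DIM))
item_emb = rng.normal(size=(N_ITEMS, EMB_DIM))
W_user = rng.normal(size=(EMB_DIM, EMB_DIM)) / np.sqrt(EMB_DIM)
W_item = rng.normal(size=(EMB_DIM, EMB_DIM)) / np.sqrt(EMB_DIM)

def user_tower(user_ids):
    return np.maximum(user_emb[user_ids] @ W_user, 0.0)

def item_tower(item_ids):
    return np.maximum(item_emb[item_ids] @ W_item, 0.0)

# The score is a dot product, so item vectors are query-independent:
# they can be precomputed and searched with ANN methods such as LSH.
u = user_tower(np.array([42]))                  # (1, EMB_DIM)
scores = u @ item_tower(np.arange(N_ITEMS)).T   # (1, N_ITEMS)
print("top-10 items for user 42:", np.argsort(-scores[0])[:10])
```

The dot-product interface is the whole point: it decouples item encoding from serving, which is what makes LSH and vector quantization (below) applicable to retrieval.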

Key Concepts:

  • Transformer Architecture
  • Mixture of Experts (MoE/MMoE/POSO; gating sketch after this list)
  • LSH (Locality-Sensitive Hashing) & Vector Quantization for retrieval
  • Parallelism strategies (data-parallel sketch after this list):
    • Data parallelism
    • Model parallelism
    • Pipeline parallelism
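
A tiny numpy sketch of sparse MoE gating, as referenced above. The expert count, top-k routing, and single-linear-layer experts are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_EXPERTS, TOP_K = 16, 4, 2          # illustrative sizes

# Experts stubbed as single linear maps; the gate is a linear layer
# followed by a softmax over experts.
experts = [rng.normal(size=(D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
W_gate = rng.normal(size=(D, N_EXPERTS)) / np.sqrt(D)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_forward(x):
    gate = softmax(x @ W_gate)          # mixture weights over experts
    top = np.argsort(-gate)[:TOP_K]     # sparse routing: keep top-k only
    w = gate[top] / gate[top].sum()     # renormalize the kept weights
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

print(moe_forward(rng.normal(size=D)).shape)   # (16,)
```

MMoE keeps the shared experts but learns a separate gate per task, so each objective mixes the experts differently.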
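
And a single-process sketch of data parallelism: the batch is sharded across simulated workers, each computes a local gradient, and an all-reduce-style average updates every replica identically. The linear-regression task and sizes are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, N_WORKERS, LR = 1024, 8, 4, 0.1   # illustrative sizes

X = rng.normal(size=(N, D))
w_true = rng.normal(size=D)
y = X @ w_true
w = np.zeros(D)   # replicated parameters (one logical copy per worker)

def local_grad(Xs, ys, w):
    # Gradient of mean squared error on this worker's shard.
    return 2 * Xs.T @ (Xs @ w - ys) / len(ys)

for step in range(100):
    # Each worker computes a gradient on its shard of the batch ...
    grads = [local_grad(Xs, ys, w)
             for Xs, ys in zip(np.array_split(X, N_WORKERS),
                               np.array_split(y, N_WORKERS))]
    # ... then averaging (the all-reduce, simulated here in one process)
    # keeps every replica's update identical.
    w -= LR * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))
```

Model and pipeline parallelism instead split the parameters or the layers across devices; they matter once a single replica no longer fits in memory.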

📚 Reading Materials

ML Fundamentals

  1. Transformer Architecture (Prerequisite for all topics)
  2. Mixture of Experts (MoE) (Prerequisite for MMoE, POSO, Switch Transformer)
  3. Locality-Sensitive Hashing (LSH)
    • LSH Stanford Slides
    • Pinecone LSH Guide
    • Deliverables:
      • YouTube video explanation of LSH
      • Python implementation of LSH (starter sketch after this list)
  4. Vector Quantization (sketch after this list)
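
A starter sketch for the LSH deliverable above, using random-hyperplane (SimHash-style) hashing for cosine similarity. The bit count and corpus size are arbitrary, and a real index would use several hash tables to trade memory for recall:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_BITS, N_VECS = 64, 16, 10_000      # illustrative sizes

# Random-hyperplane LSH: each hyperplane contributes one sign bit,
# so vectors with high cosine similarity tend to share a bucket.
planes = rng.normal(size=(N_BITS, D))

def lsh_key(v):
    return tuple((planes @ v > 0).astype(int))

# Index the corpus into buckets keyed by the bit signature.
corpus = rng.normal(size=(N_VECS, D))
buckets = {}
for i, v in enumerate(corpus):
    buckets.setdefault(lsh_key(v), []).append(i)

# Query: scan only the query's bucket instead of all N_VECS vectors.
q = corpus[123] + 0.05 * rng.normal(size=D)   # a noisy near-duplicate
cand = buckets.get(lsh_key(q), [])
if cand:
    best = max(cand, key=lambda i: corpus[i] @ q / np.linalg.norm(corpus[i]))
    print(f"scanned {len(cand)} of {N_VECS}; best match: {best}")
else:
    print("empty bucket; real indexes use multiple tables for recall")
```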
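
And a matching sketch for Vector Quantization: a k-means (Lloyd's algorithm) codebook that compresses each vector to a single code index. Codebook size and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, N = 8, 16, 2_000          # dim, codebook size, corpus size

X = rng.normal(size=(N, D))

# Learn a codebook with plain k-means (Lloyd's algorithm).
codebook = X[rng.choice(N, K, replace=False)].copy()
for _ in range(20):
    # Assign every vector to its nearest codeword ...
    assign = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(1)
    # ... then move each codeword to the mean of its assigned vectors.
    for k in range(K):
        if (assign == k).any():
            codebook[k] = X[assign == k].mean(0)

# Quantize: each vector is stored as one small code index, trading
# reconstruction error for a large reduction in storage.
codes = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(1)
recon = codebook[codes]
print("mean squared reconstruction error:", float(((X - recon) ** 2).mean()))
```

Product quantization, the variant most common in retrieval, applies this per sub-vector with a separate codebook for each.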

Recommendation Systems

  1. Basic Architectures
  2. Attention in Recommendation
  3. Sequential Modeling
    • SASRec Paper
      • Deliverable 1: YouTube video explanation
      • Deliverable 2: Python implementation (starter sketch after this list)
    • HSTU Paper
      • Deliverable: YouTube video explanation
  4. MMoE in Recommender Systems
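
As a starting point for the SASRec implementation deliverable, here is a single-head, single-block numpy sketch of causal self-attention over an item history. The real model stacks residual blocks with layer norm and dropout and trains with a ranking loss, so treat this only as the attention core; all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N_ITEMS, D, SEQ_LEN = 1_000, 32, 8      # illustrative sizes

emb = rng.normal(size=(N_ITEMS, D)) / np.sqrt(D)
pos = rng.normal(size=(SEQ_LEN, D)) / np.sqrt(D)   # position embeddings
Wq, Wk, Wv = (rng.normal(size=(D, D)) / np.sqrt(D) for _ in range(3))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def next_item_scores(item_seq):
    # Embed the user's item history and add position embeddings.
    h = emb[item_seq] + pos[: len(item_seq)]
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    # Causal mask: position t may only attend to items at or before t.
    scores = q @ k.T / np.sqrt(D)
    scores[np.triu_indices(len(item_seq), 1)] = -np.inf
    h = softmax(scores) @ v
    # Score every catalog item against the last position's state.
    return h[-1] @ emb.T

history = rng.integers(0, N_ITEMS, size=SEQ_LEN)
print("predicted next item:", int(next_item_scores(history).argmax()))
```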

📅 Learning Sequence Plan

  1. Transformer Architecture
  2. Mixture of Experts (MoE)
  3. Basic Recommendation Architectures
  4. Attention in Recommendation
  5. Sequential Modeling
  6. MMoE in Recommender Systems
  7. Locality-Sensitive Hashing (LSH)
  8. Vector Quantization