2025 ML Learning Roadmap
🎯 Learning Goals
Above the Line: Understand Modern Recommender Systems
- Sequential recommendation systems
- Attention-based recommenders
- Generative recommenders
- Cold start problem solutions
- Personalization techniques
- Multi-objective optimization trade-offs
Below the Line:
Model Architectures:
- Two-tower models
- DLRM (Deep Learning Recommendation Model)
- DCN (Deep & Cross Network)
- DIN/DIEN/TWIN (Deep Interest Network variants)
- SASRec (Self-Attentive Sequential Recommendation)
- HSTU (Hierarchical Sequential Transduction Units)
Key Concepts:
- Transformer Architecture
- Mixture of Experts (MoE/MMoE/POSO)
- LSH (Locality-Sensitive Hashing) & Vector Quantization for retrieval
- Parallelism strategies:
  - Data parallelism
  - Model parallelism
  - Pipeline parallelism
📚 Reading Materials
ML Fundamentals
- Transformer Architecture (Prerequisite for all topics)
- Mixture of Experts (MoE) (Prerequisite for MMoE, POSO, Switch Transformer)
- Locality-Sensitive Hashing (LSH)
  - LSH Stanford Slides
  - Pinecone LSH Guide
  - Deliverables:
    - YouTube video explanation of LSH
    - Python implementation of LSH
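As a possible starting point for the LSH implementation deliverable, here is a minimal sketch of random-hyperplane (SimHash-style) LSH for cosine similarity. The class name, single-table bucket layout, and parameters such as `n_planes` are illustrative choices, not taken from the linked materials.

```python
import numpy as np

class RandomHyperplaneLSH:
    """Minimal random-hyperplane LSH: similar vectors collide in the same bucket."""

    def __init__(self, dim, n_planes=12, seed=0):
        rng = np.random.default_rng(seed)
        # Each row is a random hyperplane; the sign of the projection gives one hash bit.
        self.planes = rng.normal(size=(n_planes, dim))
        self.buckets = {}

    def _hash(self, vec):
        # Concatenate the sign bits into a hashable tuple key.
        return tuple((self.planes @ vec > 0).astype(int))

    def index(self, ids, vectors):
        for i, v in zip(ids, vectors):
            self.buckets.setdefault(self._hash(v), []).append(i)

    def query(self, vec):
        # Candidates share every hash bit with the query; more planes means finer
        # buckets, trading recall for precision.
        return self.buckets.get(self._hash(vec), [])

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 64))
lsh = RandomHyperplaneLSH(dim=64)
lsh.index(range(len(data)), data)
near_duplicate = data[0] + 0.01 * rng.normal(size=64)
print(lsh.query(near_duplicate))  # item 0 is very likely among the candidates
```

Real retrieval systems combine several hash tables to recover recall; this single-table version is only meant to make the idea concrete before the readings.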
- Vector Quantization
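Similarly, a minimal vector-quantization sketch may help fix the idea before moving on to product-quantization variants: a k-means codebook plus nearest-codeword assignment. Function names and the codebook size are illustrative.

```python
import numpy as np

def train_codebook(vectors, n_codes=16, n_iters=20, seed=0):
    """Learn a small k-means codebook over the input vectors."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=n_codes, replace=False)].copy()
    for _ in range(n_iters):
        # Assign every vector to its nearest codeword (Euclidean distance).
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # Move each codeword to the mean of the vectors assigned to it.
        for k in range(n_codes):
            if np.any(assign == k):
                codebook[k] = vectors[assign == k].mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Replace each vector with the index of its nearest codeword."""
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

vectors = np.random.default_rng(1).normal(size=(500, 32))
codebook = train_codebook(vectors)
codes = quantize(vectors, codebook)   # one small integer per vector
approx = codebook[codes]              # lossy reconstruction from codeword centroids
```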
Recommendation Systems
- Basic Architectures
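For the two-tower pattern listed under Model Architectures above, a minimal retrieval sketch in PyTorch may be useful to keep in mind while reading; the feature dimensions, tower depth, and in-batch-negative softmax loss are illustrative assumptions rather than any paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tower(nn.Module):
    """Small MLP that maps raw features to a normalized embedding."""

    def __init__(self, in_dim, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))

    def forward(self, x):
        # L2-normalize so the dot product between towers is a cosine similarity.
        return F.normalize(self.net(x), dim=-1)

class TwoTower(nn.Module):
    def __init__(self, user_dim, item_dim, emb_dim=64):
        super().__init__()
        self.user_tower = Tower(user_dim, emb_dim)
        self.item_tower = Tower(item_dim, emb_dim)

    def forward(self, user_feats, item_feats):
        u = self.user_tower(user_feats)   # (B, emb_dim)
        v = self.item_tower(item_feats)   # (B, emb_dim)
        logits = u @ v.T                  # (B, B); the diagonal holds the positives
        labels = torch.arange(len(u))
        # In-batch softmax: each user's own item competes with the other items in the batch.
        return F.cross_entropy(logits, labels)

model = TwoTower(user_dim=32, item_dim=48)
loss = model(torch.randn(8, 32), torch.randn(8, 48))
loss.backward()
```

At serving time the item tower is precomputed offline and user embeddings are matched against it with an approximate nearest-neighbor index, which is where LSH and vector quantization come back into the picture.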
- Attention in Recommendation
  - Classic Lineage:
  - Scaling Attention:
  - Advanced Variants:
  - Deliverable: YouTube video explanation for each paper
- Sequential Modeling
  - SASRec Paper
    - Deliverable 1: YouTube video explanation
    - Deliverable 2: Python implementation (see the starter sketch after this list)
  - HSTU Paper
    - Deliverable: YouTube video explanation
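A possible starting point for the SASRec implementation deliverable: item plus learned position embeddings, a causal self-attention encoder, and next-item scoring against the shared item embedding table. Layer sizes and the use of PyTorch's built-in `TransformerEncoder` are simplifying assumptions; the paper's exact normalization, dropout, and training loop are omitted.

```python
import torch
import torch.nn as nn

class SASRecSketch(nn.Module):
    def __init__(self, n_items, max_len=50, d_model=64, n_heads=2, n_layers=2):
        super().__init__()
        self.item_emb = nn.Embedding(n_items + 1, d_model, padding_idx=0)  # id 0 = padding
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, item_seq):                       # (B, L) item ids, 0-padded
        B, L = item_seq.shape
        pos = torch.arange(L, device=item_seq.device)
        h = self.item_emb(item_seq) + self.pos_emb(pos)
        # Causal mask so position t only attends to positions <= t.
        mask = nn.Transformer.generate_square_subsequent_mask(L).to(item_seq.device)
        h = self.encoder(h, mask=mask)                 # (B, L, d_model)
        # Score every item at every position by dot product with the embedding table.
        return h @ self.item_emb.weight.T              # (B, L, n_items + 1)

model = SASRecSketch(n_items=1000)
seq = torch.randint(1, 1001, (4, 20))                  # 4 users, 20 interactions each
logits = model(seq)                                    # next-item logits per position
```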
- MMoE in Recommender Systems
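To make the MMoE reading concrete, here is a minimal multi-gate mixture-of-experts sketch in PyTorch: shared experts, one softmax gate per task, and a small per-task tower. The expert count, layer sizes, and the two example tasks are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MMoE(nn.Module):
    def __init__(self, in_dim, n_experts=4, n_tasks=2, expert_dim=32):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, expert_dim), nn.ReLU()) for _ in range(n_experts)]
        )
        # One gating network per task mixes the shared experts differently.
        self.gates = nn.ModuleList([nn.Linear(in_dim, n_experts) for _ in range(n_tasks)])
        self.towers = nn.ModuleList([nn.Linear(expert_dim, 1) for _ in range(n_tasks)])

    def forward(self, x):                                              # x: (B, in_dim)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, expert_dim)
        outputs = []
        for gate, tower in zip(self.gates, self.towers):
            weights = torch.softmax(gate(x), dim=-1)                   # (B, E) per-task mixture
            mixed = (weights.unsqueeze(-1) * expert_out).sum(dim=1)    # (B, expert_dim)
            outputs.append(tower(mixed))                               # one logit per task
        return outputs

model = MMoE(in_dim=16, n_experts=4, n_tasks=2)
click_logit, like_logit = model(torch.randn(8, 16))  # e.g. click and like objectives
```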
📅 Learning Sequence Plan
- Transformer Architecture
- Mixture of Experts (MoE)
- Basic Recommendation Architectures
- Attention in Recommendation
- Sequential Modeling
- MMoE in Recommender Systems
- Locality-Sensitive Hashing (LSH)
- Vector Quantization