The most common question I get from engineers entering AI is: where do I start? Most reach for GenAI tools first and work backwards. That is the wrong order. Build your ML foundation first, then move into deep learning, transformers, vector databases, RAG, and agentic AI systems.
Below is a curated set of courses organized by learning stage, from foundational ML concepts through to agentic AI systems. Each stage builds on the one before it.
The Learning Path
Stage 1: Beginner
Microsoft Machine Learning for Beginners
Focus: ML fundamentals, classical ML, regression, classification, and clustering
A 12-week, 26-lesson open curriculum from Microsoft focused on classic machine learning. It is structured and beginner-friendly, with clear progression and no prior ML knowledge required. (Microsoft GitHub)
Why I recommend it: Well-paced and easy to follow. A good first step before moving into more mathematical or deep learning content.
Google Machine Learning Crash Course
Focus: ML basics, model training, loss, generalization, and neural networks
A fast-paced, practical introduction to machine learning from Google, with videos, visualizations, and hands-on exercises. Useful for anyone who wants to quickly build core ML vocabulary. (Google for Developers)
Why I recommend it: Concise and immediately practical. Works well alongside the Microsoft curriculum or as a standalone quick-start.
Machine Learning Specialization by Andrew Ng
Focus: ML fundamentals, supervised and unsupervised learning, recommender systems, and practical ML concepts
A three-course specialization by Andrew Ng focused on building intuition and understanding how models learn, how to evaluate them, and how common ML algorithms work. Full access requires a Coursera subscription; some preview content is available for free. (Coursera)
Why I recommend it: Gives learners a strong mental model of machine learning. Without this foundation, advanced AI topics like RAG, embeddings, and agents can feel disconnected.
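To make "how models learn" concrete, here is a minimal gradient-descent sketch for one-variable linear regression, the kind of example these foundational courses build intuition around. Everything here (function names, data, learning rate) is illustrative, not taken from any specific course.

```python
# Minimal gradient descent for one-variable linear regression (illustrative).
# Fits y = w*x + b by repeatedly stepping against the gradient of the
# mean-squared-error loss -- the core "how models learn" loop.

def fit_linear(xs, ys, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the MSE loss L = (1/n) * sum((w*x + b - y)^2)
        dw = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        db = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        w -= lr * dw
        b -= lr * db
    return w, b

# Data generated from y = 2x + 1; the fit should recover roughly w=2, b=1.
w, b = fit_linear([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
```

Once this loop feels obvious, the deep learning material later in the path reads as the same idea scaled up: more parameters, automatic differentiation, and smarter optimizers.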
Stage 2: Intermediate
Stanford CS229: Machine Learning Lectures
Focus: Supervised learning, regression, classification, neural networks, SVMs, clustering, dimensionality reduction, and reinforcement learning
Stanford CS229 is a rigorous machine learning lecture series taught by Andrew Ng. It is best for learners who already understand basic ML concepts and want to go deeper into the algorithms, math, and intuition behind them. Because it is more theoretical than introductory courses, I recommend taking it after completing a beginner-friendly ML course. (CS229: Machine Learning)
Why I recommend it: The mathematical grounding here makes advanced topics like transformer optimization and model evaluation much easier to reason about in practice.
Deep Learning Specialization by DeepLearning.AI
Focus: Neural networks, deep learning, optimization, CNNs, and sequence models
A five-course specialization by Andrew Ng covering how neural networks work and why deep learning underpins modern AI. Full access requires a Coursera subscription. (Coursera, DeepLearning.ai)
Why I recommend it: Bridges the gap between classical ML and modern AI systems. Particularly useful before studying transformers, LLMs, or GenAI applications.
Hugging Face LLM Course
Focus: Transformers, tokenizers, datasets, LLMs, NLP, and the Hugging Face ecosystem
A practical course for understanding how modern language models work, using Transformers, Datasets, Tokenizers, Accelerate, and the Hugging Face Hub. It is free, open-source, and regularly maintained. (Hugging Face, GitHub)
Why I recommend it: Moves learners from understanding ML theory to working hands-on with real LLM pipelines and open-source tooling.
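Tokenization is the first step in every LLM pipeline the course covers. The sketch below is a deliberately simplified word-level tokenizer in plain Python, not the Hugging Face API; real tokenizers use subword schemes like BPE, which the course teaches properly.

```python
# Toy word-level tokenizer (illustrative only -- real LLM tokenizers, such as
# those in the Hugging Face ecosystem, use subword schemes like BPE).

class ToyTokenizer:
    def __init__(self, corpus):
        # Build a vocabulary from whitespace-split words; id 0 is reserved
        # for unknown tokens, much as real vocabularies reserve special ids.
        words = sorted({w for text in corpus for w in text.lower().split()})
        self.vocab = {"<unk>": 0, **{w: i + 1 for i, w in enumerate(words)}}
        self.inverse = {i: w for w, i in self.vocab.items()}

    def encode(self, text):
        return [self.vocab.get(w, 0) for w in text.lower().split()]

    def decode(self, ids):
        return " ".join(self.inverse.get(i, "<unk>") for i in ids)

tok = ToyTokenizer(["the cat sat", "the dog ran"])
ids = tok.encode("the cat ran fast")  # "fast" is out-of-vocabulary -> id 0
```

The key takeaway carries over directly: models never see text, only token ids, and anything outside the vocabulary has to be handled explicitly.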
Stage 3: Intermediate / Advanced
Building Applications with Vector Databases by DeepLearning.AI
Focus: Vector databases, embeddings, semantic search, retrieval-augmented generation, and real-world applications
A short course from DeepLearning.AI focused on building applications powered by vector databases, including semantic search, RAG, and anomaly detection. (DeepLearning.AI)
Why I recommend it: Connects ML concepts to real application architecture. Explains why embeddings matter and how vector search powers RAG systems.
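The core mechanism behind all of this is simple to sketch: semantic search is nearest-neighbor lookup by cosine similarity over embedding vectors. The vectors below are tiny hand-made stand-ins; in a real system they would come from an embedding model, and a vector database (Chroma, Qdrant, Milvus, Weaviate) would handle storage and approximate nearest-neighbor search at scale.

```python
# Minimal semantic-search sketch: cosine similarity over embedding vectors.
# The 3-dimensional vectors are hand-made stand-ins for real embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

documents = {
    "doc_cats":  [0.9, 0.1, 0.0],   # pretend "about cats"
    "doc_dogs":  [0.6, 0.6, 0.1],   # pretend "about dogs"
    "doc_taxes": [0.0, 0.1, 0.9],   # pretend "about taxes"
}

query = [0.85, 0.2, 0.05]  # pretend embedding of "pets"
ranked = sorted(documents, key=lambda d: cosine(query, documents[d]), reverse=True)
```

Real embeddings have hundreds or thousands of dimensions, which is exactly why dedicated vector databases with approximate indexes exist; the ranking logic, however, is the same.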
Stage 4: Advanced
LangChain Academy: Introduction to LangGraph
Focus: LangGraph, agent workflows, stateful orchestration, tool use, and production-oriented agent design
LangChain Academy's foundation course for building agentic and multi-agent applications with LangGraph. (LangChain Academy)
Why I recommend it: Real enterprise AI involves orchestration, state, tools, routing, verification, and observability, not just calling an LLM. This course covers all of that.
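The pattern LangGraph formalizes can be sketched framework-free: shared state flows through nodes, and a router decides which node runs next until a terminal node is reached. This is not LangGraph's actual API; the node names and state shape below are invented for illustration.

```python
# Framework-agnostic sketch of a stateful agent graph: each node mutates a
# shared state dict and returns the name of the next node (None = stop).
# Node names and state keys are invented for illustration.

def plan(state):
    state["plan"] = f"look up: {state['question']}"
    return "tool"

def tool(state):
    # Stand-in for a real tool call (web search, DB query, API request).
    state["observation"] = "42"
    return "answer"

def answer(state):
    state["answer"] = f"{state['question']} -> {state['observation']}"
    return None  # terminal node

NODES = {"plan": plan, "tool": tool, "answer": answer}

def run_graph(state, entry="plan"):
    node = entry
    while node is not None:  # route until a node signals completion
        node = NODES[node](state)
    return state

result = run_graph({"question": "meaning of life"})
```

LangGraph adds what this toy omits and what production systems need: persistence, streaming, human-in-the-loop interrupts, and observability over exactly this kind of routed, stateful execution.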
Hugging Face Agents Course and Open-Source AI Cookbook
Focus: Agents, tool use, open-source AI workflows, notebooks, and applied GenAI patterns
Hugging Face's learning hub includes an Agents Course and an Open-Source AI Cookbook, alongside courses on LLMs, diffusion, audio, robotics, and deep RL. Free and regularly updated. (Hugging Face Learn)
Why I recommend it: A hands-on complement to LangGraph, focused on the open-source AI ecosystem rather than a specific framework.
Full Path at a Glance
| Stage | Course | Access | What you learn |
|---|---|---|---|
| Beginner | Microsoft ML for Beginners | Free | ML fundamentals and classic algorithms |
| Beginner | Google ML Crash Course | Free | Practical ML vocabulary and model training |
| Beginner | ML Specialization by Andrew Ng | Freemium | ML intuition and algorithms |
| Intermediate | Stanford CS229 (YouTube) | Free | ML algorithms, theory, and mathematical depth |
| Intermediate | Deep Learning Specialization | Freemium | Neural networks and deep learning |
| Intermediate | Hugging Face LLM Course | Free | Transformers, NLP, and LLM workflows |
| Int. / Advanced | Building with Vector Databases | Free | Embeddings, vector DBs, and RAG |
| Advanced | LangChain Academy: LangGraph | Free | Agentic AI and orchestration |
| Advanced | Hugging Face Agents / AI Cookbook | Free | Open-source GenAI application patterns |
One Gap to Know: Vector Databases and RAG
No single course covers vector databases and RAG end to end. The tooling is still maturing, and the best learning comes from combining resources:
- Learn embeddings through the Hugging Face LLM Course
- Practice with the official docs from open-source vector DB projects such as Chroma, Qdrant, Milvus, or Weaviate
- Build a small RAG application using LangChain or LlamaIndex; their documentation includes end-to-end tutorials
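The combined shape of those three resources fits in a few lines: retrieve relevant documents, assemble a grounded prompt, call a model. The sketch below uses a naive keyword-overlap retriever instead of embeddings, and `call_llm` is a stub; in practice LangChain, LlamaIndex, or a provider SDK would supply both pieces.

```python
# Minimal RAG loop (illustrative): retrieve top-k documents by a naive
# keyword-overlap score, then assemble a grounded prompt for an LLM.

DOCS = [
    "Vector databases store embeddings for similarity search.",
    "RAG grounds LLM answers in retrieved documents.",
    "Paris is the capital of France.",
]

def retrieve(query, docs, k=2):
    # Naive relevance score: count of shared lowercase words. A real system
    # would rank by embedding similarity via a vector database instead.
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, context):
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

def call_llm(prompt):
    return "(model answer would appear here)"  # stub for a real model call

query = "How does RAG ground LLM answers?"
context = retrieve(query, DOCS)
response = call_llm(build_prompt(query, context))
```

Every production RAG system is an elaboration of this loop: better retrieval, chunking, reranking, and evaluation, which is why the combination of resources above covers the gap well.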
Final Takeaway
The learning order matters more than the specific platform. Start with ML fundamentals, move into deep learning, then transformers, then vector databases and RAG, and finally agentic AI systems.
Skipping the foundation and jumping straight to agents or RAG makes the advanced topics much harder to understand and apply in practice.
That path gives you more than tool knowledge. It gives you architectural thinking, which is what matters when building real AI systems.