Artificial Intelligence Nanodegree
Nanodegree key: nd898
Version: 6.0.0
Locale: en-us
Become an expert in the core concepts of artificial intelligence and learn how to apply them to real-life problems.
Content
Part 01 : Introduction to Artificial Intelligence
Meet the instructional team, including Sebastian Thrun, Peter Norvig, and Thad Starner, who will teach you the foundations of AI. Get acquainted with the resources available in your classroom and other important information about the program. Complete the lesson by building a Sudoku solver.
- Module 01: Introduction to the Nanodegree
- Lesson 01: Welcome to Artificial Intelligence. Welcome to the Artificial Intelligence Nanodegree program!
- Concept 01: Welcome to the Artificial Intelligence Nanodegree Program
- Concept 02: Meet Your Instructors
- Concept 03: Projects You Will Build
- Concept 04: Udacity Support
- Concept 05: Community Guidelines
- Concept 06: Weekly Lesson Plans
- Concept 07: References & Resources
- Concept 08: Get Started
- Concept 09: Lesson Plan - Week 1
- Lesson 02: Knowledge, Community, and Careers. You are starting a challenging but rewarding journey! Take 5 minutes to read how to get help with projects and content.
- Lesson 03: Get Help with Your Account. What to do if you have questions about your account or general questions about the program.
- Lesson 04: Intro to Artificial Intelligence. An introduction to basic AI concepts and the challenge of answering “what is AI?”
- Concept 01: Welcome to AI!
- Concept 02: Navigation
- Concept 03: Game Playing
- Concept 04: Quiz: Tic Tac Toe
- Concept 05: Tic Tac Toe: Heuristics
- Concept 06: Quiz: Monty Hall Problem
- Concept 07: Monty Hall Problem: Explained
- Concept 08: Quiz: What is Intelligence?
- Concept 09: Defining Intelligence
- Concept 10: Agent, Environment And State
- Concept 11: Perception, Action and Cognition
- Concept 12: Quiz: Types of AI Problems
- Concept 13: Rational Behavior And Bounded Optimality
- Lesson 05: Solving Sudoku With AI. In this lesson, you’ll dive right in and apply Artificial Intelligence to solve every Sudoku puzzle.
- Lesson 06: Workspaces. Review the basic functionality of Workspaces—pre-configured development environments in the Udacity classroom for projects and exercises.
- Lesson 07: Setting Up Your Environment with Anaconda. If you do not want to use Workspaces, follow these instructions to set up your own system using Anaconda, a popular tool for managing environments and packages in Python.
- Lesson 08: Build a Sudoku Solver. Use constraint propagation and search to build an agent that reasons like a human would to efficiently solve any Sudoku puzzle (a minimal sketch of the elimination strategy follows this module).
- Project Description - Build a Sudoku Solver
- Project Rubric - Build a Sudoku Solver
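To give a concrete taste of the constraint propagation this project calls for, here is a minimal sketch of the classic elimination strategy, assuming the grid encoding popularized by Peter Norvig's Sudoku essay (boxes 'A1' through 'I9' mapped to candidate strings). The helper names are illustrative, not the project's required API.

```python
# A minimal sketch of the "eliminate" constraint-propagation strategy,
# assuming a puzzle stored as {box: candidate_string}, e.g. {'A1': '135', ...}.
# Names (boxes, units, peers, eliminate) are illustrative, not the project API.
rows, cols = 'ABCDEFGHI', '123456789'
boxes = [r + c for r in rows for c in cols]

# The classic 27 units: 9 rows, 9 columns, 9 three-by-three squares.
row_units = [[r + c for c in cols] for r in rows]
col_units = [[r + c for r in rows] for c in cols]
square_units = [[r + c for r in rs for c in cs]
                for rs in ('ABC', 'DEF', 'GHI') for cs in ('123', '456', '789')]
units = row_units + col_units + square_units
peers = {b: {cell for u in units if b in u for cell in u} - {b} for b in boxes}

def eliminate(values):
    """Remove each solved box's value from the candidates of all its peers."""
    for box, val in values.items():
        if len(val) == 1:                      # box already solved
            for peer in peers[box]:
                values[peer] = values[peer].replace(val, '')
    return values
```

Repeatedly applying `eliminate` (together with the lesson's only-choice strategy) until the grid stops changing is the heart of the solver; depth-first search handles the puzzles that propagation alone cannot finish.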
- Module 02: Career Services
- Lesson 01: Jobs in AI. Learn about common jobs in artificial intelligence, and get tips on how to stay active in the community.
- Lesson 02: Optimize Your GitHub Profile. Other professionals are collaborating on GitHub and growing their network. Submit your profile to ensure it is on par with leaders in your field.
- Project Description - Optimize Your GitHub Profile
- Project Rubric - Optimize Your GitHub Profile
- Concept 01: Prove Your Skills With GitHub
- Concept 02: Introduction
- Concept 03: GitHub profile important items
- Concept 04: Good GitHub repository
- Concept 05: Interview with Art - Part 1
- Concept 06: Identify fixes for example “bad” profile
- Concept 07: Quick Fixes #1
- Concept 08: Quick Fixes #2
- Concept 09: Writing READMEs with Walter
- Concept 10: Interview with Art - Part 2
- Concept 11: Commit messages best practices
- Concept 12: Reflect on your commit messages
- Concept 13: Participating in open source projects
- Concept 14: Interview with Art - Part 3
- Concept 15: Participating in open source projects 2
- Concept 16: Starring interesting repositories
- Concept 17: Next Steps
Part 02 : Constraint Satisfaction Problems
Take a deep dive into the constraint satisfaction problem framework and further explore constraint propagation, backtracking search, and other CSP techniques. Complete a classroom exercise using a powerful CSP solver on a variety of problems to gain experience framing new problems as CSPs.
- Module 01: Constraint Satisfaction Problems
- Lesson 01: Constraint Satisfaction Problems. Expand from the constraint propagation technique used in the Sudoku project to the Constraint Satisfaction Problem framework that can be used to solve a wide range of general problems.
- Concept 01: Lesson Plan - Week 2
- Concept 02: Introduction
- Concept 03: CSP Examples
- Concept 04: Map Coloring
- Concept 05: Constraint Graph
- Concept 06: Map Coloring Quiz
- Concept 07: Constraint Types
- Concept 08: Backtracking Search
- Concept 09: Why Backtracking?
- Concept 10: Improving Backtracking Efficiency
- Concept 11: Backtracking Optimization Quiz
- Concept 12: Forward Checking
- Concept 13: Constraint Propagation and Arc Consistency
- Concept 14: Constraint Propagation Quiz
- Concept 15: Structured CSPs
- Lesson 02: CSP Coding Exercise. Practice formulating some classical example problems as CSPs, then explore solving them with Z3, a powerful open-source constraint satisfaction solver from Microsoft Research (see the sketch after this module).
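To preview what framing a problem for Z3 looks like, here is a minimal sketch of the textbook Australia map-coloring CSP using the z3-solver Python package. The region list and the three-color integer encoding are standard textbook material, not the classroom exercise's exact code.

```python
# Map coloring as a CSP for Z3 (pip install z3-solver). Colors are 0, 1, 2.
from z3 import Int, Solver, And, sat

regions = ['WA', 'NT', 'SA', 'Q', 'NSW', 'V', 'T']
color = {r: Int(r) for r in regions}

s = Solver()
for c in color.values():                 # each region takes one of three colors
    s.add(And(c >= 0, c <= 2))
adjacent = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA'), ('NT', 'Q'),
            ('SA', 'Q'), ('SA', 'NSW'), ('SA', 'V'), ('Q', 'NSW'), ('NSW', 'V')]
for a, b in adjacent:                    # adjacent regions must differ
    s.add(color[a] != color[b])

if s.check() == sat:
    model = s.model()
    print({r: model[color[r]] for r in regions})
```

The payoff of the CSP framing is that the constraints are the whole program: Z3's backtracking and propagation do the searching for you.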
- Module 02: Additional Constraint Problem Topics
- Lesson 01: Additional Readings. Reading list of applications and additional topics related to CSPs.
- Concept 01: Reading List
Part 03 : Classical Search
Learn classical graph search algorithms—including uninformed search techniques like breadth-first and depth-first search and informed search with heuristics including A*. These algorithms are at the heart of many classical AI techniques, and have been used for planning, optimization, problem solving, and more. Complete the lesson by teaching PacMan to search with these techniques to solve increasingly complex domains.
- Module 01: Introduction
- Lesson 01: Introduction. Peter Norvig, co-author of Artificial Intelligence: A Modern Approach, explains a framework for search problems and introduces uninformed & informed search strategies to solve them.
- Module 02: Uninformed Search
- Lesson 01: Uninformed Search. Peter introduces uninformed search strategies—which can only solve problems by generating successor states and distinguishing between goal and non-goal states (a breadth-first search sketch follows this module).
- Concept 01: Intro to Uninformed Search
- Concept 02: Example: Route Finding
- Concept 03: Quiz: Tree Search
- Concept 04: Tree Search Continued
- Concept 05: Quiz: Graph Search
- Concept 06: Quiz: Breadth First Search 1
- Concept 07: Breadth First Search 2
- Concept 08: Quiz: Breadth First Search 3
- Concept 09: Breadth First Search 4
- Concept 10: Breadth First Search 5
- Concept 11: Uniform Cost Search
- Concept 12: Uniform Cost Search 1
- Concept 13: Uniform Cost Search 2
- Concept 14: Uniform Cost Search 3
- Concept 15: Uniform Cost Search 4
- Concept 16: Uniform Cost Search 5
- Concept 17: Quiz: Search Comparison
- Concept 18: Search Comparison 1
- Concept 19: Quiz: Search Comparison 2
- Concept 20: Search Comparison 3
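For reference, here is a minimal breadth-first search sketch for the route-finding example, assuming the map is an unweighted adjacency dict. The city graph below is a simplified fragment chosen for illustration, not the lecture's exact map.

```python
# Breadth-first search: returns a path with the fewest edges, ignoring road lengths.
from collections import deque

def bfs(graph, start, goal):
    """Return a shortest path (fewest edges) from start to goal, or None."""
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in explored:
                explored.add(neighbor)
                frontier.append(path + [neighbor])
    return None

romania = {'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
           'Sibiu': ['Arad', 'Fagaras', 'Oradea', 'Rimnicu'],
           'Fagaras': ['Sibiu', 'Bucharest'],
           'Rimnicu': ['Sibiu', 'Pitesti'],
           'Pitesti': ['Rimnicu', 'Bucharest'],
           'Bucharest': []}
print(bfs(romania, 'Arad', 'Bucharest'))  # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Swapping the FIFO `deque` for a LIFO stack turns this into depth-first search; swapping it for a priority queue ordered by path cost gives uniform cost search.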
- Module 03: Informed Search
- Lesson 01: Informed Search. Peter introduces informed search strategies, which use problem-specific knowledge to find solutions more efficiently than uninformed search (an A* sketch follows this module).
- Concept 01: Intro to Informed Search
- Concept 02: On Uniform Cost
- Concept 03: A* Search
- Concept 04: A* Search 1
- Concept 05: A* Search 2
- Concept 06: A* Search 3
- Concept 07: A* Search 4
- Concept 08: A* Search 5
- Concept 09: Optimistic Heuristic
- Concept 10: Quiz: State Spaces
- Concept 11: State Spaces 1
- Concept 12: Quiz: State Spaces 2
- Concept 13: State Spaces 3
- Concept 14: Quiz: Sliding Blocks Puzzle
- Concept 15: Sliding Blocks Puzzle 1
- Concept 16: Sliding Blocks Puzzle 2
- Concept 17: A Note on Implementation
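And here is a minimal A* sketch over a weighted graph, with the heuristic supplied as a lookup table in the spirit of straight-line distance. The toy graph and heuristic values are illustrative and chosen to be admissible (they never overestimate).

```python
# A*: expand nodes in order of f = g (cost so far) + h (optimistic estimate to goal).
import heapq

def astar(graph, h, start, goal):
    """graph: {node: [(neighbor, step_cost), ...]}; h: {node: estimate to goal}."""
    frontier = [(h[start], 0, start, [start])]     # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(neighbor, float('inf')):   # found a cheaper route
                best_g[neighbor] = g2
                heapq.heappush(frontier,
                               (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None, float('inf')

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2), ('D', 5)],
         'C': [('D', 1)], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(astar(graph, h, 'A', 'D'))  # (['A', 'B', 'C', 'D'], 4)
```

With `h` identically zero this reduces to uniform cost search, which is why the lesson treats A* as uniform cost search plus an optimistic heuristic.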
- Module 04: Classroom Exercise: Search
- Lesson 01: Classroom Exercise: Search. Complete a practice exercise where you’ll implement informed and uninformed search strategies for the game PacMan.
- Module 05: Additional Search Topics
- Lesson 01: Additional Search Topics. References to additional readings on search.
- Concept 01: Problems with Search
- Concept 02: Peter’s take on AI
- Concept 03: Suggested Readings
Part 04 : Automated Planning
Learn to represent general problem domains with symbolic logic and use search to find optimal plans for achieving your agent’s goals. Planning & scheduling systems power modern automation & logistics operations, and aerospace applications like the Hubble telescope & NASA Mars rovers.
- Module 01: Symbolic Logic & Reasoning
- Lesson 01: Symbolic Logic & Reasoning. Peter Norvig returns to explain propositional logic and first-order logic, which provide a symbolic logic framework that enables AI agents to reason about their actions (a model-checking sketch follows this module).
- Concept 01: Lesson Plan - Week 4
- Concept 02: Introduction
- Concept 03: Background and Expert Systems
- Concept 04: Propositional Logic
- Concept 05: Truth Tables
- Concept 06: Truth Table Question
- Concept 07: Propositional Logic Question
- Concept 08: Terminology
- Concept 09: Propositional Logic Limitations
- Concept 10: First Order Logic
- Concept 11: Models
- Concept 12: Syntax
- Concept 13: Vacuum World
- Concept 14: FOL Question
- Concept 15: FOL Question 2
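As a pointer to how propositional entailment can be checked mechanically, here is a minimal truth-table (model-checking) sketch for the classic {P, P ⇒ Q} ⊨ Q example. The knowledge base and symbols are illustrative.

```python
# Entailment by enumeration: KB |= query iff the query holds in every model
# (truth assignment) where the KB holds. Tiny and brute-force by design.
from itertools import product

symbols = ('P', 'Q')

def kb_true(model):
    # KB: P is true, and P implies Q (written as: not P, or Q).
    return model['P'] and ((not model['P']) or model['Q'])

def entails(query_symbol):
    for values in product((True, False), repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb_true(model) and not model[query_symbol]:
            return False                      # counterexample model found
    return True

print(entails('Q'))  # True: {P, P => Q} |= Q, i.e. modus ponens
```

Enumeration is exponential in the number of symbols, which is why the planning lessons lean on more structured representations instead of raw truth tables.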
- Module 02: Automated Planning
- Lesson 01: Introduction to Planning. Peter Norvig defines automated planning problems in comparison to more general problem-solving techniques to set the stage for classical planning algorithms in the next lesson.
- Concept 01: Problem Solving vs Planning
- Concept 02: Planning vs Execution
- Concept 03: Vacuum Cleaner Example
- Concept 04: Quiz: Sensorless Vacuum Cleaner Problem
- Concept 05: Partially Observable Vacuum Cleaner Example
- Concept 06: Quiz: Stochastic Environment Problem
- Concept 07: Infinite Sequences
- Concept 08: Finding a Successful Plan
- Concept 09: Quiz: Finding a Successful Plan Question
- Concept 10: Problem Solving via Mathematical Notation
- Concept 11: Tracking the Predict-Update Cycle
- Lesson 02: Classical Planning. Peter presents a survey of Classical Planning techniques: forward planning (progression search) & backward planning (regression search).
- Module 03: Build a Forward Planning Agent
- Lesson 01: Build a Forward-Planning Agent. In this project you’ll use search and symbolic logic to build an agent that automatically develops and executes plans to achieve its goals.
- Project Description - Build a Forward-Planning Agent
- Project Rubric - Build a Forward-Planning Agent
- Module 04: Additional Planning Topics
- Lesson 01: Additional Planning Topics. Peter discusses plan space search & situation calculus. Finish the lesson with readings on advanced planning topics & modern applications of automated planning.
- Concept 01: Plan Space Search
- Concept 02: Sliding Puzzle Example
- Concept 03: Situation Calculus 1
- Concept 04: Situation Calculus 2
- Concept 05: Situation Calculus 3
- Concept 06: Automated Planning References
Part 05 : Optimization Problems
Learn about iterative improvement optimization problems and classical algorithms emphasizing gradient-free methods for solving them. These techniques can often be used on intractable problems to find solutions that are “good enough” for practical purposes, and have been used extensively in fields like Operations Research & logistics. Finish the lesson by completing a classroom exercise comparing the different algorithms’ performance on a variety of problems.
- Module 01: Optimization Problems
- Lesson 01: Introduction. Thad Starner introduces the concept of iterative improvement problems, a class of optimization problems that can be solved with global optimization or local search techniques covered in this lesson.
- Module 02: Local Search
- Lesson 01: Hill Climbing. Thad introduces Hill Climbing, a very simple local search optimization technique that works well on many iterative improvement problems.
- Lesson 02: Simulated Annealing. Thad explains Simulated Annealing, a classical technique for global optimization.
- Lesson 03: Genetic Algorithms. Thad introduces another optimization technique: Genetic Algorithms, which use a population of samples to make iterative improvements toward the goal.
- Module 03: Optimization Exercise
- Lesson 01: Optimization Exercise. Complete a classroom exercise implementing simulated annealing to solve the traveling salesman problem (a minimal sketch follows this module).
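As a sketch of what the exercise asks for, the following implements simulated annealing for the traveling salesman problem under simple assumptions: cities are (x, y) points, neighbors come from reversing a segment of the tour (a 2-opt move), and cooling is exponential. Parameter values are illustrative, not the exercise's required settings.

```python
# Simulated annealing for TSP: accept all improvements, and accept worse tours
# with probability e^(-delta / t), where t decays over time.
import math, random

def tour_length(tour, cities):
    """Total length of a closed tour visiting every city once."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(cities, t0=100.0, alpha=0.995, steps=20000, seed=0):
    rng = random.Random(seed)
    tour = list(range(len(cities)))
    rng.shuffle(tour)
    cost, t = tour_length(tour, cities), t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(cities)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        delta = tour_length(candidate, cities) - cost
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour, cost = candidate, cost + delta
        t *= alpha                                  # exponential cooling
    return tour, cost

rng = random.Random(42)
cities = [(rng.random(), rng.random()) for _ in range(15)]
best_tour, best_cost = anneal(cities)
print(len(best_tour), round(best_cost, 3))
```

At high temperature this behaves like a random walk; as `t` shrinks it degenerates into plain hill climbing, which is exactly the trade-off the lesson describes.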
- Module 04: Additional Optimization Topics
- Lesson 01: Additional Optimization Topics. Review similarities of the techniques introduced in this lesson with links to readings on advanced optimization topics, then complete an optimization exercise in the classroom.
- Concept 01: Similarities Between Optimizers
- Concept 02: Reading List
Part 06 : Adversarial Search
Learn how to search in multi-agent environments (including decision making in competitive environments) using the minimax theorem from game theory. Then build an agent that can play games better than any human.
- Module 01: Adversarial Search: Game Playing
- Lesson 01: Search in Multiagent Domains. Thad returns to teach search in multi-agent domains, using the Minimax theorem to solve adversarial problems and build agents that make better decisions than humans.
- Concept 01: Lesson Plan - Week 8
- Concept 02: Overview
- Concept 03: The Minimax Algorithm
- Concept 04: Isolation
- Concept 05: Building a Game Tree
- Concept 06: Coding: Building a Game Class
- Concept 07: Which of These Are Valid Moves?
- Concept 08: Coding: Game Class Functionality
- Concept 09: Building a Game Tree (Contd.)
- Concept 10: Isolation Game Tree with Leaf Values
- Concept 11: How Do We Tell the Computer Not to Lose?
- Concept 12: MIN and MAX Levels
- Concept 13: Coding: Scoring Min & Max Levels
- Concept 14: Propagating Values Up the Tree
- Concept 15: Computing MIN MAX Values
- Concept 16: Computing MIN MAX Solution
- Concept 17: Choosing the Best Branch
- Concept 18: Coding: Minimax Search
- Concept 19: Max Number of Nodes Visited
- Concept 20: Max Moves
- Concept 21: The Branching Factor
- Concept 22: Number of Nodes in a Game Tree
- Concept 23: The Branching Factor (Contd.)
- Concept 24: Max Number of Nodes
- Lesson 02: Optimizing Minimax Search. Thad explains some of the limitations of minimax search and introduces optimizations & changes that make it practical in more complex domains (an alpha-beta pruning sketch follows this module).
- Concept 01: Lesson Plan - Week 9
- Concept 02: Minimax Quiz
- Concept 03: Depth-Limited Search
- Concept 04: Coding: Depth-Limited Search
- Concept 05: Evaluation Function Intro
- Concept 06: Testing the Evaluation Function
- Concept 07: Testing the Evaluation Function Part 2
- Concept 08: Testing Evaluation Functions
- Concept 09: Testing the Evaluation Function Part 3
- Concept 10: Coding: #my_moves Heuristic
- Concept 11: Quiescent Search
- Concept 12: A Problem
- Concept 13: Iterative Deepening
- Concept 14: Understanding Exponential Time
- Concept 15: Exponential b=3
- Concept 16: Varying the Branching Factor
- Concept 17: Coding: Iterative Deepening
- Concept 18: Horizon Effect
- Concept 19: Horizon Effect (Contd.)
- Concept 20: Good Evaluation Functions
- Concept 21: Evaluating Evaluation Functions
- Concept 22: Alpha-Beta Pruning
- Concept 23: Alpha-Beta Pruning Quiz 1
- Concept 24: Alpha-Beta Pruning Quiz 2
- Concept 25: Coding: Alpha-Beta Pruning
- Concept 26: Solving 5x5 Isolation
- Concept 27: Coding: Opening Book
- Concept 28: Thad’s Asides
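For reference alongside these two lessons, here is a minimal minimax sketch with alpha-beta pruning over an explicit game tree (nested lists with numeric leaf utilities). A real agent, like the one in this project, generates the tree lazily from a game state instead; the tree here is the classic three-branch textbook example.

```python
# Minimax with alpha-beta pruning. A node is either a number (leaf utility,
# from MAX's point of view) or a list of child nodes.
import math

def alphabeta(node, maximizing=True, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:          # MIN will never let play reach here
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if beta <= alpha:              # MAX will never let play reach here
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))  # 3 -- and the 4 and 6 leaves are never examined
```

Pruning never changes the minimax value; with good move ordering it roughly doubles the reachable search depth, which is why it pairs so well with the iterative deepening concept above.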
- Module 02: Build an Adversarial Search Agent
- Lesson 01: Build an Adversarial Game Playing Agent. Extend classical search to adversarial domains to build agents that make good decisions without any human intervention—such as the DeepMind AlphaGo agent.
- Project Description - Build an Adversarial Game Playing Agent
- Project Rubric - Build an Adversarial Game Playing Agent
- Module 03: Additional Topics in Adversarial Search
- Lesson 01: Extending Minimax Search. Thad introduces extensions to minimax search to support more than two players and non-deterministic domains.
- Concept 01: Introduction
- Concept 02: 3-Player Games
- Concept 03: 3-Player Games Quiz
- Concept 04: 3-Player Alpha-Beta Pruning
- Concept 05: Multi-player Alpha-Beta Pruning Reading
- Concept 06: Probabilistic Games
- Concept 07: Sloppy Isolation
- Concept 08: Sloppy Isolation Expectimax
- Concept 09: Expectimax Alpha-Beta Pruning
- Concept 10: Probabilistic Alpha-Beta Pruning
- Lesson 02: Additional Adversarial Search Topics. An introduction to Monte Carlo Tree Search, a highly successful search technique in game domains, along with a reading list for other advanced adversarial search topics.
- Module 04: Career Services
- Lesson 01: Take 30 Min to Improve your LinkedIn. Find your next job or connect with industry peers on LinkedIn. Ensure your profile attracts relevant leads that will grow your professional network.
- Project Description - Improve Your LinkedIn Profile
- Project Rubric - Improve Your LinkedIn Profile
- Concept 01: Get Opportunities with LinkedIn
- Concept 02: Use Your Story to Stand Out
- Concept 03: Why Use an Elevator Pitch
- Concept 04: Create Your Elevator Pitch
- Concept 05: Use Your Elevator Pitch on LinkedIn
- Concept 06: Create Your Profile With SEO In Mind
- Concept 07: Profile Essentials
- Concept 08: Work Experiences & Accomplishments
- Concept 09: Build and Strengthen Your Network
- Concept 10: Reaching Out on LinkedIn
- Concept 11: Boost Your Visibility
- Concept 12: Up Next
Part 07 : Probabilistic Models
Learn to use Bayes Nets to represent complex probability distributions, and algorithms for sampling from those distributions. Then learn the algorithms used to train, predict, and evaluate Hidden Markov Models for pattern recognition. HMMs have been used for gesture recognition in computer vision, gene sequence identification in bioinformatics, speech generation & part of speech tagging in natural language processing, and more.
- Module 01: Probability Refresher
- Lesson 01: Probability. Sebastian Thrun briefly reviews basic probability theory, including discrete distributions, independence, joint probabilities, and conditional distributions, to model uncertainty in the real world (a worked Bayes' rule example follows this module).
- Concept 01: Lesson Plan - Week 10
- Concept 02: Intro to Probability and Bayes Nets
- Concept 03: Quiz: Probability / Coin Flip
- Concept 04: Quiz: Coin Flip 2
- Concept 05: Quiz: Coin Flip 3
- Concept 06: Quiz: Coin Flip 4
- Concept 07: Quiz: Coin Flip 5
- Concept 08: Probability Summary
- Concept 09: Quiz: Dependence
- Concept 10: What We Learned
- Concept 11: Quiz: Weather
- Concept 12: Quiz: Weather 2
- Concept 13: Quiz: Weather 3
- Concept 14: Quiz: Cancer
- Concept 15: Quiz: Cancer 2
- Concept 16: Quiz: Cancer 3
- Concept 17: Quiz: Cancer 4
- Concept 18: Bayes Rule
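The coin-flip and cancer quizzes in this lesson all reduce to one application of Bayes' rule, P(A|B) = P(B|A)·P(A) / P(B). Here is a worked sketch with illustrative numbers, not the quiz values.

```python
# An illustrative diagnostic-test example of Bayes' rule.
p_c = 0.01          # prior: P(cancer)
p_pos_c = 0.90      # sensitivity: P(+ | cancer)
p_pos_not_c = 0.08  # false-positive rate: P(+ | no cancer)

# Total probability of a positive test, then the posterior.
p_pos = p_pos_c * p_c + p_pos_not_c * (1 - p_c)
p_c_pos = p_pos_c * p_c / p_pos
print(round(p_c_pos, 3))  # 0.102 -- a positive test still leaves cancer unlikely
```

The counterintuitive smallness of the posterior, driven by the low prior, is exactly the point the cancer quizzes are built to make.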
- Module 02: Naive Bayes
- Lesson 01: Naive Bayes. In this section, you’ll learn how to build a spam e-mail classifier using the naive Bayes algorithm (a minimal sketch follows this module).
- Concept 01: Intro
- Concept 02: Guess the Person
- Concept 03: Known and Inferred
- Concept 04: Guess the Person Now
- Concept 05: Bayes Theorem
- Concept 06: Quiz: False Positives
- Concept 07: Solution: False Positives
- Concept 08: Bayesian Learning 1
- Concept 09: Bayesian Learning 2
- Concept 10: Bayesian Learning 3
- Concept 11: Naive Bayes Algorithm 1
- Concept 12: Naive Bayes Algorithm 2
- Concept 13: Building a Spam Classifier
- Concept 14: Exercise: Building a Spam Classifier
- Concept 15: Outro
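To make the algorithm concrete, here is a minimal multinomial naive Bayes spam classifier with add-one (Laplace) smoothing over a tiny hand-made corpus. The classroom exercise works with a real dataset; these messages and counts are purely illustrative.

```python
# Naive Bayes: pick the label maximizing log P(label) + sum log P(word | label),
# treating words as conditionally independent given the label.
import math
from collections import Counter

train = [('win money now', 'spam'), ('cheap money win', 'spam'),
         ('meeting schedule today', 'ham'), ('project meeting notes', 'ham')]

word_counts = {'spam': Counter(), 'ham': Counter()}
doc_counts = Counter()
for text, label in train:
    doc_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counter in word_counts.values() for w in counter}

def log_score(text, label):
    """log P(label) plus sum of log P(word | label) with add-one smoothing."""
    total = sum(word_counts[label].values())
    logp = math.log(doc_counts[label] / sum(doc_counts.values()))
    for w in text.split():
        logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return logp

message = 'win cheap money'
print(max(('spam', 'ham'), key=lambda lbl: log_score(message, lbl)))  # spam
```

Working in log space avoids underflow when multiplying many small probabilities, and the add-one smoothing keeps unseen words from zeroing out a whole class.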
- Module 03: Bayes Networks
- Lesson 01: Bayes Nets. Sebastian explains using Bayes Nets as a compact graphical model to encode probability distributions for efficient analysis (a worked two-test example follows this module).
- Concept 01: Lesson Plan - Week 11
- Concept 02: Introduction
- Concept 03: Quiz: Bayes Network
- Concept 04: Computing Bayes Rule
- Concept 05: Quiz: Two Test Cancer
- Concept 06: Quiz: Two Test Cancer 2
- Concept 07: Quiz: Conditional Independence
- Concept 08: Quiz: Conditional Independence 2
- Concept 09: Quiz: Absolute And Conditional
- Concept 10: Quiz: Confounding Cause
- Concept 11: Quiz: Explaining Away
- Concept 12: Quiz: Explaining Away 2
- Concept 13: Quiz: Explaining Away 3
- Concept 14: Conditional Dependence
- Concept 15: Quiz: General Bayes Net
- Concept 16: Quiz: General Bayes Net 2
- Concept 17: Quiz: General Bayes Net 3
- Concept 18: Value Of A Network
- Concept 19: Quiz: D Separation
- Concept 20: Quiz: D Separation 2
- Lesson 02: Inference in Bayes Nets. Sebastian explains probabilistic inference using Bayes Nets, i.e. how to use evidence to calculate probabilities from the network.
- Concept 01: Probabilistic Inference
- Concept 02: Quiz: Overview and Example
- Concept 03: Quiz: Enumeration
- Concept 04: Quiz: Speeding Up Enumeration
- Concept 05: Quiz: Speeding Up Enumeration 2
- Concept 06: Quiz: Speeding Up Enumeration 3
- Concept 07: Quiz: Speeding Up Enumeration 4
- Concept 08: Causal Direction
- Concept 09: Quiz: Variable Elimination
- Concept 10: Quiz: Variable Elimination 2
- Concept 11: Quiz: Variable Elimination 3
- Concept 12: Variable Elimination 4
- Concept 13: Approximate Inference
- Concept 14: Quiz: Sampling Example
- Concept 15: Approximate Inference 2
- Concept 16: Rejection Sampling
- Concept 17: Quiz: Likelihood Weighting
- Concept 18: Likelihood Weighting 1
- Concept 19: Likelihood Weighting 2
- Concept 20: Gibbs Sampling
- Concept 21: Quiz: Monty Hall Problem
- Concept 22: Monty Hall Letter
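The "Two Test Cancer" quizzes in this module illustrate conditional independence: given the disease status, the two test results are independent, so their likelihoods simply multiply. A worked sketch with illustrative numbers:

```python
# Two conditionally independent positive tests: P(C | +,+) ∝ P(+|C)^2 · P(C).
p_c, p_pos_c, p_pos_not_c = 0.01, 0.9, 0.2   # illustrative, not the quiz values

joint_c = p_pos_c ** 2 * p_c                  # P(+,+ | C) · P(C)
joint_not_c = p_pos_not_c ** 2 * (1 - p_c)    # P(+,+ | ~C) · P(~C)
print(round(joint_c / (joint_c + joint_not_c), 3))  # 0.17
```

Note that the two positive results are dependent in the plain (unconditional) sense; they become independent only once the cause is known, which is the distinction the "Absolute And Conditional" quiz probes.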
- Module 04: Hidden Markov Models
- Lesson 01: Hidden Markov Models. Learn Hidden Markov Models, and apply them to part-of-speech tagging, a very popular problem in Natural Language Processing (a Viterbi sketch follows this module).
- Concept 01: Lesson Plan - Week 12
- Concept 02: Intro
- Concept 03: Part of Speech Tagging
- Concept 04: Lookup Table
- Concept 05: Bigrams
- Concept 06: When bigrams won’t work
- Concept 07: Hidden Markov Models
- Concept 08: Quiz: How many paths?
- Concept 09: Solution: How many paths
- Concept 10: Quiz: How many paths now?
- Concept 11: Quiz: Which path is more likely?
- Concept 12: Solution: Which path is more likely?
- Concept 13: Viterbi Algorithm Idea
- Concept 14: Viterbi Algorithm
- Concept 15: Further Reading
- Concept 16: Outro
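For reference, here is a minimal Viterbi sketch for HMM part-of-speech tagging. The two-state model and its probabilities are toy values chosen for illustration; the project estimates them from a tagged corpus.

```python
# Viterbi: for each word, keep the single best path ending in each state.
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for the observations."""
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 0.0), [s]) for s in states}]
    for word in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][prev][0] * trans_p[prev][s] * emit_p[s].get(word, 0.0),
                 V[-2][prev][1] + [s])
                for prev in states)
            V[-1][s] = (prob, path)
    return max(V[-1].values())[1]

states = ('NOUN', 'VERB')
start_p = {'NOUN': 0.6, 'VERB': 0.4}
trans_p = {'NOUN': {'NOUN': 0.3, 'VERB': 0.7},
           'VERB': {'NOUN': 0.6, 'VERB': 0.4}}
emit_p = {'NOUN': {'dogs': 0.5, 'bark': 0.1},
          'VERB': {'dogs': 0.1, 'bark': 0.6}}
print(viterbi(['dogs', 'bark'], states, start_p, trans_p, emit_p))  # ['NOUN', 'VERB']
```

Because only the best path into each state survives at every step, the cost is linear in sentence length rather than exponential in the number of paths, which is the point of the "How many paths?" quizzes above.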
- Module 05: Project: Part of Speech Tagging
- Lesson 01: Part of Speech Tagging. In this project you will build a hidden Markov model (HMM) to perform part of speech tagging, a common pre-processing step in Natural Language Processing.
- Project Description - Part of Speech Tagging
- Project Rubric - Part of Speech Tagging
- Module 06: Additional Topics in PGMs
- Lesson 01: Dynamic Time Warping. Thad explains the Dynamic Time Warping technique for working with time-series data.
- Lesson 02: Additional Topics in PGMs. Reading list of select topics to continue learning about probabilistic graphical models.
- Concept 01: Reading List
Part 08 : After the AI Nanodegree Program
Once you’ve completed the last project, review the information here to discover resources to continue learning and practicing AI.
- Module 01: Additional Topics in AI
- Lesson 01: Additional Topics in AI. Suggested resources to continue learning about artificial intelligence after completing the Nanodegree program.
- Concept 01: Additional Topics in AI
Part 09 (Elective): Extracurricular
Additional lecture material on hidden Markov models and applications for gesture recognition.
- Module 01: Hidden Markov Models
- Lesson 01: Hidden Markov Models. Thad returns to discuss using Hidden Markov Models for pattern recognition with sequential data.
- Concept 01: Hidden Markov Models
- Concept 02: HMM Representation
- Concept 03: Sign Language Recognition
- Concept 04: Delta-y Quiz
- Concept 05: HMM: “I”
- Concept 06: HMM: “We”
- Concept 07: I vs We Quiz
- Concept 08: Viterbi Trellis: “I”
- Concept 09: “I” Transitions Quiz
- Concept 10: Viterbi Trellis: “I” (continued)
- Concept 11: Nodes for “I”
- Concept 12: Viterbi Path
- Concept 13: “We”: Transitions Quiz
- Concept 14: “We”: Transition Probabilities Quiz
- Concept 15: “We”: Output Probabilities Quiz
- Concept 16: “We”: Viterbi Path
- Concept 17: Which Gesture is Recognized?
- Concept 18: New Observation Sequence for “I”
- Concept 19: New Observation Sequence for “We”
- Concept 20: HMM Training
- Concept 21: Baum Welch
- Lesson 02: Advanced HMMs. Thad shares advanced techniques that can improve performance of HMMs recognizing American Sign Language, and more complex HMM models for applications like speech synthesis.
- Concept 01: Multidimensional Output Probabilities
- Concept 02: Using a Mixture of Gaussians
- Concept 03: HMM Topologies
- Concept 04: Phrase Level Recognition
- Concept 05: Stochastic Beam Search
- Concept 06: Context Training
- Concept 07: Statistical Grammar
- Concept 08: State Tying
- Concept 09: HMM Resources
- Concept 10: Segmentally Boosted HMMs
- Concept 11: SBHMM Resources
- Concept 12: Using HMMs to Generate Data
- Concept 13: HMMs for Speech Synthesis