Probability and Statistics for Deep Learning
Deep Learning is often called “Statistical Learning” and approached by many experts as the statistical theory of estimating a function from a given collection of data. This course introduces fundamental concepts of probability theory and statistics, and covers important algorithms and modelling techniques used in the supervised learning of neural networks. In addition, the course introduces tools, and the underlying mathematical concepts, for data interpretation with specific neural network models. Upon completion of this course you will have acquired the background in probability and statistics necessary for Machine Learning, and the ability to use TensorFlow to create and train neural networks for specific practical problems.
- Review basic probability theory and learn common data distributions used in machine learning
- Bayesian concept learning
- Gaussian models and Gaussian mixture models
- Undirected graphical models
- Linear and Logistic Regression
- Support Vector Machines for classification (kernels)
- Feedforward artificial neural networks: the multilayer perceptron (MLP)
- Learn how to use TensorFlow from Python/R to build and train deep neural networks
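As a taste of the last topic, here is a minimal sketch (not part of the official course materials) of the kind of feedforward network/MLP the course builds with TensorFlow's Keras API, trained on synthetic binary-classification data:

```python
import numpy as np
import tensorflow as tf

# Synthetic data: 200 samples with 4 features; label is 1 when the
# features sum to a positive value. (Illustrative data only.)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")

# A small MLP: one hidden ReLU layer, sigmoid output for binary labels.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Predicted probabilities, one per sample, each in (0, 1).
probs = model.predict(X, verbose=0)
```

The sigmoid output paired with binary cross-entropy loss is exactly the logistic-regression setup from the topic list, extended with a hidden layer.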
Software: TensorFlow, an open-source software library for high-performance numerical computation.
Course typically offered: Online during our Spring and Fall academic quarters.
Prerequisites: Basic knowledge of Linear Algebra - concept of vectors and matrices.
Next steps: Upon completion, consider additional coursework in our specialized certificate in Machine Learning Methods to continue learning.
More information: For more information about this course, please contact firstname.lastname@example.org.
Course Number: CSE-41305
Credit: 3.00 unit(s)
Related Certificate Programs: Machine Learning Methods