Sale!

Machine Learning Refined: Foundations, Algorithms, and Applications 2nd Edition, ISBN-13: 978-1108480727

$14.99

[PDF eBook eTextbook] – Available Instantly

  • Publisher: Cambridge University Press; 2nd edition (March 12, 2020)
  • Language: English
  • 594 pages
  • ISBN-10: 1108480721
  • ISBN-13: 978-1108480727

An intuitive approach to machine learning covering key concepts, real-world applications, and practical Python coding exercises.

With its intuitive yet rigorous approach to machine learning, this text provides students with the fundamental knowledge and practical tools needed to conduct research and build data-driven products. The authors prioritize geometric intuition and algorithmic thinking, and include detail on all the essential mathematical prerequisites, to offer a fresh and accessible way to learn. Practical applications are emphasized, with examples from disciplines including computer vision, natural language processing, economics, neuroscience, recommender systems, physics, and biology. Over 300 color illustrations have been meticulously designed to enable an intuitive grasp of technical concepts, and over 100 in-depth coding exercises (in Python) provide a real understanding of crucial machine learning algorithms. A suite of online resources, including sample code, data sets, interactive lecture slides, and a solutions manual, is provided, making this an ideal text both for graduate courses on machine learning and for individual reference and self-study.
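To give a feel for the style of exercise the book describes, here is a minimal sketch of the gradient descent algorithm covered in Chapter 3, differentiated automatically with the autograd library discussed in Appendix B.10. The cost function and parameter values below are illustrative choices for this listing, not examples taken from the book itself.

    # A minimal sketch: minimizing a simple convex cost with gradient descent,
    # using autograd (Appendix B.10) to compute the gradient automatically.
    # The cost g, the steplength, and the initial point are illustrative only.
    import autograd.numpy as np
    from autograd import grad

    def g(w):
        # a simple convex cost: g(w) = w0^2 + w1^2 + 2
        return np.dot(w, w) + 2.0

    gradient = grad(g)            # autograd constructs the gradient function of g

    w = np.array([3.0, -2.0])     # initial point
    alpha = 0.1                   # fixed steplength (learning rate)
    for k in range(50):
        w = w - alpha * gradient(w)   # the gradient descent step

    print(w, g(w))                # w is driven toward the minimizer at the origin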

Table of Contents:

Half-title
Title page
Copyright information
Dedication
Contents
Preface
Acknowledgements
1 Introduction to Machine Learning
1.1 Introduction
1.2 Distinguishing Cats from Dogs: a Machine Learning Approach
1.3 The Basic Taxonomy of Machine Learning Problems
1.4 Mathematical Optimization
1.5 Conclusion
Part I Mathematical Optimization
2 Zero-Order Optimization Techniques
2.1 Introduction
2.2 The Zero-Order Optimality Condition
2.3 Global Optimization Methods
2.4 Local Optimization Methods
2.5 Random Search
2.6 Coordinate Search and Descent
2.7 Conclusion
2.8 Exercises
3 First-Order Optimization Techniques
3.1 Introduction
3.2 The First-Order Optimality Condition
3.3 The Geometry of First-Order Taylor Series
3.4 Computing Gradients Efficiently
3.5 Gradient Descent
3.6 Two Natural Weaknesses of Gradient Descent
3.7 Conclusion
3.8 Exercises
4 Second-Order Optimization Techniques
4.1 The Second-Order Optimality Condition
4.2 The Geometry of Second-Order Taylor Series
4.3 Newton’s Method
4.4 Two Natural Weaknesses of Newton’s Method
4.5 Conclusion
4.6 Exercises
Part II Linear Learning
5 Linear Regression
5.1 Introduction
5.2 Least Squares Linear Regression
5.3 Least Absolute Deviations
5.4 Regression Quality Metrics
5.5 Weighted Regression
5.6 Multi-Output Regression
5.7 Conclusion
5.8 Exercises
5.9 Endnotes
6 Linear Two-Class Classification
6.1 Introduction
6.2 Logistic Regression and the Cross Entropy Cost
6.3 Logistic Regression and the Softmax Cost
6.4 The Perceptron
6.5 Support Vector Machines
6.6 Which Approach Produces the Best Results?
6.7 The Categorical Cross Entropy Cost
6.8 Classification Quality Metrics
6.9 Weighted Two-Class Classification
6.10 Conclusion
6.11 Exercises
7 Linear Multi-Class Classification
7.1 Introduction
7.2 One-versus-All Multi-Class Classification
7.3 Multi-Class Classification and the Perceptron
7.4 Which Approach Produces the Best Results?
7.5 The Categorical Cross Entropy Cost Function
7.6 Classification Quality Metrics
7.7 Weighted Multi-Class Classification
7.8 Stochastic and Mini-Batch Learning
7.9 Conclusion
7.10 Exercises
8 Linear Unsupervised Learning
8.1 Introduction
8.2 Fixed Spanning Sets, Orthonormality, and Projections
8.3 The Linear Autoencoder and Principal Component Analysis
8.4 Recommender Systems
8.5 K-Means Clustering
8.6 General Matrix Factorization Techniques
8.7 Conclusion
8.8 Exercises
8.9 Endnotes
9 Feature Engineering and Selection
9.1 Introduction
9.2 Histogram Features
9.3 Feature Scaling via Standard Normalization
9.4 Imputing Missing Values in a Dataset
9.5 Feature Scaling via PCA-Sphering
9.6 Feature Selection via Boosting
9.7 Feature Selection via Regularization
9.8 Conclusion
9.9 Exercises
Part III Nonlinear Learning
10 Principles of Nonlinear Feature Engineering
10.1 Introduction
10.2 Nonlinear Regression
10.3 Nonlinear Multi-Output Regression
10.4 Nonlinear Two-Class Classification
10.5 Nonlinear Multi-Class Classification
10.6 Nonlinear Unsupervised Learning
10.7 Conclusion
10.8 Exercises
11 Principles of Feature Learning
11.1 Introduction
11.2 Universal Approximators
11.3 Universal Approximation of Real Data
11.4 Naive Cross-Validation
11.5 Efficient Cross-Validation via Boosting
11.6 Efficient Cross-Validation via Regularization
11.7 Testing Data
11.8 Which Universal Approximator Works Best in Practice?
11.9 Bagging Cross-Validated Models
11.10 K-Fold Cross-Validation
11.11 When Feature Learning Fails
11.12 Conclusion
11.13 Exercises
12 Kernel Methods
12.1 Introduction
12.2 Fixed-Shape Universal Approximators
12.3 The Kernel Trick
12.4 Kernels as Measures of Similarity
12.5 Optimization of Kernelized Models
12.6 Cross-Validating Kernelized Learners
12.7 Conclusion
12.8 Exercises
13 Fully Connected Neural Networks
13.1 Introduction
13.2 Fully Connected Neural Networks
13.3 Activation Functions
13.4 The Backpropagation Algorithm
13.5 Optimization of Neural Network Models
13.6 Batch Normalization
13.7 Cross-Validation via Early Stopping
13.8 Conclusion
13.9 Exercises
14 Tree-Based Learners
14.1 Introduction
14.2 From Stumps to Deep Trees
14.3 Regression Trees
14.4 Classification Trees
14.5 Gradient Boosting
14.6 Random Forests
14.7 Cross-Validation Techniques for Recursively Defined Trees
14.8 Conclusion
14.9 Exercises
Part IV Appendices
Appendix A Advanced First- and Second-Order Optimization Methods
A.1 Introduction
A.2 Momentum-Accelerated Gradient Descent
A.3 Normalized Gradient Descent
A.4 Advanced Gradient-Based Methods
A.5 Mini-Batch Optimization
A.6 Conservative Steplength Rules
A.7 Newton’s Method, Regularization, and Nonconvex Functions
A.8 Hessian-Free Methods
Appendix B Derivatives and Automatic Differentiation
B.1 Introduction
B.2 The Derivative
B.3 Derivative Rules for Elementary Functions and Operations
B.4 The Gradient
B.5 The Computation Graph
B.6 The Forward Mode of Automatic Differentiation
B.7 The Reverse Mode of Automatic Differentiation
B.8 Higher-Order Derivatives
B.9 Taylor Series
B.10 Using the autograd Library
Appendix C Linear Algebra
C.1 Introduction
C.2 Vectors and Vector Operations
C.3 Matrices and Matrix Operations
C.4 Eigenvalues and Eigenvectors
C.5 Vector and Matrix Norms
References
Index

Jeremy Watt received his Ph.D. in Electrical Engineering from Northwestern University, Illinois, and is now a machine learning consultant and educator. He teaches machine learning, deep learning, mathematical optimization, and reinforcement learning at Northwestern University, Illinois.

Reza Borhani received his Ph.D. in Electrical Engineering from Northwestern University, Illinois, and is now a machine learning consultant and educator. He teaches a variety of courses in machine learning and deep learning at Northwestern University, Illinois.

Aggelos K. Katsaggelos is the Joseph Cummings Professor at Northwestern University, Illinois, where he heads the Image and Video Processing Laboratory. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), SPIE, the European Association for Signal Processing (EURASIP), and The Optical Society (OSA), and a recipient of the IEEE Third Millennium Medal (2000).

What makes us different?

• Instant Download

• Always Competitive Pricing

• 100% Privacy

• FREE Sample Available

• 24/7 LIVE Customer Support
