Exercise: t-SNE visualization of Amazon reviews with polarity-based color-coding
Instructor: Applied AI Course
Duration: 6 min
Real-world problem: Predict the rating given product reviews on Amazon
1.1 Dataset overview: Amazon Fine Food reviews (EDA) (23 min)
1.2 Data cleaning: Deduplication (15 min)
1.3 Why convert text to a vector? (14 min)
1.4 Bag of Words (BoW) (18 min)
1.5 Text preprocessing: Stemming, stop-word removal, tokenization, lemmatization (15 min)
1.6 Uni-grams, bi-grams, n-grams (9 min)
1.7 TF-IDF (term frequency-inverse document frequency) (22 min)
1.8 Why use log in IDF? (14 min)
1.9 Word2Vec (16 min)
1.10 Avg-Word2Vec, TF-IDF weighted Word2Vec (9 min)
1.11 Bag of Words (Code Sample) (19 min)
1.12 Text Preprocessing (Code Sample) (11 min)
1.13 Bi-grams and n-grams (Code Sample) (5 min)
1.14 TF-IDF (Code Sample) (6 min)
1.15 Word2Vec (Code Sample) (12 min)
1.16 Avg-Word2Vec and TF-IDF-Word2Vec (Code Sample) (2 min)
1.17 Exercise: t-SNE visualization of Amazon reviews with polarity-based color-coding (6 min)
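As a rough sketch of the featurization ideas in lessons 1.4 and 1.7 (not the course's own code sample), assuming scikit-learn and a tiny hypothetical corpus standing in for the Amazon reviews:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Hypothetical mini-corpus standing in for Amazon Fine Food reviews.
corpus = [
    "the food was great and tasty",
    "the food was stale and awful",
    "great taste, will buy again",
    "awful packaging, will not buy again",
]

# Bag of Words: each review becomes a vector of raw term counts.
bow = CountVectorizer()
X_bow = bow.fit_transform(corpus)

# TF-IDF: down-weights terms that appear in most reviews (e.g. "the").
tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(corpus)

print(X_bow.shape, X_tfidf.shape)  # both: (4, vocabulary size)
```

Both vectorizers share the same default tokenization, so the two matrices have identical shapes; only the cell values differ.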
Classification and Regression Models: K-Nearest Neighbors
2.1 How classification works (10 min)
2.2 Data matrix notation (7 min)
2.3 Classification vs regression (examples) (6 min)
2.4 K-Nearest Neighbors: Geometric intuition with a toy example (11 min)
2.5 Failure cases of k-NN (7 min)
2.6 Distance measures: Euclidean (L2), Manhattan (L1), Minkowski, Hamming (20 min)
2.7 Cosine distance & cosine similarity (19 min)
2.8 How to measure the effectiveness of k-NN? (16 min)
2.9 Test/evaluation time and space complexity (12 min)
2.10 Limitations of k-NN (9 min)
2.11 Decision surface for k-NN as k changes (23 min)
2.12 Overfitting and underfitting (12 min)
2.13 Need for cross-validation (22 min)
2.14 K-fold cross-validation (17 min)
2.15 Visualizing train, validation, and test datasets (13 min)
2.16 How to determine overfitting and underfitting? (19 min)
2.17 Time-based splitting (19 min)
2.18 k-NN for regression (5 min)
2.19 Weighted k-NN (8 min)
2.20 Voronoi diagram (4 min)
2.21 Binary search tree (16 min)
2.22 How to build a kd-tree (17 min)
2.23 Finding nearest neighbours using a kd-tree (13 min)
2.24 Limitations of kd-trees (9 min)
2.25 Extensions (3 min)
2.26 Hashing vs LSH (10 min)
2.27 LSH for cosine similarity (40 min)
2.28 LSH for Euclidean distance (13 min)
2.29 Probabilistic class labels (8 min)
2.30 Code Sample: Decision boundary (23 min)
2.31 Code Sample: Cross-validation (13 min)
2.32 Exercise: Apply k-NN to the Amazon reviews dataset (5 min)
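The module's core loop of picking k by k-fold cross-validation (lessons 2.13-2.14) can be sketched as follows; the synthetic dataset and the candidate k values are assumptions, not the course's data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Toy two-class dataset standing in for featurized reviews (assumption).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Evaluate a few odd values of k with 5-fold cross-validation.
scores = {}
for k in (1, 3, 5, 11):
    clf = KNeighborsClassifier(n_neighbors=k)
    scores[k] = cross_val_score(clf, X, y, cv=5).mean()

# Keep the k with the best mean validation accuracy.
best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))
```

Odd k values avoid ties in the majority vote for binary classification; in practice one would plot train vs validation accuracy over k to see the overfitting/underfitting regimes of lesson 2.16.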
Classification algorithms in various situations
3.1 Introduction (5 min)
3.2 Imbalanced vs balanced datasets (23 min)
3.3 Multi-class classification (12 min)
3.4 k-NN, given a distance or similarity matrix (9 min)
3.5 Train and test set differences (22 min)
3.6 Impact of outliers (7 min)
3.7 Local Outlier Factor (simple solution: mean distance to k-NN) (13 min)
3.8 K-Distance(A), N(A) (4 min)
3.9 Reachability-Distance(A, B) (8 min)
3.10 Local Reachability-Density(A) (9 min)
3.11 Local Outlier Factor(A) (21 min)
3.12 Impact of scale & column standardization (12 min)
3.13 Interpretability (12 min)
3.14 Feature importance and forward feature selection (22 min)
3.15 Handling categorical and numerical features (24 min)
3.16 Handling missing values by imputation (21 min)
3.17 Curse of dimensionality (27 min)
3.18 Bias-variance tradeoff (24 min)
3.19 Best and worst cases for an algorithm (6 min)
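Two of the practical steps above, mean imputation of missing values (lesson 3.16) and column standardization (lesson 3.12), can be sketched with scikit-learn; the tiny matrix is an assumption for illustration:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Toy feature matrix with one missing value (np.nan) -- an assumption.
X = np.array([[1.0, 200.0],
              [2.0, np.nan],
              [3.0, 400.0]])

# Mean imputation fills the nan with the column mean (here 300.0).
X_imputed = SimpleImputer(strategy="mean").fit_transform(X)

# Column standardization: each column gets mean 0 and unit variance,
# so large-scale features (like column 2) no longer dominate distances.
X_std = StandardScaler().fit_transform(X_imputed)

print(X_std.mean(axis=0).round(6), X_std.std(axis=0).round(6))
```

Standardization matters especially for distance-based methods like k-NN, where an unscaled column can swamp the Euclidean distance.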
Performance measurement of models
4.1 Accuracy (15 min)
4.2 Confusion matrix, TPR, FPR, FNR, TNR (25 min)
4.3 Precision, recall, and F1-score (10 min)
4.4 Receiver Operating Characteristic (ROC) curve and AUC (19 min)
4.5 Log-loss (12 min)
4.6 R-squared / coefficient of determination (14 min)
4.7 Median absolute deviation (MAD) (5 min)
4.8 Distribution of errors (7 min)
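The classification metrics in lessons 4.2-4.4 can be computed with scikit-learn; the labels and scores below are a made-up example, not course data:

```python
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, f1_score, roc_auc_score)

# Hypothetical labels and outputs for a binary classifier.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred  = [1, 0, 1, 0, 0, 1, 1, 0]
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]  # predicted P(y=1)

# Confusion matrix for labels {0, 1} flattens to (tn, fp, fn, tp).
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

precision = precision_score(y_true, y_pred)  # tp / (tp + fp)
recall = recall_score(y_true, y_pred)        # tp / (tp + fn)
f1 = f1_score(y_true, y_pred)                # harmonic mean of the two

# AUC uses the scores, not the hard predictions.
auc = roc_auc_score(y_true, y_score)

print(tp, fp, fn, tn, round(f1, 3), round(auc, 3))
```

Note that precision/recall/F1 depend on the chosen threshold, while AUC summarizes the ranking quality of the scores across all thresholds.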
Naive Bayes
5.1 Conditional probability (13 min)
5.2 Independent vs mutually exclusive events (6 min)
5.3 Bayes' Theorem with examples (18 min)
5.4 Exercise problems on Bayes' Theorem (30 min)
5.5 Naive Bayes algorithm (26 min)
5.6 Toy example: Train and test stages (26 min)
5.7 Naive Bayes on text data (16 min)
5.8 Laplace/additive smoothing (24 min)
5.9 Log-probabilities for numerical stability (11 min)
5.10 Bias-variance tradeoff (14 min)
5.11 Feature importance and interpretability (10 min)
5.12 Imbalanced data (14 min)
5.13 Outliers (6 min)
5.14 Missing values (3 min)
5.15 Handling numerical features (Gaussian NB) (13 min)
5.16 Multiclass classification (2 min)
5.17 Similarity or distance matrix (3 min)
5.18 Large dimensionality (2 min)
5.19 Best and worst cases (8 min)
5.20 Code example (7 min)
5.21 Exercise: Apply Naive Bayes to Amazon reviews (6 min)
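A minimal sketch of Naive Bayes on text (lessons 5.7-5.9), assuming scikit-learn and a four-review toy corpus that is not the course's data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny labeled corpus (assumption) standing in for Amazon reviews.
texts = ["great tasty food", "awful stale food",
         "tasty and great", "stale and awful"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Bag-of-words counts feed the multinomial model.
X = CountVectorizer().fit_transform(texts)

# alpha=1.0 is Laplace/additive smoothing (lesson 5.8); internally the
# model sums log-probabilities for numerical stability (lesson 5.9).
nb = MultinomialNB(alpha=1.0).fit(X, labels)
pred = nb.predict(X)
print(list(pred))
```

Smoothing ensures a word unseen in one class at training time does not zero out the whole class likelihood at test time.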
Logistic Regression
6.1 Geometric intuition (31 min)
6.2 Sigmoid function: Squashing (37 min)
6.3 Mathematical formulation of the objective function (24 min)
6.4 Weight vector (11 min)
6.5 Regularization: Overfitting and underfitting (26 min)
6.6 L1 regularization and sparsity (11 min)
6.7 Probabilistic interpretation: Gaussian Naive Bayes (19 min)
6.8 Loss minimization interpretation (24 min)
6.9 Feature importance and interpretability (14 min)
6.10 Collinearity of features (14 min)
6.11 Test/run-time space and time complexity (10 min)
6.12 Real-world cases (11 min)
6.13 Non-linearly separable data & feature engineering (28 min)
6.14 Code sample: Logistic regression, GridSearchCV, RandomizedSearchCV (23 min)
6.15 Exercise: Apply logistic regression to the Amazon reviews dataset (6 min)
6.16 Extensions to logistic regression: Generalized linear models (9 min)
6.17 Hyperparameter search: Grid search and random search (16 min)
6.18 Column standardization (5 min)
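A sketch tying together column standardization (lesson 6.18), L2-regularized logistic regression, and grid search over the regularization strength (lessons 6.14 and 6.17); the synthetic dataset and C grid are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic binary classification data (assumption).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Standardize columns before fitting so regularization treats all
# weights on the same scale, then grid-search C (inverse of lambda).
pipe = make_pipeline(StandardScaler(), LogisticRegression(penalty="l2"))
grid = GridSearchCV(pipe, {"logisticregression__C": [0.01, 0.1, 1, 10]}, cv=5)
grid.fit(X, y)

print(grid.best_params_, round(grid.best_score_, 3))
```

Small C means strong regularization (underfitting risk), large C means weak regularization (overfitting risk); switching the penalty to "l1" with a compatible solver would give the sparse weight vectors discussed in lesson 6.6.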
Linear Regression
7.1 Geometric intuition (13 min)
7.2 Mathematical formulation (14 min)
7.3 Real-world cases (9 min)
7.4 Code sample for linear regression (13 min)
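As a minimal illustration of fitting a line (not the course's lesson 7.4 sample), assuming scikit-learn and synthetic data generated from a known line:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data from y = 3x + 2 plus small noise (assumption).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X[:, 0] + 2 + rng.normal(0, 0.1, size=100)

# Ordinary least squares recovers slope ~3 and intercept ~2.
model = LinearRegression().fit(X, y)
print(round(model.coef_[0], 2), round(model.intercept_, 2))
```

Because the noise is small relative to the signal, the fitted coefficients land very close to the generating parameters.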
Solving optimization problems: Stochastic Gradient Descent
8.1 Differentiation (29 min)
8.2 Online differentiation tools (8 min)
8.3 Maxima and minima (12 min)
8.4 Vector calculus: Grad (10 min)
8.5 Gradient descent: Geometric intuition (19 min)
8.6 Learning rate (8 min)
8.7 Gradient descent for linear regression (8 min)
8.8 SGD algorithm (9 min)
8.9 Constrained optimization & PCA (14 min)
8.10 Logistic regression formulation revisited (6 min)
8.11 Why does L1 regularization create sparsity? (17 min)
8.12 Exercise: Implement SGD for linear regression (6 min)
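One possible shape for the lesson 8.12 exercise, SGD for linear regression from scratch (the noiseless synthetic data and hyperparameters are assumptions, not a model solution):

```python
import numpy as np

# Synthetic 1-D data from y = 3x + 2, no noise (assumption).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3 * X[:, 0] + 2

w, b = 0.0, 0.0
lr = 0.1  # learning rate (lesson 8.6)
for epoch in range(50):
    # One SGD epoch: update on each sample in random order.
    for i in rng.permutation(len(X)):
        pred = w * X[i, 0] + b
        err = pred - y[i]
        # Gradients of the squared error (1/2)*err**2 w.r.t. w and b.
        w -= lr * err * X[i, 0]
        b -= lr * err

print(round(w, 2), round(b, 2))  # converges near w = 3, b = 2
```

Each update uses the gradient from a single sample rather than the full dataset, which is what distinguishes SGD (lesson 8.8) from the batch gradient descent of lesson 8.7.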