Ad-Click Prediction

₹15,000.00
  • Plotting for exploratory data analysis (EDA) 0/1

    Exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling or hypothesis-testing task (a minimal plotting sketch follows below).
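
A flavour of what this section covers: a minimal EDA plotting sketch, assuming seaborn and matplotlib are installed (an illustration, not the course's own code), on the Iris dataset that appears later in the syllabus.

```python
# Minimal EDA sketch: summary statistics and pairwise plots for the Iris dataset.
# Assumes seaborn and matplotlib are installed; illustrative, not course material.
import seaborn as sns
import matplotlib.pyplot as plt

iris = sns.load_dataset("iris")            # 150 flowers, 4 numeric features + species label

print(iris.describe())                     # per-column mean, std, min, max, quartiles
print(iris["species"].value_counts())      # class balance

sns.pairplot(iris, hue="species")          # pairwise scatter plots coloured by class
plt.show()
```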

  • Linear Algebra 0/1

    Linear algebra will give you the tools to help you with the other areas of mathematics required to understand, and build better intuitions for, machine learning algorithms.

  • Probability and Statistics 0/11

    • Lecture3.1
      Introduction to Probability and Stats
    • Lecture3.2
      Gaussian/Normal Distribution
    • Lecture3.3
      Introduction to Correlation and Co-variance
    • Lecture3.4
      Pearson Correlation Coefficient
    • Lecture3.5
      Spearman Rank Correlation Coefficient
    • Lecture3.6
      Correlation vs Causation 30 min
    • Lecture3.7
      Why learn Hypothesis testing?
    • Lecture3.8
      Testing methodology, Null-hypothesis, test-statistic, p-value
    • Lecture3.9
      Resampling and permutation test 30 min
    • Lecture3.10
      K-S Test for similarity of two distributions
    • Lecture3.11
      Example from Iris flower dataset.
  • Dimensionality reduction and Visualization 0/10

    In machine learning and statistics, dimensionality reduction or dimension reduction is the process of reducing the number of random variables under consideration by obtaining a set of principal variables. It can be divided into feature selection and feature extraction (see the short PCA sketch after this lecture list).

    • Lecture4.1
      What is dimensionality reduction? 03 min
    • Lecture4.2
      Row vector, Column vector: Iris dataset example 05 min
    • Lecture4.3
      Represent a dataset: D = {x_i, y_i} 04 min
    • Lecture4.4
      Represent a dataset as a Matrix 07 min
    • Lecture4.5
      Data preprocessing: Column Normalization 20 min
    • Lecture4.6
      Mean of a data matrix 06 min
    • Lecture4.7
      Data preprocessing: Column Standardization 16 min
    • Lecture4.8
      Co-variance of a Data Matrix 24 min
    • Lecture4.9
      MNIST dataset (784 dimensional) 20 min
    • Lecture4.10
      Code to load MNIST Dataset 12 min
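
A minimal sketch of the ideas above, assuming scikit-learn and using the small Iris dataset as a stand-in for the 784-dimensional MNIST data: standardize the columns of the data matrix, then project onto the top two principal components (PCA, covered in the next section).

```python
# Minimal sketch: column standardization + PCA projection of a data matrix to 2-D.
# Iris stands in for the 784-dimensional MNIST data; scikit-learn is assumed.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)              # dataset D = {x_i, y_i} as an n x d matrix

X_std = StandardScaler().fit_transform(X)      # column standardization: zero mean, unit variance

pca = PCA(n_components=2)                      # top-2 eigenvectors of the covariance matrix
X_2d = pca.fit_transform(X_std)
print("explained variance ratio:", pca.explained_variance_ratio_)

plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y)       # visualize the projected points
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()
```
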
  • PCA (Principal Component Analysis) 0/10

    • Lecture5.1
      Why learn PCA? 04 min
    • Lecture5.2
      Geometric intuition of PCA 14 min
    • Lecture5.3
      Mathematical objective function of PCA 13 min
    • Lecture5.4
      Alternative formulation of PCA: Distance minimization 10 min
    • Lecture5.5
      Eigenvalues and Eigenvectors 23 min
    • Lecture5.6
      PCA for Dimensionality Reduction and Visualization 10 min
    • Lecture5.7
      Visualize MNIST dataset 05 min
    • Lecture5.8
      Limitations of PCA 05 min
    • Lecture5.9
      PCA Code example with Visualization 19 min
    • Lecture5.10
      PCA Code example without Visualization 15 min
  • t-SNE (t-distributed Stochastic Neighbor Embedding) 0/7

    • Lecture6.1
      What is t-SNE? 07 min
    • Lecture6.2
      Neighborhood of a point, Embedding 05 min
    • Lecture6.3
      Geometric intuition of t-SNE 09 min
    • Lecture6.4
      Crowding Problem 10 min
    • Lecture6.5
      How to apply t-SNE and interpret its output 38 min
    • Lecture6.6
      t-SNE on MNIST 08 min
    • Lecture6.7
      Code example of t-SNE 09 min
  • Real world problem: Predict rating given product reviews on Amazon 0/3

    • Lecture7.1
      Amazon product reviews overview
    • Lecture7.2
      Featurizations: convert text to numeric vectors
    • Lecture7.3
      Exercise: t-SNE visualization of Amazon reviews
  • KNN Classification 0/17

    • Lecture8.1
      Classification vs Regression (examples)
    • Lecture8.2
      Data matrix notation and Decision Surface 30 min
    • Lecture8.3
      K-Nearest Neighbors Geometric intuition with a toy example
    • Lecture8.4
      K-Nearest Neighbors Smoothness assumptions
    • Lecture8.5
      K-Nearest Neighbors Distance measures: Euclidean, Manhattan, Hamming
    • Lecture8.6
      K-Nearest Neighbors Simple implementation with Majority Vote
    • Lecture8.7
      KNN Majority Vote Pseudo code
    • Lecture8.8
      KNN Train time and space complexity
    • Lecture8.9
      KNN Test time and space complexity
    • Lecture8.10
      KNN Limitations
    • Lecture8.11
      KNN Determining the right “k” with cross validation
    • Lecture8.12
      KNN Determining the right “k” with K-fold cross validation
    • Lecture8.13
      KNN for regression
    • Lecture8.14
      KNN Decision surface and voronoi tessellation
    • Lecture8.15
      kd-tree based k-NN
    • Lecture8.16
      KNN Locality sensitive Hashing (LSH)
    • Lecture8.17
      KNN Code samples, references and Exercise
  • Performance measurement of models 0/7

    • Lecture9.1
      Accuracy
    • Lecture9.2
      Confusion matrix, TPR, FPR, FNR, TNR
    • Lecture9.3
      Precision and recall
    • Lecture9.4
      Receiver Operating Characteristic (ROC) curve and AUC
    • Lecture9.5
      Log-loss
    • Lecture9.6
      R-Squared
    • Lecture9.7
      Median absolute deviation (MAD)
  • Naive Bayes 0/10

    • Lecture10.1
      Conditional probability
    • Lecture10.2
      Conditional independence
    • Lecture10.3
      Bayes rule and examples
    • Lecture10.4
      Naive Bayes algorithm
    • Lecture10.5
      Toy example
    • Lecture10.6
      Space and Time complexity: train and test time
    • Lecture10.7
      Laplace/Additive Smoothing
    • Lecture10.8
      Underfitting and Overfitting
    • Lecture10.9
      Feature importance and interpretability
    • Lecture10.10
      Exercise: Apply Naive Bayes to Amazon reviews
  • Logistic Regression 0/17

    • Lecture11.1
      Geometric intuition
    • Lecture11.2
      Sigmoid function: Squashing
    • Lecture11.3
      Mathematical formulation of Objective function
    • Lecture11.4
      Weight vector
    • Lecture11.5
      Regularization: Overfitting and Underfitting
    • Lecture11.6
      L2 regularization, L1 regularization and sparsity
    • Lecture11.7
      Probabilistic Interpretation: Gaussian Naive Bayes
    • Lecture11.8
      Loss function interpretation
    • Lecture11.9
      Centering and Scaling of columns
    • Lecture11.10
      Feature importance and interpretability
    • Lecture11.11
      Collinearity of features
    • Lecture11.12
      Featurizing categorical features: one-hot encoding
    • Lecture11.13
      Featurizing Nominal features
    • Lecture11.14
      Test/Run time space and time complexity
    • Lecture11.15
      Internet scale: Large data and low-latency
    • Lecture11.16
      Decision surface and examples
    • Lecture11.17
      Exercise: Apply Logistic regression to Amazon reviews dataset
  • Linear Regression 0/4

    • Lecture12.1
      Geometric intuition
    • Lecture12.2
      Mathematical formulation
    • Lecture12.3
      Squared loss and loss-function based interpretation
    • Lecture12.4
      Toy example
  • Solving optimization problems : Stochastic Gradient Descent 0/6

    • Lecture13.1
      Gradient, derivative, slope, partial derivative
    • Lecture13.2
      Gradient descent: geometric intuition
    • Lecture13.3
      Rate of convergence
    • Lecture13.4
      SGD: algorithm and rate of convergence
    • Lecture13.5
      Constrained optimization and Penalty method
    • Lecture13.6
      Exercise: Implement SGD for linear regression
  • Bias-Variance tradeoff 0/3

    • Lecture14.1
      Intuition: Underfit and Overfit
    • Lecture14.2
      Derivation for linear regression
    • Lecture14.3
      Bias Variance tradeoff for k-NN, NaiveBayes, Logistic Regression, Linear regression
  • Support Vector Machines (SVM) 0/16

    • Lecture15.1
      Geometric Intuition
    • Lecture15.2
      Mathematical derivation
    • Lecture15.3
      Loss function (Hinge Loss) based interpretation
    • Lecture15.4
      Support vectors
    • Lecture15.5
      Linear SVM
    • Lecture15.6
      Primal and Dual
    • Lecture15.7
      Kernelization
    • Lecture15.8
      RBF-Kernel
    • Lecture15.9
      Polynomial kernel
    • Lecture15.10
      Domain specific Kernels
    • Lecture15.11
      Train and run time complexities
    • Lecture15.12
      Bias-variance tradeoff: Underfitting and Overfitting
    • Lecture15.13
      nu-SVM: control errors and support vectors
    • Lecture15.14
      SVM Regression
    • Lecture15.15
      Code Samples
    • Lecture15.16
      Exercise: Apply SVM to Amazon reviews dataset
  • Decision Trees 0/9

    • Lecture16.1
      Geometric Intuition: Axis parallel hyperplanes
    • Lecture16.2
      Nested if-else conditions
    • Lecture16.3
      Sample Decision tree
    • Lecture16.4
      Building a decision Tree, Entropy, Information Gain
    • Lecture16.5
      Gini Impurity (CART), Depth of a tree: Geometric and programming intuition.
    • Lecture16.6
      Building a decision Tree: Categorical features with many levels
    • Lecture16.7
      Regression using Decision Trees
    • Lecture16.8
      Bias-Variance tradeoff
    • Lecture16.9
      Decision tree Limitations and Code Samples
  • Ensemble Models 0/12

    • Lecture17.1
      Introduction to Bootstrapped Aggregation (Bagging)
    • Lecture17.2
      Random Forests and their construction
    • Lecture17.3
      Bias-Variance tradeoff (Random Forest)
    • Lecture17.4
      Applicative details, Code Samples (Random Forest)
    • Lecture17.5
      Intuition behind Boosting
    • Lecture17.6
      Gradient Boosting and XGBoost Algorithm
    • Lecture17.7
      Loss function (Gradient Boosting and XGBoost)
    • Lecture17.8
      XGBoost Code samples
    • Lecture17.9
      AdaBoost: geometric intuition
    • Lecture17.10
      Cascading models, Stacking models.
    • Lecture17.11
      How to win Kaggle competitions using Ensembles
    • Lecture17.12
      Exercise: Apply GBDT and RF to Amazon reviews dataset
  • Python for Data Science Introduction 0/0

  • Python for Data Science: Data Structures 0/2

    • Lecture19.1
      Lists and tuples
    • Lecture19.2
      Dictionaries, sets and Strings
  • Python for Data Science: Functions 0/5

    • Lecture20.1
      Introduction and types of functions
    • Lecture20.2
      Function arguments
    • Lecture20.3
      Recursive functions
    • Lecture20.4
      Lambda functions
    • Lecture20.5
      Modules and Packages
  • Python for Data Science: Miscellaneous 0/3

    • Lecture21.1
      File handling
    • Lecture21.2
      Exception handling
    • Lecture21.3
      Debugging mode
  • Python for Data Science: OOP (Object-Oriented Programming) 0/2

    • Lecture22.1
      Principles of Object oriented programming
    • Lecture22.2
      Classes and objects
  • Python for Data Science: Pandas 0/7

    • Lecture23.1
      Getting started with pandas
    • Lecture23.2
      Data Frame Basics
    • Lecture23.3
      Loading data from CSV, Excel, TXT, etc.
    • Lecture23.4
      Handling Missing data
    • Lecture23.5
      Group by, Concat and merging data
    • Lecture23.6
      Pivot Table and Reshaping of Table
    • Lecture23.7
      Time series
  • Python for Data Science: Matplotlib 0/1

    • Lecture24.1
      Getting started with Matplotlib
  • Python for Data Science: Numpy and Scipy 0/1

    • Lecture25.1
      Getting started with Numpy and Scipy
  • Python for Data Science: Seaborn 0/1

    • Lecture26.1
      Getting Started with Seaborn
  • Python for Data Science: Scikit Learn 0/1

    • Lecture27.1
      Getting started with scikit learn
Statement:

A key technology behind search advertising is to predict the click-through rate (pCTR) of ads, as the economic model behind search advertising requires pCTR values to rank ads and to price clicks.
Given the training instances derived from session logs of the Tencent proprietary search engine, soso.com, participants are expected to accurately predict the pCTR of ads in the testing instances.

  • Data Type
    1. csv files
    2. Both text and integer values
    3. Training Data

      • QueryId_tokensid.txt (id query_toks1|query_toks2|query_toks3|…)
      • PurchaseId_tokensid.txt (id Purchase_toks1|Purchase_toks2|…)
      • TitleId_tokensid.txt (id Title_toks1|Title_toks2|Title_toks3|…)
      • DescriptionId_tokensid.txt (id Description_toks1|Description_toks2|…)
      • UserId_tokensid.txt (UserId, gender{0,1,2}, age)
    4. Test data set
      The testing dataset shares the same format as the training dataset, except that it omits the counts of ad impressions and ad clicks, which are needed for computing the empirical CTR (see the loading sketch below).
  • Data Size: 700 MB
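
To make these formats concrete, here is a minimal loading sketch. It relies on stated assumptions: the whitespace separator between the id and the '|'-joined tokens, the training file name training.csv, and its clicks/impressions columns are illustrative placeholders, since only the token-file layout is given above. The empirical CTR of a training instance is simply #clicks / #impressions, which is the quantity a pCTR model is trained to predict.

```python
# Minimal sketch: parse an id -> token-id mapping file and compute empirical CTR.
# File names, the field separator, and the training columns are illustrative assumptions.
import pandas as pd

def load_token_file(path):
    """Parse lines of the form 'id tok1|tok2|...' into {id: [tok1, tok2, ...]}."""
    mapping = {}
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            ident, tokens = line.split()                     # assumed: id, then '|'-joined token ids
            mapping[int(ident)] = [int(t) for t in tokens.split("|")]
    return mapping

query_tokens = load_token_file("QueryId_tokensid.txt")       # query id -> list of token ids
title_tokens = load_token_file("TitleId_tokensid.txt")       # title id -> list of token ids

# Hypothetical training table carrying per-instance click and impression counts.
train = pd.read_csv("training.csv")                          # assumed columns: clicks, impressions, ...
train["empirical_ctr"] = train["clicks"] / train["impressions"]
print(train["empirical_ctr"].describe())
```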

Key Points:

  1. The validity of this course is 240 days (i.e., starting from the date of your registration for this course).
  2. Expert guidance: we will try to answer your queries within at most 24 hours.
  3. 10+ machine learning algorithms will be taught in this course.
  4. No prerequisites: we will teach everything from the basics (we just expect you to know basic programming).
  5. Python for Data Science is part of the course curriculum.

 

Target Audience:

We are building our course content and teaching methodology to cater to the needs of students at various levels of expertise and with varying background skills. This course can be taken by anyone with a working knowledge of a modern programming language like C/C++/Java/Python. We expect the average student to spend at least 5 hours a week over a 6-month period, amounting to 145+ hours of effort. The more the effort, the better the results. Here is a list of customers who would benefit from our course:

    1. Undergrad (BS/BTech/BE) students in engineering and science.
    2. Grad (MS/MTech/ME/MCA) students in engineering and science.
    3. Working professionals: Software engineers, Business analysts, Product managers, Program managers, Managers, Startup teams building ML products/services.

Course Features

  • Lectures 203
  • Quizzes 0
  • Duration 100+ hours
  • Skill level All levels
  • Language English
  • Students 0
  • Assessments Yes
QUALIFICATION: Masters from IISc Bangalore. PROFESSIONAL EXPERIENCE: 9+ years of experience (Yahoo Labs, Matherix Labs co-founder, and Amazon).