I feel extremely happy that I’m nearing the end of this course that introduced me to the amazing world of Machine Learning and the countless possibilities it has to offer.
Since my last update on the Machine Learning course, I have explored the Unsupervised Learning algorithms it covers. Previously, I had written about the various Supervised Learning algorithms and techniques that were taught to us. In this post, I wish to make a note of the Unsupervised Learning and specialized learning techniques I implemented, namely Clustering (K-Means), PCA, Anomaly Detection and Recommender Systems:
1. Clustering : I learnt how Unsupervised Learning algorithms employ clustering techniques to make sense of unlabelled data. It is important to note that unsupervised techniques cannot be applied to every problem with unlabelled data; to make sense of the clusters we get, we also need to understand what the problem statement demands.
K-Means clustering was the most basic clustering algorithm I learnt. I have explained this algorithm in one of my previous posts, around the time I started learning about it. It involves two alternating steps to cluster unlabelled data: a cluster assignment step, followed by a move-centroid step, performed iteratively one after the other. When K-Means stabilizes, the output is our data grouped into k clusters.
I found K-Means extremely easy to implement in Octave (okay, Octave isn’t that bad :p).
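The two alternating steps can be sketched in Python/NumPy (the course exercises were in Octave, and this is my own minimal version, not the course code; the function name and random initialization are my choices):

```python
import numpy as np

def kmeans(X, k, n_iters=10, seed=0):
    """Minimal K-Means: alternate the cluster assignment and move-centroid steps."""
    rng = np.random.default_rng(seed)
    # Initialize centroids to k randomly chosen training examples.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Cluster assignment step: assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move centroid step: move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels
```

With well-separated data and enough iterations, the assignments stop changing and the k clusters fall out directly.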
2. PCA (Principal Component Analysis) : PCA is an unsupervised learning algorithm that performs dimensionality reduction: it can reduce data from n dimensions to k dimensions. You can also think of it as a type of compression algorithm. It does so by minimizing the projection error (distance) between each point in the n-dimensional space and its projection onto the k-dimensional subspace.
I implemented it in Octave by taking the singular value decomposition of the covariance matrix to find its eigenvectors.
Here k is the number of principal components. Reconstruction of the data back into n dimensions was also implemented. PCA can be really helpful for reducing the dimensionality of data, which speeds up learning algorithms and reduces the memory needed to store the data.
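The SVD-based projection and reconstruction can be sketched in Python/NumPy (a minimal version of what I did in Octave, assuming the features have already been scaled sensibly):

```python
import numpy as np

def pca(X, k):
    """Reduce X from n dimensions to k via SVD of the covariance matrix."""
    X = X - X.mean(axis=0)            # center the data first
    Sigma = (X.T @ X) / len(X)        # n x n covariance matrix
    U, S, _ = np.linalg.svd(Sigma)    # columns of U are the eigenvectors
    Z = X @ U[:, :k]                  # project onto the top-k components
    X_rec = Z @ U[:, :k].T            # reconstruct back into n dimensions
    return Z, X_rec
```

If the data truly lies close to a k-dimensional subspace, `X_rec` is close to the centered original, which is why the compression loses little information.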
3. Anomaly Detection : The Anomaly Detection algorithm is used to find anomalies or outlier values in data. It consists of modelling the data to estimate its probability distribution, and declaring an example an anomaly if the probability at that point is less than some threshold, epsilon.
Anomaly detection differs from supervised learning in that there are far fewer positive examples (meaning anomalies, here) than negative examples in the training data.
Different applications may employ this algorithm to detect anomalies or to flag outlier data points.
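A minimal Python/NumPy sketch of the idea, assuming a per-feature Gaussian model as in the course (the function names are mine):

```python
import numpy as np

def fit_gaussian(X):
    """Estimate a per-feature mean and variance from the training data."""
    return X.mean(axis=0), X.var(axis=0)

def is_anomaly(x, mu, var, epsilon):
    """Flag x as an anomaly if its modelled probability falls below epsilon."""
    p = np.prod(np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var))
    return p < epsilon
```

Epsilon is usually picked by evaluating the flagged examples against a small labelled cross-validation set.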
4. Recommender Systems : I had actually been waiting for the course to hit this topic, and I really enjoyed learning how recommender systems work. Implementing a recommender system for movies in Octave was challenging, yet enjoyable. I will code a recommender system again and try to learn more about them.
They basically employ two sets of parameters: one acts as a feature vector for the various movies that users rate, and the other denotes the users’ preferences. We try to learn user parameters from the movie feature vectors to predict ratings for movies the user hasn’t rated yet. Based on the user parameters, the system then recommends movies similar to the ones the user prefers.
Collaborative filtering introduces an improved type of recommender system, in which the user parameters and the movie feature vectors are learnt from each other simultaneously, producing better-learnt parameters. Collaborative filtering also introduces feature learning, in which the algorithm tries to learn its features by itself.
Mean normalisation is important for a correct implementation, particularly for making sensible predictions for users who haven’t rated any movies yet.
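The collaborative filtering cost that both sets of parameters are trained on can be sketched in Python/NumPy (a simplified version of the course's Octave exercise; the variable names follow the course convention, the function name is mine):

```python
import numpy as np

def cofi_cost(X, Theta, Y, R, lam):
    """Collaborative filtering cost. X holds movie feature vectors, Theta holds
    user parameters, Y holds ratings, and R[i, j] == 1 where user j rated movie i."""
    err = (X @ Theta.T - Y) * R  # count the error only on rated entries
    return 0.5 * np.sum(err ** 2) + (lam / 2) * (np.sum(X ** 2) + np.sum(Theta ** 2))
```

Minimizing this one cost over X and Theta together is what lets the features and the user parameters learn from each other simultaneously.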
I only have to implement the SVM exercise to fully complete the course. I’m hoping to get it done by this weekend, along with the general ML advice Andrew Ng gives towards the end of the course.
I have decided to make a repo for this course and push all the exercises I have done to GitHub. Doing this doesn’t really seem unethical to me; I’m not forcing anyone else to copy my code!