This blog post is an update on the smart mirror v2. To fully understand it, it helps to have read the previous posts about the smart mirror and Edge AI, too.
This post will mainly shed some light on the so-called transfer learning process that we're using for the smart mirror. I hope you find our visualisations easy to understand (thanks so much to Ketaki, who just joined us full-time in October!). Secondly, I'll give you a quick update on our progress towards replacing the original smart mirror in our SAP Customer Experience Labs Showroom in Munich.
Transfer Learning with Headless MobileNet
To classify images without a long and complicated model training process, we use a technique called Transfer Learning. Specifically, we use a so-called headless model. This is simply a TensorFlow model with its last layer, the one that typically maps to the labels, removed. As we are using a headless MobileNet model, our last layer now outputs a so-called "embedding vector" of one thousand numbers. For each image that is fed through this special MobileNet model, we receive such an embedding vector.
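To make this concrete, here's a minimal sketch of how such a feature extractor could be built with Keras. This is our illustration, not the mirror's actual code; the input shape and pooling choice are assumptions:

```python
# Minimal sketch of a "headless" MobileNet: include_top=False drops the final
# classification layer, leaving a network that turns an image into a single
# embedding vector.
import numpy as np
import tensorflow as tf

headless = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3),  # assumed input size
    include_top=False,          # remove the classification head
    pooling="avg",              # collapse spatial dimensions into one vector
)

frame = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in camera frame
embedding = headless.predict(frame)[0]  # one embedding vector per image
```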
Now, to train for a class that MobileNet (which can classify 1000 things out of the box) has never seen before, we can simply store these embedding vectors and associate them with the new label we're training for.
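Sticking with the sketch above, such a training store could be as simple as a list of (embedding vector, label) pairs; note that no weights are retrained at all (train_label is a hypothetical helper, not the project's API):

```python
# Hypothetical training store: run every training image through the headless
# model and remember which label its embedding vector belongs to.
training_store = []  # list of (embedding_vector, label) tuples

def train_label(images, label):
    for img in images:
        vec = headless.predict(img[np.newaxis, ...])[0]
        training_store.append((vec, label))
```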
To visualise this training phase, Ketaki from our team has created the visualisation below. As humans have a hard time imagining a 1000-dimensional space, we reduced it to a 3-dimensional space for the sake of explanation. So, step by step, here's the training phase explained:
- We're training for a new label, A, and images that represent label A are shown to the camera.
- The images are fed into the headless MobileNet and result in embedding vectors. Here, we reduced the vectors to a size of 3, while in reality they have 1000 elements for a MobileNet model.
- The embedding vectors are stored together with the label they represent, A.
- This is repeated for each new label that we want to train for (see the short usage sketch below).
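Assuming the hypothetical train_label helper from the sketch above, the "repeat for each label" step then boils down to a few calls (the image batches and labels here are placeholders):

```python
# Hypothetical usage, mirroring the training steps above: one call per new label.
train_label(images_of_glasses, "A")
train_label(images_of_hats, "B")
train_label(images_of_scarves, "C")
```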
Our embedding vectors in the visualisation above have just 3 elements, which means we can easily show them in a little 3-dimensional space. Check the 3 training phases above: the first pair of glasses yields the vector [1,1,0], which results in the first green dot on the right side. While just one dot is shown in the visualisation, in each training phase many embedding vectors would typically be stored with their label.
Classifying the new labels
The goal of all this is to be able to classify labels that MobileNet has never known before. In fact, we're not using the old labels at all; we're only using the new labels from the training phase.
When we want to make a classification, we simply present an image to the headless MobileNet and get another embedding vector. Now we compare this embedding vector to the previously stored ones (the ones for which we also stored the new labels). In 3D space, think of it as finding the shortest route to an existing point/embedding vector in our storage space. The stored embedding vector whose distance is smallest wins, and we take its label as the classification result. Easy, right?
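Here's the same lookup as a sketch, reusing the hypothetical training_store from above. For simplicity it picks the single nearest neighbour by Euclidean distance; a real implementation might use a k-nearest-neighbour vote instead:

```python
# Classify a new image: embed it, then return the label of the closest
# stored embedding vector.
def classify(image):
    vec = headless.predict(image[np.newaxis, ...])[0]
    distances = [np.linalg.norm(vec - stored) for stored, _ in training_store]
    return training_store[int(np.argmin(distances))][1]  # e.g. "A"
```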
As you can see above, the new labels that we classify via this method are then resolved into product recommendations via the SAP Marketing Cloud. In our system, the labels are predefined; think of them as A/B/C, but we can quickly retrain the system to use different trigger products for these labels. A marketeer can now log in to the SAP Marketing Cloud and change the recommendations, or another algorithm can determine these recommendations, e.g. based on products bought together.
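Purely as an illustration (the real mapping lives in SAP Marketing Cloud, and these product IDs are made up), the predefined labels act as simple triggers that can be remapped without touching the classifier:

```python
# Hypothetical label-to-trigger-product mapping; a marketeer can change the
# recommendations behind each label without any retraining.
trigger_products = {
    "A": "sunglasses-classic",
    "B": "sunglasses-sport",
    "C": "reading-glasses",
}
```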
First trials in our showroom
This week, we’ve improved our smart mirror v2 docker image and the overall system (which is based on a Raspberry PI4 with the Google Coral USB Accelerator) in a few ways – notable changes are:
- The Raspberry Pi's Raspbian OS boots into a full-screen Chromium kiosk mode which loads the local smart mirror web app.
- The smartmirror Docker container restarts on boot, too (see the sketch after this list).
- Many smaller improvements and changes to the current web UI: we now have a main screen for the key demo phases and an idle screen which we'll use to educate visitors about this demo.
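For the restart-on-boot behaviour, a restart policy on the container does the trick. As an illustration only, using the docker-py SDK and a made-up image tag, not our actual setup:

```python
# Sketch: a restart policy of "always" brings the container back up whenever
# it stops, including after a reboot of the Pi.
import docker

client = docker.from_env()
client.containers.run(
    "smartmirror:latest",              # hypothetical image tag
    detach=True,
    restart_policy={"Name": "always"},
)
```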
This blog post would not be complete without showing you a picture from some trials in our showroom – here’s Lars trying it out: