We address the challenges of responsible beauty product recommendation, particularly when it involves comparing a product's color with a person's skin tone, as with foundation and concealer products. Accurate recommendations require inferring both the product attributes and the product-specific facial features, such as skin condition or tone. However, while most product photos are taken under good lighting conditions, face photos are captured under a wide range of conditions. Features extracted from photos taken in poorly illuminated environments can be highly misleading, or even incomparable with the product attributes. Poor illumination can therefore severely degrade recommendation quality.
We introduce a machine learning framework for illumination assessment that classifies images as having either good or bad illumination conditions. We then build an automatic user guidance tool that tells a user holding their camera whether their illumination is good or bad. The user thus receives rapid feedback and can interactively control how the photo for their recommendation is taken. Only a few studies address this problem, mostly because of the lack of datasets that are large, labeled, and diverse in both skin tones and lighting patterns; the absence of such datasets leads to skin tone diversity being neglected. We therefore begin by constructing a diverse synthetic dataset that simulates a variety of skin tones and lighting patterns, complementing an existing facial image dataset. Next, using the synthetic dataset, we train a Convolutional Neural Network (CNN) for illumination assessment that outperforms existing solutions. Finally, we analyze how our work improves shade recommendation for various foundation products.
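The abstract does not specify the classifier's details, but the kind of signal a good/bad illumination classifier must pick up can be illustrated with a deliberately naive, hand-crafted baseline: thresholding the brightness and contrast of a grayscale face crop. This is a hypothetical sketch for intuition only, not the paper's method, which learns these cues with a CNN; the function name and threshold values below are illustrative assumptions.

```python
import numpy as np

def naive_illumination_check(gray, low=0.25, high=0.85, min_std=0.08):
    """Crude good/bad illumination heuristic on a grayscale face crop
    with pixel values in [0, 1]. Thresholds are illustrative, not
    taken from the paper; a learned CNN replaces this logic."""
    mean = float(gray.mean())  # overall exposure level
    std = float(gray.std())    # rough proxy for lighting contrast
    if mean < low:             # underexposed: too dark overall
        return "bad"
    if mean > high:            # overexposed: highlights blown out
        return "bad"
    if std < min_std:          # flat, washed-out lighting
        return "bad"
    return "good"
```

A CNN trained on a labeled dataset can learn far subtler patterns than these global statistics, such as harsh directional shadows across the face, which is one motivation for the learned approach described above.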
Improving the accuracy of beauty product recommendations by assessing face illumination quality
2023