Visual Listening In: Extracting Brand Image Portrayed on Social Media
46 Pages · Posted: 6 Jun 2017 · Last revised: 23 Sep 2019
Date Written: August 24, 2019
We propose a “visual listening in” approach (i.e., mining visual content posted by users) to measure how brands are portrayed on social media. Using a deep-learning framework, we develop BrandImageNet, a multi-label convolutional neural network model, to predict the presence of perceptual brand attributes in the images that consumers post online. We validate model performance using human judges and find a high degree of agreement between our model and human evaluations of images. We apply the BrandImageNet model to brand-related images posted on social media and compute a brand-portrayal metric from the model's predictions for 56 national brands in the apparel and beverages categories. We find a strong link between brand portrayal in consumer-created images and consumer brand perceptions collected through survey tools. Images are close to surpassing text as the medium of choice for online conversations. They convey rich information about the consumption experience, attitudes, and feelings of the user. We show that valuable insights can be efficiently extracted from consumer-created images. Firms can use the BrandImageNet model to automatically monitor their brand portrayal in real time and to better understand consumer perceptions of, and attitudes toward, their own and competitors’ brands.
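The abstract does not spell out how per-image predictions are aggregated into the brand-portrayal metric. A minimal sketch, assuming the metric simply averages each attribute's predicted probability over all images associated with a brand (the attribute names and the averaging rule below are illustrative assumptions, not the paper's published specification):

```python
import numpy as np

# Hypothetical perceptual attributes; the paper's actual attribute set may differ.
ATTRIBUTES = ["glamorous", "rugged", "healthy", "fun"]

def brand_portrayal(pred_probs: np.ndarray) -> dict:
    """Aggregate per-image multi-label predictions into one portrayal score
    per attribute, by averaging over the brand's images.

    pred_probs: array of shape (n_images, n_attributes), each entry the
    model's predicted probability that the attribute is present in the image.
    """
    assert pred_probs.ndim == 2 and pred_probs.shape[1] == len(ATTRIBUTES)
    return dict(zip(ATTRIBUTES, pred_probs.mean(axis=0).round(3)))

# Toy predictions for three consumer-posted images of one brand.
probs = np.array([
    [0.9, 0.1, 0.2, 0.7],
    [0.8, 0.2, 0.1, 0.9],
    [0.7, 0.3, 0.3, 0.8],
])
print(brand_portrayal(probs))
```

Column-wise averaging keeps the metric on the same [0, 1] scale as the model's per-image probabilities, which makes scores comparable across brands with different numbers of posted images.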
Keywords: Social Media, Visual Marketing, Brand Perceptions, Computer Vision, Machine Learning, Deep Learning, Transfer Learning, Big Data