Context-Specific Small Language Models vs. LLMs: Deciphering Customer Interactions with AI Assistants
43 Pages · Posted: 30 Sep 2024 · Last revised: 30 Apr 2025
Date Written: August 28, 2024
Abstract
As AI assistants become more prevalent, consumers increasingly turn to them for a multitude of purposes, from controlling smart home devices to asking general knowledge questions to making purchases. In this paper, we investigate methods to predict purchase intent from consumers’ interactions with AI assistants, enabling marketers to advertise more effectively when appropriate. We compare two alternative approaches: a Graph-based method and a Large Language Model (LLM)-based approach. In the Graph-based approach, we construct a bipartite network of nouns and verbs from consumers’ utterances to calculate purchase intent (PI) scores. In the LLM-based approach, we obtain PI scores by prompting ChatGPT. Both PI scores demonstrate strong predictive power on key metrics in search engine marketing: cost-per-click (CPC) and competitive density (CD). More importantly, we show that the Graph-based method captures additional information beyond the LLM-based approach. Moreover, for commercially relevant keywords, the Graph-based approach outperforms the LLM-based approach. Our research demonstrates that a context-specific "small language model" built on limited data can complement and even outperform LLMs in understanding consumers’ purchase intent.
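To make the two scoring approaches concrete, the sketch below builds a toy bipartite noun-verb co-occurrence graph with networkx and computes a simple purchase-intent proxy for a keyword. The toy part-of-speech lexicon, the set of purchase-oriented verbs, and the scoring heuristic are illustrative assumptions for exposition, not the paper's actual pipeline or formula; the commented-out ChatGPT call only indicates the general shape of the LLM-based alternative.

```python
# Illustrative sketch only: the paper does not reproduce its exact graph
# construction or PI scoring formula. The lexicon, verb set, and heuristic
# below are assumptions made for exposition.
import networkx as nx

# Toy POS lexicon standing in for a real tagger (e.g., spaCy) on utterance data.
NOUNS = {"shoes", "weather", "pizza", "lights", "song"}
VERBS = {"buy", "order", "check", "turn", "play"}
PURCHASE_VERBS = {"buy", "order"}  # assumed commercially oriented verbs

utterances = [
    "buy running shoes",
    "order a pizza",
    "check the weather",
    "turn off the lights",
    "play a song",
    "buy a pizza",
]

# Build a bipartite noun-verb graph; edge weights count co-occurrences.
G = nx.Graph()
for utt in utterances:
    tokens = [t.strip(".,").lower() for t in utt.split()]
    nouns = [t for t in tokens if t in NOUNS]
    verbs = [t for t in tokens if t in VERBS]
    for n in nouns:
        G.add_node(n, bipartite="noun")
        for v in verbs:
            G.add_node(v, bipartite="verb")
            w = G.get_edge_data(n, v, default={"weight": 0})["weight"]
            G.add_edge(n, v, weight=w + 1)

def graph_pi_score(noun: str) -> float:
    """Hypothetical PI proxy: weighted share of a noun's verb neighbours that
    are purchase-oriented. Not the paper's actual scoring method."""
    if noun not in G:
        return 0.0
    total = sum(G[noun][v]["weight"] for v in G[noun])
    purchase = sum(G[noun][v]["weight"] for v in G[noun] if v in PURCHASE_VERBS)
    return purchase / total if total else 0.0

for kw in ["pizza", "weather", "shoes"]:
    print(kw, round(graph_pi_score(kw), 2))  # pizza 1.0, weather 0.0, shoes 1.0

# Shape of the LLM-based alternative (assumed prompt and model name; the paper
# prompts ChatGPT but does not publish its exact prompt):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user",
#                "content": "On a 0-1 scale, rate the purchase intent of: "
#                           "'buy running shoes'. Reply with a number only."}],
# )
```

In a fuller version of this idea, the PI score could instead be derived from network embeddings of the bipartite graph (as the paper's keywords suggest), but the neighbour-share heuristic above keeps the sketch self-contained.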
Keywords: Artificial Intelligence, Language Models, Text Analysis, Predicting Purchase Intent, Voice Search, Graph Methods, Network Embeddings, Cost-Per-Click, AI Assistants