Efficient Sampling: Sample Average Inference of Service Capacity in a Queue
53 Pages Posted: 26 Jul 2021
Date Written: July 24, 2021
Thanks to the abundant user-generated content on review websites, customers can easily make casual inferences about service speed by observing and learning from others' service experiences. We study a service system in which customers make their join-or-balk decisions based on a sample of service speed observations, and we examine how this sampling process shapes customer behavior. We show how the amount of information customers use to make informed decisions affects system performance, including throughput, social welfare, and revenue. In particular, we provide exact conditions under which random sampling of service rates by customers with a finite sample size, or full disclosure of the service rate, is efficient from the perspective of maximizing throughput or social welfare. First, when the potential system load is high, throughput is maximized if the sample size is very small; when the potential system load is not high, there is a threshold on the fractional part of the normalized service reward below which revealing the service capacity maximizes throughput and above which a finite sample size maximizes throughput. Moreover, there exists a threshold (possibly equal to one) on the fractional part of the normalized service reward, below which a finite sample size maximizes social welfare and above which revealing the service capacity maximizes social welfare. These analyses yield insights into whether small businesses or a social planner may want to rely on customers' random sampling or simply self-disclose the service capacity information.
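The sampling mechanism described above can be illustrated with a minimal simulation sketch. This is a hypothetical illustration, not the paper's model: it assumes an observable single-server queue with Poisson arrivals and exponential services, where each arriving customer estimates the service rate from a finite sample of past service times and joins only if the estimated reward net of waiting cost is nonnegative. The function name `simulate` and all parameter values are illustrative assumptions.

```python
import random

def simulate(lam, mu, reward, cost, sample_size, horizon=2000.0, seed=0):
    """Estimate the joining fraction in an observable single-server queue.

    Each arriving customer draws `sample_size` i.i.d. past service times
    (Exp(mu)), forms the sample-average rate estimate mu_hat, and joins
    iff reward - cost * (n + 1) / mu_hat >= 0, where n is the observed
    number of customers in the system. A large `sample_size` approximates
    full disclosure of the service rate.
    """
    rng = random.Random(seed)
    t = 0.0
    departures = []          # sorted scheduled departure times of customers in system
    arrivals = joined = 0
    while t < horizon:
        t += rng.expovariate(lam)                       # next Poisson arrival
        departures = [d for d in departures if d > t]   # finished customers leave
        n = len(departures)                             # observed queue length
        arrivals += 1
        # sample-average inference of the unknown service rate
        sample = [rng.expovariate(mu) for _ in range(sample_size)]
        mu_hat = sample_size / sum(sample)
        if reward - cost * (n + 1) / mu_hat >= 0:       # join-or-balk rule
            joined += 1
            start = departures[-1] if departures else t
            departures.append(max(start, t) + rng.expovariate(mu))
    return joined / arrivals  # joining fraction; throughput = lam * fraction

# Compare a one-observation sampler with (approximate) full disclosure.
p_small = simulate(lam=2.0, mu=1.0, reward=3.0, cost=1.0, sample_size=1, seed=42)
p_full = simulate(lam=2.0, mu=1.0, reward=3.0, cost=1.0, sample_size=400, seed=42)
```

Under a high potential load (here arrival rate twice the service rate), comparing `p_small` and `p_full` gives a feel for how the sample size shifts the joining fraction, echoing the throughput comparisons analyzed in the paper.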
Keywords: service system, observable queue, sample average inference, learning, unknown service capacity, queueing economics