## 1. Introduction

In this note, we consider the problem of assortment optimization under the multinomial logit (MNL) choice model with a capacity constraint on the size of the assortment and incomplete information about the model parameters. This problem has recently received considerable attention in the literature (see, e.g., [1–7, 9, 10]).

Two notable recent contributions are from Agrawal *et al.* [1,2], who construct decision policies based on Thompson Sampling and Upper Confidence Bounds, respectively, and show that the regret of these policies—the cumulative expected revenue loss compared with the benchmark of always offering an optimal assortment—is, up to logarithmic terms, bounded by a constant times $\sqrt {N T}$, where $N$ denotes the number of products and $T\geqslant N$ denotes the length of the time horizon. These upper bounds are complemented by the recent work of Chen and Wang [3], who show that the regret of any policy is bounded from below by a positive constant times $\sqrt {N T}$, implying that the policies of Agrawal *et al.* [1,2] are (up to logarithmic terms) asymptotically optimal.

The lower bound of Chen and Wang [3] is proven under the assumption that the product revenues are *constant*—that is, each product generates the same amount of revenue when sold. In practice, different products often have *different* marginal revenues, and it is *a priori* not clear whether the policies of Agrawal *et al.* [1,2] are still asymptotically optimal or whether a lower regret can be achieved. In addition, Chen and Wang [3] assume that $K$, the maximum number of products allowed in an assortment, is at most $\tfrac {1}{4} N$, but point out that this constant $\tfrac {1}{4}$ can probably be increased.

In this note, we settle this open question by proving a $\sqrt {N T}$ regret lower bound for *any* given vector of product revenues. This implies that policies with ${{\mathcal {O}}}(\sqrt {N T})$ regret are asymptotically optimal *regardless* of the product revenue parameters. Furthermore, our result is valid for all $K < \tfrac {1}{2} N$, thereby confirming the intuition of Chen and Wang [3] that the constraint $K \leqslant \tfrac {1}{4} N$ is not tight.

## 2. Model and main result

We consider the problem of dynamic assortment optimization under the MNL choice model. In this model, the number of products is $N \in {{\mathbb {N}}}$. Henceforth, we abbreviate the set of products $\{1,\ldots ,N\}$ as $[N]$. Each product $i\in [N]$ yields a known marginal revenue to the seller of $w_i> 0$. By scaling, we can assume without loss of generality that $w_i\leqslant 1$ for all $i\in [N]$. Each product $i\in [N]$ is associated with a preference parameter $v_i\geqslant 0$, unknown to the seller. Each offered assortment $S\subseteq [N]$ must satisfy a capacity constraint $|S|\leqslant K$, for some given $K \in {{\mathbb {N}}}$ with $K\leqslant N$. For notational convenience, we write

$$\mathcal {A}_K := \{ S \subseteq [N] : |S| \leqslant K \}$$

for the collection of all assortments of size at most $K$, and

$$\mathcal {S}_K := \{ S \subseteq [N] : |S| = K \}$$

for the collection of all assortments of exact size $K$. Let $T \in {{\mathbb {N}}}$ denote a finite time horizon. Then, at each time $t\in [T]$, the seller selects an assortment $S_t\in \mathcal {A}_K$ based on the purchase information available up to and including time $t-1$. Thereafter, the seller observes a single purchase $Y_t\in S_t\cup \{0\}$, where product $0$ indicates a no-purchase. The purchase probabilities within the MNL model are given by

$$\mathbb {P}(Y_t = i \,|\, S_t = S) = \frac {v_i}{\sum _{j \in S \cup \{0\}} v_j}$$
for all $t \in [T]$ and $i \in S \cup \{0\}$, where we write $v_0 := 1$. The assortment decisions of the seller are described by his/her policy: a collection of probability distributions $\pi = (\pi (\,\cdot \,|\,h) : h \in H)$ on $\mathcal {A}_K$, where

$$H := \bigcup _{t=1}^{T} \left ( \mathcal {A}_K \times ([N] \cup \{0\}) \right )^{t-1}$$
is the set of possible histories. Then, conditionally on $h=(S_1, Y_1, \ldots , S_{t-1}, Y_{t-1})$, assortment $S_t$ has distribution $\pi (\,\cdot \,|\,h)$, for all $h \in H$ and all $t \in [T]$. Let $\mathbb {E}^{\pi }_v$ be the expectation operator under policy $\pi$ and preference vector $v\in \mathcal {V} := [0, \infty )^{N}$. The objective for the seller is to find a policy $\pi$ that maximizes the total accumulated revenue or, equivalently, minimizes the accumulated regret:

$$\Delta _{\pi }(T, v) := \sum _{t=1}^{T} \left ( \max _{S \in \mathcal {A}_K} r(S, v) - \mathbb {E}^{\pi }_v [\, r(S_t, v) \,] \right ),$$

where $r(S,v)$ is the expected revenue of assortment $S\subseteq [N]$ under preference vector $v \in \mathcal {V}$:

$$r(S, v) := \sum _{i \in S} w_i \, \frac {v_i}{\sum _{j \in S \cup \{0\}} v_j}.$$

In addition, we define the worst-case regret:

$$\Delta _{\pi }(T) := \sup _{v \in \mathcal {V}} \Delta _{\pi }(T, v).$$
The main result, presented below, states that the regret of any policy can uniformly be bounded from below by a constant times $\sqrt {NT}$.
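As a concrete illustration of these quantities, the sketch below computes MNL purchase probabilities and expected revenues, and brute-forces an optimal capacitated assortment on a toy instance. The function names and the brute-force search are ours, for illustration only; they are not part of the formal development.

```python
import itertools

def choice_probabilities(S, v):
    # MNL purchase probabilities over S and the no-purchase option 0 (v_0 = 1).
    denom = 1.0 + sum(v[i] for i in S)
    probs = {i: v[i] / denom for i in S}
    probs[0] = 1.0 / denom
    return probs

def expected_revenue(S, v, w):
    # r(S, v) = sum_{i in S} w_i v_i / (1 + sum_{j in S} v_j).
    denom = 1.0 + sum(v[i] for i in S)
    return sum(w[i] * v[i] for i in S) / denom

def best_assortment(N, K, v, w):
    # Brute-force maximization of r(S, v) over all assortments of size at most K.
    best, best_rev = (), 0.0
    for k in range(1, K + 1):
        for S in itertools.combinations(range(1, N + 1), k):
            rev = expected_revenue(S, v, w)
            if rev > best_rev:
                best, best_rev = S, rev
    return best, best_rev
```

The exhaustive search is exponential in $N$ and only serves to make the definitions concrete on small instances.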

**Theorem 1.** Suppose that $K < N / 2$. Then, there exists a constant $c_1 > 0$ such that, for all $T \geqslant N$ and for all policies $\pi$,

$$\Delta _{\pi }(T) \geqslant c_1 \sqrt {N T}.$$
## 3. Proof of Theorem 1

### 3.1. Proof outline

The proof of Theorem 1 can be broken up into four steps. First, we define a baseline preference vector $v^{0}\in \mathcal {V}$ and we show that under $v^{0}$ *any* assortment $S\in \mathcal {S}_K$ is optimal. Second, for each $S\in \mathcal {A}_K$, we define a preference vector $v^{S}\in \mathcal {V}$ by

$$v^{S}_i := \begin{cases} (1+\epsilon )\, v^{0}_i & \text {if } i \in S, \\ v^{0}_i & \text {if } i \notin S, \end{cases}$$
for some $\epsilon \in (0,1]$. For each such $v^{S}$, we show that the instantaneous regret from offering a suboptimal assortment $S_t$ is bounded from below by a constant times $|S \setminus S_t|$, the number of products in $S$ that are not offered in $S_t$; cf. Lemma 1 below. This lower bound takes into account how much the assortments $S_1,\ldots ,S_T$ overlap with $S$ when the preference vector is $v^{S}$. Third, let $N_i$ denote the number of times product $i\in [N]$ is contained in $S_1,\ldots ,S_T$, that is,

$$N_i := \sum _{t=1}^{T} \mathbb {1}\{ i \in S_t \}.$$
Then, we use the Kullback–Leibler (KL) divergence and Pinsker's inequality to upper bound the difference between the expected value of $N_i$ under $v^{S}$ and $v^{S\backslash \{i\}}$; see Lemma 2. Fourth, we apply a randomization argument over $\{v^{S}:S\in \mathcal {S}_K\}$, combine the previous steps, and choose $\epsilon$ accordingly to conclude the proof.

The novelty of this work is concentrated in the first two steps. The third and fourth steps closely follow the work of Chen and Wang [3]. These last steps are included (1) because of slight deviations in our setup, (2) for the sake of completeness, and (3) because the proof techniques are extended to the case $K/N<1/2$. In the work of Chen and Wang [3], the lower bound is shown for $K/N\leqslant 1/4$, but the authors already mention that this constraint can probably be relaxed. Our proof confirms that this is indeed the case.

### 3.2. Step 1: Construction of the baseline preference vector

Let $\underline {w} := \min _{i\in [N]} w_i>0$ and define the constant

$$s := \frac {\underline {w}^{2}}{3 + 2 \underline {w}}.$$

Note that $s<\underline {w}$. The baseline preference vector is formally defined as

$$v^{0}_i := \frac {s}{K (w_i - s)}, \quad i \in [N].$$
Now, the expected revenue for any $S\in \mathcal {A}_K$ under $v^{0}$ can be rewritten as

$$r(S, v^{0}) = \frac {\sum _{i\in S} w_i v^{0}_i}{1 + \sum _{i\in S} v^{0}_i} = s \cdot \frac {|S|/K + \sum _{i\in S} v^{0}_i}{1 + \sum _{i\in S} v^{0}_i},$$

where the second equality uses $w_i v^{0}_i = s/K + s\, v^{0}_i$. The expression on the right-hand side is only maximized by assortments $S$ with maximal size $|S| = K$, in which case

$$r(S, v^{0}) = s.$$
It follows that *all* assortments $S$ with size $K$ are optimal.
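The conclusion of Step 1 can be checked numerically. The sketch below assumes the baseline form $v^{0}_i = s/(K(w_i - s))$ with $s = \underline {w}^{2}/(3+2\underline {w})$ (the constant used in Step 2); the instance and the helper function are ours.

```python
import itertools

def expected_revenue(S, v, w):
    # r(S, v) = sum_{i in S} w_i v_i / (1 + sum_{j in S} v_j), with v_0 = 1.
    denom = 1.0 + sum(v[i] for i in S)
    return sum(w[i] * v[i] for i in S) / denom

N, K = 6, 3
w = {i: 0.3 + 0.1 * i for i in range(1, N + 1)}   # arbitrary revenues in (0, 1]
w_min = min(w.values())
s = w_min ** 2 / (3 + 2 * w_min)                  # the constant s from the text
v0 = {i: s / (K * (w[i] - s)) for i in w}         # assumed baseline form

# Every assortment of exact size K earns expected revenue s; smaller ones earn less.
for k in range(1, K + 1):
    for S in itertools.combinations(range(1, N + 1), k):
        r = expected_revenue(S, v0, w)
        if k == K:
            assert abs(r - s) < 1e-12
        else:
            assert r < s
print("all size-%d assortments are optimal with expected revenue s" % K)
```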

### 3.3. Step 2: Lower bound on the instantaneous regret of $v^{S}$

For the second step, we bound the instantaneous regret under $v^{S}$.

**Lemma 1.** Let $S\in \mathcal {S}_K$. Then, there exists a constant $c_2>0$, depending only on $\underline {w}$ and $s$, such that, for all $t\in [T]$ and $S_t\in \mathcal {A}_K$,

As a consequence,

*Proof.* Fix $S\in \mathcal {S}_K$. First, note that, since $\epsilon \leqslant 1$, for any $S'\in \mathcal {A}_K$, it holds that

Second, let $S^{*}\in \textrm {arg max}_{S'\in \mathcal {A}_K}\,r(S',v^{S})$ and $\varrho ^{*} = r(S^{*},v^{S})$. By rewriting the inequality $\varrho ^{*}\geqslant r(S',v^{S})$ for all $S'\in \mathcal {A}_K$, we find that for all $S'\in \mathcal {A}_K$

Let $t\in [T]$ and $S_t\in \mathcal {A}_K$. Then, it holds that

Here, the first inequality is due to (3.3) and the second inequality follows from (3.4) with $S'=S$. Next, note that since $|S_t|\leqslant K$ and $|S|=K$, we find that

Now, term $(a)$ can be bounded from below as

Here, at the final inequality, we used (3.5). Next, term $(b)$ can be bounded from above as

Now, for term $(c)$, we note that $v^{S}_i\geqslant v^{0}_i$ for all $i\in [N]$. In addition, since $r(S^{*},v^{0})\leqslant s$,

This entails an upper bound for $(c)$. Term $(d)$ is bounded from above as

Here, at the final inequality, we used (3.5) and the fact that $\epsilon \leqslant 1$. Now, we combine the upper bounds of $(c)$ and $(d)$ to find that

It follows from (3.6) and (3.7) that

where

Note that the constant $c_2$ is positive if $(\underline {w}-s)^{2}>3s$. This follows from $s=\underline {w}^{2}/(3+2\underline {w})$ since

$$(\underline {w}-s)^{2} - 3s = \frac {\underline {w}^{2} (3+\underline {w})^{2}}{(3+2\underline {w})^{2}} - \frac {3 \underline {w}^{2}}{3+2\underline {w}} = \frac {\underline {w}^{2} \left ( (3+\underline {w})^{2} - 3(3+2\underline {w}) \right )}{(3+2\underline {w})^{2}} = \frac {\underline {w}^{4}}{(3+2\underline {w})^{2}} > 0.$$
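The positivity condition $(\underline {w}-s)^{2}>3s$ for $s=\underline {w}^{2}/(3+2\underline {w})$ can also be verified numerically; the grid, tolerance, and the closed-form simplification of the gap below are ours.

```python
# Check numerically that (w_min - s)^2 > 3s when s = w_min^2 / (3 + 2 w_min),
# which is the condition ensuring c_2 > 0. The closed form of the gap,
# w_min^4 / (3 + 2 w_min)^2, is our own simplification.
for k in range(1, 101):
    w_min = k / 100.0                                  # grid over (0, 1]
    s = w_min ** 2 / (3 + 2 * w_min)
    gap = (w_min - s) ** 2 - 3 * s
    assert abs(gap - w_min ** 4 / (3 + 2 * w_min) ** 2) < 1e-12
    assert gap > 0
print("(w - s)^2 > 3s holds on the whole grid")
```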
Statement (3.2) follows from the additional observation

### 3.4. Step 3: KL divergence and Pinsker's inequality

We denote the dependence of the expected value and the probability on the preference vector $v^{S}$ as $\mathbb {E}_S[\,\cdot \,]$ and ${{\mathbb {P}}}_S(\cdot )$ for $S\in \mathcal {A}_K$. In addition, we write $S \backslash i$ instead of $S \backslash \{i\}$. The lemma below states an upper bound on the KL divergence of ${{\mathbb {P}}}_S$ and ${{\mathbb {P}}}_{S\backslash i}$ and uses Pinsker's inequality to derive an upper bound on the absolute difference between the expected value of $N_i$ under $v^{S}$ and $v^{S\backslash i}$.

**Lemma 2.** Let $S\in \mathcal {S}_K$, $S'\in \mathcal {A}_K$, and $i\in S$. Then, there exists a constant $c_3$, depending only on $\underline {w}$ and $s$, such that

As a consequence,

*Proof.* Let ${{\mathbb {P}}}$ and ${{\mathbb {Q}}}$ be arbitrary probability measures on $S'\cup \{0\}$. It can be shown (see, e.g., Lemma 3 of Chen and Wang [3]) that

where $p_j$ and $q_j$ are the probabilities of outcome $j$ under ${{\mathbb {P}}}$ and ${{\mathbb {Q}}}$, respectively. We apply this result for $p_j$ and $q_j$ defined as

for $j\in S'\cup \{0\}$. First, note that by (3.3), for all $j\in S'\cup \{0\}$,

Now, we bound $|p_j-q_j|$ from above for $j\in S'\cup \{0\}$. Note that for $j=0$ it holds that

For $j\neq i$, since $\epsilon \leqslant 1$, we find that

For $j=i$, we find that

Therefore, we conclude that

where

Next, note that the probability measures ${{\mathbb {P}}}_S$ and ${{\mathbb {P}}}_{S\backslash i}$ are measures on the entire sequence of observations up to time $T$. Then, as a consequence of the chain rule for the KL divergence, we find that

Now, statement (3.8) follows from

where the step in (3.9) follows from, for example, Proposition 4.2 of Levin *et al.* [8], and the final inequality uses Pinsker's inequality.
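The final step above rests on Pinsker's inequality, $\mathrm {TV}(\mathbb {P}, \mathbb {Q}) \leqslant \sqrt {\mathrm {KL}(\mathbb {P} \,\|\, \mathbb {Q})/2}$. The following numerical sketch of this inequality for finite distributions is our own construction and is included only as a sanity check.

```python
import math
import random

def kl(p, q):
    # KL divergence between finite distributions given as aligned lists.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def total_variation(p, q):
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

random.seed(0)
for _ in range(1000):
    raw_p = [random.random() + 1e-3 for _ in range(5)]
    raw_q = [random.random() + 1e-3 for _ in range(5)]
    p = [x / sum(raw_p) for x in raw_p]
    q = [x / sum(raw_q) for x in raw_q]
    # Pinsker's inequality: TV(p, q) <= sqrt(KL(p || q) / 2).
    assert total_variation(p, q) <= math.sqrt(kl(p, q) / 2) + 1e-12
```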

### 3.5. Step 4: Proving the main result

With all the established ingredients, we can finalize the proof of the lower bound on the regret.

*Proof of Theorem 1.* Since $v^{S}\in \mathcal {V}$ for all $S\in \mathcal {S}_K$ and by Lemma 1, we know that

We decompose $(a)$ into two terms:

Recall that $c = K/N\in (0,1/2)$. By summing over $S' = S \backslash i$ instead of over $S$, we bound $(b)$ from above by

where the first inequality follows from $\sum _{i \in [N]} \mathbb {E}_{S' }[N_i] \leqslant T K$, and the second inequality from

Now, $(c)$ can be bounded by applying Lemma 2:

By plugging the upper bounds on $(b)$ and $(c)$ in (3.10), we obtain

Now, we set $\epsilon$ as

This yields, for all $T \geqslant N$,

Finally, note that for $T\geqslant N$ it follows that $T\geqslant \sqrt {NT}$ and therefore

where

## Acknowledgments

The authors thank the Editor in Chief and an anonymous reviewer for their positive remarks and useful comments.

## Competing interest

The authors declare no conflict of interest.