Censored Poisson Distribution
This post examines the mean of a censored Poisson distribution. The Poisson distribution has the probability mass function:
\[\Pr(X=k)= f(\lambda,k)=\frac{\lambda ^{k}e^{-\lambda }}{k!} \quad; \quad k\in \{0,1,2,\dots\}\]We are interested in the random variable \(Y = \min(X,C)\), where \(X\) is a Poisson random variable with rate \(\lambda\) and \(C\) is a constant. Specifically, we show some useful properties of \(\mathbb{E}[Y]\). This distribution shows up in at least two places. The first is inventory control: if a warehouse has inventory level \(C\) and \(X\) customers arrive in a day, the number of sales is \(Y=\min(X,C)\), and the quantity \(X-Y\) is called lost sales. Typically, we do not observe \(X\) and can only observe \(Y\). The second is queueing: if a queue has \(C\) customers in the system and \(X\) customers could have been processed had there been infinitely many customers, the number of customers processed is \(Y=\min(X,C)\). As before, in this application we observe only \(Y\), not \(X\).
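Before analyzing \(\mathbb{E}[Y]\), it helps to be able to compute it directly. A minimal Python sketch (the function names `poisson_pmf` and `censored_mean` are my own):

```python
import math

def poisson_pmf(lam, k):
    # Poisson pmf: f(lam, k) = lam^k * e^{-lam} / k!
    return lam ** k * math.exp(-lam) / math.factorial(k)

def censored_mean(lam, C):
    # E[min(X, C)] = sum_{k < C} k * P(X=k)  +  C * P(X >= C), computed exactly
    head = sum(k * poisson_pmf(lam, k) for k in range(C))
    tail_prob = 1.0 - sum(poisson_pmf(lam, k) for k in range(C))
    return head + C * tail_prob
```

Censoring can only reduce the mean, so \(\mathbb{E}[\min(X,C)] \leq \lambda\), with the gap vanishing as \(C\) grows.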
\[\begin{align*} g(\lambda, C) = \mathbb{E}[Y] &= \sum_{k=0}^\infty \min(k,C) \Pr(X=k) \end{align*}\]We show that \(g\) is increasing and concave in both \(\lambda\) and \(C\). Consider the derivative of \(f(\lambda,k)\) with respect to \(\lambda\):
\[\frac{\partial f }{\partial \lambda} (\lambda, k) = \frac{k\lambda ^{k-1}e^{-\lambda }}{k!} - \frac{\lambda ^{k}e^{-\lambda }}{k!} = \Pr(X=k-1) - \Pr(X=k),\]with the convention \(\Pr(X=-1)=0\). Taking the partial derivative of \(g\) with respect to \(\lambda\):
\[\begin{align*} \frac{\partial g }{\partial \lambda} (\lambda, C)&= \sum_{k=0}^\infty \min(k,C)\frac{\partial f(\lambda, k) }{\partial \lambda} \\ &= \sum_{k=0}^\infty \min(k,C) ( \Pr(X=k-1) - \Pr(X=k))\\ &= \sum_{k=0}^\infty (\min(k+1,C)-\min(k,C)) \Pr(X=k) \\ &= \sum_{k=0}^{C-1}\Pr(X=k) = \Pr(X\leq C-1) = \sum_{k=0}^{C-1} f(\lambda, k) \end{align*}\]The third line follows by shifting the summation index in the \(\Pr(X=k-1)\) term. So, \(g\) is increasing in \(\lambda\). Taking the derivative again:
\[\begin{align*} \frac{\partial^2 g }{\partial \lambda^2} (\lambda, C)&= \sum_{k=0}^{C-1} \frac{\partial f(\lambda,k)}{\partial \lambda} = \sum_{k=0}^{C-1} (\Pr(X=k-1) - \Pr(X=k)) \\ &= - \Pr(X = C-1) \end{align*}\]The sum telescopes, so \(g\) is concave in \(\lambda\). Next, consider the forward difference \(g(\lambda, C+1) - g(\lambda, C)\):
\[\begin{align*} g(\lambda, C+1) - g(\lambda, C) &= \sum_{k=0}^\infty (\min(k,C+1)-\min(k,C)) \Pr(X=k)\\ &=\sum_{k=C+1}^\infty \Pr(X=k) = \Pr(X\geq C+1) \end{align*}\]So, \(g\) is increasing in \(C\). Next, consider the second forward difference, \(\Pr(X\geq C+2) - \Pr(X\geq C+1)\):
\[\Pr(X\geq C+2) - \Pr(X\geq C+1) = -\Pr(X=C+1)\]So, \(g\) is concave in \(C\).
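The four facts derived so far — the two \(\lambda\)-derivatives and the two forward differences in \(C\) — can be sanity-checked numerically. A rough sketch, using the exact head-plus-tail formula for \(g\) (names and tolerances are mine):

```python
import math

def poisson_pmf(lam, k):
    # Poisson pmf: f(lam, k) = lam^k * e^{-lam} / k!
    return lam ** k * math.exp(-lam) / math.factorial(k)

def g(lam, C):
    # E[min(X, C)] = sum_{k < C} k * P(X=k)  +  C * P(X >= C)
    head = sum(k * poisson_pmf(lam, k) for k in range(C))
    return head + C * (1.0 - sum(poisson_pmf(lam, k) for k in range(C)))

lam, C = 2.5, 4

# Central-difference approximations to the lambda-derivatives of g.
h = 1e-5
d1 = (g(lam + h, C) - g(lam - h, C)) / (2 * h)                    # ~  P(X <= C-1)
h2 = 1e-4
d2 = (g(lam + h2, C) - 2 * g(lam, C) + g(lam - h2, C)) / h2 ** 2  # ~ -P(X = C-1)

# Exact forward differences in C.
dC1 = g(lam, C + 1) - g(lam, C)                      # =  P(X >= C+1)
dC2 = g(lam, C + 2) - 2 * g(lam, C + 1) + g(lam, C)  # = -P(X = C+1)
```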
Finally, verify that the mixed difference satisfies:
\[\frac{\partial g }{\partial \lambda}(\lambda, C+1) - \frac{\partial g }{\partial \lambda} (\lambda, C) = \frac{\partial (g(\lambda, C+1) - g(\lambda, C)) }{\partial \lambda} = \Pr(X=C)\]Forming the Hessian and computing its determinant, we see
\[\det\begin{bmatrix}-\Pr(X=C-1) & \Pr(X=C)\\ \Pr(X=C) & -\Pr(X=C+1)\end{bmatrix} = \Pr(X=C-1)\Pr(X=C+1) - \Pr(X=C)^2 < 0,\]since \(\Pr(X=C)^2 = \tfrac{C+1}{C}\,\Pr(X=C-1)\Pr(X=C+1) > \Pr(X=C-1)\Pr(X=C+1)\) (the Poisson pmf is strictly log-concave). So \(g\) is not jointly concave in \((\lambda, C)\).
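As a numerical sanity check, the determinant is easy to evaluate directly (the helper name is mine):

```python
import math

def poisson_pmf(lam, k):
    # Poisson pmf: f(lam, k) = lam^k * e^{-lam} / k!
    return lam ** k * math.exp(-lam) / math.factorial(k)

def hessian_det(lam, C):
    # det of [[-P(X=C-1), P(X=C)], [P(X=C), -P(X=C+1)]]
    return (poisson_pmf(lam, C - 1) * poisson_pmf(lam, C + 1)
            - poisson_pmf(lam, C) ** 2)
```

It comes out strictly negative for every \(\lambda > 0\) and \(C \geq 1\) tried.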