\section{Binned Kernel Density Estimation}
The KDE is often the preferred tool to estimate a density function from discrete data samples because it produces a continuous estimate and is highly flexible.
Given a univariate random sample $X=\{X_1, X_2, \dots, X_n\}$, the kernel estimator $\hat{f}$ which defines the estimate at the point $x$ is given as
\begin{equation}
\label{eq:kde}
\hat{f}(x) = \frac{1}{n} \sum_{i=1}^{n} K_h(x-X_i)
\end{equation}
where $K_h(t)=K(t/h)/h$ is the normalized kernel \cite[138]{scott2015} and $h\in\R^+$ is an arbitrary smoothing parameter called bandwidth.
Any function which satisfies $\int K(u) \dop{u} = 1$ is a valid kernel.
In principle any such function can be used, but the usual advice is to choose a symmetric, low-order polynomial kernel.
Several kernel functions are popular in practice, such as the Uniform, Gaussian, Epanechnikov, or Silverman kernel \cite[152]{scott2015}.
While the kernel estimate inherits all the properties of the kernel, the choice of a non-optimal kernel is usually of minor importance \cite[151f.]{scott2015}.
As a matter of fact, the quality of the kernel estimate is primarily determined by the smoothing parameter $h$ \cite[145]{scott2015}; throughout this work the bandwidth is assumed to be given.
It is common practice to assume that the data is approximately Gaussian, and therefore the Gaussian kernel is frequently used.
In this work we choose the Gaussian kernel for reasons of computational efficiency, as our approach is based on the approximation of the Gaussian filter.
The Gaussian kernel is given as
\begin{equation}
\label{eq:gausKern}
K_G(u)=\frac{1}{\sqrt{2\pi}} \expp{- \frac{u^2}{2} }
\text{.}
\end{equation}
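For reference, the direct estimator can be evaluated as in the sketch below (using NumPy; the function name and interface are our own illustrative choices, not taken from the cited literature). It computes \eqref{eq:kde} with the Gaussian kernel \eqref{eq:gausKern} and requires one kernel evaluation per pair of evaluation point and observation, i.e. $\landau{nm}$ evaluations for $m$ evaluation points.
\begin{verbatim}
import numpy as np

def gauss_kde(x, sample, h):
    """Direct evaluation of the Gaussian kernel estimate at the points x."""
    x = np.asarray(x, dtype=float)[:, None]            # evaluation points
    sample = np.asarray(sample, dtype=float)[None, :]  # data X_1, ..., X_n
    u = (x - sample) / h                               # (x - X_i) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)     # Gaussian kernel values
    return k.sum(axis=1) / (sample.size * h)           # (1/(n h)) sum_i K(.)
\end{verbatim}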
So far only the univariate case was considered.
This is due to the fact that univariate kernel estimators can quite easily be extended to multivariate distributions.
A common approach is to apply a univariate kernel, with a possibly different bandwidth, in each dimension.
This kind of multivariate kernel is called a product kernel, as the multivariate result is the product of the individual univariate kernels.
Consider a multivariate random variable $X=(x_1,\dots ,x_d)$ in $d$ dimensions.
The sample $\bm{X}$ is an $n\times d$ matrix defined as \cite[162]{scott2015}
\begin{equation}
\bm{X}=
\begin{pmatrix}
X_1 \\
\vdots \\
X_n \\
\end{pmatrix}
=
\begin{pmatrix}
x_{11} & \dots & x_{1d} \\
\vdots & \ddots & \vdots \\
x_{n1} & \dots & x_{nd}
\end{pmatrix}
\text{.}
\end{equation}
The multivariate kernel density estimator $\hat{f}$ which defines the estimate pointwise at $\bm{x}=(x_1, \dots, x_d)^T$ is given as \cite[162]{scott2015}
\begin{equation}
\hat{f}(\bm{x}) = \frac{1}{nh_1 \dots h_d} \sum_{i=1}^{n} \left[ \prod_{j=1}^{d} K\left( \frac{x_j-x_{ij}}{h_j} \right) \right]
\text{,}
\end{equation}
where the bandwidth is given as a vector $\bm{h}=(h_1, \dots, h_d)$.

The flexibility of the KDE comes at the expense of computational efficiency, which has led to the development of more efficient computation schemes.
Besides the number of evaluation points, the computation time depends on the number of data points $n$.
In general, reducing the size of the sample negatively affects the accuracy of the estimate.
Still, the sample size is a suitable parameter to speed up the computation.
Silverman \cite{silverman1982algorithm} suggested reducing the number of individual data points by combining adjacent points into data bins.
This approximation is called the binned kernel density estimate (BKDE) and has been analysed extensively \cite{fan1994fast} \cite{wand1994fast} \cite{hall1996accuracy} \cite{holmstrom2000accuracy}.
Usually the data is binned over an equidistant grid with grid spacing $\delta$.
Because of the equally-spaced grid many kernel evaluations are identical and can be reused, which greatly reduces the number of evaluated kernels and naturally leads to a reduced computation time \cite{fan1994fast}.

First the data, i.e.\ a random sample $X$, has to be assigned to the grid.
A binning rule distributes an observation $x$ among the grid points $g_j=j\delta$ for $j\in\Z$ and can be represented as a set of functions $\{ w_j(x,\delta), j\in\Z \}$.
For the computation a finite grid on an interval $[a,b]$ containing the data is used, so that the number of grid points is $G=(b-a)/\delta+1$.
Given a binning rule $w_j$, the BKDE $\tilde{f}$ of a density $f$ computed pointwise at the grid point $g_x$ is given as
\begin{equation}
\label{eq:binKde}
\tilde{f}(g_x) = \frac{1}{n} \sum_{j=1}^{G} C_j K_h(g_x-g_j)
\end{equation}
where $G$ is the number of grid points and
\begin{equation}
\label{eq:gridCnts}
C_j=\sum_{i=1}^{n} w_j(x_i,\delta)
\end{equation}
is the grid count at grid point $g_j$ \cite{hall1996accuracy}.
The binned estimator thus introduces two additional parameters, the grid spacing $\delta$ and the binning rule $w_j$, which affect the accuracy of the estimation.
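To make the binning step concrete, the following sketch computes the grid counts \eqref{eq:gridCnts} and evaluates \eqref{eq:binKde} with the Gaussian kernel naively at every grid point. It assumes the simple (nearest grid point) binning rule defined below; the function name and interface are purely illustrative.
\begin{verbatim}
import numpy as np

def binned_gauss_kde(sample, h, a, b, delta):
    """Bin the sample onto an equidistant grid on [a, b] and evaluate the
    BKDE at every grid point, i.e. O(G^2) kernel evaluations."""
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    grid = a + delta * np.arange(round((b - a) / delta) + 1)  # g_1, ..., g_G
    G = grid.size
    # simple binning: each observation goes to its nearest grid point
    idx = np.clip(np.rint((sample - a) / delta).astype(int), 0, G - 1)
    counts = np.bincount(idx, minlength=G)                    # grid counts C_j
    est = np.empty(G)
    for j in range(G):                                        # naive O(G^2) loop
        u = (grid[j] - grid) / h
        est[j] = np.sum(counts * np.exp(-0.5 * u**2)) / (np.sqrt(2*np.pi) * n * h)
    return grid, counts, est
\end{verbatim}
The double loop over pairs of grid points is exactly what the reuse of kernel values and the convolution view discussed below avoid.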
The choice of the number of grid points is crucial, because it determines the trade-off between the approximation error introduced by the binning and the computational speed of the algorithm \cite{wand1994fast}.
The choice of the binning rule also affects the accuracy of the approximation.
Several binning rules are possible \cite{hall1996accuracy}, but the most commonly used are the simple binning rule
\begin{align}
\label{eq:simpleBinning}
w_j(x,\delta) &=
\begin{cases}
1 & \text{if } x \in ((j-\frac{1}{2})\delta, (j+\frac{1}{2})\delta ] \\
0 & \text{else}
\end{cases}
\end{align}
and the linear binning rule
\begin{align}
w_j(x,\delta) &=
\begin{cases}
1-|\delta^{-1}x-j| & \text{if } |\delta^{-1}x-j|\le1 \\
0 & \text{else.}
\end{cases}
\end{align}
An advantage of these widely used binning rules is that their effect on the approximation has been investigated extensively and is well understood \cite{wand1994fast} \cite{hall1996accuracy} \cite{holmstrom2000accuracy}.

While the estimate can be computed efficiently on a grid, it is not obvious how fine the grid should be chosen.
Because the computation time heavily depends on the grid size, it is desirable to choose a grid as small as possible without losing too much accuracy.
In general there is no definite answer, because the appropriate amount of binning depends on the structure of the unknown density and on the sample size.
In particular, the roughness of the unknown density directly affects the required grid size: coarser grids allow a greater speedup but at the same time might conceal important details of the unknown density \cite{wand1994fast}.

As already stated, the computational savings are achieved by reducing the number of evaluated kernels.
A naive implementation of \eqref{eq:binKde} reduces the number of evaluations to $\landau{G^2}$ \cite{fan1994fast}.
Because of the fixed grid spacing $\delta$, most of the kernel evaluations coincide, as each difference $g_j-g_{j-k}=k\delta$ is independent of $j$ \cite{fan1994fast}.
Therefore many evaluated kernels can be reused, so that the number of kernel evaluations is reduced to $\landau{G}$ \cite{fan1994fast}.
More important for this work, however, is the fact that the BKDE can be seen as a convolution operation.
Once the grid counts $C_j$ in \eqref{eq:gridCnts} and the kernel values are computed, they need to be combined, which is in fact a discrete convolution \cite{wand1994fast}.
This makes it possible to apply a wide range of well-studied techniques from the DSP field.
Often an FFT-based convolution scheme is used to compute the estimate efficiently \cite{silverman1982algorithm}\cite[210ff.]{scott2015}.
Using the Gaussian kernel from \eqref{eq:gausKern} in conjunction with the BKDE results in
\begin{equation}
\tilde{f}(g_x)=\frac{1}{nh\sqrt{2\pi}} \sum_{j=1}^{G} C_j \expp{-\frac{(g_x-g_j)^2}{2h^2}}
\text{.}
\end{equation}
As already stated, this formula is a convolution of the binned data and the kernel.
More precisely, it is a discrete convolution of the finite grid counts with the Gaussian function.
In terms of DSP this is equivalent to filtering the binned data with a Gaussian filter.
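As an illustration of this convolution view, the sketch below computes the Gaussian BKDE from precomputed grid counts $C_j$ by an FFT-based discrete convolution, which reduces the cost from $\landau{G^2}$ to $\landau{G \log G}$ operations. It is a minimal sketch under these assumptions and not Silverman's original algorithm; the function name and interface are again our own.
\begin{verbatim}
import numpy as np

def bkde_gauss_fft(counts, h, delta):
    """Gaussian BKDE from the grid counts C_j via FFT-based discrete
    convolution, i.e. filtering the binned data with a Gaussian filter."""
    counts = np.asarray(counts, dtype=float)
    n, G = counts.sum(), counts.size
    # sampled Gaussian filter k_l = K_h(l * delta) for l = -(G-1), ..., G-1
    lags = np.arange(-(G - 1), G) * delta
    kern = np.exp(-0.5 * (lags / h) ** 2) / (np.sqrt(2.0 * np.pi) * h)
    # zero-padded FFTs yield the linear (non-circular) convolution
    L = counts.size + kern.size - 1
    conv = np.fft.irfft(np.fft.rfft(counts, L) * np.fft.rfft(kern, L), L)
    # the estimate at grid point g_j is entry j + (G-1) of the full
    # convolution, because the filter is centred at index G-1
    return conv[G - 1:2 * G - 1] / n
\end{verbatim}
Combined with the binning sketch above, a call such as \texttt{bkde\_gauss\_fft(counts, h, delta)} returns the estimate at all $G$ grid points.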