Fixed many bugs

2018-02-27 10:49:05 +01:00
parent 9d4927a365
commit 1fb9461a5f
8 changed files with 67 additions and 68 deletions


@@ -5,7 +5,7 @@ Each method can be seen as several one-dimensional problems combined to a multi-
%However, with an increasing number of dimensions the computation time significantly increases.
In the following, the generalization to multi-dimensional input is briefly outlined.
In order to estimate a multivariate density using KDE or BKDE a multivariate kernel needs to be used.
In order to estimate a multivariate density using KDE or BKDE, a multivariate kernel needs to be used.
Multivariate kernel functions can be constructed in various ways; a popular choice is the product kernel.
Such a kernel is constructed by combining several univariate kernels into a product, where one kernel is applied in each dimension with a possibly different bandwidth.
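As an illustration of this product-kernel construction, the following minimal sketch evaluates a multivariate KDE with a Gaussian univariate kernel and a per-dimension bandwidth vector; the names (product_kernel_kde, sample, bandwidths, query_points) are assumptions for illustration and are not taken from the changed files.

```python
# Minimal sketch of a multivariate KDE with a Gaussian product kernel
# (illustrative names, not the thesis code).
import numpy as np

def product_kernel_kde(sample, bandwidths, query_points):
    """Evaluate the product-kernel KDE at each query point.

    sample       : (n, d) array of observations
    bandwidths   : (d,) bandwidth vector h = (h_1, ..., h_d)
    query_points : (m, d) array of evaluation points u
    """
    n, d = sample.shape
    # Scaled differences (u_j - X_ij) / h_j for every query/sample pair.
    z = (query_points[:, None, :] - sample[None, :, :]) / bandwidths   # (m, n, d)
    # Univariate Gaussian kernel applied per dimension, then multiplied.
    k = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)                       # (m, n, d)
    # Average over the sample and normalise by the bandwidth product.
    return k.prod(axis=2).sum(axis=1) / (n * bandwidths.prod())

# Example: 2-D standard-normal sample, estimate evaluated at the origin.
rng = np.random.default_rng(0)
x = rng.standard_normal((500, 2))
print(product_kernel_kde(x, np.array([0.3, 0.3]), np.zeros((1, 2))))
```

Note that the per-dimension bandwidths only scale the axes independently, which is exactly the axis-aligned smoothing discussed below.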
@@ -19,21 +19,21 @@ The multivariate KDE $\hat{f}$ which defines the estimate pointwise at $\bm{u}=(
where the bandwidth is given as a vector $\bm{h}=(h_1, \dots, h_d)$.
Note that \eqref{eq:mvKDE} does not include all possible multivariate kernels, such as spherically symmetric kernels, which are based on rotation of a univariate kernel.
In general a multivariate product and spherically symmetric kernel based on the same univariate kernel will differ.
The only exception is the Gaussian kernel which is spherical symmetric and has independent marginals. % TODO scott cite?!
In addition, only smoothing in the direction of the axes are possible.
In general, a multivariate product and spherically symmetric kernel based on the same univariate kernel will differ.
The only exception is the Gaussian kernel, which is spherically symmetric and has independent marginals. % TODO scott cite?!
In addition, only smoothing in the direction of the axes is possible.
If smoothing in other directions is necessary, the computation needs to be done on a prerotated sample set and the estimate needs to be rotated back to fit the original coordinate system \cite{wand1994fast}.
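As a concrete reading of this pre-rotation step, the short sketch below reuses the hypothetical product_kernel_kde from the sketch above: the sample is rotated, the axis-aligned estimator is applied, and the estimate is evaluated at the rotated query point; since a rotation has unit Jacobian, this value equals the estimate at the original point. The rotation angle, bandwidths and sample are arbitrary assumptions.

```python
# Sketch of smoothing along a rotated direction: pre-rotate the sample, apply the
# axis-aligned product-kernel estimator, evaluate at the rotated query point.
# Reuses product_kernel_kde from the sketch above; all parameters are assumptions.
import numpy as np

theta = np.pi / 4                                   # assumed smoothing direction
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rng = np.random.default_rng(0)
x = rng.standard_normal((500, 2)) @ np.array([[1.0, 0.9],
                                              [0.9, 1.0]])   # correlated sample
u = np.zeros((1, 2))                                # evaluation point, original coordinates

# A rotation has |det R| = 1, so the density value carries over unchanged.
estimate = product_kernel_kde(x @ R.T, np.array([0.4, 0.4]), u @ R.T)
```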
For the multivariate BKDE, in addition to the kernel function the grid and the binning rules need to be extended to multivariate data.
For the multivariate BKDE, in addition to the kernel function, the grid and the binning rules need to be extended to multivariate data.
Their extensions are rather straightforward, as the grid is easily defined in multiple dimensions.
Likewise, the ideas of common and linear binning rule scale with the dimensionality \cite{wand1994fast}.
Likewise, the ideas of the common and linear binning rules scale with dimensionality \cite{wand1994fast}.
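One possible reading of multivariate linear binning is sketched below, assuming an equally spaced grid in each dimension; it is an illustration of the idea, not the implementation in the changed files, and all names are assumptions. Each observation spreads its unit weight over the $2^d$ surrounding grid points with multilinear weights.

```python
# Sketch of multivariate linear binning on a regular grid (illustrative only).
import numpy as np
from itertools import product

def linear_binning(sample, grid_min, grid_max, grid_shape):
    """Distribute each observation onto the 2^d surrounding grid points."""
    grid_shape = np.asarray(grid_shape)
    counts = np.zeros(grid_shape)
    delta = (grid_max - grid_min) / (grid_shape - 1)    # grid spacing per dimension
    pos = (sample - grid_min) / delta                   # fractional grid coordinates
    lower = np.floor(pos).astype(int)
    frac = pos - lower                                  # offset towards the upper neighbour
    d = sample.shape[1]
    for point_lower, point_frac in zip(lower, frac):
        # Every corner of the enclosing cell receives the product of 1-D weights.
        for corner in product((0, 1), repeat=d):
            idx = tuple(np.clip(point_lower + np.array(corner), 0, grid_shape - 1))
            counts[idx] += np.prod(np.where(corner, point_frac, 1.0 - point_frac))
    return counts

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(200, 2))
grid_counts = linear_binning(x, np.zeros(2), np.ones(2), (16, 16))
print(grid_counts.sum())   # total weight equals the sample size (up to rounding)
```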
In general multi-dimensional filters are multi-dimensional convolution operations.
However, by utilizing the separability property of convolution a straightforward and a more efficient implementation can be found.
In general, multi-dimensional filters are multi-dimensional convolution operations.
However, by utilizing the separability property of convolution, a straightforward and more efficient implementation can be found.
Convolution is separable if the filter kernel is separable, i.e., it can be split into successive convolutions with several kernels.
For example, the Gaussian filter is separable, since $e^{-(x^2+y^2)} = e^{-x^2}\cdot e^{-y^2}$.
Likewise, digital filters based on such kernels are called separable filters.
They are easily applied to multi-dimensional signals, because the input signal can be filtered in each dimension individually by an one-dimensional filter.
They are easily applied to multi-dimensional signals, because the input signal can be filtered in each dimension individually by a one-dimensional filter \cite{dspGuide1997}.
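To illustrate the separability argument, the sketch below smooths a $d$-dimensional array of grid counts with a one-dimensional Gaussian kernel applied successively along each axis; the result coincides with a direct convolution with the corresponding multi-dimensional product kernel. The kernel width and truncation radius are assumptions chosen for illustration.

```python
# Sketch of separable filtering: d-dimensional Gaussian smoothing realised as
# successive 1-D convolutions, one along each axis (illustrative parameters).
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_kernel_1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()                      # normalised, truncated Gaussian

def separable_gaussian_filter(grid_counts, sigma):
    """Smooth a d-dimensional array by filtering every axis with the same 1-D kernel."""
    k = gaussian_kernel_1d(sigma, radius=int(4 * sigma))
    out = grid_counts
    for axis in range(grid_counts.ndim):
        out = convolve1d(out, k, axis=axis, mode='constant')
    return out

rng = np.random.default_rng(0)
counts = rng.random((64, 64))
smoothed = separable_gaussian_filter(counts, sigma=2.0)
```

With an equally spaced grid, applying such a filter to the binned counts corresponds to the convolution that evaluates the BKDE on the grid.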