Fixed FE 1
@@ -10,15 +10,15 @@ Multivariate kernel functions can be constructed in various ways, however, a pop
Such a kernel is constructed by combining several univariate kernels into a product, where a univariate kernel is applied in each dimension with a possibly different bandwidth.

Consider a multivariate random variable $\bm{X}=(x_1,\dots ,x_d)$ in $d$ dimensions.
The sample set $\mathcal{X}$ is an $n\times d$ matrix \cite{scott2015}.
The multivariate KDE $\hat{f}$, which defines the estimate pointwise at $\bm{u}=(u_1, \dots, u_d)^T$, is given as
\begin{equation}
\label{eq:mvKDE}
\hat{f}(\bm{u}) = \frac{1}{W} \sum_{i=1}^{n} \frac{w_i}{h_1 \dots h_d} \left[ \prod_{j=1}^{d} K\left( \frac{u_j-x_{i,j}}{h_j} \right) \right] \text{,}
\end{equation}
where the bandwidth is given as a vector $\bm{h}=(h_1, \dots, h_d)$.
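To make \eqref{eq:mvKDE} concrete, the following short NumPy sketch (an illustrative addition, not part of the cited material) evaluates the weighted product-kernel estimate at a single point; the choice of a Gaussian univariate kernel and names such as \texttt{mv\_kde\_point} are assumptions made for this example only.

\begin{verbatim}
import numpy as np

def gauss_kernel(t):
    # univariate Gaussian kernel K(t); any valid univariate kernel could be used
    return np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)

def mv_kde_point(u, X, h, w=None):
    # u: (d,) evaluation point, X: (n, d) sample set (one sample per row),
    # h: (d,) per-dimension bandwidths h_1, ..., h_d, w: (n,) sample weights w_i
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    w = np.ones(n) if w is None else np.asarray(w, dtype=float)
    W = w.sum()                          # total weight W
    # K((u_j - x_{i,j}) / h_j) for every sample i and dimension j
    K = gauss_kernel((u - X) / h)        # shape (n, d)
    # product over dimensions, weighted sum over samples, normalisation
    return np.sum(w * np.prod(K, axis=1)) / (W * np.prod(h))

# toy usage: 200 two-dimensional samples, estimate evaluated at the origin
rng = np.random.default_rng(0)
samples = rng.normal(size=(200, 2))
print(mv_kde_point(np.array([0.0, 0.0]), samples, h=np.array([0.4, 0.6])))
\end{verbatim}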
Note that \eqref{eq:mvKDE} does not include all possible multivariate kernels, such as spherically symmetric kernels, which are based on the rotation of a univariate kernel.
In general, a multivariate product kernel and a spherically symmetric kernel based on the same univariate kernel will differ.
The only exception is the Gaussian kernel, which is spherically symmetric and has independent marginals. % TODO scott cite?!
In addition, smoothing is only possible along the coordinate axes.
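To illustrate why the Gaussian kernel is the exception (a short check added here for clarity, not taken from the literature), the product of standard Gaussian kernels can be rewritten as a function of $\|\bm{u}\|$ alone,
\[
\prod_{j=1}^{d} \frac{1}{\sqrt{2\pi}}\, e^{-u_j^2/2} = (2\pi)^{-d/2}\, e^{-\frac{1}{2}\sum_{j=1}^{d} u_j^2} = (2\pi)^{-d/2}\, e^{-\|\bm{u}\|^2/2} \text{,}
\]
so the Gaussian product kernel is already spherically symmetric; for other kernels, such as the Epanechnikov kernel, the product form and the spherically symmetric form do not coincide.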
@@ -30,7 +30,7 @@ Likewise, the ideas of common and linear binning rule scale with dimensionality
In general, multi-dimensional filters are multi-dimensional convolution operations.
However, by utilizing the separability property of convolution, a straightforward and more efficient implementation can be found.
Convolution is separable if the filter kernel is separable, \ie{} the convolution can be split into successive convolutions with several lower-dimensional kernels.
For example, the Gaussian filter is separable, because $e^{-(x^2+y^2)} = e^{-x^2}\cdot e^{-y^2}$.
Likewise, digital filters based on such kernels are called separable filters.
They are easily applied to multi-dimensional signals, because the input signal can be filtered in each dimension individually by a one-dimensional filter \cite{dspGuide1997}.
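The separability argument can be checked numerically; the following NumPy/SciPy sketch (again an illustrative addition, assuming \texttt{scipy.ndimage} is available) filters a two-dimensional signal once with the two-dimensional product kernel and once with two successive one-dimensional passes, and both results agree up to floating-point error.

\begin{verbatim}
import numpy as np
from scipy import ndimage

# one-dimensional (truncated, normalised) Gaussian filter kernel
x = np.arange(-3, 4)
k1d = np.exp(-0.5 * x**2)
k1d /= k1d.sum()

# two-dimensional product kernel K(x, y) = K(x) * K(y)
k2d = np.outer(k1d, k1d)

rng = np.random.default_rng(1)
signal = rng.normal(size=(64, 64))

# full two-dimensional convolution with the product kernel
full = ndimage.convolve(signal, k2d, mode='constant')

# separable implementation: one-dimensional filtering along each axis in turn
separable = ndimage.convolve1d(signal, k1d, axis=0, mode='constant')
separable = ndimage.convolve1d(separable, k1d, axis=1, mode='constant')

print(np.allclose(full, separable))  # True (up to floating-point error)
\end{verbatim}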
@@ -45,7 +45,7 @@ They are easily applied to multi-dimensional signals, because the input signal c
%This kind of multivariate kernel is called a product kernel, as the multivariate result is the product of the individual univariate kernels.
%
%Given a multivariate random variable $X=(x_1,\dots ,x_d)$ in $d$ dimensions.
%The sample $\bm{X}$ is a $n\times d$ matrix defined as \cite{scott2015}
%\begin{equation}
% \bm{X}=
% \begin{pmatrix}
@@ -61,7 +61,7 @@ They are easily applied to multi-dimensional signals, because the input signal c
% \end{pmatrix} \text{.}
%\end{equation}
%
%The multivariate kernel density estimator $\hat{f}$ which defines the estimate pointwise at $\bm{x}=(x_1, \dots, x_d)^T$ is given as \cite{scott2015}
%\begin{equation}
% \hat{f}(\bm{x}) = \frac{1}{nh_1 \dots h_d} \sum_{i=1}^{n} \left[ \prod_{j=1}^{d} K\left( \frac{x_j-x_{ij}}{h_j} \right) \right] \text{.}
%\end{equation}
@@ -77,7 +77,7 @@ They are easily applied to multi-dimensional signals, because the input signal c
%\end{equation}
% Gaus:
%If the filter kernel is separable, the convolution is also separable \ie{} multi-dimensional convolution can be computed as individual one-dimensional convolutions with a one-dimensional kernel.
%Because of $e^{-(x^2+y^2)} = e^{-x^2}\cdot e^{-y^2}$ the Gaussian filter is separable and can be easily applied to multi-dimensional signals. \todo{source}