Fixed FE 1

MBulli
2018-03-12 22:21:39 +01:00
parent c224967b19
commit 316b1d2911
11 changed files with 76 additions and 72 deletions

@@ -5,7 +5,7 @@
%As the density estimation poses only a single step in the whole process, its computation needs to be as fast as possible.
% not taking too much time from the frame
Consider a set of two-dimensional samples with associated weights, \eg{} generated by a particle filter system.
The overall process for bivariate data is described in Algorithm~\ref{alg:boxKDE}.
Assuming that the given $N$ samples are stored in a sequential list, the first step is to create a grid representation.
@@ -35,7 +35,7 @@ Such knowledge should be integrated into the system to avoid a linear search ove
\Statex
%\For{$1 \textbf{ to } n$}
\Loop{ $n$ \textbf{times}} \Comment{$n$ separated box filter iterations}
\For{$ i=1 \textbf{ to } G_1$}
@@ -51,26 +51,26 @@ Such knowledge should be integrated into the system to avoid a linear search ove
\end{algorithm}
Given the extreme values of the samples and the grid sizes $G_1$ and $G_2$ defined by the user, a $G_1\times G_2$ grid can be constructed using a binning rule from \eqref{eq:simpleBinning} or \eqref{eq:linearBinning}.
As the number of grid points directly affects both computation time and accuracy, a suitable grid should be as coarse as possible to keep the computation fast, yet fine enough to keep the approximation error acceptable.
If the extreme values are known in advance, the computation of the grid is $\landau{N}$; otherwise an additional $\landau{N}$ search is required.
The grid is stored as a linear array in memory, thus its space complexity is $\landau{G_1\cdot G_2}$.
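
To make the binning step concrete, the following Python sketch assumes the usual definitions behind \eqref{eq:simpleBinning} and \eqref{eq:linearBinning}: simple binning assigns the full weight to the nearest grid point, while linear binning splits it bilinearly over the four surrounding grid points. All identifiers are illustrative.
\begin{verbatim}
import numpy as np

def bin_samples(samples, weights, lo, hi, grid_shape, linear=True):
    """Accumulate N weighted 2-D samples onto a G1 x G2 grid."""
    G1, G2 = grid_shape
    grid = np.zeros((G1, G2))
    # continuous grid coordinates in [0, G1-1] x [0, G2-1]
    t = (samples - lo) / (hi - lo) * (np.array([G1, G2]) - 1)
    if not linear:
        i = np.rint(t).astype(int)          # simple binning: nearest grid point
        np.add.at(grid, (i[:, 0], i[:, 1]), weights)
        return grid
    i0 = np.clip(np.floor(t).astype(int), 0, [G1 - 2, G2 - 2])
    f = t - i0                              # bilinear fractions
    for di in (0, 1):
        for dj in (0, 1):
            w = (f[:, 0] if di else 1 - f[:, 0]) * \
                (f[:, 1] if dj else 1 - f[:, 1])
            np.add.at(grid, (i0[:, 0] + di, i0[:, 1] + dj), weights * w)
    return grid
\end{verbatim}
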
Next, the binned data is filtered with a Gaussian using the box filter approximation.
The box filter's width is derived via \eqref{eq:boxidealwidth} from the standard deviation of the approximated Gaussian, which in turn equals the bandwidth of the KDE.
However, the bandwidth $h$ needs to be scaled according to the grid size.
This is necessary as $h$ is defined in the input space of the KDE, \ie{} in relation to the sample data.
In contrast, the bandwidth of a BKDE is defined in the context of the binned data, which differs from the unbinned data due to the discretisation of the samples.
For this reason, $h$ needs to be divided by the bin size to account for the discrepancy between the different sampling spaces.
Given the scaled bandwidth, the required box filter width can then be computed as in \eqref{eq:boxidealwidth}.
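
A small sketch of this bandwidth scaling, under the assumption that \eqref{eq:boxidealwidth} is the common relation $\sigma^2 = n\,(w^2-1)/12$ between the width $w$ of a box filter and the standard deviation of its $n$-fold iteration; names and defaults are illustrative.
\begin{verbatim}
import math

def box_filter_width(h, bin_width, n_passes=3):
    """Map the KDE bandwidth h (input space) to the box filter width
    used on the binned grid."""
    sigma = h / bin_width                      # bandwidth in grid units
    w_ideal = math.sqrt(12.0 * sigma * sigma / n_passes + 1.0)
    w_int = int(round(w_ideal))                # integer-sized variant ...
    if w_int % 2 == 0:                         # ... kept odd for symmetry
        w_int += 1
    return w_ideal, w_int
\end{verbatim}
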
As it offers the best runtime performance, the recursive box filter implementation is used.
If multivariate data is processed, the algorithm is easily extended due to its separability.
Each filter pass is computed in $\landau{G}$ operations; however, an additional memory buffer is required \cite{dspGuide1997}.
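
A minimal sketch of such a recursive pass and its separable application to the grid, assuming zero padding at the borders and an integer radius; the function names and the ping-pong buffering are illustrative choices.
\begin{verbatim}
import numpy as np

def box_pass_1d(src, dst, r):
    """One recursive box filter pass of radius r along a 1-D array.
    The window sum is updated incrementally, so the cost is O(G)
    independent of the filter width; borders are zero-padded."""
    G = len(src)
    norm = 1.0 / (2 * r + 1)
    s = src[:min(r + 1, G)].sum()      # window centred at index 0
    for i in range(G):
        dst[i] = s * norm
        if i + r + 1 < G:              # sample entering the window
            s += src[i + r + 1]
        if i - r >= 0:                 # sample leaving the window
            s -= src[i - r]

def box_blur_2d(grid, r, n_passes=3):
    """Separable Gaussian approximation: n box passes per axis,
    ping-ponging between the grid and one extra buffer."""
    a = grid.astype(float)
    b = np.empty_like(a)
    for _ in range(n_passes):
        for row in range(a.shape[0]):
            box_pass_1d(a[row], b[row], r)        # filter along axis 1
        a, b = b, a
        for col in range(a.shape[1]):
            box_pass_1d(a[:, col], b[:, col], r)  # filter along axis 0
        a, b = b, a
    return a
\end{verbatim}
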
While the integer-sized box filter requires the fewest operations, it incurs a larger approximation error because the ideal filter width has to be rounded to an integer.
Depending on the required accuracy, the extended box filter algorithm can further improve the estimation results, with only a small additional overhead \cite{gwosdek2011theoretical}.
Due to its simple indexing scheme, the recursive box filter can easily be computed in parallel using SIMD operations and parallel computation cores.
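
The following sketch only illustrates the idea behind the extended box filter, namely an integer-radius box extended by two fractional border weights whose variance is matched to the per-pass target standard deviation; it is not taken from the reference implementation of \cite{gwosdek2011theoretical}.
\begin{verbatim}
def extended_box_weights(sigma):
    """Tap weights of one extended box filter pass for a per-pass
    standard deviation sigma (sigma_total / sqrt(n) for n passes).

    A plain box of integer radius r has variance r*(r+1)/3; two extra
    taps with fractional weight alpha at positions +-(r+1) are added
    so that the variance matches sigma^2 exactly."""
    var = sigma * sigma
    r = 0
    while (r + 1) * (r + 2) / 3.0 <= var:  # largest r with r*(r+1)/3 <= var
        r += 1
    alpha = (2 * r + 1) * (var - r * (r + 1) / 3.0) \
            / (2.0 * ((r + 1) ** 2 - var))
    norm = 1.0 / (2 * r + 1 + 2 * alpha)
    return r, norm, alpha * norm           # radius, inner weight, border weight
\end{verbatim}
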
Finally, the most likely state can be obtained from the filtered data, \ie{} from the estimated discrete density, by searching the filtered data for its maximum value.
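
The maximum search itself might look as follows, mapping the grid index of the mode back into the sample space; the grid layout matches the binning sketch above and all names are illustrative.
\begin{verbatim}
import numpy as np

def most_likely_state(density, lo, hi):
    """Return the grid point with the highest estimated density,
    mapped back from grid indices to the sample space."""
    i, j = np.unravel_index(np.argmax(density), density.shape)
    G1, G2 = density.shape
    x = lo[0] + i * (hi[0] - lo[0]) / (G1 - 1)
    y = lo[1] + j * (hi[1] - lo[1]) / (G2 - 1)
    return (x, y), density[i, j]
\end{verbatim}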