replace algo pos

toni
2018-02-20 15:58:18 +01:00
parent 4c72c4f15f
commit 8370920d7d
3 changed files with 12 additions and 12 deletions

View File

@@ -5,8 +5,8 @@ We now empirically evaluate the accuracy of our method, using the mean integrate
The ground truth is given by $N=1000$ synthetic samples drawn from a bivariate normal mixture density $f$
\begin{equation}
\begin{split}
-\bm{X} \sim &\G{\VecTwo{0}{0}}{0.5\bm{I}} + \G{\VecTwo{3}{0}}{\bm{I}} \\
-&+ \G{\VecTwo{0}{3}}{\bm{I}} + \G{\VecTwo{-3}{0} }{\bm{I}} + \G{\VecTwo{0}{-3}}{\bm{I}}
+\bm{X} \sim & ~\G{\VecTwo{0}{0}}{0.5\bm{I}} + \G{\VecTwo{3}{0}}{\bm{I}} + \G{\VecTwo{0}{3}}{\bm{I}} \\
+&+ \G{\VecTwo{-3}{0} }{\bm{I}} + \G{\VecTwo{0}{-3}}{\bm{I}}
\end{split}
\end{equation}
where the majority of the probability mass lies in the range $[-6; 6]^2$.
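
For reference, the ground-truth samples used in this evaluation can be reproduced with a few lines of code. The following Python sketch assumes equal mixture weights of $1/5$ per component and reads the second argument of the \G macro as the covariance matrix ($0.5\bm{I}$ for the central component, $\bm{I}$ otherwise); the macro definitions themselves are not part of this diff.

import numpy as np

# Ground-truth mixture: five bivariate normals, assumed to be equally weighted (1/5 each).
# Means and covariances follow the equation in this hunk; treating 0.5*I and I as the
# covariance matrices is an assumption about the \G macro arguments.
MEANS = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [-3.0, 0.0], [0.0, -3.0]])
COVS = [0.5 * np.eye(2)] + [np.eye(2)] * 4

def sample_mixture(n=1000, seed=None):
    rng = np.random.default_rng(seed)
    # Pick a component uniformly at random for each sample, then draw from it.
    idx = rng.integers(len(MEANS), size=n)
    return np.stack([rng.multivariate_normal(MEANS[i], COVS[i]) for i in idx])

samples = sample_mixture(1000)   # N = 1000 synthetic samples
# Most of the probability mass falls inside [-6, 6]^2, matching the text above.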

View File

@@ -61,7 +61,7 @@ This recursive calculation scheme further reduces the time complexity of the box
Furthermore, only one addition and one subtraction are required to compute each output value.
The overall algorithm to efficiently compute \eqref{eq:boxFilt} is listed in Algorithm~\ref{alg:naiveboxalgo}.
-\begin{algorithm}[ht]
+\begin{algorithm}[t]
\caption{Recursive 1D box filter}
\label{alg:naiveboxalgo}
\begin{algorithmic}[1]
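
As a side note to the recursive scheme referenced in this hunk, a running-sum box filter can be sketched as follows. This is an illustrative Python version, not the listing in Algorithm~\ref{alg:naiveboxalgo}; it assumes a window of width $2r+1$ and zero values outside the signal, since the boundary handling is not shown in this diff.

import numpy as np

def box_filter_1d(x, r):
    """Recursive 1D box filter with radius r (window width 2r + 1).

    After the first window sum, each output value is obtained from the
    previous one with a single addition and a single subtraction, which
    makes the running-sum scheme O(n) independent of r. Values outside
    the signal are treated as zero (an assumption; the paper's boundary
    handling is not part of this diff).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    out = np.empty(n)
    s = x[:r + 1].sum()          # initial window sum centred on index 0
    out[0] = s
    for i in range(1, n):
        enter = x[i + r] if i + r < n else 0.0           # value entering the window
        leave = x[i - r - 1] if i - r - 1 >= 0 else 0.0  # value leaving the window
        s += enter - leave
        out[i] = s
    return out  # divide by (2*r + 1) if a mean rather than a sum is wanted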

View File

@@ -5,7 +5,15 @@
%As the density estimation poses only a single step in the whole process, its computation needs to be as fast as possible.
% not taking to much time from the frame
-\begin{algorithm}[ht]
+Consider a set of two-dimensional samples with associated weights, as generated, for example, by a particle filter system.
+The overall process for bivariate data is described in Algorithm~\ref{alg:boxKDE}.
+Assuming that the given $N$ samples are stored in a sequential list, the first step is to create a grid representation.
+In order to construct the grid efficiently and to allocate the required memory, the extrema of the samples need to be known in advance.
+These limits might be given by the application; for example, the position of a pedestrian within a building is limited by the physical dimensions of the building.
+Such knowledge should be integrated into the system to avoid a linear search over the sample set, which naturally reduces the computation time.
+\begin{algorithm}[t]
\caption{Bivariate \textsc{boxKDE}}
\label{alg:boxKDE}
\begin{algorithmic}[1]
@@ -42,14 +50,6 @@
\end{algorithmic}
\end{algorithm}
-Consider a set of two-dimensional samples with associated weights, as generated, for example, by a particle filter system.
-The overall process for bivariate data is described in Algorithm~\ref{alg:boxKDE}.
-Assuming that the given $N$ samples are stored in a sequential list, the first step is to create a grid representation.
-In order to construct the grid efficiently and to allocate the required memory, the extrema of the samples need to be known in advance.
-These limits might be given by the application; for example, the position of a pedestrian within a building is limited by the physical dimensions of the building.
-Such knowledge should be integrated into the system to avoid a linear search over the sample set, which naturally reduces the computation time.
Given the extreme values of the samples and the user-defined grid sizes $G_1$ and $G_2$, a $G_1\times G_2$ grid can be constructed using a binning rule from \eqref{eq:simpleBinning} or \eqref{eq:linearBinning}.
As the number of grid points directly affects both computation time and accuracy, a suitable grid should be as coarse as possible to keep the computation fast, yet fine enough to keep the approximation error acceptable.
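
To illustrate the gridding step described above, the sketch below bins weighted bivariate samples onto a $G_1\times G_2$ grid by nearest grid point, roughly in the spirit of \eqref{eq:simpleBinning}. The exact binning rules are not reproduced in this diff, so the mapping used here, as well as the function name bin_samples and its parameters, are assumptions for illustration only.

import numpy as np

def bin_samples(samples, weights, lo, hi, G1, G2):
    """Assign weighted 2D samples to a G1 x G2 grid by nearest grid point.

    lo and hi are the known extrema (e.g. the building outline), so no
    linear search over the sample set is needed. This sketches simple
    binning; linear binning would instead split each weight over the four
    surrounding grid points according to the distances to them.
    """
    samples = np.asarray(samples, dtype=float)
    weights = np.asarray(weights, dtype=float)
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    shape = np.array([G1, G2])
    # Map each coordinate to its nearest grid index and clamp to the grid.
    idx = np.rint((samples - lo) / (hi - lo) * (shape - 1)).astype(int)
    idx = np.clip(idx, 0, shape - 1)
    grid = np.zeros((G1, G2))
    # Accumulate the sample weights on the grid points.
    np.add.at(grid, (idx[:, 0], idx[:, 1]), weights)
    return grid

# Example call, assuming equally weighted samples and the [-6, 6]^2 range from above:
# grid = bin_samples(samples, np.full(len(samples), 1.0 / len(samples)),
#                    lo=(-6, -6), hi=(6, 6), G1=64, G2=64)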