State estimation v2
Each particle is a realization of one possible system state; here, the position of a pedestrian within a building.
The set of all particles represents the posterior of the system.
In other words, the particle filter naturally generates a sample-based representation of the posterior.
With this representation, a point estimator can be applied directly to the sample data to derive a sample statistic serving as a \qq{best guess}.

A popular point estimate, which can be directly obtained from the sample set, is the minimum mean squared error (MMSE) estimate.
In the case of particle filters, the MMSE estimate equals the weighted average over all samples, \ie{} the sample mean
% TODO check notation
\begin{equation}
\hat{\mStateVec}_t := \frac{1}{W_t} \sum_{i=1}^{N} w^i_t \mStateVec^i_t \, \text{,}
\end{equation}
\commentByMarkus{Is the notation okay like this?}
where $W_t=\sum_{i=1}^{N}w^i_t$ is the sum of all weights.
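As a minimal sketch of this estimator (assuming NumPy; the function name \texttt{mmse\_estimate} is illustrative, not part of our system), the weighted average can be computed directly from the particle set:

```python
import numpy as np

def mmse_estimate(particles, weights):
    """Weighted-average (MMSE) point estimate over a particle set.

    particles: (N, d) array of state samples x_t^i
    weights:   (N,) array of importance weights w_t^i (need not be normalized)
    """
    particles = np.asarray(particles, dtype=float)
    weights = np.asarray(weights, dtype=float)
    W = weights.sum()  # W_t = sum of all weights
    return (weights[:, None] * particles).sum(axis=0) / W

# Example: three 2-D particles with unnormalized weights
particles = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
weights = np.array([1.0, 2.0, 1.0])
print(mmse_estimate(particles, weights))  # -> [0.75 0.25]
```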
While the MMSE estimate produces good results in many situations, it fails when the posterior is multimodal.
In these situations, the weighted-average estimate places the estimate somewhere between the modes.
Clearly, such a position between modes is extremely unlikely to be the real position of the pedestrian.
The real position is more likely to be found at one of the modes, but virtually never somewhere between them.

In the case of a multimodal posterior, the system should estimate the position based on the highest mode.
Therefore, the maximum a posteriori (MAP) estimate is a suitable choice for such a situation.
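The failure mode described above can be illustrated with a small synthetic sketch (assuming NumPy; the two particle clusters are made-up example data, not output of our system): the weighted average lands between the two modes, at a position no particle actually occupies.

```python
import numpy as np

# Bimodal posterior: two particle clusters, e.g. a pedestrian who is
# in one of two rooms but whose position is ambiguous to the filter.
rng = np.random.default_rng(0)
left = rng.normal(loc=2.0, scale=0.1, size=50)    # mode around x = 2
right = rng.normal(loc=10.0, scale=0.1, size=50)  # mode around x = 10
particles = np.concatenate([left, right])
weights = np.full(particles.size, 1.0 / particles.size)

# MMSE (weighted average) lands near x = 6, between the modes,
# far away from every actual particle.
mmse = np.sum(weights * particles)
print(mmse)
```

A MAP-style estimate would instead pick a position at one of the two modes.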
For our system we choose the Gaussian kernel in favour of computational efficiency.

The great flexibility of the KDE comes at the cost of high computational time, which renders it impractical for real-time scenarios.
The complexity of a naive implementation of the KDE is \landau{MN}, given $M$ evaluation points and $N$ particles as input.
\commentByMarkus{To be continued}
A fast approximation of the KDE can be applied if the data is stored in equidistant bins.
Computation of the KDE with a Gaussian kernel on the binned data then becomes analogous to applying a Gaussian filter, which can be approximated by iterated box filters in \landau{N} \cite{Bullmann-18}.
Our rapid computation scheme of the KDE is fast enough to estimate the density of the posterior in each time step.
This allows us to recover the most probable state from a multimodal posterior.
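A minimal sketch of such a binned approximation follows (assuming NumPy; the names \texttt{binned\_kde} and \texttt{box\_filter}, the number of box-filter passes, and the radius heuristic are illustrative choices, not the exact scheme of \cite{Bullmann-18}):

```python
import numpy as np

def box_filter(y, radius):
    """One pass of a centered moving-average (box) filter, O(N) via cumsum."""
    n = len(y)
    c = np.concatenate([[0.0], np.cumsum(y)])
    idx = np.arange(n)
    lo = np.clip(idx - radius, 0, n)
    hi = np.clip(idx + radius + 1, 0, n)
    return (c[hi] - c[lo]) / (hi - lo)

def binned_kde(samples, weights, grid, bandwidth, passes=3):
    """Approximate a Gaussian-kernel KDE on an equidistant grid:
    bin the weighted samples, then smooth with iterated box filters
    (which approach a Gaussian by the central limit theorem).
    Overall cost is O(N + M) instead of O(MN)."""
    dx = grid[1] - grid[0]
    edges = np.concatenate([grid - dx / 2, [grid[-1] + dx / 2]])
    hist, _ = np.histogram(samples, bins=edges, weights=weights)
    density = hist / (weights.sum() * dx)  # normalize to integrate to 1
    # Heuristic: pick the box radius so that `passes` box filters have
    # roughly the variance of a Gaussian with the given bandwidth.
    r = max(1, int(round(np.sqrt(3.0 / passes) * bandwidth / dx)))
    for _ in range(passes):
        density = box_filter(density, r)
    return density

# Usage on a bimodal particle set: the density peak recovers a mode.
rng = np.random.default_rng(1)
samples = np.concatenate([rng.normal(2.0, 0.1, 500), rng.normal(10.0, 0.1, 500)])
weights = np.full(samples.size, 1.0)
grid = np.linspace(0.0, 12.0, 601)
density = binned_kde(samples, weights, grid, bandwidth=0.3)
map_estimate = grid[np.argmax(density)]  # lands at one of the two modes
```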