added mc sampling and fixed some stuff in smoothing
@@ -3,8 +3,9 @@
% approx. 3/4 page

% short introduction to smoothing
Sequential MC filters, like the aforementioned particle filter, use all observations $\mObsVec_{1:t}$ up to the current time $t$ to compute an estimate of the state $\mStateVec_t$.
In a Bayesian setting, this can be formalized as the computation of the posterior distribution $p(\mStateVec_t \mid \mObsVec_{1:t})$, which is approximated using a sample of $N$ independent random variables $\vec{X}^i_{t} \sim p(\mStateVec_t \mid \mObsVec_{1:t})$ for $i = 1,\dots,N$.
Since these samples are drawn by importance sampling rather than from the posterior itself, a weight $W^i_t$ is assigned to each sample $\vec{X}^i_{t}$.
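With normalised weights, this yields the usual weighted-sample approximation of the filtering posterior; the Dirac measure $\delta$ is used here only for this sketch:
\[
p(\mStateVec_t \mid \mObsVec_{1:t}) \;\approx\; \sum_{i=1}^{N} W^i_t \, \delta_{\vec{X}^i_t}(\mStateVec_t),
\qquad \sum_{i=1}^{N} W^i_t = 1 .
\]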
Now consider a situation in which all observations $\mObsVec_{1:T}$ up to a later time step $T$, with $t \ll T$, are available; standard filtering methods are not able to make use of this additional data to compute $p(\mStateVec_t \mid \mObsVec_{1:T})$.
This problem can be solved with a smoothing algorithm.
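As a sketch of what such an algorithm provides, the smoothed marginal can be written in the same weighted-sample form, where the smoothing weights $W^i_{t \mid T}$ are introduced here only for illustration:
\[
p(\mStateVec_t \mid \mObsVec_{1:T}) \;\approx\; \sum_{i=1}^{N} W^i_{t \mid T} \, \delta_{\vec{X}^i_t}(\mStateVec_t) .
\]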
@@ -18,7 +19,7 @@ On the other hand, fixed-interval smoothing requires all observations until time
The origin of MC smoothing can be traced back to Genshiro Kitagawa.
In his work \cite{kitagawa1996monte}, he presented the simplest form of smoothing as an extension to the particle filter.
This algorithm is often called the filter-smoother since it runs online and provides smoothed estimates while filtering.
This approach uses the particle filter steps to update weighted paths $\{(\vec{X}_{1:t}^i , W^i_t)\}^N_{i=1}$, producing an accurate approximation of the filtering posterior $p(\mStateVec_{t} \mid \mObsVec_{1:t})$ with a computational complexity of only $\mathcal{O}(N)$.
However, it gives a poor representation of earlier states due to a monotonic decrease in the number of distinct particles caused by the resampling of each weighted path \cite{Doucet11:ATO}.
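The following minimal sketch illustrates this filter-smoother idea under simplifying assumptions (a one-dimensional state, placeholder model functions, and names chosen only for this example; it is not the implementation used in the localisation system): full paths are stored, and resampling copies entire ancestor paths, which is precisely the mechanism behind the degeneracy described above.
\begin{verbatim}
import numpy as np

def filter_smoother(obs, sample_prior, sample_transition, likelihood, N=500):
    """Bootstrap particle filter that stores whole paths, so the weighted
    paths approximate the joint smoothing distribution p(x_{1:T} | y_{1:T})."""
    paths = sample_prior(N)[:, None]                # shape (N, 1)
    w = likelihood(obs[0], paths[:, -1])
    w = w / w.sum()
    for t in range(1, len(obs)):
        # Resampling duplicates complete ancestor paths -> path degeneracy.
        idx = np.random.choice(N, size=N, p=w)
        paths = paths[idx]
        x_new = sample_transition(paths[:, -1])     # propagate last states
        paths = np.concatenate([paths, x_new[:, None]], axis=1)
        w = likelihood(obs[t], x_new)               # reweight with new observation
        w = w / w.sum()
    return paths, w                                 # weighted paths {X^i_{1:T}, W^i_T}

# Hypothetical 1-D random-walk model, used only to exercise the sketch:
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=20)) + rng.normal(scale=0.5, size=20)
paths, w = filter_smoother(
    y,
    sample_prior=lambda n: rng.normal(size=n),
    sample_transition=lambda x: x + rng.normal(size=x.shape),
    likelihood=lambda o, x: np.exp(-0.5 * ((o - x) / 0.5) ** 2),
)
\end{verbatim}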
Based on this, more advanced methods like the forward-backward smoother \cite{doucet2000} and backward simulation \cite{Godsill04:MCS} were developed.
Both methods run backwards in time to recursively reweight a set of particles using future observations.
@@ -1,16 +1,13 @@

\section{Smoothing}
\label{sec:smoothing}
The main purpose of this work is to provide smoothing methods in the context of indoor localisation.
As mentioned before, these algorithms are able to compute probability distributions of the form $p(\mStateVec_t \mid \mObsVec_{1:T})$ and can therefore make use of future observations between $t$ and $T$.
%Especially fixed-lag smoothing is very promising in the context of pedestrian localisation.
In the following, we discuss the algorithmic details of the forward-backward smoother and the backward simulation.
Further, two novel approaches for incorporating them into the localisation system are shown.
\todo{Introduce $X$ and write this part more concretely.}
\subsection{Forward-backward Smoother}
The forward-backward smoother (FBS) of \cite{Doucet00:OSM} is a well-established alternative to the simple filter-smoother. The foundation of this algorithm was again laid by Kitagawa in \cite{kitagawa1987non}.
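For reference, the backward recursion of \cite{Doucet00:OSM} that produces the smoothing weights can be sketched as follows, where $f(\mStateVec_{t+1} \mid \mStateVec_t)$ denotes the transition density of the state model (the symbol $f$ is chosen only for this sketch) and the recursion is initialised with $W^i_{T \mid T} = W^i_T$:
\[
W^i_{t \mid T} \;=\; W^i_t \sum_{j=1}^{N} W^j_{t+1 \mid T}\,
\frac{f(\vec{X}^j_{t+1} \mid \vec{X}^i_t)}{\sum_{k=1}^{N} W^k_t \, f(\vec{X}^j_{t+1} \mid \vec{X}^k_t)} .
\]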
@@ -31,8 +28,6 @@ The weights are obtained through the backward recursion in line 9.

\caption{Forward-Backward Smoother}
\label{alg:forward-backwardSmoother}
\begin{algorithmic}[1] % The number tells where the line numbering should start
\Statex{\textbf{Input:} Prior $\mu(\vec{X}^i_1)$}
\Statex{~}
\For{$t = 1$ \textbf{to} $T$} \Comment{Filtering}
\State{Obtain the weighted trajectories $ \{ W^i_t, \vec{X}^i_t\}^N_{i=1}$}
\EndFor
@@ -65,8 +60,6 @@ Therefore, \cite{Godsill04:MCS} presented the backward simulation (BS). Where a
\caption{Backward Simulation Smoothing}
\label{alg:backwardSimulation}
\begin{algorithmic}[1] % The number tells where the line numbering should start
\Statex{\textbf{Input:} Prior $\mu(\vec{X}^i_1)$}
\Statex{~}
\For{$t = 1$ \textbf{to} $T$} \Comment{Filtering}
\State{Obtain the weighted trajectories $ \{ W^i_t, \vec{X}^i_t\}^N_{i=1}$}
\EndFor
@@ -88,7 +81,3 @@ This method can be seen in algorithm \ref{alg:backwardSimulation} in pseudo-algo
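As a brief sketch of the backward pass (the tilde notation and the backward weights $W^i_{t \mid t+1}$ are introduced only for this sketch, and $f$ again denotes the transition density of the state model): starting from $\widetilde{\vec{X}}_T$ drawn from the weighted set $\{W^i_T, \vec{X}^i_T\}^N_{i=1}$, a particle is selected at every earlier time step with probability proportional to a backward weight,
\[
W^i_{t \mid t+1} \;\propto\; W^i_t \, f(\widetilde{\vec{X}}_{t+1} \mid \vec{X}^i_t),
\qquad \widetilde{\vec{X}}_t = \vec{X}^i_t \ \mbox{with probability} \ W^i_{t \mid t+1},
\]
so that each backward pass yields one trajectory $\widetilde{\vec{X}}_{1:T}$ approximately distributed according to $p(\mStateVec_{1:T} \mid \mObsVec_{1:T})$.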
\subsection{Transition for Smoothing}
% address complexity
The reason for not addressing this lies ...
However, \cite{} and \cite{} have proven this wrong and reduced the complexity of different smoothing methods.
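For orientation, a standard complexity consideration, stated here only as a sketch and independent of the implementations above: the backward reweighting sums over all $N$ particles for each of the $N$ smoothing weights at every time step, so a naive implementation costs $\mathcal{O}(TN^2)$, compared to $\mathcal{O}(TN)$ for filtering alone.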