frank d annotations
@@ -2,7 +2,7 @@
Indoor localisation continues to be a topic of growing importance.
Despite the advances made, several profound problems are still present, for example estimating an accurate position from a multimodal distribution or recovering from the influence of faulty measurements.
Within this work, we solve such problems with the help of Monte Carlo smoothing methods, namely the forward-backward smoother and backward simulation.
In contrast to normal filtering procedures like particle filtering, smoothing methods are able to incorporate future measurements instead of just using current and past data.
This enables many possibilities for further improving the position estimate.
Both smoothing techniques are deployed as fixed-lag and fixed-interval smoothers, and a novel approach for incorporating them easily into our localisation system is presented.
@@ -3,10 +3,10 @@
% approx. 3/4 page

% short introduction to smoothing
Sequential MC filters, like the aforementioned particle filter, use all observations $\mObsVec_{1:t}$ up to the current time $t$ for computing an estimate of the state $\mStateVec_t$.
In a Bayesian setting, this can be formalized as the computation of the posterior distribution $p(\mStateVec_t \mid \mObsVec_{1:t})$, approximated using a sample of $N$ independent random variables $\vec{X}^i_{t} \sim p(\mStateVec_t \mid \mObsVec_{1:t})$ for $i = 1,...,N$.
Due to importance sampling, a weight $W^i_t$ is assigned to each sample $\vec{X}^i_{t}$.
In the context of particle filtering, $\{W^i_{1:t}, \vec{X}^i_{1:t} \}_{i=1}^N$ is a weighted set of samples, also called particles.
Therefore, a particle is a representation of one possible system state $\mStateVec$.
Considering a situation where all observations $\vec{o}_{1:T}$ up to a time step $T$, with $t \ll T$, are available, standard filtering methods are not able to make use of this additional data for computing $p(\mStateVec_t \mid \mObsVec_{1:T})$.
This problem can be solved with a smoothing algorithm.
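To make this notation concrete, the following minimal Python sketch shows one weighted filtering update over such a particle set; the \texttt{transition} and \texttt{likelihood} callables are placeholders and not the motion and sensor models used in this work.
\begin{verbatim}
import numpy as np

def particle_filter_step(particles, weights, observation,
                         transition, likelihood, rng):
    # Propagate every particle X_t^i through the (stochastic) transition model.
    particles = np.array([transition(x, rng) for x in particles])
    # Importance weighting with the current observation o_t, then normalise.
    weights = weights * np.array([likelihood(observation, x) for x in particles])
    weights /= weights.sum()
    # Resample to counteract weight degeneracy (multinomial for brevity).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
\end{verbatim}
After resampling, the weights are reset to uniform; this resampling step is exactly what causes the path degeneracy discussed below.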
@@ -23,7 +23,7 @@ In his work \cite{kitagawa1996monte} he presented the simplest form of smoothing
This algorithm is often called the filter-smoother, since it runs online and smoothing is provided while filtering.
%\commentByFrank{The "weighted paths" confuse me a bit. Was the original work also about something where paths were involved? Because it fits so remarkably well. I just want to make sure that terms are not getting mixed up; on a first reading it is not clear what is meant by this.}
This approach uses the particle filter steps to update weighted paths $\{(W^i_t, \vec{X}_{1:t}^i)\}^N_{i=1}$, producing an accurate approximation of the filtering posterior $p(\vec{q}_{t} \mid \vec{o}_{1:t})$ with a computational complexity of only $\mathcal{O}(N)$.
However, it gives a poor representation of previous states due to a monotonic decrease of distinct particles caused by the resampling of each weighted path \cite{Doucet11:ATO}.
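This effect can be reproduced with a small toy experiment (random weights and a random-walk transition stand in for the real models): after a few hundred resampled steps, almost all stored paths share a single ancestor at early time steps.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, T = 1000, 200
paths = rng.normal(size=(N, 1))                 # trajectories X_{1:t}, one row per particle
for t in range(1, T):
    step = rng.normal(size=(N, 1))              # toy random-walk transition
    paths = np.hstack([paths, paths[:, -1:] + step])
    w = rng.dirichlet(np.ones(N))               # stand-in for importance weights
    paths = paths[rng.choice(N, size=N, p=w)]   # resampling duplicates whole paths
print("distinct states left at t=1:", len(np.unique(paths[:, 0])))
\end{verbatim}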
Based on this, more advanced methods like the forward-backward smoother \cite{doucet2000} and backward simulation \cite{Godsill04:MCS} were developed.
Both methods run backwards in time to recursively reweight a set of particles using future observations.
Algorithmic details will be shown in section \ref{sec:smoothing}.
@@ -45,7 +45,7 @@ The experiments of \cite{Nurminen2014} clearly emphasize the benefits of smoothi
However, a fixed-lag smoother was discussed only in theory.

In the work of \cite{Paul2009}, both fixed-interval and fixed-lag smoothing were presented.
They used Wi-Fi, binary infrared motion sensors, binary foot-switches and a potential field for floor plan restrictions.
Those sensors were incorporated using a sigma-point Kalman filter in combination with a forward-backward smoother.
It was also shown by \cite{Paul2009} that the fixed-lag smoother is slightly less accurate than the fixed-interval smoother, as one would expect from the theoretical foundation.
Unfortunately, even a sigma-point Kalman filter is in the end just a linearisation and therefore not as flexible and well suited for the complex problem of indoor localisation as a non-linear estimator like a particle filter.
@@ -2,7 +2,7 @@
\label{sec:smoothing}

The main purpose of this work is to provide MC smoothing methods in the context of indoor localisation.
As mentioned before, those algorithms are able to compute probability distributions of the form $p(\mStateVec_t \mid \mObsVec_{1:T})$ and are therefore able to make use of future observations between $t$ and $T$, where $t \ll T$.
%Especially fixed-lag smoothing is very promising in context of pedestrian localisation.
In the following, we discuss the algorithmic details of the forward-backward smoother and the backward simulation.
Further, a novel approach for incorporating them into the localisation system is shown.
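For clarity, the two deployments mentioned above differ only in which observations they condition on; writing the lag as $L$ (a symbol chosen here purely for illustration), they compute
\[
\underbrace{p(\mStateVec_t \mid \mObsVec_{1:t+L})}_{\text{fixed-lag}}
\qquad\text{and}\qquad
\underbrace{p(\mStateVec_t \mid \mObsVec_{1:T})}_{\text{fixed-interval}} .
\]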
@@ -38,7 +38,6 @@ The weights are obtained through the backward recursion in line 9.
\begin{algorithmic}[1] % The number tells where the line numbering should start
\For{$t = 1$ \textbf{to} $T$} \Comment{Filtering}
\State{Obtain the weighted trajectories $ \{ W^i_t, \vec{X}^i_t\}^N_{i=1}$}
\EndFor
\For{ $i = 1$ \textbf{to} $N$} \Comment{Initialization}
\State{Set $W^i_{T \mid T} = W^i_T$}
@@ -64,7 +63,7 @@ By reweighting the filter particles, the FBS improves the simple filter-smoother
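As a rough illustration of this reweighting, the following Python sketch implements the marginal forward-backward smoothing recursion over stored filter particles; \texttt{trans\_prob} stands for the transition density $p(\vec{q}_{t+1} \mid \vec{q}_{t})$ and is a placeholder for the smoothing transition model introduced below.
\begin{verbatim}
import numpy as np

def fbs_reweight(particles, weights, trans_prob):
    # particles: list of (N, d) arrays, weights: list of (N,) arrays, t = 0..T-1
    T = len(particles)
    smoothed = [w.copy() for w in weights]
    for t in range(T - 2, -1, -1):                 # backward recursion
        new_w = np.zeros_like(weights[t])
        for j, x_next in enumerate(particles[t + 1]):
            f = weights[t] * trans_prob(x_next, particles[t])
            if f.sum() > 0:
                new_w += smoothed[t + 1][j] * f / f.sum()
        smoothed[t] = new_w / new_w.sum()          # guard against numerical drift
    return smoothed
\end{verbatim}
Note the $\mathcal{O}(N^2)$ cost per time step of this recursion; the backward simulation described next trades this for $\mathcal{O}(N)$ per drawn realisation.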
\subsection{Backward Simulation}
For smoothing applications with a high number of particles, it is often not necessary to use all particles for smoothing.
This decision can, for example, be made due to high sample impoverishment and/or highly accurate sensors.
By choosing a good subset for representing the posterior distribution, it is theoretically possible to further improve the estimation.

Therefore, \cite{Godsill04:MCS} presented the backward simulation (BS), where a number of independent sample realisations
@@ -100,15 +99,14 @@ Here, $\tilde{\vec{q}}_t$ is a random sample drawn approximately from $p(\vec{q}
For example, $\tilde{\vec{q}}_t$ could be chosen by selecting particles within a cumulative frequency.
Therefore, $\tilde{\vec{q}}_{1:T} = (\tilde{\vec{q}}_{1}, \tilde{\vec{q}}_{2}, ...,\tilde{\vec{q}}_{T})$ is one particular sample realisation from $p(\vec{q}_{1:T} \mid \vec{o}_{1:T})$.
Further independent realisations are obtained by repeating the algorithm until the desired number of realisations $N_{\text{sample}}$ is reached.
The computational complexity for one particular realisation is $\mathcal{O}(N)$.
However, the computations are then repeated for each realisation drawn \cite{Godsill04:MCS}.
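For illustration, a minimal Python sketch of drawing one such realisation by sampling backwards through the stored filter particles could look as follows; \texttt{trans\_prob} is again a placeholder for the transition density.
\begin{verbatim}
import numpy as np

def backward_simulate(particles, weights, trans_prob, rng):
    # Draw one trajectory realisation from p(q_{1:T} | o_{1:T}).
    T = len(particles)
    j = rng.choice(len(weights[-1]), p=weights[-1])     # sample the final state
    traj = [particles[-1][j]]
    for t in range(T - 2, -1, -1):                      # walk backwards in time
        w = weights[t] * trans_prob(traj[-1], particles[t])
        w /= w.sum()
        j = rng.choice(len(w), p=w)
        traj.append(particles[t][j])
    return traj[::-1]                                   # ordered as q_1, ..., q_T
\end{verbatim}
Repeating this routine until $N_{\text{sample}}$ realisations have been collected yields the independent realisations mentioned above, each at a cost of $\mathcal{O}(N)$.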
\subsection{Transition for Smoothing}
As seen above, both algorithms reweight particles based on a state transition model.
Unlike for the transition presented in section \ref{sec:transition}, it is not possible to simply draw a set of new samples.
Here, $p(\vec{q}_{t+1} \mid \vec{q}_{t})$ needs to provide the probability of the \textit{known} future state $\vec{q}_{t+1}$ under the condition of its ancestor $\vec{q}_{t}$.
The smoothing transition model therefore calculates the probability of being in a state $\vec{q}_{t+1}$ with regard to previous states and the pedestrian's walking behaviour.
This means that a state $\vec{q}_t$ is more likely if it is a plausible ancestor (a realistic previous position) of a future state $\vec{q}_{t+1}$.
In the following, a simple and inexpensive approach for obtaining this information is described.
@@ -119,7 +117,7 @@ p(\vec{q}_{t+1} \mid \vec{q}_t, \mObsVec_t)_{\text{step}} = \mathcal{N}(\Delta d
\label{eq:smoothingTransDistance}
\end{equation}
we obtain a statement about how likely it is to cover the distance $\Delta d_t$ between two states $\vec{q}_{t}$ and $\vec{q}_{t+1}$.
In the easiest case, $\Delta d_t$ is the Euclidean distance between the two states.
Of course, based on the graph structure, one could instead calculate the shortest path between both states and sum up the respective edge lengths.
However, this requires considerable calculation time for negligible improvements.
Therefore, this is not discussed further within this work.
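As a small numerical illustration of \refeq{eq:smoothingTransDistance}, the following sketch evaluates such a step-distance term for all particles at time $t$ against one known future state; the Gaussian mean and standard deviation are placeholder values, not the calibrated parameters of our model.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def step_term(q_next_xy, q_t_xy, mean_step=0.7, sigma_step=0.3):
    # q_next_xy: (2,) known future position, q_t_xy: (N, 2) particle positions at time t
    d = np.linalg.norm(q_t_xy - q_next_xy, axis=1)   # Euclidean distance per particle
    return norm.pdf(d, loc=mean_step, scale=sigma_step)
\end{verbatim}
The turn and barometer terms are evaluated analogously and combined with the step term as in the equation below.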
@@ -156,7 +154,7 @@ Looking at \refeq{eq:smoothingTransDistance} to \refeq{eq:smoothingTransPressure
\begin{equation}
\arraycolsep=1.2pt
\begin{array}{ll}
p(\vec{q}_{t+1} \mid \vec{q}_t, \mObsVec_t) =
&p(\vec{q}_{t+1} \mid \vec{q}_t, \mObsVec_t)_{\text{step}}\\
&p(\vec{q}_{t+1} \mid \vec{q}_t, \mObsVec_t)_{\text{turn}}\\
&p(\vec{q}_{t+1} \mid \vec{q}_t, \mObsVec_t)_{\text{baro}}
@@ -19,7 +19,7 @@ Therefore, a Bayes filter that satisfies the Markov property is used to calculat
%
Here, the previous observation $\mObsVec_{t-1}$ is included in the state transition \cite{Koeping14-PSA}.
For approximating eq. \eqref{equ:bayesInt} by means of MC methods, the transition is used as the proposal distribution, which is also known as the CONDENSATION algorithm \cite{isard1998smoothing}.
This algorithm also performs a resampling step to handle the phenomenon of weight degeneracy.
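To give one concrete example of such a resampling step, the following sketch shows systematic (low-variance) resampling; it is only an illustration of the principle, not necessarily the exact scheme used in this work.
\begin{verbatim}
import numpy as np

def systematic_resample(weights, rng):
    # Return N particle indices drawn with a single random offset.
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                 # guard against rounding error
    return np.searchsorted(cumulative, positions)
\end{verbatim}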
In the context of indoor localisation, the hidden state $\mStateVec$ is defined as follows:
\begin{equation}