added comments
worked on eval and transition
@@ -2,7 +2,9 @@
\label{sec:smoothing}

The main purpose of this work is to provide MC smoothing methods in the context of indoor localisation.
\commentByFrank{algorithms?}
As mentioned before, these algorithms are able to compute probability distributions in the form of $p(\mStateVec_t \mid \mObsVec_{1:T})$ and are therefore able to make use of future observations between $t$ and $T$.
\commentByFrank{maybe mention the $t << T$ again? it has been a while and the upper- and lower-case $t$ might be confusing}

%Especially fixed-lag smoothing is very promising in context of pedestrian localisation.
In the following, we discuss the algorithmic details of the forward-backward smoother and the backward simulation.
@@ -10,28 +12,38 @@ Further, a novel approach for incorporating them into the localisation system is

\subsection{Forward-backward Smoother}

\commentByFrank{Smoother (capital S) as in the caption?}
The forward-backward smoother (FBS) of \cite{Doucet00:OSM} is a well-established alternative to the simple filter-smoother. The foundation of this algorithm was again laid by Kitagawa in \cite{kitagawa1987non}.
An approximation is given by
\begin{equation}
p(\vec{q}_t \mid \vec{o}_{1:T}) \approx \sum^N_{i=1} W^i_{t \mid T} \delta_{\vec{X}^i_{t}}(\vec{q}_{t}) \enspace,
\label{eq:approxFBS}
\end{equation}
\commentByFrank{support?}
\commentByFrank{is $\delta$ explained anywhere?}
\commentByFrank{is lower-case $\vec{x}$ explained anywhere?}
\commentByFrank{is the notation $A_{b \mid c}$ known? it does not tell me anything yet}
where $p(\vec{q}_t \mid \vec{o}_{1:T})$ has the same support as the filtering distribution $p(\vec{q}_t \mid \vec{o}_{1:t})$, but the weights are different.
This means that the FBS maintains the original particle locations and merely reweights the particles to obtain a smoothed density.
The complete FBS can be seen in algorithm \ref{alg:forward-backwardSmoother} in pseudo-algorithmic form.
\commentByFrank{maybe explain the forward step in a bit more detail since it is used here for the first time? or is that sufficiently well known? :P}
At first, the algorithm obtains the filtering distribution (particles) by performing a forward filtering step at each time $t$.
\commentByFrank{nitpicking: do you say smoothing distribution or smoothed distribution? I am not sure about that}
Then the backward step for determining the smoothing distribution is carried out.
The weights are obtained through the backward recursion in line 9.
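For reference, this backward reweighting recursion is usually stated as
\begin{equation*}
W^i_{t \mid T} = W^i_t \sum^N_{j=1} \frac{W^j_{t+1 \mid T} \, p(\vec{X}^j_{t+1} \mid \vec{X}^i_t)}{\sum^N_{l=1} W^l_t \, p(\vec{X}^j_{t+1} \mid \vec{X}^l_t)} \enspace,
\end{equation*}
i.e.\ each filter particle $\vec{X}^i_t$ is reweighted by how well it explains the already smoothed particles at time $t+1$ \cite{Doucet00:OSM}.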

\commentByFrank{it is not clear to me (as a layman): do I first perform all forward steps (i.e. run through everything up to the end of the path) and then go backwards from there (that is roughly how the text sounds), or do I go backwards after every forward step (that is how the pseudocode sounds)}

\begin{algorithm}[t]
\caption{Forward-Backward Smoother
\commentByFrank{the order of $ \{ W^i_t, \vec{X}^i_t\}^N_{i=1}$ was the other way round above. not a big deal. just for consistency :P}
}
\label{alg:forward-backwardSmoother}
\begin{algorithmic}[1] % The number tells where the line numbering should start
\For{$t = 1$ \textbf{to} $T$} \Comment{Filtering}
\State{Obtain the weighted trajectories $ \{ W^i_t, \vec{X}^i_t\}^N_{i=1}$}
\EndFor
\For{ $i = 1$ \textbf{to} $N$} \Comment{Initialization}
\commentByFrank{$t \mid T$ or $T \mid T$?}
\State{Set $W^i_{T \mid T} = W^i_T$}
\EndFor
\For{$t = T-1$ \textbf{to} $1$} \Comment{Smoothing}
@@ -49,12 +61,17 @@ $}


%Problems? Disadvantages? Complexity etc.
\commentByFrank{do I have to have read that source to understand this? because as it stands it is not clear to me}
By reweighting the filter particles, the FBS improves on the simple filter-smoother by removing its dependence on the inherited (smoothed) paths \cite{fearnhead2010sequential}. However, by looking at algorithm \ref{alg:forward-backwardSmoother} it can easily be seen that this approach runs in $\mathcal{O}(N^2)$, since the calculation of each particle's weight is an $\mathcal{O}(N)$ operation. To reduce this computational bottleneck, \cite{klaas2006fast} introduced a solution using algorithms from N-body simulation. By integrating dual tree recursions and fast multipole techniques with the FBS, a run-time cost of $\mathcal{O}(N \log N)$ can be achieved.
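To make the quadratic cost explicit, the following Python sketch shows a direct (non-accelerated) implementation of the backward reweighting pass. It is meant purely as an illustration; \texttt{transition\_pdf} is a placeholder for the transition density $p(\vec{q}_{t+1} \mid \vec{q}_t)$ and not part of the actual localisation system.
\begin{verbatim}
import numpy as np

def fbs_reweight(X, W, transition_pdf):
    """Direct O(N^2) backward pass of the forward-backward smoother.

    X: list of T arrays, X[t] holds the filter particles at time t
    W: list of T arrays, W[t] holds the normalised filter weights
    transition_pdf(x_next, x_prev): placeholder transition density
    Returns the smoothed weights W[t|T] for every time step.
    """
    T = len(X)
    W_smooth = [None] * T
    W_smooth[T - 1] = W[T - 1].copy()      # initialisation: W_{T|T} = W_T
    for t in range(T - 2, -1, -1):         # backward recursion
        # trans[i, j] = p(X_{t+1}^j | X_t^i): the O(N^2) step
        trans = np.array([[transition_pdf(x_next, x_prev)
                           for x_next in X[t + 1]]
                          for x_prev in X[t]])
        denom = W[t] @ trans               # sum_l W_t^l p(X_{t+1}^j | X_t^l)
        W_smooth[t] = W[t] * (trans @ (W_smooth[t + 1] / denom))
        W_smooth[t] /= W_smooth[t].sum()   # renormalise (numerical safety)
    return W_smooth
\end{verbatim}
The acceleration techniques of \cite{klaas2006fast} mentioned above target exactly this nested evaluation of the transition density.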

\subsection{Backward Simulation}
For smoothing applications with a high number of particles, it is often not necessary to use all particles for smoothing.
\commentByFrank{certain = accurate?}
This decision can, for example, be made due to high sample impoverishment and/or highly certain sensors.
By choosing a good subset for representing the posterior distribution, it is theoretically possible to further improve the estimation.

Therefore, \cite{Godsill04:MCS} presented the backward simulation (BS), where a number of independent sample realisations from the entire smoothing density are used to approximate the smoothing distribution.
%
\begin{algorithm}[t]
\caption{Backward Simulation Smoothing}
@@ -72,12 +89,21 @@ Therefore, \cite{Godsill04:MCS} presented the backward simulation (BS). Where a
\EndFor
\State{Choose $\tilde{\vec{q}}^k_t = \vec{X}^j_t$ with probability $W^j_{t\mid t+1}$}
\EndFor
\State{$\tilde{\vec{q}}^k_{1:T} = (\tilde{\vec{q}}^k_1, \tilde{\vec{q}}^k_2, ..., \tilde{\vec{q}}^k_T)$ is one approximate realisation from $p(\vec{q}_{1:T} \mid \vec{o}_{1:T})$}
\EndFor
\end{algorithmic}
\end{algorithm}
%
This method can be seen in algorithm \ref{alg:backwardSimulation} in pseudo-algorithmic form.
Again, a particle filter is run first and the smoothing procedure is applied afterwards.
\commentByFrank{this sounds as if the particle filter and smoothing were two completely different things.}
\commentByFrank{what does 'drawn approximately' mean? according to which criteria?}
Here, $\tilde{\vec{q}}_t$ is a random sample drawn approximately from $p(\vec{q}_{t} \mid \tilde{\vec{q}}_{t+1}, \vec{o}_{1:T})$.
Therefore, $\tilde{\vec{q}}_{1:T} = (\tilde{\vec{q}}_{1}, \tilde{\vec{q}}_{2}, ...,\tilde{\vec{q}}_{T})$ is one particular sample realisation from $p(\vec{q}_{1:T} \mid \vec{o}_{1:T})$.
Further independent realisations are obtained by repeating the algorithm until the desired number $N_{\text{sample}}$ is reached.
The computational complexity for one particular realisation is $\mathcal{O}(N)$.
However, the computations are then repeated for each realisation drawn \cite{Godsill04:MCS}.
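As an illustration of algorithm \ref{alg:backwardSimulation}, a direct Python sketch of drawing such realisations could look as follows; again, \texttt{transition\_pdf} is only a placeholder for the transition density used for reweighting.
\begin{verbatim}
import numpy as np

def backward_simulate(X, W, transition_pdf, n_samples, rng=None):
    """Draw independent smoothed trajectories by backward simulation.

    X, W and transition_pdf are as in the forward-backward sketch;
    n_samples corresponds to N_sample.  Each returned trajectory is one
    approximate realisation from p(q_{1:T} | o_{1:T}).
    """
    rng = rng or np.random.default_rng()
    T = len(X)
    trajectories = []
    for _ in range(n_samples):
        # start by sampling from the filtering distribution at time T
        j = rng.choice(len(W[T - 1]), p=W[T - 1])
        traj = [X[T - 1][j]]
        for t in range(T - 2, -1, -1):
            # reweight the filter particles towards the sampled successor
            w = W[t] * np.array([transition_pdf(traj[0], x) for x in X[t]])
            w /= w.sum()                   # W_{t|t+1} in the algorithm
            j = rng.choice(len(w), p=w)
            traj.insert(0, X[t][j])        # prepend the sampled state for time t
        trajectories.append(traj)
    return trajectories
\end{verbatim}
Each backward step touches all $N$ filter particles once, reflecting the per-realisation cost linear in $N$ stated above.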

\subsection{Transition for Smoothing}
As seen above, both algorithms reweight particles based on a state transition model.
@@ -95,11 +121,13 @@ p(\vec{q}_{t+1} \mid \vec{q}_t, \mObsVec_t)_{\text{step}} = \mathcal{N}(\Delta d
\end{equation}
we obtain a statement about how likely it is to cover a distance $\Delta d_t$ between two states $\vec{q}_{t+1}$ and $\vec{q}_{t}$.
In the simplest case, $\Delta d_t$ is the straight-line distance between the two states.
\commentByFrank{summarize: sum up?}
Of course, based on the graph structure, one could calculate the shortest path between both and sum up the respective edge lengths.
However, this requires tremendous calculation time for negligible improvements.
Therefore, this is not discussed further within this work.
The average step length $\mu_{\text{step}}$ is based on the pedestrian's walking speed and $\sigma_{\gDist}^2$ denotes the step length's variance.
Both values are chosen depending on the activity $x$ recognised at time $t$.
\commentByFrank{then or than?}
For example, $\mu_{\text{step}}$ becomes smaller while a pedestrian is walking upstairs than while just walking straight.
This requires extending the smoothing transition with the current observation $\mObsVec_t$.
Since $\mStateVec$ is hidden and the Markov property is satisfied, we are able to do so.
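To illustrate how this activity-dependent step-length term can be evaluated for a pair of states, consider the following minimal sketch; the activity labels and parameter values are made-up placeholders rather than the values used in this work.
\begin{verbatim}
import numpy as np

# hypothetical activity-dependent step-length parameters (mu, sigma) in metres
STEP_PARAMS = {"walking": (0.7, 0.2),
               "stairs_up": (0.3, 0.15)}

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def step_transition(q_t, q_next, activity):
    """Evaluate N(delta_d; mu_step, sigma_step^2) for the recognised activity."""
    # straight-line distance between the two states (planar positions)
    delta_d = np.linalg.norm(np.asarray(q_next) - np.asarray(q_t))
    mu_step, sigma_step = STEP_PARAMS[activity]
    return gaussian_pdf(delta_d, mu_step, sigma_step)
\end{verbatim}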
@@ -121,7 +149,7 @@ p(\vec{q}_{t+1} \mid \vec{q}_t, \mObsVec_t)_{\text{baro}} = \mathcal{N}(\Delta z
\label{eq:smoothingTransPressure}
\end{equation}
This assigns a low probability to falsely detected or misguided floor changes.
Similar to \refeq{eq:smoothingTransDistance} we set $\mu_z$ and $\sigma^2_{z}$ based on the activity recognised at time $t$.
Therefore, $\mu_z$ is the expected change in $z$-direction between two time steps.
This means that if the pedestrian is walking along a corridor, we set $\mu_z = 0$.
In contrast, $\mu_z$ is positive while walking downstairs and negative while moving upstairs.
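Analogously, the pressure-based term can be sketched as follows; the sign convention for $\mu_z$ follows the description above, while the activity labels and concrete numbers are again placeholders.
\begin{verbatim}
import numpy as np

# hypothetical expected z-change per time step (mu_z, sigma_z), following the
# convention above: zero along a corridor, positive downstairs, negative upstairs
Z_PARAMS = {"walking": (0.0, 0.1),
            "stairs_down": (0.4, 0.2),
            "stairs_up": (-0.4, 0.2)}

def baro_transition(z_t, z_next, activity):
    """Evaluate N(delta_z; mu_z, sigma_z^2) for the recognised activity."""
    delta_z = z_next - z_t
    mu_z, sigma_z = Z_PARAMS[activity]
    return (np.exp(-0.5 * ((delta_z - mu_z) / sigma_z) ** 2)
            / (sigma_z * np.sqrt(2 * np.pi)))
\end{verbatim}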
@@ -140,6 +168,7 @@ Looking at \refeq{eq:smoothingTransDistance} to \refeq{eq:smoothingTransPressure
\end{equation}
%
It is important to note that all particles at each time step $t$ of the forward filtering need to be stored.
\commentByFrank{increases?}
Therefore, the memory requirement increases proportionally with the processing time.
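To get a rough feeling for the scale, assume for example $N = 10^4$ particles with a four-dimensional double-precision state: this alone amounts to roughly $10^4 \cdot 4 \cdot 8\,\text{B} \approx 320\,\text{kB}$ per stored time step (ignoring weights and bookkeeping) and therefore grows linearly with the number of processed steps $T$.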