\section{IMMPF and Mixing}
\label{sec:immpf}
\commentByToni{TODO: Capitalise method names or not? \\ Should the normalisation factors be written out, or is the remark in the text sufficient?}
In the previous section, we introduced a standard particle filter, an evaluation step and two different transition models.
With these, two different localisation schemes can be implemented: one provides high diversity and a robust, yet uncertain, position estimate, while the other keeps the localisation error low by using a very realistic propagation model, but is prone to sample impoverishment \cite{Ebner-15}.
In the following, we combine these two filters using the Interacting Multiple Model Particle Filter (IMMPF) and a non-trivial Markov switching process.
%Introducing the IMMPF
Consider a jump Markov non-linear system whose state space description is given by different particle filters and whose characteristics change over time according to a Markov chain.
The posterior distribution is then described by
%
\begin{equation}
p(\mStateVec_{t}, m_t \mid \mObsVec_{1:t}) = P(m_t \mid \mObsVec_{1:t}) p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t})
\label{equ:immpfPosterior}
\end{equation}
%
where $m_t\in M\subset \mathbb{N}$ is the modal state of the system \cite{Driessen2005}.
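For the two localisation schemes introduced above, $M$ can, purely for illustration, be instantiated with two modes (the labelling is ours),
%
\begin{equation*}
M = \{1, 2\}
\enspace ,
\end{equation*}
%
where mode $1$ denotes the filter with the realistic, restrictive propagation model and mode $2$ the robust, high-diversity filter.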
Given \eqref{equ:immpfPosterior} and \eqref{equ:bayesInt}, the mode conditioned filtering stage can be written as
%
\begin{equation}
\arraycolsep=1.2pt
\begin{array}{ll}
p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t}) \propto
&p(\mObsVec_{t} \mid m_t, \mStateVec_{t})\\
&\int p(\mStateVec_{t} \mid \mStateVec_{t-1}, m_t, \mObsVec_{t-1})\\
&p(\mStateVec_{t-1} \mid m_{t}, \mObsVec_{1:t-1})\,d\mStateVec_{t-1}
\end{array}
\label{equ:immpfFiltering}
\end{equation}
%
and the posterior mode probabilities are calculated by
%
\begin{equation}
P(m_t \mid \mObsVec_{1:t}) \propto p(\mObsVec_{t} \mid m_t, \mObsVec_{1:t-1}) P(m_t \mid \mObsVec_{1:t-1})
\enspace .
\label{equ:immpModeProb}
\end{equation}
%
It should be noted that \eqref{equ:immpfFiltering} and \eqref{equ:immpModeProb} are not normalised; hence, an explicit normalisation step is required.
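For reference, the corresponding normalisation constants can be written out explicitly (a short sketch using standard Bayesian filtering identities): \eqref{equ:immpfFiltering} is normalised by the mode-conditioned evidence
%
\begin{equation*}
p(\mObsVec_{t} \mid m_t, \mObsVec_{1:t-1}) = \int p(\mObsVec_{t} \mid m_t, \mStateVec_{t})\, p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t-1})\, d\mStateVec_{t}
\enspace ,
\end{equation*}
%
and \eqref{equ:immpModeProb} by $\sum_{m_t} p(\mObsVec_{t} \mid m_t, \mObsVec_{1:t-1})\, P(m_t \mid \mObsVec_{1:t-1})$.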
To provide the prior mode probability $P(m_t \mid \mObsVec_{1:t-1})$, the recursion for $m_t$ in \eqref{equ:immpfPosterior} is now derived via the mixing stage \cite{Driessen2005}.
Here, we compute
%
\begin{equation}
\arraycolsep=1.2pt
\begin{split}
&p(\mStateVec_{t} \mid m_{t+1}, \mObsVec_{1:t}) = \\
& \sum_{m_t} P(m_t \mid m_{t+1}, \mObsVec_{1:t}) p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t})
\end{split}
\label{equ:immpModeMixing}
\end{equation}
%
with
%
\begin{equation}
P(m_t \mid m_{t+1}, \mObsVec_{1:t}) = \frac{P(m_{t+1} \mid m_t) P(m_t \mid \mObsVec_{1:t})}{P(m_{t+1} \mid \mObsVec_{1:t})}
\label{equ:immpModeMixing2}
\end{equation}
%
and
%
\begin{equation}
P(m_{t+1} \mid \mObsVec_{1:t}) = \sum_{m_t}{P(m_{t+1} \mid m_t) P(m_t \mid \mObsVec_{1:t})}
\enspace ,
\label{equ:immpModeMixing3}
\end{equation}
%
where \eqref{equ:immpModeMixing} is a weighted sum of distributions whose weights are provided by \eqref{equ:immpModeMixing2}; evaluated one time step earlier, \eqref{equ:immpModeMixing3} yields the prior $P(m_t \mid \mObsVec_{1:t-1})$ required in \eqref{equ:immpModeProb}.
The transition probability $P(m_{t+1} = k \mid m_t = l)$ is given by the Markov transition matrix $[\Pi_t]_{kl}$.
Sampling from \eqref{equ:immpModeMixing} is done by first drawing a modal state $m_t$ from $P(m_t \mid m_{t+1}, \mObsVec_{1:t})$ and then drawing a state $\mStateVec_{t}$ from the corresponding $p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t})$.
In the context of particle filtering, this means that \eqref{equ:immpModeMixing} enables us to pick particles from all available modes according to the discrete distribution $P(m_t \mid m_{t+1}, \mObsVec_{1:t})$.
Furthermore, the number of particles in each mode can be chosen independently of the actual mode probabilities.
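As a purely illustrative numerical example with assumed values: for two modes with $P(m_t{=}1 \mid \mObsVec_{1:t}) = 0.7$, $P(m_t{=}2 \mid \mObsVec_{1:t}) = 0.3$, $P(m_{t+1}{=}1 \mid m_t{=}1) = 0.9$ and $P(m_{t+1}{=}1 \mid m_t{=}2) = 0.2$, \eqref{equ:immpModeMixing3} yields
%
\begin{equation*}
P(m_{t+1}{=}1 \mid \mObsVec_{1:t}) = 0.9 \cdot 0.7 + 0.2 \cdot 0.3 = 0.69
\enspace ,
\end{equation*}
%
so that, according to \eqref{equ:immpModeMixing2}, mode $1$ draws a fraction of $0.63/0.69 \approx 0.91$ of its new particles from mode $1$ and $0.06/0.69 \approx 0.09$ from mode $2$.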
Algorithm \ref{alg:immpf} shows the complete IMMPF procedure in detail.
As prior knowledge, the initial mode probabilities $P(m_1 \mid \mObsVec_{1})$ and initial distributions $p(\mStateVec_{1} \mid m_1, \mObsVec_{1})$, each represented by a particle set $\{W^i_{1}, \vec{X}^i_{1} \}_{i=1}^N$, are available for every $m_1 \in M$.
\begin{algorithm}[t]
\caption{IMMPF Algorithm}
\label{alg:immpf}
\begin{algorithmic}[1] % The number tells where the line numbering should start
\Statex{\textbf{Input:} Prior $P(m_1 \mid \mObsVec_{1})$ and $p(\mStateVec_{1} \mid m_1, \mObsVec_{1})$}
\Statex{~}
\ForAll{$m_t \in M$} \Comment{Mixing}
\For{$i = 1$ \textbf{to} $N_{m_t}$}
\State Sample $m^i_{t-1} \sim P(m_{t-1} \mid m_{t}, \mObsVec_{1:t-1})$
\State Sample $\vec{X}^{i, m_t}_{t-1} \sim p(\mStateVec_{t-1} \mid m^i_{t-1}, \mObsVec_{1:t-1})$
\State Set $W^{i, m_t}_{t-1}$ to $\frac{1}{N_{m_t}}$
\EndFor
\EndFor
\Statex{~}
\Statex \textbf{Run:} Parallel filtering for each $m_t \in M$ \Comment{Filtering}
\For{$i = 1$ \textbf{to} $N_{m_t}$}
\State Sample $\vec{X}_t^{i,m_t} \sim p(\mStateVec_{t} \mid \vec{X}_{t-1}^{i,m_t})$\Comment{Transition}
\State Compute $W^{i,m_t}_t \propto p(\vec{o}_t \mid \vec{X}_{t}^{i, m_t})$ \Comment{Evaluation}
\EndFor
\State Calculate $\lambda_t^{m_t} = \sum_{i=1}^{N_{m_t}} W^{i, m_t}_t$
\State Normalise $W^{i,m_t}_t$ using $\lambda_t^{m_t}$
\State Resample $\{W_{t}^{i,m_t}, \vec{X}_{t}^{i,m_t} \}$ to obtain $N_{m_t}$ new equally-weighted particles $\{\frac{1}{N_{m_t}}, \overline{\vec{X}}_{t}^{i,m_t} \}$
\vspace{0.1cm}
\State Estimate $P(m_t \mid \mObsVec_{1:t}) = \frac{\lambda_t^{m_t} P(m_t \mid \mObsVec_{1:t-1})}{\sum_{m \in M} \lambda_t^{m} P(m \mid \mObsVec_{1:t-1})}$
\end{algorithmic}
\end{algorithm}
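To make the interplay of the mixing, filtering and mode-probability steps concrete, the following is a minimal Python/NumPy sketch of one IMMPF recursion.
It is not the implementation used here; the names (\texttt{immpf\_step}, \texttt{transition}, \texttt{likelihood}) are placeholders standing in for the transition models and the evaluation step of the previous section.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def immpf_step(particle_sets, mode_probs, trans_mat,
               transition, likelihood, obs):
    # particle_sets: list of (N_m, d) arrays, one equally
    #                weighted particle set per mode
    # mode_probs:    (M,) array, P(m_{t-1} | o_{1:t-1})
    # trans_mat[k,l] = P(m_t = k | m_{t-1} = l)
    M = len(particle_sets)
    # predicted mode probabilities P(m_t | o_{1:t-1})
    pred = trans_mat @ mode_probs
    new_sets, lam = [], np.zeros(M)
    for k in range(M):              # target mode m_t = k
        n = len(particle_sets[k])
        # mixing weights P(m_{t-1} = l | m_t = k, o_{1:t-1})
        mix_w = trans_mat[k] * mode_probs / pred[k]
        src = rng.choice(M, size=n, p=mix_w)
        mixed = np.stack(
            [particle_sets[l][rng.integers(len(particle_sets[l]))]
             for l in src])
        # filtering: propagate with mode k's transition model
        # and weight by the observation likelihood
        prop = transition(mixed, k)
        w = likelihood(prop, obs)
        lam[k] = w.mean()   # estimate of p(o_t | m_t = k, o_{1:t-1})
        idx = rng.choice(n, size=n, p=w / w.sum())  # resampling
        new_sets.append(prop[idx])
    post = lam * pred       # posterior mode probabilities
    return new_sets, post / post.sum()
\end{verbatim}
Note that \texttt{lam[k]}, the particle average of the unnormalised weights, is a Monte Carlo estimate of $p(\mObsVec_{t} \mid m_t, \mObsVec_{1:t-1})$ and thus plays the role of $\lambda_t^{m_t}$ in the last line of Algorithm~\ref{alg:immpf}.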
%basic idea of why the matrix is chosen this way.
With the above, we are finally able to combine the two filters described in the previous section.
The basic idea is that the more restrictive filter performs the actual state estimation, owing to its high accuracy.
The robust filter serves as a support: its transition model is very robust and maintains a high particle diversity, so whenever the more accurate filter is detected to run into sample impoverishment, particles are drawn from the robust filter to recover from it.
This detection is based on the Kullback-Leibler divergence (KLD).
However, this only works as long as the Wi-Fi measurements are stable, since the robust filter depends entirely on them and permits jumps induced by the Wi-Fi.
Poor (attenuated) Wi-Fi therefore leads to a high KLD, and the robust filter then delivers very poor results.
To prevent this, not only can the accurate filter draw particles from the robust one, but also vice versa, depending on the current Wi-Fi quality: if the quality is poor, particles are drawn from the accurate filter, since its restrictive transition model and the PDR approach contribute additional knowledge.
The Wi-Fi quality is assessed by a heuristic measure.
All of this is encoded in a non-trivial Markov switching process.
The Markov transition matrix at time $t$ is then given by
%
\begin{equation}
d
\enspace ,
\label{equ:immpMatrix}
\end{equation}
%
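Purely as an illustration of the reasoning above, and not necessarily the parameterisation actually chosen, such a time-varying two-mode matrix could take the form
%
\begin{equation*}
\Pi_t =
\begin{pmatrix}
1-\beta_t & \alpha_t\\
\beta_t & 1-\alpha_t
\end{pmatrix}
\enspace ,
\end{equation*}
%
where mode $1$ is the restrictive and mode $2$ the robust filter, $\alpha_t$ grows with the KLD, so that the restrictive mode refills its particle set from the robust one once it becomes impoverished, and $\beta_t$ grows as the Wi-Fi quality degrades, so that the robust mode refills from the restrictive one when its observations become unreliable.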