\section{IMMPF and Mixing}
\label{sec:immpf}

In the previous section we introduced a standard particle filter, an evaluation step and two different transition models.
Using these, we can implement two different localisation schemes.
One provides high diversity with a robust but uncertain position estimate.
The other keeps the localisation error low by using a very realistic propagation model, while being prone to sample impoverishment \cite{Ebner-15}.
In the following, we combine these filters using the Interacting Multiple Model Particle Filter (IMMPF) and a non-trivial Markov switching process.

% Introducing the IMMPF
Consider a jump Markov non-linear system whose state-space description is represented by different particle filters and whose characteristics change over time according to a Markov chain.
The posterior distribution is then described by
%
\begin{equation}
p(\mStateVec_{t}, m_t \mid \mObsVec_{1:t}) = P(m_t \mid \mObsVec_{1:t}) p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t})
\label{equ:immpfPosterior}
\end{equation}
%
where $m_t\in M\subset \mathbb{N}$ is the modal state of the system \cite{Driessen2005}.
Given \eqref{equ:immpfPosterior} and \eqref{equ:bayesInt}, the mode-conditioned filtering stage can be written as
%
\begin{equation}
\arraycolsep=1.2pt
\begin{array}{ll}
p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t}) \propto
&p(\mObsVec_{t} \mid m_t, \mStateVec_{t})\\
&\int p(\mStateVec_{t} \mid \mStateVec_{t-1}, m_t, \mObsVec_{t-1})\\
&p(\mStateVec_{t-1} \mid m_t, \mObsVec_{1:t-1})\,d\mStateVec_{t-1}
\end{array}
\label{equ:immpfFiltering}
\end{equation}
%
and the posterior mode probabilities are calculated by
%
\begin{equation}
P(m_t \mid \mObsVec_{1:t}) \propto p(\mObsVec_{t} \mid m_t, \mObsVec_{1:t-1}) P(m_t \mid \mObsVec_{1:t-1})
\enspace .
\label{equ:immpModeProb}
\end{equation}
%
It should be noted that \eqref{equ:immpfFiltering} and \eqref{equ:immpModeProb} are not normalised, so an explicit normalisation step is required.
To provide a solution for $P(m_t \mid \mObsVec_{1:t-1})$, the recursion for $m_t$ in \eqref{equ:immpfPosterior} is now derived via the mixing stage \cite{Driessen2005}.
Here, we compute
%
\begin{equation}
\begin{split}
&p(\mStateVec_{t} \mid m_{t+1}, \mObsVec_{1:t}) = \\
&\sum_{m_t} P(m_t \mid m_{t+1}, \mObsVec_{1:t})\, p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t})
\end{split}
\label{equ:immpModeMixing}
\end{equation}
%
with
%
\begin{equation}
P(m_t \mid m_{t+1}, \mObsVec_{1:t}) = \frac{P(m_{t+1} \mid m_t) P(m_t \mid \mObsVec_{1:t})}{P(m_{t+1} \mid \mObsVec_{1:t})}
\label{equ:immpModeMixing2}
\end{equation}
%
and
%
\begin{equation}
P(m_{t+1} \mid \mObsVec_{1:t}) = \sum_{m_t}{P(m_{t+1} \mid m_t) P(m_t \mid \mObsVec_{1:t})}
\enspace ,
\label{equ:immpModeMixing3}
\end{equation}
%
where \eqref{equ:immpModeMixing} is a weighted sum of distributions whose weights are provided through \eqref{equ:immpModeMixing2}; shifted by one time step, \eqref{equ:immpModeMixing3} also supplies the prior $P(m_t \mid \mObsVec_{1:t-1})$ required in \eqref{equ:immpModeProb}.
The transition probability $P(m_{t+1} = k \mid m_t = l)$ is given by the Markov transition matrix $[\Pi_t]_{kl}$.
Sampling from \eqref{equ:immpModeMixing} is done by first drawing a modal state $m_t$ from $P(m_t \mid m_{t+1}, \mObsVec_{1:t})$ and then drawing a state $\mStateVec_{t}$ from $p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t})$ conditioned on that $m_t$.
In the context of particle filtering, this means that \eqref{equ:immpModeMixing} enables us to pick particles from all available modes according to the discrete distribution $P(m_t \mid m_{t+1}, \mObsVec_{1:t})$.
Further, the number of particles in each mode can be selected independently of the actual mode probabilities.
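
A minimal sketch of this mixing step in Python, assuming \texttt{particles[m]} holds the samples of mode $m$, \texttt{weights[m]} its normalised weights, and \texttt{mix\_probs} the mixing weights from \eqref{equ:immpModeMixing2} for the target mode (all names are illustrative, not part of the original method), could look as follows:
%
\begin{verbatim}
import numpy as np

def mix(particles, weights, mix_probs, n_target, rng):
    # Draw an equally weighted particle set for one target mode:
    # first a source mode from P(m_t | m_{t+1}, o_{1:t}), then a
    # particle from that mode's weighted posterior.
    new_set = []
    for _ in range(n_target):
        m = rng.choice(len(particles), p=mix_probs)
        i = rng.choice(len(particles[m]), p=weights[m])
        new_set.append(particles[m][i])
    return np.asarray(new_set)  # weight 1/n_target each
\end{verbatim}
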
Algorithm \ref{alg:immpf} shows the complete IMMPF procedure in detail.
As prior knowledge, $M$ initial mode probabilities $P(m_1 \mid \mObsVec_{1})$ and initial distributions $p(\mStateVec_{1} \mid m_1, \mObsVec_{1})$, each represented by a particle set $\{W^i_{1}, \vec{X}^i_{1} \}_{i=1}^N$, are available.
The mixing step requires that the independently running filtering processes have all finished.

\begin{algorithm}[t]
\caption{IMMPF Algorithm}
\label{alg:immpf}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} Prior $P(m_1 \mid \mObsVec_{1})$ and $p(\mStateVec_{1} \mid m_1, \mObsVec_{1})$}
\Statex{~}
\For{$m_t = 1$ \textbf{to} $M$} \Comment{Mixing}
\For{$i = 1$ \textbf{to} $N_{m_t}$}
\State Sample $m^i_{t-1} \sim P(m_{t-1} \mid m_{t}, \mObsVec_{1:t-1})$
\State Sample $\vec{X}^{i, m_t}_{t-1} \sim p(\mStateVec_{t-1} \mid m^i_{t-1}, \mObsVec_{1:t-1})$
\State Set $W^{i, m_t}_{t-1}$ to $\frac{1}{N_{m_t}}$
\EndFor
\EndFor
\Statex{~}
\Statex \textbf{Run:} Parallel filtering for each $m_t \in M$ \Comment{Filtering}
\For{$i = 1$ \textbf{to} $N_{m_t}$}
\State Sample $\vec{X}_t^{i,m_t} \sim p(\vec{X}_t^{i,m_t} \mid \vec{X}_{t-1}^{i,m_t}, \mObsVec_{t-1})$\Comment{Transition}
\State Compute $W^{i,m_t}_t \propto p(\mObsVec_t \mid \vec{X}_{t}^{i, m_t})$ \Comment{Evaluation}
\EndFor
\State Calculate $\omega_t^{m_t} = \sum_{i=1}^{N_{m_t}} W^{i, m_t}_t$
\State Normalise $W^{i,m_t}_t$ using $\omega_t^{m_t}$
\State Resample $\{W_{t}^{i,m_t}, \vec{X}_{t}^{i,m_t} \}$ to obtain $N_{m_t}$ new equally-weighted particles $\{\frac{1}{N_{m_t}}, \overline{\vec{X}}_{t}^{i,m_t} \}$
\vspace{0.1cm}
\State Estimate $P(m_t \mid \mObsVec_{1:t}) = \frac{\omega_t^{m_t} P(m_t \mid \mObsVec_{1:t-1})}{\sum_{m=1}^M \omega_t^{m} P(m \mid \mObsVec_{1:t-1})}$
\end{algorithmic}
\end{algorithm}
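
The final estimation step of Algorithm \ref{alg:immpf} can be illustrated with a short Python sketch, assuming \texttt{raw\_weights[m]} holds the unnormalised particle weights of mode $m$ and \texttt{prior} the mode probabilities $P(m_t \mid \mObsVec_{1:t-1})$ (both names are illustrative):
%
\begin{verbatim}
import numpy as np

def update_mode_probs(raw_weights, prior):
    # The sum of the unnormalised weights per mode approximates
    # the mode likelihood; multiplying by the prior and
    # normalising yields P(m_t | o_{1:t}).
    omega = np.array([np.sum(w) for w in raw_weights])
    post = omega * prior
    return post / np.sum(post)
\end{verbatim}
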

% Basic idea of why the matrix is chosen this way
With the above, we are finally able to combine the two filters described in section \ref{sec:rse}.
The basic idea of our approach is to utilize the restrictive filter as the dominant one, providing the state estimation for the localisation.
Due to its robustness and good diversity, the other, more permissive filter is then used as support against possible sample impoverishment.
If we recognize that the dominant filter gets stuck or loses track, particles from the supporting filter are picked with a higher probability while mixing the new particle set for the dominant filter.

%kld
This is achieved by measuring the Kullback-Leibler divergence $D_{\text{KL}}(P \|Q)$ between the two filtering posterior distributions $p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t})$.
The Kullback-Leibler divergence is a non-symmetric, non-negative measure of the difference between two probability distributions $P$ and $Q$.
It can also be interpreted as the amount of information lost when $Q$ is used to approximate $P$ \cite{Sun2013}.
We set $D_{\text{KL}} = D_{\text{KL}}(P \|Q)$, where $P$ is the posterior of the dominant and $Q$ that of the supporting filter.
Since the supporting filter is more robust and ignores all environmental restrictions, we can judge whether the state estimation is stuck due to sample impoverishment by looking at the exponential function
%
\begin{equation}
f(D_{\text{KL}}, \lambda) = e^{-\lambda D_{\text{KL}}}
\enspace .
\label{equ:KLD}
\end{equation}
%
As $D_{\text{KL}}$ increases, \eqref{equ:KLD} yields a decreasing probability of the dominant filter keeping its own particles, and thus an increasing probability of mixing in particles from the supporting filter.
$\lambda$ depends highly on the respective filter models and is therefore chosen heuristically.
In most cases $\lambda$ tends to lie between $0.01$ and $0.10$.
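
The text does not prescribe how $D_{\text{KL}}$ is estimated from the two particle sets; one simple option, shown here purely as an illustrative sketch, is to moment-match each weighted set with a Gaussian and use the closed-form divergence:
%
\begin{verbatim}
import numpy as np

def gaussian_kld(xp, wp, xq, wq):
    # Approximate D_KL(P||Q) by fitting a Gaussian to each
    # weighted particle set (states x, normalised weights w).
    mu_p = np.average(xp, axis=0, weights=wp)
    mu_q = np.average(xq, axis=0, weights=wq)
    cov_p = np.cov(xp.T, aweights=wp)
    cov_q = np.cov(xq.T, aweights=wq)
    inv_q = np.linalg.inv(cov_q)
    d = mu_q - mu_p
    return 0.5 * (np.trace(inv_q @ cov_p) + d @ inv_q @ d
                  - mu_p.size
                  + np.log(np.linalg.det(cov_q)
                           / np.linalg.det(cov_p)))

def mixing_gate(d_kl, lam=0.05):
    # f(D_KL, lambda) from Eq. (equ:KLD); lam = 0.05 is an
    # assumed value inside the reported range of 0.01 to 0.10.
    return np.exp(-lam * d_kl)
\end{verbatim}
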
However, \eqref{equ:KLD} only works reliably if the measurement noise is within reasonable limits, because the supporting filter depends solely on the measurements.
Wi-Fi in particular serves as the main source for estimation, and thus attenuated or poor Wi-Fi readings cause $D_{\text{KL}}$ to grow even if the dominant filter provides a good position estimate.
In such scenarios, a lower diversity and a higher focus of the particle set, as provided by the dominant filter, are required.
We achieve this by introducing a Wi-Fi quality factor, allowing the supporting filter to pick particles from the dominant filter while preventing the latter from doing the reverse.
The quality factor is defined by
%
\begin{equation}
\newcommand{\leMin}{l_\text{min}}
\newcommand{\leMax}{l_\text{max}}
q(\mObsVec_t^{\mRssiVec_\text{wifi}}) =
\max \left(0,
\min \left(
\frac{\bar\mRssi_\text{wifi} - \leMin}{\leMax - \leMin},
1
\right)
\right)
\label{eq:wifiQuality}
\end{equation}
%
where $\bar\mRssi_\text{wifi} = \frac{1}{n} \sum_{i = 1}^{n} \mRssi_i$ is the average of all $n$ signal-strength measurements received from the observation $\mObsVec_t$, and lower and upper bounds are given by $l_\text{min}$ and $l_\text{max}$.
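
A minimal sketch of \eqref{eq:wifiQuality} in Python; the default bounds are illustrative placeholders, not values from the paper:
%
\begin{verbatim}
import numpy as np

def wifi_quality(rssi, l_min=-90.0, l_max=-60.0):
    # Clamp the normalised mean RSSI to [0, 1]; a weak average
    # signal yields a low quality factor.
    mean_rssi = np.mean(rssi)
    return float(np.clip((mean_rssi - l_min)
                         / (l_max - l_min), 0.0, 1.0))
\end{verbatim}
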
To incorporate all this within the IMMPF, we utilize a non-trivial Markov switching process.
This is done by updating the Markov transition matrix $\Pi_t$ at every time step $t$.
As a reminder, $\Pi_t$ strongly influences the mixing process in \eqref{equ:immpModeMixing2}.
Considering the measures presented above, $\Pi_t$ is a $2 \times 2$ matrix given by
%
\begin{equation}
\Pi_t =
\begin{pmatrix}
f(D_{\text{KL}}, \lambda) & 1 - f(D_{\text{KL}}, \lambda) \\
1 - q(\mObsVec_t^{\mRssiVec_\text{wifi}}) & q(\mObsVec_t^{\mRssiVec_\text{wifi}})\\
\end{pmatrix}
\enspace .
\label{equ:immpMatrix}
\end{equation}
%
This matrix is the centrepiece of our approach.
It is responsible for controlling and satisfying both the need for diversity and the need for focus of the whole localisation approach.
Therefore, recovering from sample impoverishment or degeneracy depends to a large extent on $\Pi_t$.
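
Putting both measures together, the per-time-step update of $\Pi_t$ can be sketched as follows, reusing the illustrative helpers from the previous sketches:
%
\begin{verbatim}
import numpy as np

def transition_matrix(d_kl, mean_rssi, lam, l_min, l_max):
    # Row 1: dominant filter, gated by the KLD measure;
    # row 2: supporting filter, gated by the Wi-Fi quality.
    f = np.exp(-lam * d_kl)
    q = float(np.clip((mean_rssi - l_min)
                      / (l_max - l_min), 0.0, 1.0))
    return np.array([[f, 1.0 - f],
                     [1.0 - q, q]])
\end{verbatim}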