part filt chapter finished, immpf nearly finished
@@ -1,4 +1,7 @@
\begin{abstract}
Nothing here yet; the abstract will be added soon.
\end{abstract}

%\begin{IEEEkeywords} indoor positioning, Monte Carlo smoothing, particle smoothing, sequential Monte Carlo\end{IEEEkeywords}

@@ -13,7 +13,7 @@ At first, new particles are drawn according to some importance distribution, tho
%introduce transition and evaluation
In practice, the importance distribution is often represented by the state transition, which models the dynamics of the system.
A new weight is then obtained by the state evaluation given the different sensor measurements.
Most localisation approaches differ mainly in how the transition and evaluation steps are implemented and how the available sensors are incorporated \cite{Nurminen13-PSI, Ebner2016OPN, Hilsenbeck2014}.
However, as \cite{Li2014} already pointed out, the particle filter (and nearly all of its modifications) continues to suffer from two notorious problems: sample degeneracy and impoverishment.

As one can imagine, after a few iterations of continuously reweighting the particles, the weight concentrates on only a few particles.
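
For illustration only (not part of the original text), the following minimal Python sketch shows repeated reweighting and the effective sample size, which collapses as the weight mass concentrates on a few particles; all values are hypothetical:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

N = 1000
particles = rng.uniform(0.0, 10.0, size=N)   # hypothetical 1D states
weights = np.full(N, 1.0 / N)                # uniform initial weights

def reweight(particles, weights, observation, sigma=0.5):
    """One evaluation step: multiply by the observation likelihood."""
    likelihood = np.exp(-0.5 * ((particles - observation) / sigma) ** 2)
    weights = weights * likelihood
    return weights / weights.sum()

def effective_sample_size(weights):
    """A small ESS indicates that the weight concentrates on few particles."""
    return 1.0 / np.sum(weights ** 2)

for observation in [5.0, 5.1, 4.9, 5.05]:    # repeated reweighting
    weights = reweight(particles, weights, observation)
    print(f"ESS = {effective_sample_size(weights):.1f} of {N}")
\end{verbatim}
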
@@ -1,7 +1,130 @@
\section{IMMPF and Mixing}
\label{sec:immpf}

In the previous section, we introduced a standard particle filter, an evaluation step and two different transition models.
Using these, we are able to implement two different localisation schemes.
One provides high diversity with a robust but uncertain position estimate.
The other keeps the localisation error low by using a very realistic propagation model, but is prone to sample impoverishment \cite{Ebner-15}.
In the following, we will combine these filters using the Interacting Multiple Model Particle Filter (IMMPF) and a non-trivial Markov switching process.

%introduce the IMMPF
Consider a jump Markov non-linear system that is represented by different particle filters as its state space description and whose characteristics change over time according to a Markov chain.
The posterior distribution is then described by
%
\begin{equation}
p(\mStateVec_{t}, m_t \mid \mObsVec_{1:t}) = P(m_t \mid \mObsVec_{1:t}) p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t})
\label{equ:immpfPosterior}
\end{equation}
%
where $m_t\in M\subset \mathbb{N}$ is the modal state of the system \cite{Driessen2005}.
Given \eqref{equ:immpfPosterior} and \eqref{equ:bayesInt}, the mode-conditioned filtering stage can be written as
%
\begin{equation}
\arraycolsep=1.2pt
\begin{array}{ll}
p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t}) \propto
&p(\mObsVec_{t} \mid m_t, \mStateVec_{t})\\
&\int p(\mStateVec_{t} \mid \mStateVec_{t-1}, m_t, \mObsVec_{t-1})\\
&p(\mStateVec_{t-1} \mid m_{t-1}, \mObsVec_{1:t-1})d\vec{q}_{t-1}
\end{array}
\label{equ:immpfFiltering}
\end{equation}
%
and the posterior mode probabilities are calculated by
%
\begin{equation}
P(m_t \mid \mObsVec_{1:t}) \propto p(\mObsVec_{t} \mid m_t, \mObsVec_{1:t-1}) P(m_t \mid \mObsVec_{1:t-1})
\enspace .
\label{equ:immpModeProb}
\end{equation}
%
It should be noted that \eqref{equ:immpfFiltering} and \eqref{equ:immpModeProb} are not normalised, so an explicit normalisation step is required.
To provide a solution for the predicted mode probability $P(m_t \mid \mObsVec_{1:t-1})$, the recursion for $m_t$ in \eqref{equ:immpfPosterior} is now obtained from the mixing stage \cite{Driessen2005}.
Here, we compute
%
\begin{equation}
\arraycolsep=1.2pt
\begin{split}
&p(\mStateVec_{t} \mid m_{t+1}, \mObsVec_{1:t}) = \\
& \sum_{m_t} P(m_t \mid m_{t+1}, \mObsVec_{1:t}) p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t})
\end{split}
\label{equ:immpModeMixing}
\end{equation}
%
with
%
\begin{equation}
P(m_t \mid m_{t+1}, \mObsVec_{1:t}) = \frac{P(m_{t+1} \mid m_t) P(m_t \mid \mObsVec_{1:t})}{P(m_{t+1} \mid \mObsVec_{1:t})}
\label{equ:immpModeMixing2}
\end{equation}
%
and
%
\begin{equation}
P(m_{t+1} \mid \mObsVec_{1:t}) = \sum_{m_t}{P(m_{t+1} \mid m_t) P(m_t \mid \mObsVec_{1:t})}
\enspace ,
\label{equ:immpModeMixing3}
\end{equation}
%
where \eqref{equ:immpModeMixing} is a weighted sum of distributions and the weights are provided through \eqref{equ:immpModeMixing2}.
Note that \eqref{equ:immpModeMixing3} is exactly the predicted mode probability that is used as $P(m_t \mid \mObsVec_{1:t-1})$ at the next time step.
The transition probability $P(m_{t+1} = k \mid m_t = l)$ is given by the Markov transition matrix $[\Pi_t]_{kl}$.
Sampling from \eqref{equ:immpModeMixing} is done by first drawing a modal state $m_t$ from $P(m_t \mid m_{t+1}, \mObsVec_{1:t})$ and then drawing a state $\mStateVec_{t}$ from $p(\mStateVec_{t} \mid m_t, \mObsVec_{1:t})$ conditioned on that $m_t$.
In the context of particle filtering, this means that \eqref{equ:immpModeMixing} enables us to pick particles from all available modes according to the discrete distribution $P(m_t \mid m_{t+1}, \mObsVec_{1:t})$.
Further, the number of particles in each mode can be selected independently of the actual mode probabilities.

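Purely as an illustrative Python sketch (hypothetical data structures and values, not the authors' implementation), the mixing stage could be realised as follows:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: two modes, each represented by a weighted particle set.
particles = {0: rng.normal(0.0, 1.0, size=(500, 2)),
             1: rng.normal(0.5, 2.0, size=(500, 2))}
weights = {m: np.full(len(p), 1.0 / len(p)) for m, p in particles.items()}

post_mode = np.array([0.7, 0.3])       # P(m_t | o_{1:t})
trans = np.array([[0.9, 0.1],          # Markov transition matrix [Pi]_{kl},
                  [0.2, 0.8]])          # rows: m_{t+1}=k, columns: m_t=l

def mixing(m_next, n_out):
    """Draw n_out particles for mode m_{t+1} from the mixture (equ:immpModeMixing)."""
    # P(m_t | m_{t+1}, o_{1:t}) via equ:immpModeMixing2 / equ:immpModeMixing3
    mix = trans[m_next] * post_mode
    mix /= mix.sum()
    out = np.empty((n_out, 2))
    for i in range(n_out):
        m = rng.choice(len(mix), p=mix)                # draw modal state m_t
        j = rng.choice(len(weights[m]), p=weights[m])  # draw a particle of that mode
        out[i] = particles[m][j]
    return out                                         # equally weighted with 1/n_out

mixed = {m: mixing(m, 500) for m in particles}
\end{verbatim}
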
Algorithm \ref{alg:immpf} shows the complete IMMPF procedure in detail.
As prior knowledge, $M$ initial mode probabilities $P(m_1 \mid \mObsVec_{1})$ and initial distributions $p(\mStateVec_{1} \mid m_1, \mObsVec_{1})$, each represented by a particle set $\{W^i_{1}, \vec{X}^i_{1} \}_{i=1}^N$, are available.

\begin{algorithm}[t]
\caption{IMMPF Algorithm}
\label{alg:immpf}
\begin{algorithmic}[1] % The number tells where the line numbering should start
\Statex{\textbf{Input:} Prior $P(m_1 \mid \mObsVec_{1})$ and $p(\mStateVec_{1} \mid m_1, \mObsVec_{1})$}
\Statex{~}
\For{$m_t = 1$ \textbf{to} $M$} \Comment{Mixing}
\For{$i = 1$ \textbf{to} $N_{m_t}$}
\State Sample $m^i_{t-1} \sim P(m_{t-1} \mid m_{t}, \mObsVec_{1:t-1})$
\State Sample $\vec{X}^{i, m_t}_{t-1} \sim p(\mStateVec_{t-1} \mid m^i_{t-1}, \mObsVec_{1:t-1})$
\State Set $W^{i, m_t}_{t-1}$ to $\frac{1}{N_{m_t}}$
\EndFor
\EndFor
\Statex{~}
\Statex \textbf{Run:} Parallel filtering for each $m_t \in M$ \Comment{Filtering}
\For{$i = 1$ \textbf{to} $N_{m_t}$}
\State Sample $\vec{X}_t^{i,m_t} \sim p(\vec{X}_t^{i,m_t} \mid \vec{X}_{t-1}^{i,m_t})$\Comment{Transition}
\State Compute $W^{i,m_t}_t \propto p(\vec{o}_t \mid \vec{X}_{t}^{i, m_t})$ \Comment{Evaluation}
\EndFor
\State Calculate $\lambda_t^{m_t} = \sum_{i=1}^{N_{m_t}} W^{i, m_t}_t$
\State Normalise $W^{i,m_t}_t$ using $\lambda_t^{m_t}$
\State Resample $\{W_{t}^{i,m_t}, \vec{X}_{t}^{i,m_t} \}$ to obtain $N_{m_t}$ new equally-weighted particles $\{\frac{1}{N_{m_t}}, \overline{\vec{X}}_{t}^{i,m_t} \}$
\vspace{0.1cm}
\State Estimate $P(m_t \mid \mObsVec_{1:t}) = \frac{\lambda_t^{m_t} P(m_t \mid \mObsVec_{1:t-1})}{\sum_{m=1}^M \lambda_t^{m} P(m \mid \mObsVec_{1:t-1})}$
\end{algorithmic}
\end{algorithm}

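As a minimal Python sketch with hypothetical inputs, the likelihood-based mode probability update at the end of Algorithm \ref{alg:immpf} amounts to:
\begin{verbatim}
import numpy as np

# Hypothetical unnormalised weights per mode after the evaluation step.
unnorm_weights = {0: np.array([0.20, 0.05, 0.10]),
                  1: np.array([0.01, 0.02, 0.02])}
pred_mode = np.array([0.6, 0.4])   # P(m_t | o_{1:t-1})

# lambda_t^{m_t}: sum of unnormalised weights per mode
lam = np.array([unnorm_weights[m].sum() for m in sorted(unnorm_weights)])

# normalise the weights within each mode
norm_weights = {m: w / w.sum() for m, w in unnorm_weights.items()}

# posterior mode probabilities P(m_t | o_{1:t})
post_mode = lam * pred_mode
post_mode /= post_mode.sum()
print(post_mode)
\end{verbatim}
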
%basic idea of why the matrix is chosen this way
With the above, we are finally able to combine the two filters described in section \ref{sec:rse}.
The basic idea is that the more restrictive filter should provide the state estimation, due to its high accuracy.
The other filter serves as a support whenever the more accurate one runs into sample impoverishment, since its transition is very robust and maintains a high diversity.
That is, whenever we detect that the accurate filter is getting stuck, particles are drawn from the robust filter in order to prevent sample impoverishment.
To detect this situation, we use the Kullback-Leibler divergence (KLD) between the two mode-conditioned distributions.

However, this only works as long as the \docWIFI{} measurements are stable, since the robust filter depends completely on them and permits jumps induced by \docWIFI{}.
In other words, poor (attenuated) \docWIFI{} leads to a high KLD, and the robust filter then delivers very poor results.
To prevent this, not only should the accurate filter be able to draw particles from the robust one, but also vice versa, depending on the current \docWIFI{} quality.
If the quality is poor, particles are drawn from the accurate filter, since the restrictive transition and the pedestrian dead-reckoning (PDR) approach contribute additional knowledge.
The quality itself is assessed by a heuristic measure.

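The KLD between the two mode-conditioned particle distributions can, for instance, be approximated by binning both particle sets onto the shared graph vertices; the following Python sketch is purely illustrative (hypothetical helper, not the measure used in this work):
\begin{verbatim}
import numpy as np

def kld_between_particle_sets(verts_p, w_p, verts_q, w_q, n_vertices, eps=1e-9):
    """Discrete KL divergence D(P||Q) of two weighted particle sets that
    were projected onto the vertices of the shared navigation graph."""
    p = np.full(n_vertices, eps)
    q = np.full(n_vertices, eps)
    np.add.at(p, verts_p, w_p)
    np.add.at(q, verts_q, w_q)
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical example: particles of both filters mapped to vertex indices.
rng = np.random.default_rng(2)
verts_a = rng.integers(0, 50, size=500)      # restrictive filter
verts_b = rng.integers(0, 50, size=500)      # robust filter
w = np.full(500, 1.0 / 500)
print(kld_between_particle_sets(verts_a, w, verts_b, w, n_vertices=50))
\end{verbatim}
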
All of this can be realised by a non-trivial Markov switching process. The Markov transition matrix at time $t$ is then given by
%
\begin{equation}
d
\enspace ,
\label{equ:immpMatrix}
\end{equation}
%

@@ -1,12 +1,11 @@
\section{Standard Particle Filtering}
\label{sec:rse}

In this section, we present two common localisation schemes based on particle filtering, using two different transition models for propagating new states and an identical evaluation step for updating the weights.

As mentioned before, we consider indoor localisation as a time-sequential, non-linear and non-Gaussian state estimation problem.
Recursive filters, like the aforementioned particle filter, use all observations $\mObsVec_{t}$ up to the current time $t$ to compute an estimate of the hidden state $\mStateVec_{t}$.
In a Bayesian setting, this can be formalised as the computation of the posterior distribution:
%
\begin{equation}
\arraycolsep=1.2pt
@@ -19,45 +18,163 @@ Therefore, a Bayes filter that satisfies the Markov property is used to calculat
\label{equ:bayesInt}
\end{equation}
%
Here, the previous observation $\mObsVec_{t-1}$ is included into the state transition \cite{Ebner-15}.
For approximating $p(\mStateVec_{t} \mid \mObsVec_{1:t})$ with a particle filter, a sample set of $N$ independent random variables, $\vec{X}^i_{t} \sim p(\mStateVec_t \mid \mObsVec_{1:t})$ for $i = 1,...,N$, is used.
Due to importance sampling, a weight $W^i_t$ is assigned to each sample $\vec{X}^i_{t}$.
A particle set of the filter is then given by $\{W^i_{1:t}, \vec{X}^i_{1:t} \}_{i=1}^N$.
The transition is used as proposal distribution, which is also known as the CONDENSATION algorithm \cite{Isard98:CCD}.
This algorithm also performs a traditional resampling step.

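For illustration, a generic Python sketch of one such filtering iteration, transition as proposal, weighting by the observation likelihood and systematic resampling, is given below (hypothetical one-dimensional models, not the implementation used in this work):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N = 1000
X = rng.uniform(0.0, 20.0, size=N)          # hypothetical 1D particle states
W = np.full(N, 1.0 / N)

def transition(X, sigma_move=0.3):
    """Proposal: propagate the particles with the state transition."""
    return X + rng.normal(0.0, sigma_move, size=X.shape)

def likelihood(X, obs, sigma_obs=1.0):
    """Evaluation: observation likelihood p(o_t | q_t)."""
    return np.exp(-0.5 * ((X - obs) / sigma_obs) ** 2)

def resample(X, W):
    """Systematic resampling to N equally weighted particles."""
    positions = (rng.random() + np.arange(N)) / N
    idx = np.minimum(np.searchsorted(np.cumsum(W), positions), N - 1)
    return X[idx], np.full(N, 1.0 / N)

for obs in [10.0, 10.3, 10.7]:
    X = transition(X)
    W = W * likelihood(X, obs)
    W /= W.sum()
    X, W = resample(X, W)
\end{verbatim}
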
For indoor localisation we define the hidden state $\mStateVec$ as follows:
\begin{equation}
\mStateVec = (x, y, z, \mStateHeading, \mStatePressure),\enskip
x, y, z, \mStateHeading, \mStatePressure \in \R \enspace,
\end{equation}
%
where $x, y, z$ represent the position in 3D space, $\mStateHeading$ the user's heading and $\mStatePressure$ the relative atmospheric pressure prediction in hectopascal (hPa).
A particle is therefore a weighted representation of one possible system state $\mStateVec$. All relevant sensor measurements are incorporated into the observation $\mObsVec$ given by
%
\begin{equation}
\mObsVec = (\mObsHeading, \mObsSteps, \mRssiVec_\text{wifi}, \mObsPressure) \enspace,
\end{equation}
%
where $\mObsHeading$ and $\mObsSteps$ describe the relative angular change and the number of steps detected for the pedestrian.
Further, $\mRssiVec_\text{wifi}$ contains the measurements of all nearby \docAP{}s (\docAPshort{}).
Finally, $\mObsPressure$ is the relative barometric pressure with respect to a fixed reference.

We assume statistical independence of all sensors. The probability density of the state evaluation is then given by the following components:
%
\begin{equation}
%\begin{split}
p(\vec{o}_t \mid \vec{q}_t) =
p(\vec{o}_t \mid \vec{q}_t)_\text{baro}
\,p(\vec{o}_t \mid \vec{q}_t)_\text{wifi}
\enspace
%\end{split}
\label{eq:evalBayes}
\end{equation}
%
and is therefore similar to \cite{Ebner-16}.
Every single component refers to a probabilistic sensor model.
The current pressure value is evaluated using $p(\vec{o}_t \mid \vec{q}_t)_\text{baro}$ and absolute position information is given by $p(\vec{o}_t \mid \vec{q}_t)_\text{wifi}$ for \docWIFI{}.

\subsection{Evaluation}

%Barometer
First, the smartphone’s barometer is used to infer the likelihood of the current $z$-location.
Due to sensor noise, we calculate the average $\overline{\mObsPressure}$ of several sensor readings as well as the sensor's uncertainty $\sigma_\text{baro}$.
This average serves as the relative baseline for all future measurements and is determined while the pedestrian chooses the destination \cite{Fetzer2016OMC}.

The evaluation step for time $t$ is given by
%
\begin{equation}
p(\mObsVec_t \mid \mStateVec_t)_\text{baro} = \mathcal{N}(\mObs_t^{\mObsPressure} \mid \mState_t^{\mStatePressure}, \sigma_\text{baro}^2) \enspace.
\label{eq:baroEval}
\end{equation}
%
Here, every predicted relative pressure $\mState_t^{\mStatePressure}$ is compared with the observed one, $\mObs_t^{\mObsPressure}$, using a normal distribution.
The state's relative pressure prediction $\mStatePressure$ is estimated within each transition from $\mStateVec_{t-1}$ to $\mStateVec_t$ by tracking every height change ($z$-axis):
%
\begin{equation}
\mState_{t}^{\mStatePressure} = \mState_{t-1}^{\mStatePressure} + \Delta z \cdot b
,\enskip
\Delta z = \mState_{t-1}^{z} - \mState_{t}^z
,\enskip
b \in \R
\enspace ,
\label{eq:baroTransition}
\end{equation}
%
where $b$ denotes the typical pressure change in $\frac{\text{hPa}}{\text{m}}$.

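For illustration only, a minimal Python sketch of \eqref{eq:baroEval} and \eqref{eq:baroTransition} with hypothetical parameter values (assuming $b \approx 0.12$ hPa per metre near sea level):
\begin{verbatim}
import math

B = 0.12           # assumed pressure change in hPa per metre
SIGMA_BARO = 0.05  # assumed sensor uncertainty in hPa

def predict_pressure(p_prev, z_prev, z_new, b=B):
    """eq:baroTransition - track the height change of the particle."""
    return p_prev + (z_prev - z_new) * b

def baro_likelihood(p_pred, p_obs, sigma=SIGMA_BARO):
    """eq:baroEval - compare predicted and observed relative pressure."""
    return math.exp(-0.5 * ((p_obs - p_pred) / sigma) ** 2) \
        / (sigma * math.sqrt(2.0 * math.pi))

# Hypothetical particle moving one floor (3 m) down:
p_pred = predict_pressure(p_prev=0.0, z_prev=3.0, z_new=0.0)
print(p_pred, baro_likelihood(p_pred, p_obs=0.4))
\end{verbatim}
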
%WI-FI
Signal strength models are a well-established and popular method for estimating a pedestrian's position in indoor environments.
We use the wall attenuation factor model based on the Friis transmission equation to predict an \docAP{}’s (\docAPshort{}) signal strength at an arbitrary position $\mStateVec_t$ \cite{Ebner-15}.
Here, the positions of the detected \docAPshort{}s are known beforehand.
The main advantage of this approach is that neither a time-consuming initial calibration phase nor updates in case of infrastructural changes are needed.
Using the 3D distance $\mMdlDist$ and the number of floors $\Delta f$ between the transmitter and the state in question, the model can be described by
%
\begin{equation}
P_r(\mMdlDist, \Delta f) = \mTXP - 10 \mPLE \log_{10}{\frac{\mMdlDist}{\mMdlDist_0}} + \Delta{f} \mWAF \enspace ,
\label{eq:waf}
\end{equation}
%
where $\mTXP$ contains the AP’s signal strength at the reference distance $\mMdlDist_0$ and $\mPLE$ models the signal’s attenuation with growing distance.
The attenuation per floor is described by $\mWAF$.
To reduce the system's setup time, we use the same parameter values for all \docAP{}s, at the cost of some accuracy.

By assuming statistical independence, the overall probability can be determined using
%
\begin{equation}
\mProb(\mObsVec_t \mid \mStateVec_t)_\text{wifi} =
\prod\limits_{i=1}^{n} \mathcal{N}(\mRssi_\text{wifi}^{i} \mid P_{r}(\mMdlDist_{i}, \Delta{f_{i}}), \sigma_{\text{wifi}}^2) \enspace .
\label{eq:wifiTotal}
\end{equation}
The uncertainty of the measurements is given by $\sigma_{\text{wifi}}$. More details on this approach and possible extensions can be found in \cite{Ebner-15} and \cite{Ebner-17}.

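A small illustrative Python sketch of \eqref{eq:waf} and \eqref{eq:wifiTotal}, evaluated in the log domain for numerical stability (all parameter values are hypothetical placeholders):
\begin{verbatim}
import numpy as np

TX_POWER = -40.0   # assumed signal strength at the reference distance (dBm)
PLE = 2.5          # assumed path-loss exponent
WAF = -8.0         # assumed attenuation per floor (dB)
D0 = 1.0           # reference distance (m)
SIGMA_WIFI = 6.0   # assumed measurement uncertainty (dB)

def predicted_rssi(dist, delta_floors):
    """eq:waf - wall attenuation factor model."""
    return TX_POWER - 10.0 * PLE * np.log10(dist / D0) + delta_floors * WAF

def wifi_log_likelihood(rssi_meas, dists, delta_floors, sigma=SIGMA_WIFI):
    """eq:wifiTotal as a sum of per-AP Gaussian log densities."""
    mu = predicted_rssi(dists, delta_floors)
    return float(np.sum(-0.5 * ((rssi_meas - mu) / sigma) ** 2
                        - np.log(sigma * np.sqrt(2.0 * np.pi))))

# Hypothetical measurement of three access points from one particle position:
rssi = np.array([-60.0, -72.0, -80.0])
dists = np.array([8.0, 20.0, 15.0])
floors = np.array([0, 0, 1])
print(wifi_log_likelihood(rssi, dists, floors))
\end{verbatim}
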
\subsection{Transition}

As mentioned before, we are searching for a solution that satisfies both sample diversity and focus.
In the following, two very different transition models, each providing one of these abilities, are presented.

The first transition model is based upon random walks on a graph $G=(V,E)$ with vertices $\mVertexA \in V$ and undirected edges $\mEdgeAB \in E$, which is generated from the building's floorplan \cite{Ebner-16}.
Starting at the vertex of the position $\fPos{\mStateVec_{t-1}} = (x, y, z)^T$, a new particle is sampled by walking along adjacent vertices in a given walking direction $\gHead$ until a distance $\gDist$ is reached \cite{Ebner-15}.
During the random walk, each edge has its own probability $p(\mEdgeAB)$, which depends on the edge’s direction $\angle \mEdgeAB$ and the pedestrian’s current heading $\gHead$.
The edge to be walked is thus drawn according to their resemblance:
%
\begin{equation}
p(\mEdgeAB)_\text{head} = p(\mEdgeAB \mid \gHead) = \mathcal{N} (\angle \mEdgeAB \mid \gHead, \sigma_\text{head}^2)
\enspace .
\label{eq:transHeading}
\end{equation}
%
While the distribution \refeq{eq:transHeading} does not integrate to $1.0$ due to the circularity of angular data, the normal distribution can be assumed to be sufficient in our case for small enough $\sigma_\text{head}^2$.

To provide $\gHead$ and $\gDist$, steps and turns are detected using the smartphone's IMU, implemented as described in \cite{Ebner-15}.
The number of steps detected since the last transition is used to estimate the distance to be walked $\gDist$ by assuming a fixed step size with some deviation:
%
\begin{equation}
\gDist = \mObs_{t-1}^{\mObsSteps} \cdot \mStepSize + \mathcal{N}(0, \sigma_{\gDist}^2)
\enspace .
\end{equation}
%
The turn detection supplies the magnitude of the detected heading change by integrating the gyroscope's readings since the last transition.
Together with some deviation and the state's previous heading, this magnitude is used to estimate the current state's heading:
%
\begin{equation}
\gHead = \mState_{t}^{\mStateHeading} = \mState_{t-1}^{\mStateHeading} + \mObs_{t-1}^{\mObsHeading} + \mathcal{N}(0, \sigma_{\gHead}^2)
\enspace .
\end{equation}
%
All of this provides a very focused propagation of new particles and draws only valid movements, as ambient conditions (walls, doors, stairs, etc.) are taken into account.
Additionally, the graph-based approach offers plenty of scope for further extensions, as can be seen in \cite{Ebner2016OPN}.

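The following Python sketch illustrates the idea of such a graph-based random walk under strongly simplified assumptions (hypothetical graph structure and parameter values; the actual implementation follows \cite{Ebner-15}):
\begin{verbatim}
import math
import random

random.seed(4)

# Hypothetical navigation graph: vertex -> list of (neighbour, angle in rad, length in m)
GRAPH = {0: [(1, 0.0, 2.0), (2, math.pi / 2, 2.0)],
         1: [(0, math.pi, 2.0), (2, math.pi / 4, 2.5)],
         2: [(0, -math.pi / 2, 2.0), (1, -3 * math.pi / 4, 2.5)]}

def heading_prob(edge_angle, heading, sigma=0.5):
    """eq:transHeading - resemblance of edge direction and pedestrian heading."""
    diff = math.atan2(math.sin(edge_angle - heading), math.cos(edge_angle - heading))
    return math.exp(-0.5 * (diff / sigma) ** 2)

def random_walk(start_vertex, heading, distance):
    """Walk along adjacent vertices until the estimated distance is covered."""
    v, walked = start_vertex, 0.0
    while walked < distance:
        edges = GRAPH[v]
        probs = [heading_prob(a, heading) for (_, a, _) in edges]
        total = sum(probs)
        nxt = random.choices(edges, weights=[p / total for p in probs])[0]
        v = nxt[0]
        walked += nxt[2]
    return v

steps, step_size = 3, 0.7
distance = steps * step_size + random.gauss(0.0, 0.1)
print(random_walk(0, heading=0.2, distance=distance))
\end{verbatim}
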
The second transition model is very simple and thus often used in scenarios where the sensors provide only little or rough information, especially in cases without any regular information about the pedestrian's movement.
This continuous model simply moves in a random direction, ignoring the graph and thus any floorplan knowledge:
%
\begin{equation}
\begin{split}
\mProb(\mStateVec_{t} \mid \mStateVec_{t-1}) &=
\mathcal{N}\left(
\fPos{\mStateVec_{t}}
\mid{}
\fPos{\mStateVec_{t-1}},
\mat{\Sigma}_{\text{move}}
\right),\\
\mat{\Sigma}_{\text{move}} &=
\begin{pmatrix}
\sigma_{\text{move}} & 0 & 0\\
0 & \sigma_{\text{move}} & 0\\
0 & 0 & \sigma_{\text{floor}}\\
\end{pmatrix}
\end{split}
\label{eq:simpleTrans}
\end{equation}
%
The only restriction made is that newly drawn particles need to lie within the graph's boundaries and therefore have a valid vertex for $\fPos{\mStateVec_{t}}$.
If a particle does not satisfy this condition, the position of the nearest available vertex is chosen instead.
This ensures that a particle always resides on a valid vertex $\mVertexA$, which will be of importance for the upcoming IMMPF.

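A short illustrative Python sketch of this transition, including the snap to the nearest vertex (hypothetical vertex list and parameters; the entries of $\mat{\Sigma}_{\text{move}}$ are treated as standard deviations here):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)

SIGMA_MOVE = 1.5    # assumed spread in x/y (m)
SIGMA_FLOOR = 0.2   # assumed spread in z (m)

# Hypothetical graph vertices (x, y, z) spanning the valid area.
VERTICES = rng.uniform([0, 0, 0], [50, 30, 9], size=(200, 3))

def simple_transition(pos):
    """eq:simpleTrans - Gaussian move, then snap to the nearest valid vertex."""
    cov = np.diag([SIGMA_MOVE ** 2, SIGMA_MOVE ** 2, SIGMA_FLOOR ** 2])
    new_pos = rng.multivariate_normal(pos, cov)
    nearest = VERTICES[np.argmin(np.linalg.norm(VERTICES - new_pos, axis=1))]
    return nearest

print(simple_transition(np.array([10.0, 5.0, 0.0])))
\end{verbatim}
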
@@ -2863,3 +2863,29 @@ volume = {150},
year = {2003}
}

@inproceedings{Fetzer2016OMC,
author = {T. Fetzer and F. Ebner and L. K{\"o}ping and M. Grzegorzek and F. Deinzer},
title = {{On Monte Carlo Smoothing in Multi Sensor Indoor Localisation}},
booktitle = {Indoor Positioning and Indoor Navigation (IPIN), Int. Conf. on},
year = {2016},
month = {October},
publisher = {IEEE},
address = {Madrid, Spain}
}

@inproceedings{Ebner2016OPN,
author = {F. Ebner and T. Fetzer and M. Grzegorzek and F. Deinzer},
title = {{On Prior Navigation Knowledge in Multi Sensor Indoor Localisation}},
booktitle = {International Conference on Information Fusion (FUSION 2016)},
year = {2016},
month = {July},
publisher = {IEEE},
address = {Heidelberg, Germany}
}