added some comments. more to-do

2018-09-17 19:31:03 +02:00
parent 93082818ef
commit ed46dd65dd
5 changed files with 67 additions and 31 deletions

View File

@@ -26,6 +26,7 @@ In the case of particle filters the MMSE estimate equals the weighted-average
\hat{\mStateVec}_t := \frac{1}{W_t} \sum_{i=1}^{N} w^i_t \mStateVec^i_t \, \text{,}
\end{equation}
\commentByMarkus{Is the notation okay like this?}
\commentByFrank{at first glance this looks to me like a correct weighted average over all particles}
where $W_t=\sum_{i=1}^{N}w^i_t$ is the sum of all weights.
While this produces a good overall result in many situations, it fails when the posterior is multimodal.
In these situations the weighted average places the estimate somewhere between the modes.
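For illustration, a minimal Python sketch of this estimator (the particle data and all names are made up and not part of this work's implementation), showing how the weighted average lands between the modes of a bimodal particle set:
\begin{verbatim}
import numpy as np

def weighted_average_estimate(particles, weights):
    """MMSE estimate: (1 / W_t) * sum_i w_i * x_i."""
    particles = np.asarray(particles, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return (weights[:, None] * particles).sum(axis=0) / weights.sum()

# Bimodal posterior: two equally weighted particle clusters around (0, 0) and (10, 0).
rng = np.random.default_rng(0)
particles = np.vstack([rng.normal([0.0, 0.0], 0.1, size=(500, 2)),
                       rng.normal([10.0, 0.0], 0.1, size=(500, 2))])
weights = np.ones(len(particles))
print(weighted_average_estimate(particles, weights))  # roughly (5, 0): between both modes
\end{verbatim}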

View File

@@ -13,7 +13,7 @@ The probability density of the state evaluation in \eqref{equ:bayesInt} is given
\label{eq:evalBayes}
\end{equation}
%
where every component refers to a probabilistic sensor model and are statistical independent.
where every component refers to a probabilistic sensor model and the components are statistically independent.
The barometer readings are used to determine the current activity $\mObsActivity$, which is then evaluated using $p(\vec{o}_t \mid \vec{q}_t)_\text{act}$.
Absolute positioning information is given by $p(\vec{o}_t \mid \vec{q}_t)_\text{wifi}$ for \docWIFI{}.
@@ -42,6 +42,8 @@ The comparison between a single RSSI measurement $\mRssi_i$ and the reference is
\label{eq:wifiProb}
\end{equation}
\commentByFrank{I would simply use $\sigma_\text{wifi}$. It does not depend on the position $\mPosVec$, and we always used the same value for every AP}
\noindent where $\mu_{i,\mPosVec}$ denotes the (predicted) average signal strength and $\sigma_{i,\mPosVec}^2$ the corresponding variance for the \docAPshort{} identified by $i$, regarding the location $\mPosVec$.
Within this work $\mu_{\mPosVec}$ is calculated by a modified version of the wall-attenuation-factor model as presented in \cite{Ebner-17}. Here, the prediction depends on the 3D distance $d$ from the \docAPshort{} and the number of floors $\Delta f$ between the \docAPshort{} and the position $\mPosVec$ of the state-in-question:
@@ -49,6 +51,21 @@ Within this work $\mu_{\mPosVec}$ is calculated by a modified version of the wal
\mu_{\mPosVec} = \mTXP - 10 \mPLE \log_{10}{\frac{\mMdlDist}{\mMdlDist_0}} + \Delta{f} \mWAF
\label{eq:wallAtt}
\end{equation}
\commentByFrank{
the $i$ you had before should probably go back in here?
we should probably also briefly explain what exactly $d$, or $d_i$ or $d_{i,\mPosVec}$, is.
I also always distinguished between the position in question (e.g. $\vec{\rho}$)
and the position of the access point (e.g. $\mPosVec_i$), i.e. two different symbols, so that this becomes clear.
I don't know, though, whether $\vec{\rho}$ is still free, or what else comes on the following pages.
strictly speaking the $i$ then also belongs on $P_0$ and $\gamma$ and $d_0$.. but for simplicity's sake, mentioning it in the text is enough.
my suggestion would be something like:
}
\begin{equation}
\mu(i,\vec{\rho}) = \mTXP - 10 \mPLE \log_{10}{\frac{\mMdlDist}{\mMdlDist_0}} + \Delta{f} \mWAF
,\enskip
d = \| \vec{\rho} - \mPosVec_i \|
\label{eq:wallAtt}
\end{equation}
\noindent Here, $\mTXP$ is the \docAPshort{}'s signal strength measurable at a known distance $\mMdlDist_0$ (usually \SI{1}{\meter}) and $\mPLE$ denotes the signal's depletion over distance, which depends on the \docAPshort{}'s surroundings like walls and other obstacles.
The attenuation per floor is given by $\mWAF$.
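As a rough illustration of eq. \eqref{eq:wallAtt}, the following Python sketch predicts $\mu$ for a given location and \docAPshort{}; the parameter values, the assumed \SI{3}{\meter} floor height and the clamping of $d$ to $\mMdlDist_0$ are placeholder assumptions, not the settings used in this work:
\begin{verbatim}
import numpy as np

def predict_rssi(rho, ap_pos, p0=-40.0, gamma=2.5, waf=-5.0, d0=1.0, floor_height=3.0):
    """mu = P0 - 10 * gamma * log10(d / d0) + delta_f * WAF (cf. eq. wallAtt)."""
    rho, ap_pos = np.asarray(rho, dtype=float), np.asarray(ap_pos, dtype=float)
    d = max(np.linalg.norm(rho - ap_pos), d0)                # 3D distance, clamped to d0
    delta_f = round(abs(rho[2] - ap_pos[2]) / floor_height)  # floors between location and AP (assumed 3 m each)
    return p0 - 10.0 * gamma * np.log10(d / d0) + delta_f * waf

# Predicted signal strength for a location two floors below the access point.
print(predict_rssi(rho=(12.0, 3.0, 0.0), ap_pos=(10.0, 5.0, 6.0)))
\end{verbatim}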
@@ -57,7 +74,7 @@ For example, a viable choice for steel enforced concrete floors is $\mWAF \appro
Of course, the environmental parameters $\mTXP$, $\mPLE$ and $\mWAF$ need to be known beforehand and often vary greatly between individual \docAPshort{}s.
Nevertheless, for simplicity's sake it is common practice to use fixed, empirically chosen values that are the same for every \docAPshort{}.
This might already provide enough accuracy for some use-cases and buildings, but fails in complex scenarios, as discussed in section \ref{sec:intro}.
Therefore, instead of using a pure empiric model, we deploy a optimization scheme to find a well-suited set of parameters ($\mPosAPVec{}, \mTXP{}, \mPLE{}, \mWAF{}$) per \docAPshort{}, where $\mPosAPVec{} = (x,y,z)^T$ denotes the \docAPshort{}'s position.
Therefore, instead of using a purely empirical model, we deploy an optimization scheme to find a well-suited set of parameters ($\mPosAPVec{}, \mTXP{}, \mPLE{}, \mWAF{}$) per \docAPshort{}, where $\mPosAPVec{} = (x,y,z)^T$ denotes the \docAPshort{}'s estimated position.
The optimization is based on a few reference measurements $s_{\mPosVec}$ throughout the building, e.g. every \SI{3}{} to \SI{5}{\meter} centred within a corridor and between \SI{1}{} and \SI{4}{} references per room, depending on the room's size.
Compared to classical fingerprinting, where reference measurements are recorded on dense grids with \SI{1}{} to \SI{2}{\meter} spacing, this greatly reduces their required number and thus the overall setup-time.
@@ -65,7 +82,7 @@ The target function to optimize the $6$ model parameters for one \docAPshort{} i
\begin{equation}
\epsilon^* =
\argmin_{\mPosAPVec, \mTXP, \mPLE, \mWAF}
\min_{\mPosAPVec, \mTXP, \mPLE, \mWAF}
\sum_{s_{\mPosVec} \in \vec{s}}
(s_{\mPosVec} - \mu_{\mPosVec})^2
\enskip,\enskip\enskip
@@ -74,9 +91,22 @@ The target function to optimize the $6$ model parameters for one \docAPshort{} i
\enspace .
\label{eq:optTarget}
\end{equation}
\commentByFrank{argmin returns the arguments, not the error. It should just be min}
\commentByFrank{the $i$, or rather the function $\mu()$, would have to go in here as well. My suggestion would then be:}
\begin{equation}
(\mPosAPVec, \mTXP, \mPLE, \mWAF)_i =
\argmin_{\mPosAPVec, \mTXP, \mPLE, \mWAF}
\sum_{s_{i,\vec{\rho}} \in \vec{s}_i}
\big(s_{i,\vec{\rho}} - \mu(i,\vec{\rho}) \big)^2
\enspace .
\label{eq:optTarget}
\end{equation}
%\commentByFrank{argmin returns the arguments, not the error. It should just be min}
\commentByFrank{
here we urgently need a distinction between the two positions: the one of the fingerprint and the one of the AP.
$\mPosAPVec$ is, because of the $\hat{ }$, simply the \emph{best} one. But it is generally different from the fingerprint.
That's why we need two distinct symbols there.
And we have to be consistent about whether we carry the $i$ along or not, otherwise it looks confusing
}
\noindent Here, one minimizes the squared error between reference measurements $s_{\mPosVec} \in \vec{s}$ with well-known location $\mPosVec$ and the corresponding model predictions $\mu_{\mPosVec}$ (cf. eq. \eqref{eq:wallAtt}).
The number of floors between $\mPosVec$ and $\mPosAPVec$ is again given by $\Delta f$.
As discussed by \cite{Ebner-17}, optimizing all 6 parameters, especially the unknown \docAPshort{} position $\mPosAPVec$, usually means optimizing a non-convex, discontinuous function.
@@ -89,7 +119,8 @@ During each iteration, the best \SI{25}{\percent} of the population are kept.
The remaining entries are then re-created by modifying the best entries with uniform random values within $\pm$\SI{10}{\percent} of the known limits.
Inspired by {\em cooling} known from simulated annealing \cite{Kirkpatrick83optimizationby}, the result is stabilized by narrowing the allowed modification limits over time, thus decreasing the probability of accepting worse solutions.
\todo{Do we want to describe this in more detail? I.e. how exactly does the cooling work? This is all kept very vague}
\commentByToni{Do we want to describe this in more detail? I.e. how exactly does the cooling work? This is all kept very vague}
\commentByFrank{I would leave it as it is. There is plenty in the literature on ideas and potential approaches}
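For illustration, a rough Python sketch of such a population-based scheme; the population size, iteration count, cooling factor and parameter limits are placeholder choices, not the values used in this work:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def objective(params, refs, predict):
    """Sum of squared errors between references and model predictions (cf. eq. optTarget)."""
    ap_pos, p0, gamma, waf = params[:3], params[3], params[4], params[5]
    return sum((s - predict(rho, ap_pos, p0, gamma, waf)) ** 2 for rho, s in refs)

def optimize_ap(refs, predict, limits, pop_size=200, iters=100, cooling=0.97):
    """Keep the best 25% per iteration and perturb them within a shrinking +/-10% range."""
    lo = np.array([l for l, _ in limits], dtype=float)
    hi = np.array([h for _, h in limits], dtype=float)
    pop = rng.uniform(lo, hi, size=(pop_size, len(limits)))
    spread = 0.10 * (hi - lo)                                # +/-10% of the known limits
    for _ in range(iters):
        errors = np.array([objective(p, refs, predict) for p in pop])
        best = pop[np.argsort(errors)[:pop_size // 4]]       # keep the best 25%
        parents = best[rng.integers(0, len(best), pop_size - len(best))]
        children = np.clip(parents + rng.uniform(-spread, spread, parents.shape), lo, hi)
        pop = np.vstack([best, children])
        spread *= cooling                                    # "cooling": narrow the limits over time
    errors = np.array([objective(p, refs, predict) for p in pop])
    return pop[np.argmin(errors)]                            # (x, y, z, P0, gamma, WAF)
\end{verbatim}
Here \texttt{refs} would be a list of (location, RSSI) pairs for one \docAPshort{} and \texttt{predict} any prediction function of the form $\mu(\vec{\rho}, \mPosAPVec, \mTXP, \mPLE, \mWAF)$, e.g. the sketch given after eq. \eqref{eq:wallAtt}.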
To further improve the results, we optimize a model for each floor of the building instead of a single global one, using only the reference measurements that belong to the corresponding floor.
The reason for this comes from the assumptions made in eq. \eqref{eq:wallAtt}.
@@ -145,9 +176,10 @@ Recognizing if the pedestrian is standing or walking requires less prior data, t
Therefore, $\vec{\omega}_\text{l, acc}$ is recommended to be between \SI{1}{\second} and \SI{2}{\second}, while $\vec{\omega}_\text{l, baro}$ should be between \SI{3}{\second} and \SI{5}{\second}.
It should be noted that the window size is a classic trade-off between flexibility and robustness.
The larger the window, the more slowly changes become noticeable, and vice versa.
Of course, the above suggested values are depended upon the particular requirements and used sensors.
Of course, the values suggested above depend on the particular requirements and the sensors used.
However, they should be valid for many modern commercially available smartphones.
\commentByFrank{here you have quotes around the activities, but not yet in the intro. Maybe make this consistent via macros?}
The activity is now evaluated using $p(\vec{o}_t \mid \vec{q}_t)_\text{act}$ by providing a probability based on whether the 3D location $\mPosVec$ of the state-in-question is on a staircase, in an elevator or on the floor.
If the current activity $\mObsActivity$ is recognized as "standing", a $\mPosVec$ located on the floor results in a probability given by $\kappa$, otherwise $1 - \kappa$.
The same applies to "walking up" and "walking down": here a $\mPosVec$ located on one of the possible staircases or elevators yields $\kappa$, and those remaining on the floor $1 - \kappa$.
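A minimal Python sketch of this evaluation; the activity labels, the location classification and the value of $\kappa$ are illustrative assumptions:
\begin{verbatim}
def activity_likelihood(activity, location_type, kappa=0.8):
    """Return kappa if the state's location agrees with the observed activity, 1 - kappa otherwise."""
    if activity == "standing":
        return kappa if location_type == "floor" else 1.0 - kappa
    if activity in ("walking up", "walking down"):
        return kappa if location_type in ("staircase", "elevator") else 1.0 - kappa
    return 1.0  # activities carrying no positional information

print(activity_likelihood("walking up", "staircase"))  # kappa
print(activity_likelihood("standing", "elevator"))     # 1 - kappa
\end{verbatim}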

View File

@@ -22,20 +22,21 @@ We also use a novel approach for finding an exact estimation of the pedestrian's
Many historical buildings, especially bigger ones like castles, monasteries or churches, are built of massive stone walls and have annexes from different historical periods made of different construction materials.
This leads to problems for methods using received signal strengths (RSS) from \docWIFI{} or Bluetooth, due to a high signal attenuation between different rooms.
Many unknown quantities like the walls definitive material or thickness make it expensive to determine important parameters, \eg{} the signal's depletion over distance.
Many unknown quantities, like the walls' exact material or thickness, make it expensive to determine important parameters, \eg{} the signal's depletion over distance.
Additionally, most wireless approaches are based on a line-of-sight assumption.
Thus, the performance will be even more limited due to the irregularly shaped spatial structure of such buildings.
Our approach tries to avoid those problems.
We distribute a small number of simple and cheap \docWIFI{} beacons over the whole building and instead of measuring their position, we use an optimization scheme based on some reference measurements.
We distribute a small number of simple and cheap \docWIFI{} beacons over the whole building and instead of measuring their position, we use an optimization scheme based on a few reference measurements.
An optimization scheme also avoids inaccuracies like wrongly positioned access points or fingerprints caused by outdated or inaccurate building plans.
\commentByFrank{why fingerprints? That confuses me at this point. Do you want to say that optimization is better than using fingerprints at all? If so, it doesn't come across that way. I'm unsure, hence a comment instead of a direct fix}
It is obvious that this could be solved by re-measuring the building; however, this is a very time-consuming process requiring specialized hardware and a surveying engineer.
Clearly, this is contrary to most customers' expectations of a fast-to-deploy and low-cost solution.
In addition, this is not only a question of costs incurred, but also for buildings under monumental protection, what does not allow for larger construction measures.
In addition, this is not only a question of the costs incurred; buildings under monumental protection also do not allow for larger construction measures.
To sum up, this work presents a smartphone-based localization system using a particle filter to incorporate different probabilistic models.
We omit time-consuming approaches like classic fingerprinting or measuring the exact positions of access-points.
Instead we use a simple optimization scheme based on reference measurements to estimate a corresponding Wi-Fi model.
Instead we use a simple optimization scheme based on reference measurements to estimate a corresponding \docWIFI{} model.
The pedestrian's movement is modeled realistically using a navigation mesh, based on the building's floorplan.
A barometer-based activity recognition enables going into the third dimension, and problems arising from multimodalities and impoverishment are taken into account.

View File

@@ -51,16 +51,16 @@ they usually suffer from reduced accuracy for large open spaces, as many impleme
We therefore present a novel technique based on continuous walks along a navigation mesh.
Like the graph, the mesh, consisting of triangles sharing adjacent edges,
is created once during an offline phase, based on the buildings 3D floorplan.
is created once during an offline phase, based on the building's 3D floorplan.
Using large triangles dramatically reduces the memory footprint (a few megabytes for large buildings)
while still increasing the quality (triangle edges directly adhere to architectural edges) and allows
for truly continuous transitions along the surface spanned by all triangles.
%eval - wifi, fingerprinting
The outcomes of the state evaluation process depend heavily on the sensors used.
Most smartphone-based systems are using received signal strength indications (RSSI) given by Wi-Fi or Bluetooth as a source for absolute positioning information.
At this, one can mainly differ between fingerprinting and signal-strength prediction model based solutions \cite{Ebner-17}.
Indoor localization using Wi-Fi fingerprints was first addressed by \cite{radar}.
Most smartphone-based systems use received signal strength indications (RSSI) given by \docWIFI{} or Bluetooth as a source of absolute positioning information.
Here, one can mainly distinguish between solutions based on fingerprinting and those based on signal strength prediction models \cite{Ebner-17}.
Indoor localization using \docWIFI{} fingerprints was first addressed by \cite{radar}.
During a one-time offline phase, a multitude of reference measurements are conducted.
During the online phase, the pedestrian's location is then inferred by comparing those prior measurements against live readings.
Based on this pioneering work, many further improvements were made within this field of research \cite{PropagationModelling, ProbabilisticWlan, meng11}.
@@ -70,20 +70,20 @@ Using robots instead of human workforce might thus be a viable choice, still thi
%wifi, signal strength
Signal strength prediction models are a well-established field of research; they determine signal strengths for arbitrary locations by using an estimation model instead of real measurements.
While many of them are intended for outdoor and line-of-sight purposes \cite{PredictingRFCoverage, empiricalPathLossModel}, they are often applied to indoor use-cases as well \cite{Ebner-17, farid2013recent}.
Besides their solid performance in many different localization solutions, a complex scenario requires a equally complex signal strength prediction model.
Besides their solid performance in many different localization solutions, a complex scenario requires an equally complex signal strength prediction model.
As described in section \ref{sec:intro}, historical buildings represent such a scenario and thus the model has to take many different constraints into account.
An example is the wall-attenuation-factor model \cite{PathLossPredictionModelsForIndoor}.
It introduces an additional parameter to the well-known log distance model \cite{IntroductionToRadio}, which considers obstacles between (line-of-sight) the AP and the location in question by attenuating the signal with a constant value.
It introduces an additional parameter to the well-known log-distance model \cite{IntroductionToRadio}, which considers obstacles between (line-of-sight) the access point (AP) and the location in question by attenuating the signal with a constant value.
Depending on the use-case, this value describes the number and type of walls, ceilings, floors etc. between both positions.
For obstacles, this requires an intersection-test of each obstacle with the line-of-sight, which is costly for larger buildings.
Thus, \cite{Ebner-17} suggests considering only floors/ceilings, which can be calculated without intersection checks and allows for real-time use-cases running on smartphones.
%wifi optimization
To further reduce the setup-time, \cite{WithoutThePain} introduces an approach that works without any prior knowledge.
They use a genetic optimization algorithm to estimate the parameters for a signal strength prediction, including the access points (AP) position, and the pedestrian's locations during the walk.
They use a genetic optimization algorithm to estimate the parameters for a signal strength prediction, including access point positions, and the pedestrian's locations during the walk.
The estimated parameters can be refined using additional walks.
Within this work we present a similar optimization approach for estimating the APs' locations in 3D.
However, instead of taking multiple measuring walks, the locations are optimized based only on some reference measurements, what further decreases the setup-time.
However, instead of taking multiple measuring walks, the locations are optimized based only on some reference measurements, further decreasing the setup-time.
Additionally, we will show that such an optimization scheme can partly compensate for the omission of the aforementioned intersection-tests.
%immpf
@@ -91,20 +91,21 @@ Besides well chosen probabilistic models, the system's performance is also highl
They are often caused by restrictive assumptions about the dynamic system, like the aforementioned sample impoverishment.
The authors of \cite{Sun2013} handled the problem by using an adaptive number of particles instead of a fixed one.
The key idea is to choose a small number of samples if the distribution is focused on a small part of the state space and a large number of particles if the distribution is much more spread out and requires a higher diversity of samples.
The problem of sample impoverishment is then encountered by adapting the number of particles depend upon the systems current uncertainty \cite{Fetzer-17}.
The problem of sample impoverishment is then mitigated by adapting the number of particles depending upon the system's current uncertainty \cite{Fetzer-17}.
\commentByFrank{I believe encountered is the wrong word. You want to say 'it gets fixed', right? addressed? mitigated?}
In practice sample impoverishment is often a problem of environmental restrictions and system dynamics.
In practice, sample impoverishment is often a problem of environmental restrictions and system dynamics.
Therefore, the method above fails, since it is not able to propagate new particles into the state space due to environmental restrictions, e.g. walls or ceilings.
In \cite{Fetzer-17} we deployed an interacting multiple model particle filter (IMMPF) to solve sample impoverishment in such restrictive scenarios.
We combine two particle filters using a non-trivial Markov switching process, depending upon the Kullback-Leibler divergence between both.
However, deploying a IMMPF is in many cases not necessary and produces additional processing overhead.
Thus a much simpler, but heuristic method is presented within this paper.
However, deploying an IMMPF is in many cases not necessary and produces additional processing overhead.
Thus, a much simpler, though heuristic, method is presented within this paper.
%estimation
Finally, as the name recursive state estimation implies, it requires finding the most probable state within the state space to provide the "best estimate" of the underlying problem.
In the discrete setting of a particle representation this is often done by providing a single value, also known as a sample statistic, to serve as a best guess \cite{Bullmann-18}.
Examples are the weighted-average over all particles or the particle with the highest weight.
However in complex scenarios like a multimodal representation of the posterior, such methods fail to provide an accurate statement about the most probable state.
However, in complex scenarios like a multimodal representation of the posterior, such methods fail to provide an accurate statement about the most probable state.
Thus, in \cite{Bullmann-18} we present a rapid computation scheme of kernel density estimates (KDE).
Recovering the probability density function using an efficient KDE algorithm yields a promising approach to solve the state estimation problem in a more profound way.

View File

@@ -84,7 +84,7 @@
\newcommand{\stepSize}{\mathcal{S}}
This data structure leaves room for various strategies to be applied within the transition step.
The simplest approach uses an average pedestrian step size together with the
number of detected steps $\mObsSteps$ together and change in heading $\mObsHeading$
number of detected steps $\mObsSteps$ and change in heading $\mObsHeading$
gathered from sensor observations $\mObsVec_{t-1}$.
Combined with the previously estimated position $(x,y)^T$ and heading $\mStateHeading$
%from $\mStateVec_{t-1}$
@@ -95,8 +95,9 @@
%
\begin{equation}
\begin{aligned}
x_t &=& \overbrace{x_{t-1}}^{\text{old pos.}}& & &+& \overbrace{\mObsSteps \cdot \stepSize}^{\text{distance}}& & &\cdot& \overbrace{\cos(\mStateHeading + \turnNoise)}^{\text{direction}}& & ,\enskip \turnNoise &\sim \mathcal{N}(\mObsHeading, \sigma_\text{turn}^2) \\
y_t &=& y_{t-1}\phantom{.}& & &+& \mObsSteps \cdot \stepSize& & &\cdot& \sin(\mStateHeading + \turnNoise)& & ,\enskip \stepSize &\sim \mathcal{N}(\SI{70}{\centi\meter}, \sigma_\text{step}^2)
x_t &=& \overbrace{x_{t-1}}^{\text{old pos.}}& & &+& \overbrace{\mObsSteps \cdot \stepSize}^{\text{distance}}& & &\cdot& \overbrace{\cos(\mStateHeading_{t})}^{\text{direction}}& & ,\enskip \turnNoise &\sim \mathcal{N}(\mObsHeading, \sigma_\text{turn}^2) \\
y_t &=& y_{t-1}\phantom{.}& & &+& \mObsSteps \cdot \stepSize& & &\cdot& \sin(\mStateHeading_{t})& & ,\enskip \stepSize &\sim \mathcal{N}(\SI{70}{\centi\meter}, \sigma_\text{step}^2) \\
\mStateHeading_{t} &=& \mStateHeading_{t-1} + \turnNoise\\
\end{aligned}
\end{equation}
\noindent{}with
@@ -105,7 +106,7 @@
\enskip\enskip\enskip
\text{and}
\enskip\enskip\enskip
x_{t-1},y_{t-1},\mStateHeading \in \mStateVec_{t-1}
x_{t-1},y_{t-1},\mStateHeading_{t-1} \in \mStateVec_{t-1}
\enskip.
\end{equation*}
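For illustration, a minimal Python sketch of one sampled transition; the $\sigma$ values are placeholders, while the \SI{70}{\centi\meter} mean step size is taken from the equation above:
\begin{verbatim}
import math
import random

def transition(x, y, theta, n_steps, obs_heading_change,
               step_mean=0.70, sigma_step=0.10, sigma_turn=0.20):
    """Sample one transition step as given in the equation above."""
    turn = random.gauss(obs_heading_change, sigma_turn)  # Theta ~ N(observed heading change, sigma_turn^2)
    step = random.gauss(step_mean, sigma_step)           # S ~ N(70 cm, sigma_step^2)
    theta_t = theta + turn                               # new heading
    x_t = x + n_steps * step * math.cos(theta_t)
    y_t = y + n_steps * step * math.sin(theta_t)
    return x_t, y_t, theta_t

# One particle, two detected steps, observed heading change of +0.3 rad.
print(transition(1.0, 2.0, 0.0, n_steps=2, obs_heading_change=0.3))
\end{verbatim}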
@@ -120,6 +121,6 @@
that might be reachable. Increasing $\sigma_\text{step}$ and $\sigma_\text{turn}$ for those cases might also be a viable choice.
Likewise, just using some random position, omitting heading/steps, might be viable as well.
\commentByFrank{there would also be entirely different approaches etc., but we probably don't have enough space left :P}
\commentByToni{but I also think this is enough.}
%\commentByFrank{there would also be entirely different approaches etc., but we probably don't have enough space left :P}
%\commentByToni{but I also think this is enough.}