This commit is contained in:
2017-04-25 11:39:33 +02:00
parent aea38def0b
commit 8a3de63075
4 changed files with 131 additions and 125 deletions


@@ -350,4 +350,13 @@ the one selected AP
if there is still time: how does the model prediction change if, e.g., only half of the reference measurements are used?
\todo{an initially wrong heading is poison because of the relative heading, since everything then drifts apart. fix: allow a large heading variation initially}
\todo{NO LONGER CURRENT: absolute heading is better in the observation, because it helps more during resampling and ensures that the right particles are deleted!}
\todo{ make clear:
if only the fingerprints of the floor being walked on are used, everything is fine
as soon as floors above/below are added, it is not quite as good anymore, or gets worse
this suggests that the model does not fit well
could this be shown by plotting the average error per fingerprint???
}


@@ -1,15 +1,10 @@
\section{Indoor Positioning System}
\label{sec:system}
Our smartphone-based indoor localization system estimates a pedestrian's current location and heading
using recursive density estimation, as seen in \refeq{eq:recursiveDensity}.
\begin{equation}
\arraycolsep=1.2pt
\begin{array}{ll}
&p(\mStateVec_{t} \mid \mObsVec_{1:t}) \propto\\
@@ -20,7 +15,16 @@
\label{eq:recursiveDensity}
\end{equation}
A movement model, based on random walks on a graph, samples only those transitions
that are allowed by the building's floorplan.
%$p(\mStateVec_{t} \mid \mStateVec_{t-1}, \mObsVec_{t-1})$
The smartphone's accelerometer, gyroscope, magnetometer, GPS and \docWIFI{} module provide
the observations for both the transition and the subsequent evaluation step to infer the hidden state,
namely the pedestrian's location and heading
\cite{Ebner-16, Fetzer-16}.
This hidden state $\mStateVec$ is given by
\begin{equation}
\mStateVec = (x, y, z, \mStateHeading),\enskip
x, y, z, \mStateHeading \in \R \enspace,
@@ -28,54 +32,54 @@
%
where $x, y, z$ represent the pedestrian's position in 3D space
and $\mStateHeading$ the pedestrian's current (absolute) heading.
%
The corresponding observation vector is defined as
%
\begin{equation}
\mObsVec = (\mRssiVecWiFi{}, \mObsSteps, \mObsHeadingRel, \mObsHeadingAbs, \mObsGPS) \enspace.
\end{equation}
%
$\mRssiVecWiFi$ contains the signal strength measurements of all \docAP{}s (\docAPshort{}s) currently visible to the smartphone,
$\mObsSteps$ describes the number of steps detected since the last filter-step,
$\mObsHeadingRel$ the (relative) angular change since the last filter-step,
$\mObsHeadingAbs$ the current, vague absolute heading and
$\mObsGPS = ( \mObsGPSlat, \mObsGPSlon, \mObsGPSaccuracy)$ the current location (if available) given by the GPS.
Assuming statistical independence, the state-evaluation density can be written as
%
\begin{equation}
%\begin{split}
p(\vec{o}_t \mid \vec{q}_t) =
p(\vec{o}_t \mid \vec{q}_t)_\text{wifi}\enskip
p(\vec{o}_t \mid \vec{q}_t)_\text{gps}
\enspace.
\label{eq:evalDensity}
\end{equation}
%
The remaining observations, derived from the aforementioned smartphone sensors,
namely detected steps and the relative and absolute heading, are
used within the transition model, where potential movements
$p(\mStateVec_{t} \mid \mStateVec_{t-1}, \mObsVec_{t-1})$
are not only constrained by the building's floorplan but also by
those additional observations.
As this work focuses on \docWIFI{} optimization, not all parts of
the localization system are discussed in detail.
For missing explanations please refer to \cite{Ebner-16}.
%
Since then, absolute heading and GPS have been added as additional sensors
to further enhance the localization. Their values are incorporated by simply
comparing the sensor readings against a distribution that models the sensor's uncertainty.
\todo{new resampling? depending on what the evaluation still shows}
\todo{transition paragraph needed}
\todo{
the absolute positioning comes from the \docWIFI{},
which requires either many fingerprints or a model
}
As GPS will only work outdoors, e.g. when moving from one building into another,
the system's absolute position indoors is solely provided by the \docWIFI{} component.
Therefore, it is crucial for this component to provide location estimations
that are as accurate as possible, while ensuring fast setup and
maintenance times.
\todo{transition clumsy?}


@@ -1,14 +1,14 @@
\section{WiFi Location Estimation}
\label{sec:optimization}
The \docWIFI{} sensor infers the pedestrian's current location based on a comparison between recent measurements
(the smartphone continuously scans for nearby \docAP{}s) and reference measurements or
signal strength predictions for well-known locations:
\begin{equation}
p(\vec{o}_t \mid \vec{q}_t)_\text{wifi} =
p(\mRssiVecWiFi \mid \mPosVec) =
\prod_{\mRssi_{i} \in \mRssiVec{}} p(\mRssi_{i} \mid \mPosVec),\enskip
%\mPos = (x,y,z)^T
\mPosVec \in \R^3
\label{eq:wifiObs}
@@ -20,16 +20,18 @@
\label{eq:wifiProb}
\end{equation}
In \refeq{eq:wifiProb}, $\mu_{i,\mPosVec}$ and $\sigma_{i,\mPosVec}$ denote the average signal strength
and corresponding standard deviation for the \docAPshort{} identified by $i$,
that should be measurable given the location $\mPosVec = (x,y,z)^T$. These two values can be determined using various
methods. The most common approach today is fingerprinting, where hundreds of locations throughout the building
are scanned beforehand. The received \docAP{}s including their average signal strength and deviation
denote each location's fingerprint \cite{radar}.
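To make the evaluation concrete, the per-\docAPshort{} densities of \refeq{eq:wifiProb} multiply into the product of \refeq{eq:wifiObs}. A minimal sketch (illustrative Python, not part of the system; the AP identifiers and values are made up):

```python
import math

def ap_likelihood(rssi, mu, sigma):
    """Normal density of one AP's measured RSSI given the expected
    mean mu and deviation sigma at a candidate position."""
    return math.exp(-0.5 * ((rssi - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def wifi_likelihood(scan, expected):
    """Product over all APs visible in the scan: `scan` maps
    AP id -> measured RSSI (dBm), `expected` maps AP id -> (mu, sigma)
    at the candidate position."""
    p = 1.0
    for ap_id, rssi in scan.items():
        mu, sigma = expected[ap_id]
        p *= ap_likelihood(rssi, mu, sigma)
    return p

# a position whose expected values match the scan scores higher
scan = {"ap1": -60.0, "ap2": -75.0}
good = {"ap1": (-61.0, 4.0), "ap2": (-74.0, 4.0)}
bad  = {"ap1": (-80.0, 4.0), "ap2": (-55.0, 4.0)}
assert wifi_likelihood(scan, good) > wifi_likelihood(scan, bad)
```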
%
While allowing for highly accurate location estimations, given enough fingerprints, such a setup is costly,
as fingerprinting is a manual process.
%
We therefore use a model to predict the average signal strength for each location,
based on the \docAPshort{}'s position $\mPosAPVec{} = (x,y,z)^T$ and a few additional parameters.
\subsection{Signal Strength Prediction Model}
@@ -39,86 +41,95 @@
\label{eq:logDistModel}
\end{equation}
The log distance model \cite{TODO} in \refeq{eq:logDistModel} is a commonly used signal strength prediction model that
is intended for line-of-sight predictions. However, depending on the surroundings, the model is versatile enough
to also serve for indoor purposes.
%
It predicts an \docAP{}'s signal strength
for an arbitrary location $\mPosVec{}$, given the distance between the two and two environmental parameters:
The \docAPshort{}'s signal strength \mTXP{} measurable at a known distance $d_0$ (usually \SI{1}{\meter}) and
the signal's depletion over distance \mPLE{}, which depends on the \docAPshort{}'s surroundings like walls
and other obstacles.
\mGaussNoise{} is a zero-mean Gaussian noise and models the uncertainty.
As \mPLE{} depends on the architecture around the transmitter, the model is bound to homogeneous surroundings
like one floor solely divided by drywalls of the same thickness and material.
%
The log normal shadowing model is a slight modification that adapts the log distance model to indoor use cases.
It introduces an additional parameter that accounts for obstacles within the line-of-sight between the \docAPshort{} and the
location in question by attenuating the signal with a constant value.
%
Depending on the use case, this value describes the number and type of walls, ceilings, floors etc. between both positions.
For obstacles, this requires an intersection-test of each obstacle with the line-of-sight, which is costly
for larger buildings. For real-time use on a smartphone, a (discretized) model pre-computation might thus be necessary
\todo{cite competition}. Furthermore, this requires a detailed floorplan that includes material information
for walls, doors, floors and ceilings.
Throughout this work, we thus use a tradeoff between both models, where walls are ignored and only floors/ceilings are considered.
Assuming buildings with even floor levels, the number of floors/ceilings between two positions can be determined
without costly intersection checks and thus allows for real-time use cases.
\begin{equation}
x = \mTXP{} - 10 \, \mPLE{} \log_{10} \frac{d}{d_0} - \numFloors{} \mWAF{} + \mGaussNoise{}
\label{eq:logNormShadowModel}
\end{equation}
In \refeq{eq:logNormShadowModel}, those are included using a constant attenuation factor \mWAF{}
multiplied by the number of floors/ceilings \numFloors{} between sender and the location in question.
The attenuation \mWAF{} (per element) depends on the building's architecture; for common,
steel-reinforced concrete floors, $\approx 8.0$ is a viable choice \cite{TODO}.
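The floor-only tradeoff can be sketched as follows (an illustrative snippet, assuming the usual sign convention that the signal depletes with distance; all parameter values are hypothetical examples):

```python
import math

def predict_rssi(txp, ple, d, n_floors, waf, d0=1.0):
    """Mean received signal strength (dBm) under the floor-only
    log normal shadowing tradeoff: walls are ignored, and each of
    the n_floors floors/ceilings attenuates by a constant waf (dB)."""
    return txp - 10.0 * ple * math.log10(d / d0) - n_floors * waf

# hypothetical AP: -40 dBm at d0 = 1 m, exponent 2.5, 8 dB per concrete floor
same_floor = predict_rssi(-40.0, 2.5, 10.0, 0, 8.0)   # -65.0 dBm
one_below  = predict_rssi(-40.0, 2.5, 10.0, 1, 8.0)   # -73.0 dBm
assert one_below == same_floor - 8.0
```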
\subsection{Model Parameters}
As previously mentioned, for the prediction model to work, one needs to know the location $\mPosAPVec_i$ for every
%
permanently installed \docAP{} $i$ within the building to derive the distance $d$, plus its environmental parameters
\mTXP{}, \mPLE{} and \mWAF{}.
While it is possible to use empiric values for those environmental parameters \cite{Ebner-15}, the positions are mandatory.
For many buildings, there should be floorplans that include the locations of all installed transmitters.
If so, a model setup takes only several minutes to (vaguely) position the \docAPshort{}s within a virtual
map and assign them some fixed, empirically chosen parameters for \mTXP{}, \mPLE{} and \mWAF{}.
Depending on the building's architecture this might already provide enough accuracy for some use-cases,
where a vague location information is sufficient.
\subsection{Model Parameter Optimization}
For systems that demand a higher accuracy, one can choose a compromise between fingerprinting and
purely empiric model parameters, where (some) model parameters are optimized,
based on a few reference measurements throughout the building.
Obviously, the more parameters are unknown ($\mPosAPVec{}, \mTXP{}, \mPLE{}, \mWAF{}$) the more
reference measurements are necessary to provide a stable optimization.
Depending on the desired accuracy, setup time and whether the transmitter positions are known or unknown,
several optimization strategies arise, where not all 6 parameters are optimized, but only some of them.
Just optimizing \mTXP{} and \mPLE{} with constant \mWAF{} and known transmitter position
usually means optimizing a convex function as can be seen in figure \ref{fig:wifiOptFuncTXPEXP}.
For such error functions, algorithms like gradient descent \cite{TODO} and (downhill) simplex \cite{TODO}
are well suited and will provide the global minimum:
\todo{verify optimization target formula}
\begin{equation}
\operatorname*{arg\,min}_{\mTXP{},\,\mPLE{}} \enskip \frac{1}{N} \sum_{i=1}^{N} \left| \mRssi_i - x_i(\mTXP{}, \mPLE{}) \right|
\end{equation}
i.e. the average error between the $N$ reference measurements $\mRssi_i$ and the corresponding model predictions $x_i$.
\begin{figure}
\input{gfx/wifiop_show_optfunc_params}
\caption{
The average error (in \SI{}{\decibel}) between all reference measurements and corresponding model predictions
for one \docAPshort{}, dependent on \docTXP{} \mTXP{} and \docEXP{} \mPLE{}
[known position $\mPosAPVec{}$, fixed \mWAF{}], denotes a convex function.
}
\label{fig:wifiOptFuncTXPEXP}
\end{figure}
However, optimizing an unknown transmitter position usually means optimizing a non-convex, discontinuous
function, especially when the $z$-coordinate, which influences the number of attenuating floors/ceilings,
is involved.
While the latter can be mitigated by introducing a continuous function for the
number $n$ of floors/ceilings, like a sigmoid, the function is not necessarily convex.
As can be seen in figure \ref{fig:wifiOptFuncPosYZ}, there are two local minima and only one of
them is also a global one.
@@ -134,15 +145,15 @@
Such functions demand optimization algorithms that are able to deal with non-convex functions,
like genetic approaches. However, initial tests indicated that while being superior to simplex
and similar algorithms, the results were not satisfactory and the optimization often did not converge.
As the range of the six to-be-optimized parameters is known ($\mPosAPVec{}$ within the building,
\mTXP{}, \mPLE{}, \mWAF{} within a sane interval), we used some modifications.
The algorithm's initial population is uniformly sampled from the known range. During each iteration
the best \SI{25}{\percent} of the population are kept and the remaining entries are
re-created by modifying the best entries with uniform random values within
$\pm$\SI{10}{\percent} of the known range. To stabilize the result, the allowed modification range
(starting at \SI{10}{\percent}) is reduced over time, known as cooling \cite{todo}.
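The described range-random refinement can be sketched as follows (population size, iteration count and the linear cooling schedule are assumptions for illustration; the \SI{25}{\percent} survivor fraction and the $\pm$\SI{10}{\percent} initial perturbation range follow the text):

```python
import random

def range_random_optimize(error_fn, bounds, pop_size=100, iters=200,
                          keep=0.25, spread=0.10, seed=1):
    """Sample the initial population uniformly from the known parameter
    ranges, keep the best 25% each iteration, and rebuild the rest by
    perturbing survivors within +/-10% of each range; the allowed
    spread is reduced over time (cooling, here a linear schedule)."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    pop = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(pop_size)]
    for it in range(iters):
        pop.sort(key=error_fn)
        survivors = pop[:max(1, int(keep * pop_size))]
        cool = spread * (1.0 - it / iters)   # shrink the perturbation range
        pop = list(survivors)
        while len(pop) < pop_size:
            parent = rng.choice(survivors)
            child = [min(h, max(l, p + rng.uniform(-cool, cool) * (h - l)))
                     for p, l, h in zip(parent, lo, hi)]
            pop.append(child)
    return min(pop, key=error_fn)

# toy non-convex error with two global minima at x = +/-1, y = 0
def toy_error(q):
    return (q[0] ** 2 - 1.0) ** 2 + q[1] ** 2

best = range_random_optimize(toy_error, [(-10.0, 10.0), (-10.0, 10.0)])
assert toy_error(best) < 0.05
```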
\subsection{Modified Signal Strength Model}
@@ -157,16 +168,37 @@
%To the contrary, there were several situations throughout the testing walks, where
%the inferred location was more erroneous than before.
As the model tradeoff used here does not consider walls, it is expected to provide erroneous values
for regions that are heavily shrouded by e.g. steel-enforced concrete or metallised glass.
\subsection{\docWIFI{} Quality Factor}
\todo{wifi quality factor??}
\todo{formula for Toni}
Past evaluations showed that there are many situations where the \docWIFI{} location estimation
is highly erroneous: either the signal strength prediction model does not match real-world
conditions, or the received measurements are ambiguous and more than one location
within the building matches those readings. Both cases can occur, e.g., in areas surrounded by
concrete walls, where the model does not match the real-world conditions as those walls are not considered,
and the smartphone barely receives any \docAPshort{}s due to the high attenuation.
If such a sensor error occurs only for a short time period, the recursive density estimation
\refeq{eq:recursiveDensity} is able to compensate for those errors using other sensors and the movement
model. However, if the error persists for a longer time period, the error will slowly distort
the posterior distribution. As our movement model depends on the actual floorplan, the density
might get trapped e.g. within a room if the other sensors are not able to compensate for
the \docWIFI{} error.
Thus, we try to determine the quality of received \docWIFI{} measurements, which allows for
temporarily disabling \docWIFI{}'s contribution within the evaluation \refeq{eq:evalDensity}
for situations where the quality is insufficient.
In \refeq{eq:wifiQuality} we use the average signal strength of all \docAP{}s seen within one measurement
and scale this value to the interval $[0, 1]$ using an upper and a lower bound.
If the returned quality falls below a certain threshold, \docWIFI{} is ignored within
the evaluation.
\begin{equation}
\newcommand{\leMin}{l_\text{min}}
\newcommand{\leMax}{l_\text{max}}
@@ -184,12 +216,11 @@
\label{eq:wifiQuality}
\end{equation}
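A sketch of this quality factor (the bounds $l_\text{min}$, $l_\text{max}$ and the threshold are hypothetical example values, not the system's tuned ones):

```python
def wifi_quality(rssis, l_min=-90.0, l_max=-50.0):
    """Scale the mean RSSI of one scan into [0, 1] between a lower
    and an upper bound; values outside the bounds are clamped."""
    avg = sum(rssis) / len(rssis)
    q = (avg - l_min) / (l_max - l_min)
    return min(1.0, max(0.0, q))

strong = wifi_quality([-55.0, -60.0, -58.0])
weak = wifi_quality([-88.0, -91.0, -90.0])
assert strong > 0.7 and weak < 0.1

# below an (assumed) threshold, WiFi is ignored within the evaluation
use_wifi = weak >= 0.25
assert not use_wifi
```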
\subsection{VAP Grouping}
\label{sec:vap}
Assuming normal conditions, the received signal strength at one location will also (strongly) vary
due to environmental conditions like temperature, humidity, open/closed doors and RF interference.
Fast variations can be addressed by averaging several consecutive measurements at the expense
of a time delay.
To prevent this delay we use the fact that many buildings use so-called virtual access points
@@ -202,14 +233,7 @@
function like average, median or maximum.
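The grouping can be sketched as follows (the assumption that the VAP BSSIDs of one physical AP differ only in the last hex digit is an illustrative simplification; the median is one of the combination functions mentioned above):

```python
from collections import defaultdict
from statistics import median

def group_vaps(scan):
    """Group virtual access points that share one physical radio and
    combine each group's readings with the median.  Assumption for this
    sketch: VAP BSSIDs of one physical AP differ only in the last hex digit."""
    groups = defaultdict(list)
    for bssid, rssi in scan.items():
        groups[bssid[:-1]].append(rssi)   # strip the last hex digit
    return {prefix: median(values) for prefix, values in groups.items()}

scan = {
    "aa:bb:cc:dd:ee:00": -61.0,   # three VAPs of one physical AP
    "aa:bb:cc:dd:ee:01": -63.0,
    "aa:bb:cc:dd:ee:02": -58.0,
    "11:22:33:44:55:00": -80.0,   # a second physical AP
}
merged = group_vaps(scan)
assert merged["aa:bb:cc:dd:ee:0"] == -61.0
assert len(merged) == 2
```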
\todo{explain the wifi veto: we query the 4 APs that are strongest at each position according to the model and compare them with the scan.
If at least 50 percent deviate by more than 20 dB, or were not seen in the scan at all, a veto is issued for this position:
0.001 vs 0.999. It only works this way, since different APs would be the strongest at each position; working directly with their
probabilities would therefore compare apples with oranges}
how the optimization is done
a) known position + empiric params
@@ -217,38 +241,6 @@ b) known position + opt params (same for all APs) [simplex]
c) known position + opt params (individual per AP) [simplex]
d) optimize everything: position and params (per AP) [range-random]
the optimization result also depends on the optimization target:
a) the (average) error between measurement and model prediction
b) the (average) probability of the model's prediction given the fingerprint, ...
c) ...
describe the problems during the optimization: convexity etc.
where simplex works well, where it does not
hard WAF transitions seem to work poorly, both for optimizing and for matching
smooth transitions via a sigmoid work better
this was an important insight
the position known for the AP is NOT used as input for the optimize-everything function
it really is 'somewhere'
range-random algo
domain known [map size, txp/exp/waf roughly]
genetic refinement with cooling [= coarse first, then fine]
the optimization is tricky, also because of the WAF, which kicks in abruptly as soon as measurement and AP lie on two different
floors.. and that even if line-of-sight might be possible here, since the test is 2D and not 3D


@@ -45,6 +45,7 @@
\newcommand{\mObsGPS}{\vec{g}}
\newcommand{\mObsGPSlat}{\text{lat}}
\newcommand{\mObsGPSlon}{\text{lon}}
\newcommand{\mObsGPSaccuracy}{\text{accuracy}}