\section{WiFi Location Estimation}
\label{sec:optimization}
The \docWIFI{} sensor infers the pedestrian's current location by comparing live observations
(the smartphone continuously scans for nearby \docAP{}s) against fingerprints or
signal strength predictions for well-known locations. The location that best fits the observations
is assumed to be the pedestrian's current location. Assuming statistical independence of all transmitters
installed within a building, this matching probability can be written as
\begin{equation}
p(\vec{o}_t \mid \vec{q}_t)_\text{wifi} =
p(\mRssiVecWiFi \mid \mPosVec) =
\prod_{\mRssi_{i} \in \mRssiVec{}} p(\mRssi_{i} \mid \mPosVec),\enskip
%\mPos = (x,y,z)^T
\mPosVec \in \R^3
\enskip ,
\label{eq:wifiObs}
\end{equation}
\noindent where the match between a single signal strength observation and the reference is given by
\begin{equation}
p(\mRssi_i \mid \mPosVec) =
\mathcal{N}(\mRssi_i \mid \mu_{i,\mPosVec}, \sigma_{i,\mPosVec}^2)
\enskip .
\label{eq:wifiProb}
\end{equation}
In \refeq{eq:wifiProb}, $\mu_{i,\mPosVec}$ and $\sigma_{i,\mPosVec}$ denote the average signal strength
and corresponding standard deviation for the \docAPshort{} identified by $i$,
which should be measurable at the location $\mPosVec = (x,y,z)^T$. These two values can be determined using various
methods. The most common and accurate method, as of today, is fingerprinting, where hundreds of locations throughout the building
are scanned beforehand. The received \docAP{}s, including their (average) signal strength and standard deviation,
constitute each location's fingerprint.
%
To avoid this time-consuming setup process, we use a model to predict the average signal strength for each location,
based on the \docAPshort{}'s position $\mPosAPVec{} = (x,y,z)^T$ and a few additional parameters.
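The following minimal Python sketch illustrates how \refeq{eq:wifiObs} and \refeq{eq:wifiProb} can be
evaluated; the \texttt{model} object with its \texttt{mean} and \texttt{std} methods is a hypothetical
placeholder for either fingerprints or the prediction model described below, and the product is computed
in the log domain for numerical stability.
\begin{verbatim}
import math

def log_gaussian(x, mu, sigma):
    # log of the normal density N(x | mu, sigma^2)
    return -0.5 * math.log(2.0 * math.pi * sigma * sigma) \
           - 0.5 * ((x - mu) / sigma) ** 2

def wifi_likelihood(observation, pos, model):
    # observation: dict {ap_id: measured RSSI in dBm}
    # model.mean(ap_id, pos) and model.std(ap_id, pos) stand for
    # mu_{i,p} and sigma_{i,p}; this interface is a placeholder
    log_p = 0.0
    for ap_id, rssi in observation.items():
        mu = model.mean(ap_id, pos)
        sigma = model.std(ap_id, pos)
        log_p += log_gaussian(rssi, mu, sigma)
    return math.exp(log_p)   # p(o_t | q_t)_wifi as in eq. (wifiObs)
\end{verbatim}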
\subsection{Signal Strength Prediction Model}
\label{sec:sigStrengthModel}
\begin{equation}
\mRssi = \mTXP{} - 10 \, \mPLE{} \log_{10} \frac{d}{d_0} + \mGaussNoise{}
\label{eq:logDistModel}
\end{equation}
The log distance model \cite{IntroductionToRadio, WirelessCommunications} in \refeq{eq:logDistModel} is a commonly
used signal strength prediction model that
is intended for line-of-sight predictions. However, depending on the surroundings, the model is versatile enough
to also serve for indoor purposes.
%
It predicts an \docAP{}'s signal strength
for an arbitrary location,
%$\mPosVec{}$
given the distance $d$ between the two and two environmental parameters:
the \docAPshort{}'s signal strength \mTXP{} measurable at a known distance $d_0$ (usually \SI{1}{\meter}) and
the signal's depletion over distance \mPLE{}, which depends on the \docAPshort{}'s surroundings, like walls
and other obstacles.
\mGaussNoise{} is zero-mean Gaussian noise that models the uncertainty.
As \mPLE{} depends on the architecture around the transmitter, the model is bound to homogeneous surroundings,
such as a single floor divided only by drywalls of the same thickness and material.
%
The log-normal shadowing model, also called the wall-attenuation-factor model \cite{PathLossPredictionModelsForIndoor},
is a slight modification that adapts the log distance model to indoor use-cases.
It introduces an additional parameter that accounts for obstacles along the line of sight between the \docAPshort{} and the
location in question by attenuating the signal with a constant value.
%
Depending on the use-case, this value describes the number and type of walls, ceilings, floors etc. between both positions.
For arbitrary obstacles, this requires intersecting each obstacle with the line of sight, which is costly
for larger buildings. For real-time use on a smartphone, a (discretized) model pre-computation might thus be necessary
\cite{competition2016}.
%Furthermore this requires a detailed floorplan, that includes material information
%for walls, doors, floors and ceilings.
Throughout this work, we thus use a tradeoff between both models, where walls are ignored and only floors/ceilings are considered.
Assuming buildings with even floor levels, the number of floors/ceilings between two positions can be determined
without costly intersection checks, which allows for real-time use-cases running on smartphones.
\begin{equation}
\mRssi = \mTXP{} - 10 \, \mPLE{} \log_{10} \frac{d}{d_0} + \numFloors{} \, \mWAF{} + \mGaussNoise{}
\label{eq:logNormShadowModel}
\end{equation}
In \refeq{eq:logNormShadowModel}, a constant attenuation factor \mWAF{} is
multiplied by the number \numFloors{} of floors/ceilings between the sender and the location in question.
The attenuation \mWAF{} (per element) depends on the building's architecture; for common,
steel-reinforced concrete floors, $\mWAF \approx \SI{-8.0}{\decibel}$ is a viable choice \cite{ElectromagneticPropagation}.
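As an illustration, the following Python sketch evaluates \refeq{eq:logNormShadowModel} without the noise
term; the uniform floor height of \SI{3}{\meter} and the clamping of the distance to $d_0$ are assumptions
made for this sketch only, not part of the model itself.
\begin{verbatim}
import math

def num_floors(z_a, z_b, floor_height=3.0):
    # assuming even floor levels, the number of separating floors/ceilings
    # follows from the z-coordinates alone (3 m per story is an assumption)
    return abs(int(z_a // floor_height) - int(z_b // floor_height))

def predict_rssi(pos, ap_pos, txp, ple, waf, d0=1.0):
    # mean RSSI at pos for an AP located at ap_pos,
    # eq. (logNormShadowModel) without the noise term
    d = max(d0, math.dist(pos, ap_pos))  # clamp to the reference distance
    n = num_floors(pos[2], ap_pos[2])
    return txp - 10.0 * ple * math.log10(d / d0) + n * waf
\end{verbatim}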
\subsection{Model Parameters}
As previously mentioned, for the prediction model to work, it is necessary to know the location $\mPosAPVec_i$ of every
permanently installed \docAP{} $i$ within the building to derive the distance $d$, as well as its environmental parameters
\mTXP{}, \mPLE{} and \mWAF{}.
While it is possible to use empirical values for these environmental parameters \cite{Ebner-15}, the positions are mandatory.
For many buildings, floorplans that include the locations of all installed transmitters should be available.
If so, a model setup takes only several minutes: (vaguely) positioning the \docAPshort{}s within a virtual
map and assigning fixed, empirically chosen values for \mTXP{}, \mPLE{} and \mWAF{}.
Depending on the building's architecture, this might already provide enough accuracy for use-cases
where vague location information is sufficient.
\subsection{Model Parameter Optimization}
%\begin{figure}
% \input{gfx/wifiop_show_optfunc_params}
% \caption{
% The average error (in \SI{}{\decibel}) between all reference measurements and corresponding model predictions
% for one \docAPshort{} dependent on \docTXP{} \mTXP{} and \docEXP{} \mPLE{}
% [known position $\mPosAPVec{}$, fixed \mWAF{}] denotes a convex function.
% }
% \label{fig:wifiOptFuncTXPEXP}
%\end{figure}
For systems that demand higher accuracy, one can choose a compromise between fingerprinting and the
aforementioned purely empirical model parameters by optimizing those parameters
based on a few reference measurements throughout the building.
The more parameters are staged for optimization ($\mPosAPVec{}, \mTXP{}, \mPLE{}, \mWAF{}$), the more
reference measurements are necessary to provide a stable result.
Depending on the desired accuracy, the setup time and whether the transmitter positions are known,
several optimization strategies arise, in which not all six parameters are optimized but only a subset of them.
The target function \refeq{eq:optTarget} optimizes the model parameters for one \docAP{} by reducing the squared error between
reference measurements $s_{\mPosVec} \in \vec{s}$ with well-known location $\mPosVec$ and corresponding
model predictions $\mu_{\mPosVec}$.
The number of floors between $\mPosVec$ and the transmitter's location $\mPosAPVec$ is denoted by
$\text{floors}(\mPosVec,\mPosAPVec)$.
\begin{equation}
\epsilon^* =
\argmin_{\mPosAPVec, \mTXP, \mPLE, \mWAF}
\sum_{s_{\mPosVec} \in \vec{s}}
(s_{\mPosVec} - \mu_{\mPosVec})^2
\enskip,\enskip\enskip
\mu_{\mPosVec} =
\mTXP{} - 10 \, \mPLE{} \log_{10} \frac{\| \mPosVec-\mPosAPVec \|}{d_0} + \text{floors}(\mPosVec,\mPosAPVec) \, \mWAF{}
\label{eq:optTarget}
\end{equation}
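The following sketch spells out the squared-error objective from \refeq{eq:optTarget} in Python,
reusing the hypothetical \texttt{predict\_rssi} helper from the previous sketch; the packing of the
parameters into a flat tuple is merely a convention chosen here.
\begin{verbatim}
def model_error(params, references):
    # params: flat tuple (ap_x, ap_y, ap_z, txp, ple, waf)
    # references: list of ((x, y, z), measured_rssi) with known locations
    ap_pos, txp, ple, waf = params[:3], params[3], params[4], params[5]
    err = 0.0
    for pos, s in references:
        mu = predict_rssi(pos, ap_pos, txp, ple, waf)
        err += (s - mu) ** 2
    return err
\end{verbatim}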
Optimizing only \mTXP{} and \mPLE{} with constant \mWAF{} and a known transmitter position
usually means optimizing a convex function, as can be seen in \reffig{fig:wifiOptFuncTXPEXP}.
For such error functions, algorithms like gradient descent and simplex \cite{gradientDescent, downhillSimplex1, downhillSimplex2}
are well suited and will provide the global minimum.
However, optimizing an unknown transmitter position usually means optimizing a non-convex, discontinuous
function, especially when the $z$-coordinate, which influences the number of attenuating floors/ceilings,
is involved.
While the discontinuity can be mitigated by introducing a continuous function for the
number of floors $n$, e.g. a sigmoid, the resulting function is not necessarily convex.
\reffig{fig:wifiOptFuncPosYZ} depicts two local minima, of which only one is also a global one.
\begin{figure*}
\centering
\begin{subfigure}{0.48\textwidth}
%\centering
\input{gfx2/wifiop_show_optfunc_params}
\caption{
Modifying \docTXP{} \mTXP{} and \docEXP{} \mPLE{}
[known position $\mPosAPVec{}$, fixed \mWAF{}] yields a convex function.
}
\label{fig:wifiOptFuncTXPEXP}
\end{subfigure}%
\enskip\enskip
\begin{subfigure}{0.48\textwidth}
%\centering
\input{gfx2/wifiop_show_optfunc_pos_yz}
\caption{
Modifying the $y$- and $z$-position [fixed $x$, \mTXP{}, \mPLE{} and \mWAF{}]
yields a non-convex function with multiple local minima.
}
\label{fig:wifiOptFuncPosYZ}
\end{subfigure}
\caption{
Average error (in \SI{}{\decibel}) between all reference measurements and corresponding model predictions
for one \docAPshort{}.
}
\end{figure*}
%\begin{figure}
% \input{gfx/wifiop_show_optfunc_pos_yz}
% \caption{
% The average error (in \SI{}{\decibel}) between reference measurements and model predictions
% for one \docAPshort{} dependent on $y$- and $z$-position [fixed $x$, \mTXP{}, \mPLE{} and \mWAF{}]
% usually denotes a non-convex function with multiple [here: two] local minima.
% }
% \label{fig:wifiOptFuncPosYZ}
%\end{figure}
Such functions demand optimization algorithms that are able to deal with non-convex functions.
We thus used a genetic algorithm to perform this task \cite{goldberg89}.
However, initial tests indicated that, while being superior to simplex
and similar algorithms, the results were not yet satisfying, as the optimization often did not converge.
As the range of the six to-be-optimized parameters is known ($\mPosAPVec{}$ within the building,
\mTXP{}, \mPLE{}, \mWAF{} within a sane interval around empirical values), we slightly modified the
genetic algorithm: the initial population is uniformly sampled from the known range. During each iteration,
the best \SI{25}{\percent} of the population are kept and the remaining entries are
re-created by modifying the best entries with uniform random values within
$\pm$\SI{10}{\percent} of the known range.
Inspired by the {\em cooling} known from simulated annealing \cite{Kirkpatrick83optimizationby},
the result is stabilized by narrowing the allowed modification range
%(starting at \SI{10}{\percent})
over time.
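A minimal sketch of this modified genetic algorithm is given below, building on the \texttt{model\_error}
objective from the previous sketch; the population size, iteration count and the linear cooling schedule
are illustrative assumptions, not the exact values used in our implementation.
\begin{verbatim}
import random

def optimize(references, bounds, pop_size=200, iterations=100):
    # bounds: per-parameter (lo, hi) intervals for
    # (ap_x, ap_y, ap_z, txp, ple, waf)
    ranges = [hi - lo for lo, hi in bounds]
    # initial population uniformly sampled from the known ranges
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for it in range(iterations):
        pop.sort(key=lambda p: model_error(p, references))
        elite = pop[:pop_size // 4]          # keep the best 25 percent
        # "cooling": the allowed modification range shrinks over time
        scale = 0.10 * (1.0 - it / iterations)
        children = []
        while len(elite) + len(children) < pop_size:
            parent = random.choice(elite)
            child = [min(max(v + random.uniform(-scale, scale) * r, lo), hi)
                     for v, r, (lo, hi) in zip(parent, ranges, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda p: model_error(p, references))
\end{verbatim}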
\subsection{Modified Signal Strength Model}
%\todo{not "during the initial eval"; instead state right away that the model is
%unlikely to work well, because walls and our metal/glass are not considered. Hence
%we try another model that can still work live}
%During the initial eval, some issues were discovered. While aforementioned optimization was able to
%reduce the error between reference measurements and model estimations to \SI{50}{\percent},
%the position estimation \ref{eq:wifiProb} did not benefit from improved model parameters.
%To the contrary, there were several situations throughout the testing walks, where
%the inferred location was more erroneous than before.
As the used model tradeoff does not consider walls, it is expected to provide erroneous values
for regions that are heavily shrouded, e.g. by steel-reinforced concrete or metallized glass.
Instead of using only one optimized model per \docAP{}, we use several instances with different
parameters, each limited to some region within the building. By reducing the area
that each model has to describe, we expect the limited number of model parameters to
provide better (local) results.
\begin{itemize}
\item{
{\em \optPerFloor{}} uses one model for each story, optimized using
only the fingerprints that belong to the corresponding floor. During evaluation,
the $z$-value of $\mPosVec{}$ in \refeq{eq:wifiProb} is used to select the correct model
for this location's signal strength estimation.
}
\item{
{\em \optPerRegion{}} works similarly, except that each model is limited to a predefined,
axis-aligned bounding box. This approach allows for an even more refined distinction between
areas such as in- and outdoor regions, or locations that are expected to differ strongly
from their surroundings.
}
\end{itemize}
Especially the second variant imposes a potential issue we need to address:
if an \docAPshort{} is seen only once or twice within such a bounding box, it is impossible
to optimize its parameters, just like a line cannot be defined by a single point.
However, due to \refeq{eq:wifiObs}, we need each model to provide the same number of
\docAP{}s; otherwise, regions with fewer known transmitters would automatically be more
likely than others. We therefore use fixed model parameters,
$\mTXP = \SI{-100}{\decibel{}m}$, $\mPLE = 0$ and $\mWAF = \SI{0}{\decibel}$, for every
transmitter with fewer than three reference measurements per region. This yields
a model that always returns \SI{-100}{\decibel{}m}, independent of the distance to the transmitter.
While this is most probably not the correct reading for all locations, it works
for most cases, as common smartphones are unable to measure signals below this threshold.
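The per-region model selection, including the fixed fallback parameters, could look roughly like the
following sketch; the \texttt{bbox.contains} test and the pre-computed per-region parameter dictionaries
are hypothetical interfaces, and the sketch reuses \texttt{predict\_rssi} from above while matching the
\texttt{mean} interface assumed in the earlier likelihood sketch.
\begin{verbatim}
# fixed fallback: always predicts -100 dBm, regardless of the distance
FALLBACK = {"txp": -100.0, "ple": 0.0, "waf": 0.0}

class PerRegionModel:
    def __init__(self, regions, ap_positions):
        # regions: list of (bbox, {ap_id: params}) pairs, where the params
        # were optimized from the references inside that bounding box;
        # ap_positions: {ap_id: (x, y, z)}
        self.regions = regions
        self.ap_positions = ap_positions

    def params_for(self, ap_id, pos):
        for bbox, per_ap in self.regions:
            if bbox.contains(pos):       # hypothetical bounding-box test
                # APs with fewer than three references inside this region
                # were assigned the fixed fallback parameters beforehand
                return per_ap.get(ap_id, FALLBACK)
        return FALLBACK

    def mean(self, ap_id, pos):
        p = self.params_for(ap_id, pos)
        return predict_rssi(pos, self.ap_positions[ap_id],
                            p["txp"], p["ple"], p["waf"])
\end{verbatim}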
%\todo{an AP is only considered within a region if a minimum number of measurements is available!}
%\todo{but this means that different numbers of APs would be compared at different locations; that does not work, hence the fixed -100}
\subsection{\docWIFI{} Quality Factor}
\label{sec:wifiQuality}
Evaluations within previous works showed that there are many situations in which the overall \docWIFI{} location estimation
is highly erroneous: either the signal strength prediction model does not match real-world
conditions, or the received measurements are ambiguous and more than one location
within the building matches those readings. Both cases can occur e.g. in areas surrounded by
concrete walls, where the model does not match the real-world conditions, as those walls are not considered,
and the smartphone barely receives any \docAPshort{}s due to the high attenuation.
If such a sensor error occurs only for a short time period, the recursive density estimation from
\refeq{eq:recursiveDensity} is able to compensate using other observations and the transition
model. However, if the sensor fault persists for a longer time period, the error will slowly distort
the posterior distribution. As our movement model depends on the actual floorplan, the density
might get trapped, e.g. within a room, if the other sensors are unable to compensate for
the \docWIFI{} error.
Thus, we try to determine the quality of received measurements, which allows for
temporarily disabling \docWIFI{}'s contribution within the evaluation \refeq{eq:evalDensity}
if the quality is insufficient.
In \refeq{eq:wifiQuality} we use the average signal strength $\bar\mRssi$ among all \docAP{}s seen within one measurement
$\mRssiVec$ and scale this value to the range $[0, 1]$ using a lower and an upper bound.
If the returned quality is below a certain threshold, \docWIFI{} is ignored within the evaluation.
The lower and upper bounds are chosen empirically by looking at the usual range of \docWIFI{} signal strengths
that still provide persistent data connections to clients. The threshold is also determined empirically by examining
the results of \refeq{eq:wifiQuality} for places with good and bad \docWIFI{} location estimations, respectively.
\begin{equation}
\newcommand{\leMin}{l_\text{min}}
\newcommand{\leMax}{l_\text{max}}
\text{quality}(\mRssiVec) =
\max \left(0,
\min \left(
\frac{
\bar\mRssi - \leMin
}{
\leMax - \leMin
},
1
\right)
\right)
,\enskip
\bar\mRssi = \frac{1}{n} \sum_{i = 1}^{n} \mRssi_i
\label{eq:wifiQuality}
\end{equation}
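In Python, \refeq{eq:wifiQuality} boils down to a few lines; the bound and threshold values below are
illustrative placeholders, since they are only stated to be chosen empirically.
\begin{verbatim}
L_MIN = -90.0    # lower bound in dBm (illustrative placeholder)
L_MAX = -50.0    # upper bound in dBm (illustrative placeholder)
THRESHOLD = 0.2  # quality threshold (illustrative placeholder)

def wifi_quality(rssi_values):
    # eq. (wifiQuality): average RSSI of one scan, scaled to [0, 1]
    avg = sum(rssi_values) / len(rssi_values)
    return max(0.0, min((avg - L_MIN) / (L_MAX - L_MIN), 1.0))

def use_wifi(rssi_values):
    # ignore the WiFi observation if the quality is insufficient
    return wifi_quality(rssi_values) >= THRESHOLD
\end{verbatim}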
\subsection{Virtual \docAP{}s (VAP)}
\label{sec:vap}
Even under normal conditions, the received signal strength at one location will (strongly) vary over time
due to environmental conditions like temperature, humidity, open/closed doors and RF interference.
Fast variations can be addressed by averaging several consecutive measurements, at the expense
of a delay in time.
To prevent this delay, we exploit the fact that many buildings use so-called virtual access points,
where one physical hardware \docAP{} provides more than one virtual network to connect to.
They can usually be identified, as only the last digit of the MAC address differs among the virtual networks.
%
As those normally share the same frequency, they are unable to transmit at the same instant in time.
When scanning for \docAPshort{}s, one will thus receive several responses from the same hardware, all within
a very small delay (micro- to milliseconds). Such measurements may be grouped using some aggregate
function like the average, median or maximum instead of using each single measurement.
Furthermore, VAP grouping can be used to suppress unlikely observations: if a physical hardware is known
to provide six virtual networks, it is unlikely for the smartphone to see only one of those networks.
Such a reading is usually caused by temporal effects or multipath signal propagation, and the received signal strength will often be far from
the normal average. It thus makes sense to omit such unlikely observations, focusing on the remaining, stable ones.
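A rough sketch of such VAP grouping is shown below; the convention that virtual networks differ only in
the last hex digit of the MAC address follows the text, while the \texttt{expected\_counts} lookup and
the choice of \texttt{max} as the default aggregate are assumptions of this sketch.
\begin{verbatim}
from collections import defaultdict

def group_vaps(scan, expected_counts, aggregate=max):
    # scan: {mac: rssi}; virtual networks of one physical AP share
    # all but the last hex digit of their MAC address
    groups = defaultdict(list)
    for mac, rssi in scan.items():
        groups[mac[:-1]].append(rssi)
    grouped = {}
    for prefix, values in groups.items():
        expected = expected_counts.get(prefix, 1)
        # suppress implausible readings: only one response although the
        # hardware is known to provide several virtual networks
        if expected > 1 and len(values) == 1:
            continue
        grouped[prefix] = aggregate(values)
    return grouped
\end{verbatim}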
%\todo{???
%APs are (statistically) independent, i.e. each AP can be optimized on its own;
%an optimization of the overall system is not necessary.
%}