\section{Related Work}
\label{sec:relatedWork}
We consider indoor localization to be a time-sequential, non-linear and non-Gaussian state estimation problem.
Such problems are often solved using Bayesian filters, which update a state estimate recursively
with every new incoming measurement.
A powerful group of methods to obtain numerical results for this approach are particle filters.
In the context of indoor localization, particle filters approximate a probability distribution describing the pedestrian's possible whereabouts by using a set of weighted random samples (particles).
Here, new particles are drawn according to some importance distribution, often represented by the state transition, which models the dynamics of the system.
Those particles are then weighted during the state evaluation, based on the current sensor measurements.
A resampling step is employed to prevent a situation in which only a small number of particles carries significant weight \cite{chen2003bayesian}.
Most localization approaches differ mainly in how the transition and evaluation steps are implemented and how the sensors are incorporated \cite{Fetzer-16, Ebner-16, Hilsenbeck2014}.
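For illustration, the basic recursive update could be sketched in Python as follows, where \texttt{transition} and \texttt{evaluate} stand in for the model-specific steps discussed above (all names are ours, not taken from the cited works):
\begin{verbatim}
import random

def particle_filter_step(particles, weights, transition, evaluate):
    # predict: draw new particles from the importance distribution
    particles = [transition(p) for p in particles]
    # weight: evaluate each particle against the sensor measurements
    weights = [w * evaluate(p) for p, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # resample, so that not only a few particles carry weight
    particles = random.choices(particles, weights=weights,
                               k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights
\end{verbatim}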
The system's dynamics describe a pedestrian's potential movement within the building.
This can be formulated as the question \emph{``Given the pedestrian's current position and heading, where could he be after a certain amount of time?''}
Obviously, the answer to this question depends on the pedestrian's walking behavior, the surrounding architecture and thus the building's floorplan.
%
Assuming that the pedestrian walks almost straight along his current heading with a known, constant walking speed, the most basic form of state transition simply rejects all movements where the line-of-sight between the current position and the potential destination is blocked by an obstacle \cite{Ebner-15}.
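A minimal sketch of such a rejection test, assuming walls are given as 2D line segments (a simplification of the cited approach, with hypothetical names):
\begin{verbatim}
def ccw(a, b, c):
    # signed area: positive if a, b, c are counter-clockwise
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def intersects(p, q, a, b):
    # proper segment intersection (collinear cases ignored)
    return (ccw(p, q, a) * ccw(p, q, b) < 0 and
            ccw(a, b, p) * ccw(a, b, q) < 0)

def transition_allowed(pos, dest, walls):
    # reject the movement if any wall blocks the line-of-sight
    return not any(intersects(pos, dest, a, b) for a, b in walls)
\end{verbatim}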
%
Despite its simplicity, this approach suffers from several drawbacks.
The intersection test can be costly, depending on the number of particles used and the complexity of the building.
Furthermore, it is mainly limited to 2D transitions within the plane.
Smooth 3D transitions, such as walking up or down stairs, would require much more complex intersection tests \cite{Afyouni2012}.
To overcome both limitations, the building's floorplan can be used to derive a graph-based structure, such as Voronoi diagrams or fixed-distance grids, moving all costly intersection tests into a one-time offline phase \cite{Ebner-16, Hilsenbeck2014}.
Afterwards, graph-based random walks along the created data structure can be used as a fast transition approximation.
Smooth transitions in 3D space can be achieved by generating nodes and edges along stairs and elevators.
Furthermore, the nodes can be used to store additional information, such as their distance towards a pedestrian's desired destination.
Such information can be included during the transition step, \eg{} by increasing the likelihood of all potential movements that approach this destination \cite{Ebner-16}.
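In Python, such a biased random walk might look like the following sketch (graph layout and weighting scheme are illustrative assumptions, not the exact method of the cited work):
\begin{verbatim}
import random

def random_walk(graph, node, steps, dest_dist, bias=2.0):
    # graph: node -> list of adjacent nodes (precomputed offline)
    # dest_dist: node -> stored distance towards the destination
    for _ in range(steps):
        neighbors = graph[node]
        # favor neighbors that approach the desired destination
        w = [bias if dest_dist[n] < dest_dist[node] else 1.0
             for n in neighbors]
        node = random.choices(neighbors, weights=w, k=1)[0]
    return node
\end{verbatim}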
However, the graph-based approach also imposes some potential issues. When using a gridded graph, the spacing between adjacent
nodes directly determines the transition's accuracy. Likewise, the amount of memory required to represent the floorplan
grows roughly quadratically as this spacing decreases. Even though nodes and edges are only created for actually walkable areas (comparable to a sparse 3D grid),
large buildings require millions of nodes and might not fit into memory at once.
Furthermore, (large) outdoor regions between adjacent buildings require unnecessarily large amounts
of memory to be modeled \cite{Afyouni2012}. While Voronoi diagrams are able to mitigate this issue to some degree,
they usually suffer from reduced accuracy in large open spaces, as many implementations only use the edges to estimate potential movements \cite{Hilsenbeck2014}.
We therefore present a novel technique based on continuous walks along a navigation mesh.
Like the graph, the mesh, consisting of triangles sharing adjacent edges,
is created once during an offline phase, based on the building's 3D floorplan.
Using large triangles reduces the memory footprint dramatically (a few megabytes for large buildings)
while still increasing the quality, as triangle edges directly adhere to architectural edges, and allows
for truly continuous transitions along the surface spanned by all triangles.
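The core of such a continuous transition can be sketched as follows; the 2D point-in-triangle test and the \texttt{mesh} accessors are simplifying assumptions of ours (the actual mesh spans 3D surfaces):
\begin{verbatim}
def barycentric(p, tri):
    (ax, ay), (bx, by), (cx, cy) = tri
    d = (by-cy)*(ax-cx) + (cx-bx)*(ay-cy)
    u = ((by-cy)*(p[0]-cx) + (cx-bx)*(p[1]-cy)) / d
    v = ((cy-ay)*(p[0]-cx) + (ax-cx)*(p[1]-cy)) / d
    return u, v, 1.0 - u - v

def contains(p, tri):
    return all(c >= 0.0 for c in barycentric(p, tri))

def walk(pos, tri_id, step, mesh):
    # move freely inside the current triangle; when leaving it,
    # continue within the neighbor sharing the crossed edge
    new = (pos[0] + step[0], pos[1] + step[1])
    if contains(new, mesh.triangle(tri_id)):
        return new, tri_id
    for n in mesh.neighbors(tri_id):
        if contains(new, mesh.triangle(n)):
            return new, n
    return pos, tri_id  # blocked: no walkable triangle reached
\end{verbatim}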
%eval - wifi, fingerprinting
The outcome of the state evaluation process depends highly on the sensors used.
Most smartphone-based systems use received signal strength indications (RSSI) provided by \docWIFI{} or Bluetooth as a source of absolute positioning information.
Here, one can mainly distinguish between fingerprinting-based solutions and those based on signal strength prediction models \cite{Ebner-17}.
Indoor localization using \docWIFI{} fingerprints was first addressed by \cite{radar}.
During a one-time offline phase, a multitude of reference measurements is conducted.
During the online phase, the pedestrian's location is then inferred by comparing those prior measurements against live readings.
Based on this pioneering work, many further improvements were made within this field of research \cite{PropagationModelling, ProbabilisticWlan, meng11}.
However, despite a very high accuracy of up to \SI{1}{\meter}, fingerprinting approaches suffer from tremendous setup and maintenance times.
Using robots instead of a human workforce might thus be a viable choice, yet this is not a valid option for old buildings with limited accessibility due to uneven floors and small stairs.
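The online-phase comparison is essentially a nearest-neighbor search in signal space; a minimal Python sketch in the spirit of \cite{radar}, with the data layout being our assumption:
\begin{verbatim}
def locate(live, fingerprints, k=3):
    # live: {ap_id: rssi}; fingerprints: {position: {ap_id: rssi}}
    def dist(ref):
        shared = live.keys() & ref.keys()
        return sum((live[a] - ref[a]) ** 2 for a in shared) ** 0.5
    best = sorted(fingerprints,
                  key=lambda pos: dist(fingerprints[pos]))[:k]
    # average the k closest reference positions
    return tuple(sum(c) / len(best) for c in zip(*best))
\end{verbatim}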
%wifi, signal strength
Signal strength prediction models are a well-established field of research; they determine signal strengths for arbitrary locations by using an estimation model instead of real measurements.
While many of these models are intended for outdoor and line-of-sight purposes \cite{PredictingRFCoverage, empiricalPathLossModel}, they are often applied to indoor use-cases as well \cite{Ebner-17, farid2013recent}.
However, despite their solid performance in many different localization solutions, a complex scenario requires an equally complex signal strength prediction model.
As described in Section~1, historical buildings represent such a scenario, and thus the model has to take many different constraints into account.
An example is the wall-attenuation-factor model \cite{PathLossPredictionModelsForIndoor}.
It introduces an additional parameter to the well-known log-distance model \cite{IntroductionToRadio}, accounting for obstacles along the line-of-sight between the access point (AP) and the location in question by attenuating the signal with a constant value.
Depending on the use-case, this value describes the number and type of walls, ceilings, floors, etc.\ between both positions.
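In a common notation, the predicted signal strength at distance $d$ from the AP then reads
\[
P(d) = P(d_0) - 10\,\gamma\,\log_{10}\!\left(\frac{d}{d_0}\right) - \sum_{i} k_i\,a_i ,
\]
where $P(d_0)$ is the received power at a reference distance $d_0$, $\gamma$ denotes the path loss exponent, and every obstacle of type $i$ (wall, ceiling, floor, \ldots) crossed $k_i$ times attenuates the signal by a constant $a_i$.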
For walls, this requires an intersection test between each obstacle and the line-of-sight, which is costly for larger buildings.
Thus, \cite{Ebner-17} suggests considering only floors and ceilings, which can be determined without intersection checks and allows for real-time use-cases running on smartphones.
%wifi optimization
To further reduce the setup time, \cite{WithoutThePain} introduces an approach that works without any prior knowledge.
The authors use a genetic optimization algorithm to estimate the parameters of a signal strength prediction model, including the access point positions, as well as the pedestrian's locations during the walk.
The estimated parameters can be refined using additional walks.
Within this work, we present a similar optimization approach for estimating the APs' locations in 3D.
However, instead of taking multiple measurement walks, the locations are optimized based only on a few reference measurements, further decreasing the setup time.
Additionally, we will show that such an optimization scheme can partly compensate for the omission of the intersection tests discussed above.
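To give an impression, fitting a single AP's 3D position to a few reference measurements could be sketched as follows: a simple random local search under an assumed log-distance model, whereas \cite{WithoutThePain} employs a genetic algorithm:
\begin{verbatim}
import math, random

def predict(ap, pos, p0=-40.0, gamma=2.5):
    # log-distance model; p0 and gamma are assumed fixed here,
    # but would be part of the optimization as well
    d = max(1e-3, math.dist(ap, pos))
    return p0 - 10.0 * gamma * math.log10(d)

def fit_ap(measurements, start, iters=5000, sigma=0.5):
    # measurements: [((x, y, z), rssi), ...] at known positions
    def error(ap):
        return sum((rssi - predict(ap, pos)) ** 2
                   for pos, rssi in measurements)
    best, best_err = start, error(start)
    for _ in range(iters):
        cand = tuple(c + random.gauss(0.0, sigma) for c in best)
        e = error(cand)
        if e < best_err:
            best, best_err = cand, e
    return best
\end{verbatim}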
%immpf
Besides well-chosen probabilistic models, the system's performance is also highly affected by how problems inherent to the nature of a particle filter are handled.
They are often caused by restrictive assumptions about the dynamic system, such as the problem of sample impoverishment.
The authors of \cite{Sun2013} handled this problem by using an adaptive number of particles instead of a fixed one.
The key idea is to choose a small number of samples if the distribution is concentrated on a small part of the state space, and a large number of particles if the distribution is much more spread out and requires a higher diversity of samples.
The problem of sample impoverishment is then addressed by adapting the number of particles depending on the system's current uncertainty \cite{Fetzer-17}.
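A simple heuristic in this spirit could derive the particle count from the spatial spread of the current particle set (our illustration; the cited works use more principled criteria):
\begin{verbatim}
def adaptive_particle_count(particles, n_min=500, n_max=10000,
                            scale=1000.0):
    # few samples for a focused distribution, many for a
    # spread-out one requiring higher sample diversity
    n = len(particles)
    mx = sum(p[0] for p in particles) / n
    my = sum(p[1] for p in particles) / n
    var = sum((p[0]-mx)**2 + (p[1]-my)**2 for p in particles) / n
    return max(n_min, min(n_max, int(scale * var ** 0.5)))
\end{verbatim}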
In practice, sample impoverishment is often a problem of environmental restrictions and system dynamics.
In such cases, the method above fails, since it is not able to propagate new particles into the state space due to environmental restrictions, \eg{} walls or ceilings.
In \cite{Fetzer-17} we deployed an interacting multiple model particle filter (IMMPF) to solve sample impoverishment in such restrictive scenarios.
We combine two particle filters using a non-trivial Markov switching process, depending upon the Kullback-Leibler divergence between the two.
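For intuition, the divergence between both filters' particle sets could be estimated with a simple histogram-based sketch (our illustration; the actual switching process of \cite{Fetzer-17} is more involved):
\begin{verbatim}
import math
from collections import Counter

def kl_divergence(pa, pb, cell=0.5, eps=1e-9):
    # D_KL(A || B) between two 2D particle sets via gridding
    def hist(ps):
        c = Counter((round(x / cell), round(y / cell))
                    for x, y in ps)
        return {k: v / len(ps) for k, v in c.items()}
    ha, hb = hist(pa), hist(pb)
    return sum(p * math.log(p / hb.get(k, eps))
               for k, p in ha.items())
\end{verbatim}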
However, deploying an IMMPF is in many cases not necessary and produces additional processing overhead.
Thus, a much simpler, yet heuristic, method is presented within this paper.
%estimation
Finally, as the name recursive state estimation suggests, the most probable state within the state space has to be found to provide the ``best estimate'' of the underlying problem.
With the discrete representation of a particle filter, this is often done by providing a single value, also known as a sample statistic, to serve as a best guess \cite{Bullmann-18}.
Examples are the weighted average over all particles or the particle with the highest weight.
However, in complex scenarios, like a multimodal representation of the posterior, such methods fail to provide an accurate statement about the most probable state.
Thus, in \cite{Bullmann-18} we present an approximation scheme for kernel density estimates (KDE).
Recovering the probability density function using an efficient KDE algorithm yields a promising approach to solving the state estimation problem in a more profound way.
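A brute-force variant of this idea evaluates a weighted Gaussian KDE at every particle and returns the densest location (an $O(n^2)$ stand-in for the efficient scheme of \cite{Bullmann-18}, with an assumed bandwidth):
\begin{verbatim}
import math

def kde(x, particles, weights, h=1.0):
    # weighted Gaussian kernel density estimate at position x
    return sum(w * math.exp(-((x[0]-p[0])**2 + (x[1]-p[1])**2)
                            / (2.0 * h * h))
               for p, w in zip(particles, weights))

def best_estimate(particles, weights, h=1.0):
    # pick the particle at which the recovered density peaks
    return max(particles,
               key=lambda p: kde(p, particles, weights, h))
\end{verbatim}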