added some comments. more to-do

2018-09-17 19:31:03 +02:00
parent 93082818ef
commit ed46dd65dd
5 changed files with 67 additions and 31 deletions


@@ -51,16 +51,16 @@ they usually suffer from reduced accuracy for large open spaces, as many impleme
We therefore present a novel technique based on continuous walks along a navigation mesh.
Like the graph, the mesh, consisting of triangles sharing adjacent edges,
is created once during an offline phase, based on the building's 3D floorplan.
Using large triangles dramatically reduces the memory footprint (a few megabytes for large buildings) while still improving quality (triangle edges directly adhere to architectural edges) and allowing for truly continuous transitions along the surface spanned by all triangles.
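To illustrate the data layout such a navigation mesh implies, the following sketch (our own illustration, not the paper's implementation) stores triangles with shared-edge adjacency and expresses a continuous location via barycentric coordinates on a single triangle:

```python
from dataclasses import dataclass

@dataclass
class Triangle:
    vertices: tuple     # three (x, y, z) corner points
    neighbours: tuple   # index of the triangle sharing each edge, or None

def position(tri, b0, b1, b2):
    # A continuous location on the mesh: barycentric coordinates
    # (b0 + b1 + b2 == 1) on one triangle of the mesh.
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri.vertices
    return (b0 * x0 + b1 * x1 + b2 * x2,
            b0 * y0 + b1 * y1 + b2 * y2,
            b0 * z0 + b1 * z1 + b2 * z2)
```

A walk that crosses an edge simply continues on the neighbour triangle sharing that edge, which is what makes transitions along the surface continuous.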
%eval - wifi, fingerprinting
The outcomes of the state evaluation process depend highly on the used sensors.
Most smartphone-based systems use received signal strength indications (RSSI) provided by \docWIFI{} or Bluetooth as a source of absolute positioning information.
Here, one mainly distinguishes between fingerprinting and signal-strength prediction model based solutions \cite{Ebner-17}.
Indoor localization using \docWIFI{} fingerprints was first addressed by \cite{radar}.
During a one-time offline-phase, a multitude of reference measurements are conducted.
During the online-phase the pedestrian's location is then inferred by comparing those prior measurements against live readings.
Based on this pioneering work, many further improvements were made within this field of research \cite{PropagationModelling, ProbabilisticWlan, meng11}.
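The offline/online split can be sketched as nearest-neighbour matching in signal space (a minimal illustration in the spirit of \cite{radar}; the data layout and the parameter k are our assumptions):

```python
def locate(fingerprints, live, k=3):
    # fingerprints: list of ((x, y), {ap_id: rssi}) offline reference measurements
    # live: {ap_id: rssi} reading taken during the online-phase
    def signal_distance(ref):
        common = set(ref) & set(live)
        if not common:
            return float("inf")
        return (sum((ref[ap] - live[ap]) ** 2 for ap in common) / len(common)) ** 0.5
    nearest = sorted(fingerprints, key=lambda fp: signal_distance(fp[1]))[:k]
    # average the positions of the k best-matching reference points
    xs = [pos for pos, _ in nearest]
    return (sum(x for x, _ in xs) / len(xs), sum(y for _, y in xs) / len(xs))
```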
@@ -70,20 +70,20 @@ Using robots instead of human workforce might thus be a viable choice, still thi
%wifi, signal strength
Signal strength prediction models are a well-established field of research to determine signal strengths for arbitrary locations by using an estimation model instead of real measurements.
While many of them are intended for outdoor and line-of-sight purposes \cite{PredictingRFCoverage, empiricalPathLossModel}, they are often applied to indoor use-cases as well \cite{Ebner-17, farid2013recent}.
Besides their solid performance in many different localization solutions, a complex scenario requires an equally complex signal strength prediction model.
As described in Section 1, historical buildings represent such a scenario, and thus the model has to take many different constraints into account.
An example is the wall-attenuation-factor model \cite{PathLossPredictionModelsForIndoor}.
It introduces an additional parameter to the well-known log-distance model \cite{IntroductionToRadio}, which considers obstacles between (line-of-sight) the access point (AP) and the location in question by attenuating the signal with a constant value.
Depending on the use-case, this value describes the number and type of walls, ceilings, floors, etc. between both positions.
For obstacles, this requires an intersection-test of each obstacle with the line-of-sight, which is costly for larger buildings.
Thus, \cite{Ebner-17} suggests considering only floors/ceilings, which can be determined without intersection checks and allows for real-time use-cases running on smartphones.
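As a concrete sketch, the wall-attenuation variant of the log-distance model predicts $\mathit{RSSI}(d) = \mathit{RSSI}(d_0) - 10\,n\,\log_{10}(d/d_0) - k \cdot \mathit{WAF}$, where $k$ counts the attenuating obstacles on the line-of-sight; the parameter values below are illustrative assumptions, not calibrated figures:

```python
import math

def predicted_rssi(d, rssi_d0=-40.0, d0=1.0, n=2.5, obstacles=0, waf=3.1):
    # Log-distance path loss plus a constant wall-attenuation factor (WAF)
    # per obstacle between transmitter and receiver; all parameter values
    # here are illustrative assumptions.
    return rssi_d0 - 10.0 * n * math.log10(d / d0) - obstacles * waf
```

Replacing the per-wall obstacle count by just the number of floors/ceilings between both positions yields the cheaper variant without intersection tests.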
%wifi optimization
To further reduce the setup-time, \cite{WithoutThePain} introduces an approach that works without any prior knowledge.
They use a genetic optimization algorithm to estimate the parameters for a signal strength prediction, including access point positions, and the pedestrian's locations during the walk.
The estimated parameters can be refined using additional walks.
Within this work we present a similar optimization approach for estimating the AP's location in 3D.
However, instead of taking multiple measuring walks, the locations are optimized based only on some reference measurements, further decreasing the setup-time.
Additionally, we will show that such an optimization scheme can partly compensate for omitting the aforementioned intersection-tests.
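A minimal evolutionary sketch of this idea (our own simplification: a 3-D AP position is fitted to reference measurements under a noiseless log-distance model; population size, mutation scale and bounds are hypothetical):

```python
import math
import random

def predict(ap, pos, rssi_d0=-40.0, n=2.5):
    # Log-distance prediction for a candidate AP position.
    return rssi_d0 - 10.0 * n * math.log10(max(math.dist(ap, pos), 0.1))

def estimate_ap(refs, generations=200, pop_size=30, seed=1):
    # refs: list of ((x, y, z), rssi) reference measurements
    rng = random.Random(seed)
    error = lambda ap: sum((rssi - predict(ap, pos)) ** 2 for pos, rssi in refs)
    pop = [tuple(rng.uniform(0.0, 20.0) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)              # selection: keep the fittest third
        elite = pop[: pop_size // 3]
        pop = elite + [tuple(c + rng.gauss(0.0, 0.5) for c in rng.choice(elite))
                       for _ in range(pop_size - len(elite))]  # mutation
    return min(pop, key=error)
```

With a handful of reference measurements this converges on the AP position without any prior knowledge of it; refining with additional measurements works the same way.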
%immpf
@@ -91,20 +91,21 @@ Besides well chosen probabilistic models, the system's performance is also highl
They are often caused by restrictive assumptions about the dynamic system, like the aforementioned sample impoverishment.
The authors of \cite{Sun2013} handled the problem by using an adaptive number of particles instead of a fixed one.
The key idea is to choose a small number of samples if the distribution is focused on a small part of the state space and a large number of particles if the distribution is much more spread out and requires a higher diversity of samples.
The problem of sample impoverishment is then mitigated by adapting the number of particles to the system's current uncertainty \cite{Fetzer-17}.
In practice, sample impoverishment is often a problem of environmental restrictions and system dynamics.
Therefore, the method above fails, since it is not able to propagate new particles into the state space due to environmental restrictions, e.g., walls or ceilings.
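As a heuristic illustration of the adaptive-particle-count idea (the mapping below is our own, not the exact rule of \cite{Sun2013} or \cite{Fetzer-17}), the count can be tied to the spread of the current cloud:

```python
import statistics

def adaptive_particle_count(positions, n_min=100, n_max=5000, per_var=200.0):
    # Few particles when the posterior is focused, many when it is
    # spread out; the 1-D position variance serves as uncertainty proxy.
    if len(positions) < 2:
        return n_min
    n = int(n_min + per_var * statistics.pvariance(positions))
    return max(n_min, min(n, n_max))
```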
In \cite{Fetzer-17} we deployed an interacting multiple model particle filter (IMMPF) to solve sample impoverishment in such restrictive scenarios.
We combine two particle filters using a non-trivial Markov switching process, depending upon the Kullback-Leibler divergence between both.
However, deploying an IMMPF is in many cases not necessary and produces additional processing overhead.
Thus, a much simpler, but heuristic method is presented within this paper.
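The switching criterion can be sketched as a discrete Kullback-Leibler divergence between the two filters' weight distributions, binned to a common support (a generic textbook formulation, not the exact computation of \cite{Fetzer-17}):

```python
import math

def discrete_kl(p, q, eps=1e-12):
    # KL(p || q) for two normalized histograms over the same bins;
    # eps guards against empty bins.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```

A divergence near zero means both filters agree and the simpler one suffices; a large value indicates they diverged and triggers the switch.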
%estimation
Finally, as the name recursive state estimation implies, it requires finding the most probable state within the state space to provide the ``best estimate'' of the underlying problem.
Given the discrete nature of a particle representation, this is often done by providing a single value, also known as a sample statistic, to serve as a best guess \cite{Bullmann-18}.
Examples are the weighted-average over all particles or the particle with the highest weight.
However, in complex scenarios like a multimodal representation of the posterior, such methods fail to provide an accurate statement about the most probable state.
Thus, in \cite{Bullmann-18} we present a rapid computation scheme of kernel density estimates (KDE).
Recovering the probability density function using an efficient KDE algorithm yields a promising approach to solving the state estimation problem in a more profound way.
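A brute-force illustration of why a density-based estimate outperforms simple sample statistics on a multimodal posterior (1-D grid search; bandwidth and grid resolution are hypothetical, and \cite{Bullmann-18} computes this far more efficiently):

```python
import math

def kde_mode(particles, weights, bandwidth=0.5, steps=200):
    # Evaluate a weighted Gaussian KDE on a grid and return its argmax.
    lo, hi = min(particles), max(particles)
    best_x, best_d = lo, -1.0
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        d = sum(w * math.exp(-0.5 * ((x - p) / bandwidth) ** 2)
                for p, w in zip(particles, weights))
        if d > best_d:
            best_x, best_d = x, d
    return best_x
```

For a bimodal cloud with 30% of the weight near 0 and 70% near 10, the weighted average lands between the modes at 7, while the KDE argmax correctly picks the dominant mode.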