a new round, a new review pass

This commit is contained in:
toni
2018-11-07 16:42:06 +01:00
parent ef775e60ba
commit 5fc4de78d6
6 changed files with 35 additions and 13 deletions

View File

@@ -30,7 +30,7 @@ Care was taken to have at least two beacons in each room and a third beacon visi
Due to the difficult architecture and the extremely thick walls of the museum, we decided on this procedure, which explains the rather large number of \SI{42}{} transmitters compared to what modern buildings would require.
Another reason for the high number of beacons is that we did not want to analyze the quality of the Wi-Fi signal coverage for further improvements, as this can be a very time-consuming task.
In many areas of the building an improvement would not even be possible due to the lack of power sockets.
To compensate for that, battery-powered beacons could be used, but we consider this approach less practicable, so we did not take this option.
%To compensate for that, battery-powered beacons could be used, but we consider this approach less practicable, so we did not take this option.
The power sockets are located at different heights ranging from \SI{0.2}{\meter} to \SI{2.5}{\meter}.
Consequently, there were no prior requirements on how a single beacon should be placed exactly; its position was dictated by the socket's position.
Considering all the above, the beacons were placed more or less freely and to the best of our knowledge.}
@@ -40,7 +40,7 @@ The positions of the fingerprints are set within our 3D map editor (see fig. \re
The reference points were placed every \SI{3}{\meter} to \SI{7}{\meter} from each other, however, as can be seen in fig. \ref{fig:apfingerprint}, not necessarily accurately.
As the optimization scheme does not require equally spaced reference points, doing so would result in superfluous effort.
Furthermore, it is not easy to reproduce the exact position for taking the reference measurements in the building later on.
Of course, this could be achieved with appropriate hardware (e.g. a laser scanner), but again, this requires more time and care, which in our opinion does not justify a presumed accuracy gain of a few decimeters.}
Of course, this could be achieved with appropriate hardware (e.g. a laser scanner), but again, this requires more time and care, which in our opinion does not justify a presumed accuracy gain of a few decimeters.} \addy{Therefore, we accept the resulting inaccuracy between the (reference) position stored on the map and the actual position where the measurement took place, given the enormous time savings.}
\add{Summing up the above, the following initial steps are required to utilize our localization system in a building:
\begin{enumerate}
@@ -59,9 +59,10 @@ Creating the floor plan including walls and stairs took us approximately \SI{40}
Adding semantic information such as room numbers would of course take additional time.
All remaining steps were performed on-site using our smartphone app for localization, which can be seen in fig. \ref{fig:yasmin}.
As the museum did not provide any Wi-Fi infrastructure, we installed \SI{42}{} beacons as explained above.
With the help of the museum's janitor, this step took only \SI{30}{\minute}, as he was well aware of all available power outlets and also helped plug them in.
After that, each of the \SI{133}{} reference points was scanned 30 times ($\approx \SI{25}{\second}$ scan time) using a Motorola Nexus 6 at the \SI{2.4}{GHz} Wi-Fi band.
This took \SI{85}{\minute}, as all measurements were conducted using the same smartphone.
With the help of the museum's janitor, this step took only \SI{30}{\minute}, as he was well aware of all available power outlets and also helped plug them in.}
\addy{After that, \SI{30}{} Wi-Fi scans were conducted and recorded for each of the \SI{133}{} reference points using a Motorola Nexus 6 at the \SI{2.4}{GHz} Wi-Fi band. This took approximately \SI{25}{\second} per point, as the Android OS restricts the scan rate.}
%After that, each of the \SI{133}{} reference points was scanned 30 times ($\approx \SI{25}{\second}$ scan time) using a Motorola Nexus 6 at the \SI{2.4}{GHz} Wi-Fi band.
\add{In total, this took \SI{85}{\minute}, as all measurements were conducted using the same smartphone.
The optimized Wi-Fi model and the mesh can be created automatically within a negligible amount of time directly on the smartphone, which then enables the pedestrian to start the localization.
Of course, for the experiments conducted below, additional knowledge was obtained to evaluate the quality of the proposed methods and the overall localization error.
Thus, the times provided above were measured for a pure localization installation, as for example a customer would order, while the experiments were performed over a 2-day period.
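As a rough consistency check (the split is not stated explicitly in the text), the pure scan time accounts for about two thirds of this total:

```latex
% pure scanning: 133 reference points at roughly 25 s each
133 \times \SI{25}{\second} = \SI{3325}{\second} \approx \SI{55}{\minute}
% the remaining ~30 min were presumably spent moving between points
```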
@@ -235,7 +236,7 @@ However, as the overall error suggests, this is not always an advantage, which w
%warum ist die optimierung tdz. ganz gut?
As mentioned above, some areas are heavily attenuated by thick walls, which simply does not fit the signal strength prediction model used.
As discussed in sections \ref{sec:relatedWork} and \ref{sec:wifi}, we only consider ceilings within the model to avoid computationally expensive wall intersection tests.
A far higher number of reference measurements in bad areas can therefore only increase the accuracy to a limited extent.
A far higher number of reference measurements in bad areas can therefore only increase the accuracy to a limited extent, \addy{whereas increasing the number of reference points could compensate for this; however, this requires additional setup time, which runs contrary to a fast deployment.}
Nevertheless, by optimizing all parameters (\mPosAPVec{}, \mTXP{}, \mPLE{} and \mWAF{}) the system provides far better localization results compared to using the \docAPshort{}'s real positions with empirical values or even optimized values only for \mTXP{}, \mPLE{} and \mWAF{}.
The reason for this is obvious.
The optimized parameters fit the (unrealistic) signal strength prediction model much better than the real ones and thus yield a smaller error between measured and predicted RSSI.
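The prediction model discussed here (transmit power \mTXP{}, path-loss exponent \mPLE{} and attenuation \mWAF{}, with only ceilings considered) corresponds to the classical log-distance model. A minimal sketch, with illustrative parameter values rather than the fitted ones from the optimization:

```python
import math

def predict_rssi(d, txp=-40.0, ple=2.0, n_ceilings=0, waf=5.0):
    """Log-distance path loss: predicted RSSI (dBm) at distance d (m).

    txp: RSSI at the 1 m reference distance, ple: path-loss exponent,
    n_ceilings: ceilings intersected by the direct path, waf: attenuation
    per ceiling (dB). All defaults are illustrative placeholders.
    """
    d = max(d, 1.0)  # clamp below the 1 m reference distance
    return txp - 10.0 * ple * math.log10(d) - n_ceilings * waf
```

The optimization then searches for the parameter set (including the transmitter positions) minimizing the error between measured and predicted RSSI at the reference points.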
@@ -273,7 +274,7 @@ The 4 chosen walking paths can be seen in fig. \ref{fig:floorplan}.
Walk 0 is \SI{152}{\meter} long and took about \SI{2.30}{\minute} to walk.
Walk 1 has a length of \SI{223}{\meter} and Walk 2 a length of \SI{231}{\meter}, both required about \SI{6}{\minute} to walk.
Finally, Walk 3 is \SI{310}{\meter} long and took \SI{10}{\minute} to walk.
All walks were carried out by 4 different male testers using either a Samsung Note 2, Google Pixel One or Motorola Nexus 6 for recording the measurements.
\addy{Each of the single walks was} carried out by 4 different male testers using either a Samsung Note 2, Google Pixel One or Motorola Nexus 6 for recording the measurements.
All in all, we recorded \SI{28}{} distinct measurement series, \SI{7}{} for each walk.
The picked walks intentionally contain erroneous situations, in which many of the problems treated above occur.
\del{This allows us to discuss everything in detail.}

View File

@@ -13,7 +13,7 @@ Since 1936, the \SI{2500}{\square\meter} building acts as a museum of the mediev
Such buildings are often full of nooks and crannies, which makes it hard for dynamic models using any kind of pedestrian dead reckoning (PDR). Here, the error accumulates not only over time, but also with the number of turns and steps made \cite{Ebner-15}.
\del{There is also a higher chance of detecting false or misplaced turns,} \add{There is also a higher probability of detecting a wrong turn,} which can cause the position estimation to lose track or get stuck within a demarcated area.
Thus, this paper presents a \del{robust but realistic} \add{continuous} movement model using a three-dimensional navigation mesh based on triangles.
\add{In addition, a novel threshold-based activity-recognition is used to allow for smooth floor changes.}
\add{In addition, a \del{novel} threshold-based activity-recognition is used to allow for smooth floor changes.}
%In addition, this allows for very small map sizes, consuming little storage space.
In localization systems using a sample based density representation, like particle filters, aforementioned problems can further lead to more advanced problems like sample impoverishment \cite{Fetzer-17} or multimodalities \cite{Fetzer-16}.
@@ -51,7 +51,7 @@ In the here presented scenario, the beacons do not establish a wireless network
To sum up, \add{this work presents an updated version of the winning localization system of the smartphone-based competition at IPIN 2016 \cite{Ebner-15}, including the improvements and newly developed methods that have been made since then \cite{Ebner-16, Ebner-17, Fetzer-17, Bullmann-18}.
This is the first time that all these previously acquired findings have been fully combined and applied simultaneously.
During the here presented update, the following novel contributions will be presented and added to the system:
During the here presented update, the following contributions will be presented and added to the system:
\begin{itemize}
\item The pedestrian's movement is modelled in a more realistic way using a navigation mesh, generated from the building's floor plan. This only allows movements that are actually feasible, e.g. no walking through walls. Compared to the gridded-graph structure we used before \cite{Ebner-16}, the mesh allows continuous transitions and reduces the required storage space drastically.
\item To enable smoother floor changes, a threshold-based activity recognition using barometer and accelerometer readings is added to the state evaluation process of the particle filter. The method is able to distinguish between standing, walking, walking up and walking down.
@@ -69,9 +69,12 @@ The existing Wi-Fi infrastructure can consist of the aforementioned Wi-Fi beacon
The combination of both technologies is feasible.
Nevertheless, the museum considered in this work has no Wi-Fi infrastructure at all, not even a single access point.
Thus, we distributed a set of \SI{42}{beacons} throughout the complete building by simply plugging them into available power outlets.
In addition to evaluating the novel contributions and the overall performance of the system, we have carried out further experiments to determine the performance of our Wi-Fi optimization in such a complex scenario as well as a detailed comparison between KDE-based and weighted-average position estimation.}
In addition to evaluating the contributions and the overall performance of the system, we have carried out further experiments to determine the performance of our Wi-Fi optimization in such a complex scenario as well as a detailed comparison between KDE-based and weighted-average position estimation.}
%novel experiments to previous methods due to the complex scenario blah und blub.}
%Finally, it should be mentioned that the here presented work is an highly updated version of the winner of the smartphone-based competition at IPIN 2016 \cite{Ebner-15}.
\blfootnote{We would like to take this opportunity to thank Dr. Helmuth M\"ohring and all other employees of the Reichsstadtmuseum Rothenburg for the great cooperation and the provision of their infrastructure and resources. }
%big advantage of the navmesh: consistent accuracy at a much, much lower memory footprint. Also nothing needs to be configured, such as the grid size. Add this once more here in the intro and in the transition. The accuracies of navmesh and grid are very similar <- state this again in the eval.

View File

@@ -51,6 +51,23 @@ Using large triangles reduces the memory footprint dramatically (a few megabytes
while still increasing the quality (triangle-edges directly adhere to architectural-edges) and allows
for truly continuous transitions along the surface spanned by all triangles.
%activity recognition
\addy{To extend the movement model into the third dimension, i.e. to walk continuously along stairs, the barometer is used. The most basic approach uses absolute pressure; however, this value differs greatly between individual buildings. Thus, relative approaches, initializing with a zero pressure or using a sliding window, are more often integrated into the movement model, as they provide continuous updates with every incoming barometer reading. Thanks to the underlying mesh, the pedestrian's current activity can also be used to provide continuous floor changes. This is done by assigning different types to the triangles of the mesh, e.g. stair, floor or elevator. Depending on the recognized activity, the system is then able to allow or restrict the movement in certain areas of the building.}
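The relative, sliding-window variant mentioned above might be sketched as follows (window length and units are assumed for illustration; near sea level, a height change of one meter corresponds to roughly 0.12 hPa):

```python
from collections import deque

def make_relative_pressure(window=25):
    """Relative pressure via a sliding-window baseline (illustrative sketch).

    Returns a function mapping each raw barometer reading (hPa) to its
    offset from the mean of the last `window` readings. A falling offset
    indicates ascending, a rising offset descending.
    """
    history = deque(maxlen=window)

    def update(pressure_hpa):
        baseline = sum(history) / len(history) if history else pressure_hpa
        history.append(pressure_hpa)
        return pressure_hpa - baseline

    return update
```

Unlike a single zero-pressure initialization, the sliding window also absorbs slow weather-induced drift of the absolute pressure.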
\addy{In recent years, many different activity recognition approaches have been presented for wearable sensors \cite{}.
They occur in a wide variety of scenarios, such as in sports or in the health sector.
As modern smartphones become more and more powerful, classical approaches to pattern recognition can now be adapted directly.
Nevertheless, in the context of this work
%
%
Many different activity recognition approaches
%
a costly training phase, as they are ... based on the classical methodology of pattern recognition
%
In contrast to raw barometer data... more stable ...}
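A threshold-based classifier over accelerometer variance and relative pressure change, as used for the four activities named earlier (standing, walking, walking up, walking down), could be sketched like this; the threshold values are assumed placeholders, not the system's calibrated ones:

```python
def classify_activity(accel_std, pressure_delta,
                      accel_thresh=0.5, pressure_thresh=0.05):
    """Threshold-based activity classification (illustrative sketch).

    accel_std: std-dev of the acceleration magnitude over a short
    window (m/s^2); low values indicate standing still.
    pressure_delta: change of the relative pressure over the same
    window (hPa); pressure drops while walking upstairs.
    """
    if accel_std < accel_thresh:
        return "standing"
    if pressure_delta < -pressure_thresh:
        return "walking up"
    if pressure_delta > pressure_thresh:
        return "walking down"
    return "walking"
```

Such a scheme needs no training phase, only the two thresholds, which is the advantage over classical pattern recognition pipelines alluded to above.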
%eval - wifi, fingerprinting
The outcomes of the state evaluation process depend highly on the used sensors.
Most smartphone-based systems are using received signal strength indications (RSSI) given by \docWIFI{} or Bluetooth as a source for absolute positioning information.

View File

@@ -155,6 +155,7 @@
%comments for sensors journal
\newcommand{\del}[1]{\textcolor{red}{\hcancel{#1}}}
\newcommand{\add}[1]{\textcolor{blue}{#1}}
\newcommand{\addy}[1]{\textcolor{purple}{#1}}

Binary file not shown.

View File

@@ -20,9 +20,9 @@ Using a MCL you do not need to mix observations and actions in the same concept
From my point of view formulation of transition model T should be tackled using actions (steps) and observations (s_wifi) should be used like an observation model V (described like "Evaluation" in section 5).
-> Within this work we do not incorporate a smoothing step, as smoothing requires high computational power. The basic sequential Monte Carlo scheme (particle filter) is, however, the same as in all our previous works. As we have a pattern recognition background, we often refer to the CONDENSATION algorithm instead of equivalent methods such as the bootstrap particle filter. Despite the fact that CONDENSATION is often used within the field of visual tracking, it is based on the same assumptions as bootstrap or MCL, and thus equivalent. All three assume that the proposal distribution (sometimes called importance distribution) of the general sampling importance resampling (SIR) particle filter is the state transition p(q_t | q_t-1). In order to incorporate previous observations, we extend the transition to p(q_t | q_t-1, o_t-1). The validity of this statement can be easily proven.
We are not sure what exactly you mean by "researchers do not access to the agent information". In the end, sensor data as well as preprocessed data (e.g. the activity) can be seen as part of the observation. The MCL you are referring to is highly adapted to control theory and thus robot motion. However, we maintain a more probabilistic view of particle filtering, allowing for a more general formulation. Of course, it would also be possible to introduce some control (action) commands, as for example Sebastian Thrun did in his book "Probabilistic Robotics". However, this results in the same probability density, as the control (actions) would carry the same data as the observation mentioned earlier. We highly appreciate your suggestion of reformulating the particle filtering, although we refrain from pursuing it. Finally, we do not use any out-of-the-box implementation from OpenCV or MATLAB. As stated at the beginning of our experiments, we developed a C++ backend for localization, running on both desktop and smartphone.
-> Within this work we do not incorporate a smoothing step, as smoothing requires high computational power. The basic sequential Monte Carlo scheme (particle filter) is, however, the same as in all our previous works. As we have a pattern recognition background, we often refer to the CONDENSATION algorithm instead of equivalent methods such as the bootstrap particle filter. Despite the fact that CONDENSATION is often used within the field of visual tracking, it is based on the same assumptions as bootstrap or MCL, and thus equivalent. All three assume that the proposal distribution (sometimes called importance distribution) of the general sampling importance resampling (SIR) particle filter is the state transition p(q_t | q_t-1). In order to incorporate previous observations, we extend the transition to p(q_t | q_t-1, o_t-1). The validity of this statement can be easily proven. For us, a particle filter algorithm is thus not defined by the area in which it is used, but by its statistical properties (e.g. the choice of the proposal or the use of auxiliary variables).
We are not sure what exactly you mean by "researchers do not access to the agent information". In the end, sensor data as well as preprocessed data (e.g. the activity) can be seen as part of the observation. The MCL you are referring to is highly adapted to control theory and thus robot motion. However, we maintain a more probabilistic view of particle filtering, allowing for a more general formulation. Of course, it would also be possible to introduce some control (action) commands, as for example Sebastian Thrun did in his book "Probabilistic Robotics". However, this results in the same probability density, as the control (actions) would carry the same data as the observation mentioned earlier. Again, we have chosen the filtering methodology because of its statistical properties and its general formulation, not by application.
We highly appreciate your suggestion of reformulating the particle filtering, although we refrain from pursuing it. Finally, we do not use any out-of-the-box implementation from OpenCV or MATLAB. As stated at the beginning of our experiments, we developed a C++ backend for localization, running on both desktop and smartphone.
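The scheme described in this response (bootstrap SIR, where the proposal equals the state transition, so the importance weight reduces to the observation likelihood) can be sketched as a one-dimensional toy; this is purely illustrative, not the C++ backend mentioned above:

```python
import random

def particle_filter_step(particles, weights, transition, likelihood, obs):
    """One bootstrap SIR step. Since the proposal equals the state
    transition p(q_t | q_{t-1}), each new weight is simply the
    observation likelihood p(o_t | q_t), normalized over all particles."""
    n = len(particles)
    # resample according to the previous weights
    resampled = random.choices(particles, weights=weights, k=n)
    # propagate every particle through the transition model
    moved = [transition(q) for q in resampled]
    # weight by the observation model and normalize
    w = [likelihood(obs, q) for q in moved]
    total = sum(w)
    return moved, [wi / total for wi in w]
```

Extending the transition to p(q_t | q_{t-1}, o_{t-1}) simply means `transition` may additionally consume the previous observation (e.g. the recognized activity) without changing this structure.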
line 226: What is z_t? Is an observation o_t? nomenclature should be unified through the whole paper