More small fixes

MBulli
2018-09-16 20:31:21 +02:00
parent 1efb98ab3f
commit e72bcae7af
3 changed files with 11 additions and 11 deletions


@@ -22,7 +22,7 @@ However, similar to our previous, award-winning system, the setup is able to run
The experiments are separated into four sections:
First, we discuss the performance of the novel transition model and compare it to a grid-based approach.
In section \ref{sec:exp:opti} we examine \docWIFI{} optimization and how the real \docAPshort{} positions differ from it.
Subsequently, we conduct several test walks throughout the building to examine the estimation accuracy (in \SI{}{\meter}) of the localization system and discuss the solutions for sample impoverishment presented here.
Finally, the respective estimation methods are discussed in section \ref{sec:eval:est}.
\subsection{Transition}
@@ -56,7 +56,7 @@ Finally, the respective estimation methods are discussed in section \ref{sec:eva
\label{fig:transitionEval:d}
\end{subfigure}
\caption{Simple staircase scenario to compare the graph-based model with the navigation mesh. The black line indicates the current position and the green line gives the estimated path after 25 or 180 steps, both using the weighted average. The particles are colored according to their height. A pedestrian walks up and down the stairs several times in a row. After 25 steps, both methods produce good results, although there are already some outliers (blue particles). After 180 steps, the outliers using the graph have multiplied, leading to a multimodal situation. In contrast, the mesh makes it possible to remove particles that hit a wall and can thus prevent such a situation.}
\label{fig:transitionEval}
\end{figure}
@@ -70,7 +70,7 @@ We chose a simple, yet effective strategy: whenever a destination is unreachable
Of course, the graph does not require such a rule, since particles are only allowed to move on nodes and search for neighbors.
Fig. \ref{fig:transitionEval:a} and \ref{fig:transitionEval:b} illustrate the results after \SI{25}{steps} for each method.
The particles are colored according to their height and the walking path (green line) is estimated using the weighted average.
It can be seen that both methods provide similar results.
Due to the discrete grid structure, the purple particles on the graph scatter more strongly, while the mesh provides a truly continuous structure and thus a more compact representation.
It is important to note that outliers have already appeared in both scenarios (blue particles).
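The wall-rejection rule described above can be sketched as follows. This is a minimal illustration, not our implementation: `is_reachable` stands for a hypothetical navigation-mesh query that reports whether the straight segment between two points stays on walkable ground.

```python
import math

def transition(pos, heading, step_len, is_reachable):
    """One dead-reckoning step on the navigation mesh (illustrative sketch).

    `is_reachable(src, dst)` is a hypothetical mesh query. Whenever the
    destination is unreachable (e.g. the straight path crosses a wall),
    the particle is dropped instead of moved, which prevents outliers
    from accumulating over time.
    """
    dest = (pos[0] + step_len * math.cos(heading),
            pos[1] + step_len * math.sin(heading))
    if not is_reachable(pos, dest):
        return None  # particle is removed before the next resampling
    return dest
```

The graph-based model needs no such test, since particles can only ever occupy valid nodes.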
@@ -119,7 +119,7 @@ This allows a comparison with the optimized \docAPshort{} positions, what can al
\centering
\def\svgwidth{\columnwidth}
\input{gfx/optimization/wifiOptTopView.eps_tex}
\caption{Ground level of the building in the $xz$-plane from above, including the locations of the reference points, the ground truth and the optimized \docAPshort{}s. The gray line connects an \docAPshort{} with the corresponding optimization. The colored squares are areas of special interest and are discussed within the text. The corresponding pictures on the right side show the museum in these places.}
\label{fig:apfingerprint}
\end{figure}
%position error and where?
@@ -140,7 +140,7 @@ Again, the highest errors occur from \docAPshort{}s within the red and purple ar
%global vs local
Thus, the per-floor optimization scheme yields a smaller overall error, although its positioning error is higher than that of the global scheme.
The reason for the latter can be found within the purple area.
It marks a vaulted cellar that lies \SI{1.7}{\meter} below ground level and is connected by a narrow staircase.
Here, RSSI measurements taken from outside the ground level are strongly attenuated, while measurements taken from above are more moderately attenuated.
The per-floor scheme uses only references from the floor in question, while the global scheme uses all available references and thus more meaningful information in this area.
However, as the overall error suggests, this is not always an advantage, as we will see later in the localization.
@@ -149,7 +149,7 @@ However, as the overall error suggests, this is not always an advantage, which w
As mentioned above, some areas are heavily attenuated by thick walls, which simply does not fit the signal strength prediction model used.
As discussed in sections \ref{sec:relatedWork} and \ref{sec:wifi}, we only consider ceilings within the model to avoid computationally expensive wall-intersection tests.
A far higher number of reference measurements in bad areas can therefore only increase the accuracy to a limited extent.
Nevertheless, by optimizing all parameters (\mPosAPVec{}, \mTXP{}, \mPLE{} and \mWAF{}) the system provides far better localization results compared to using the \docAPshort{}'s real positions with empirical values, or even optimized values only for \mTXP{}, \mPLE{} and \mWAF{}.
The reason for this is obvious.
The optimized parameters fit the (unrealistic) signal strength prediction model much better than the real ones and thus yield a smaller error between measured and predicted RSSI.
Since walls are ignored by the model, optimizing the position of the access points can compensate for the resulting effects.
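For illustration, a ceiling-only log-distance model of this kind might be sketched as follows. The function name and the clamping to a 1 m reference distance are assumptions for this sketch, not the exact formulation from section \ref{sec:wifi}; the parameters correspond to \mTXP{}, \mPLE{} and \mWAF{}.

```python
import math

def predicted_rssi(distance, n_ceilings, txp, ple, waf):
    """Ceiling-only log-distance path-loss model (illustrative sketch).

    txp: transmit power at the 1 m reference distance (dBm)
    ple: path-loss exponent
    waf: attenuation per intersected ceiling (dB)
    Walls are deliberately ignored to avoid intersection tests;
    the optimizer compensates by shifting the AP position instead.
    """
    d = max(distance, 1.0)  # clamp to the reference distance
    return txp - 10.0 * ple * math.log10(d) - n_ceilings * waf
```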
@@ -205,7 +205,7 @@ Therefore, errors in $z$-direction are penalized by tripling the $z$-value.
%computation and Monte Carlo runs
For each walk we deployed 100 runs using \SI{5000}{particles} and set $N_{\text{eff}} = 0.85$ for resampling.
Instead of an initial position and heading, all walks start with a uniform distribution (random position and heading) as prior.
The overall localization results can be seen in table \ref{table:overall}.
Here, we distinguish between the respective anti-impoverishment techniques presented in chapter \ref{sec:impo}.
The simple anti-impoverishment method is added to the resampling step and thus uses the transition method presented in chapter \ref{sec:transition}.
In contrast, the $D_\text{KL}$-based method extends the transition and thus uses a standard cumulative resampling step.
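The effective-sample-size criterion and the standard cumulative resampling step can be sketched as follows. This is a minimal sketch assuming the common interpretation that $N_{\text{eff}} = 0.85$ denotes the threshold fraction of the particle count.

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized particle weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w * w)

def cumulative_resample(particles, weights, rng):
    """Standard cumulative (multinomial) resampling: draw N particles
    from the cumulative weight distribution and reset the weights."""
    w = np.asarray(weights, dtype=float)
    cdf = np.cumsum(w / w.sum())
    idx = np.searchsorted(cdf, rng.random(len(particles)))
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# with N = 5000 particles, resampling would be triggered once
# effective_sample_size(w) < 0.85 * 5000
```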
@@ -369,7 +369,7 @@ Only with new measurements coming from the hallway or other parts of the buildin
This leads to the conclusion that a weighted-average approach provides a smoother representation of the estimated locations and thus a higher robustness.
A comparison between both methods is illustrated in fig. \ref{fig:estimationcomp} using a measuring sequence of walk 2.
We have highlighted some interesting areas with colored squares.
The greatest difference between the respective estimation methods can be seen inside the green square, the gallery wing of the museum.
While the weighted average (blue) produces a very straight estimated path, the KDE-based method (red) is much more volatile.
This can be explained by the many small rooms that pedestrians pass through.
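The two estimators can be sketched as follows. The Gaussian kernel, the fixed bandwidth, and approximating the KDE mode by the densest particle are assumptions made for this sketch, not necessarily the exact procedure used in the system.

```python
import numpy as np

def weighted_average_estimate(positions, weights):
    """Weighted mean of the particle positions (N x d array)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return (positions * w[:, None]).sum(axis=0)

def kde_mode_estimate(positions, weights, bandwidth=1.0):
    """Approximate the mode of a weighted Gaussian KDE by evaluating the
    density at every particle and returning the densest one (assumed
    bandwidth; illustrative only)."""
    w = np.asarray(weights, dtype=float)
    d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(axis=-1)
    dens = (w[None, :] * np.exp(-d2 / (2.0 * bandwidth ** 2))).sum(axis=1)
    return positions[np.argmax(dens)]
```

With a multimodal particle cloud, the weighted mean is pulled between the modes while the KDE estimate snaps to the strongest one, which explains the smoother path of the former and the volatility of the latter.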
@@ -395,7 +395,7 @@ We hope to further improve such situations in future work by enabling the transi
\centering
\def\svgwidth{0.8\columnwidth}
{\input{gfx/estimationPath2/est.eps_tex}}
\caption{Estimation results of walk 2 using the KDE method (blue) and the weighted average (orange). While the latter provides a smoother representation of the estimated locations, the former gives a better idea of the quality of the underlying processes. For a clearer overview, the top level of the last floor is hidden. The colored squares are used as references within the text.}
\label{fig:estimationcomp}
\end{figure}


@@ -47,7 +47,7 @@ Especially, methods using relative measurements like pedestrian dead reckoning a
Nevertheless, this method is very easy to implement and we expect the system to be able to recover from nearly every situation, regardless of the cause.
A second method we suggest within this paper is a simplified version of our approach presented in \cite{Fetzer-17}.
Here, we used an additional, very simple particle filter to monitor whether our primary (localization) filter suffers from sample impoverishment.
If so, both filters are combined by exchanging particles among each other.
This allows the primary filter to recover while retaining prior knowledge.
However, we believe that such a combination of two independent filters is not necessary for most scenarios, so the resulting overhead can be avoided.
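A rough sketch of such a particle exchange between the two filters; the exchanged fraction is a placeholder value, not a figure from \cite{Fetzer-17}.

```python
import random

def exchange_particles(primary, monitor, frac=0.1, rng=None):
    """Swap a random fraction of particles between the primary filter and
    the monitoring filter, letting the primary recover from sample
    impoverishment while retaining some prior knowledge.
    `frac` is a placeholder value for illustration."""
    rng = rng or random.Random()
    k = max(1, int(len(primary) * frac))
    ip = rng.sample(range(len(primary)), k)
    im = rng.sample(range(len(monitor)), k)
    for i, j in zip(ip, im):
        primary[i], monitor[j] = monitor[j], primary[i]
    return primary, monitor
```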


@@ -14,7 +14,7 @@
\newcommand{\mPosAP}{\hat\varrho} % char for access point position
\newcommand{\mPos}{\varrho} % char for positions
\newcommand{\mPosVec}{\vec{\mPos}} % position vector
\newcommand{\mPosAPVec}{\ensuremath{\vec{\mPosAP}}} % AP position vector
\newcommand{\mRssiVec}{\vec{s}} % client signal strength measurements