The authors of \cite{Sun2013} handled the problem by using an adaptive number of samples.
The key idea is to use only a few particles if the distribution is concentrated on a small part of the state space, and a large number of particles if it is spread out and requires a higher diversity of samples.
The problem of sample impoverishment is then countered by adapting the number of particles depending on the system's current uncertainty \cite{Fetzer-17}.
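As a point of reference (a standard construction, not necessarily the measure used in \cite{Sun2013} or \cite{Fetzer-17}), the spread of a weighted particle set $\{(x^{(i)}, w^{(i)})\}_{i=1}^{N}$ with normalized weights can be quantified by its weighted sample mean and covariance,
\[
\hat{\mu} = \sum_{i=1}^{N} w^{(i)} x^{(i)}, \qquad
\hat{\Sigma} = \sum_{i=1}^{N} w^{(i)} \bigl(x^{(i)} - \hat{\mu}\bigr)\bigl(x^{(i)} - \hat{\mu}\bigr)^{\top},
\]
so that, e.g., $\mathrm{tr}(\hat{\Sigma})$ grows with the current uncertainty and can be mapped to a particle count between fixed lower and upper bounds.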
In practice, however, sample impoverishment is often caused by environmental restrictions and the system dynamics.
The method above therefore fails, since it cannot propagate new particles into parts of the state space that are blocked by environmental restrictions, e.g. walls or ceilings.
In \cite{Fetzer-17} we deployed an interacting multiple model particle filter (IMMPF) to overcome sample impoverishment in such restrictive scenarios.
We combine two particle filters using a non-trivial Markov switching process that depends on the Kullback-Leibler divergence between them.
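For completeness, the Kullback-Leibler divergence between two densities $p$ and $q$ is defined as
\[
D_{\mathrm{KL}}(p \,\|\, q) = \int p(x) \, \log \frac{p(x)}{q(x)} \, \mathrm{d}x ,
\]
which for two particle filters has to be approximated from the two weighted sample sets, e.g. over a common discretization of the state space; the concrete estimator of \cite{Fetzer-17} is not repeated here.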
However, deploying an IMMPF is in many cases not necessary and produces additional processing overhead.
Thus, a much simpler, though heuristic, method is presented in this paper.
%estimation
Finally, as the name recursive state estimation suggests, the most probable state within the state space has to be found in order to provide the ``best estimate'' of the underlying problem.
In the discrete setting of a particle representation, this is often done by providing a single value, also known as a sample statistic, which serves as the best guess \cite{Bullmann-18}.
Examples are the weighted average over all particles, the particle with the highest weight, or estimates obtained by fitting a parametric model such as a normal distribution.
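Written out for a weighted particle set $\{(x^{(i)}, w^{(i)})\}_{i=1}^{N}$ with normalized weights, the first two estimators read
\[
\hat{x}_{\mathrm{mean}} = \sum_{i=1}^{N} w^{(i)} x^{(i)}, \qquad
\hat{x}_{\mathrm{max}} = x^{(i^{*})}, \quad i^{*} = \arg\max_{i} w^{(i)} .
\]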
However, in complex scenarios, e.g. when the posterior is multimodal, such methods fail to provide an accurate statement about the most probable state.
Thus, in \cite{} we present a rapid computation scheme for this problem.
A well-known solution is kernel density estimation (KDE).
For example, \cite{} used a ... in .... However, this method is computationally very expensive and thus not practical for smartphone-based solutions.
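For reference, a weighted kernel density estimate over the particle set has the standard form
\[
\hat{p}(x) = \sum_{i=1}^{N} w^{(i)} \, K_{h}\bigl(x - x^{(i)}\bigr),
\]
with a kernel $K_{h}$ of bandwidth $h$, e.g. a Gaussian; evaluating it naively at $M$ query points requires $\mathcal{O}(NM)$ kernel evaluations, which explains the prohibitive computation time on a smartphone.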