Fixed many bugs

2018-02-27 10:49:05 +01:00
parent 9d4927a365
commit 1fb9461a5f
8 changed files with 67 additions and 68 deletions


@@ -4,8 +4,7 @@
Sensor fusion approaches are often based upon probabilistic descriptions like particle filters, using samples to represent the distribution of a dynamical system.
To update the system recursively in time, probabilistic sensor models process the noisy measurements and a state transition function provides the system's dynamics.
Therefore, a sample or particle represents one possible system state, e.g. the position of a pedestrian within a building.
-In most real world scenarios one is then interested in finding the most probable state within the state space, to provide the \qq{best estimate} of the underlying problem.
-Generally speaking, solving the state estimation problem.
+In most real-world scenarios, one is then interested in finding the most probable state within the state space to provide the best estimate of the underlying problem; generally speaking, in solving the state estimation problem.
In the discrete manner of a sample representation, this is often done by providing a single value, also known as a sample statistic, to serve as a \qq{best guess}.
This value is then calculated by means of simple parametric point estimators, e.g. the weighted average over all samples, the sample with the highest weight, or by assuming other parametric statistics like normal distributions \cite{Fetzer2016OMC}.
%but there must be other methods... darn it all... still, a weighted average is fundamentally a point estimator, right? (https://www.statlect.com/fundamentals-of-statistics/point-estimation)
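
To make the two simple point estimators named above concrete, here is a minimal sketch (assuming particles xs as an N x d array with normalized weights ws; all names are illustrative, not from the paper):

    import numpy as np

    def weighted_average_estimate(xs, ws):
        # Weighted mean over all samples; for a multimodal posterior this
        # may land between the modes rather than on one of them.
        return np.average(xs, axis=0, weights=ws)

    def max_weight_estimate(xs, ws):
        # The single sample carrying the highest weight.
        return xs[np.argmax(ws)]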
@@ -17,9 +16,9 @@ As a result, those techniques are not able to provide an accurate statement abou
For example, in a localization scenario where a bimodal distribution represents the current posterior, a reliable position estimate is more likely to lie at one of the modes rather than somewhere in between, which is what a simple weighted-average estimate would provide.
Additionally, in most practical scenarios the sample size, and therefore the resolution, is limited, causing the variance of the sample-based estimate to be high \cite{Verma2003}.
-It is obvious, that a computation of the full posterior could solve the above, but finding such an analytical solution is an intractable problem, what is the reason for applying a sample representation in the first place.
+It is obvious that a computation of the full posterior could solve the above, but finding such an analytical solution is an intractable problem, which is the reason for applying a sample representation in the first place.
Another promising way is to recover the probability density function from the sample set itself, using a non-parametric estimator like kernel density estimation (KDE).
-With this, it is easy to find the \qq{real} most probable state and thus to avoid the aforementioned drawbacks.
+With this, it is easy to recover the \qq{real} most probable state and thus to avoid the aforementioned drawbacks.
However, non-parametric estimators tend to consume a large amount of computational time, which renders them impractical for real-time scenarios.
Nevertheless, the availability of a fast density estimate might improve the accuracy of today's sensor fusion systems without sacrificing their real-time capability.
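
As a rough illustration of why a direct KDE is expensive, a naive weighted Gaussian KDE in one dimension, evaluated on a grid of M points with bandwidth h, costs O(N * M) (all names are assumptions for the sketch):

    import numpy as np

    def kde(grid, particles, weights, h):
        # Direct evaluation: one Gaussian kernel per particle and grid point.
        diffs = (grid[:, None] - particles[None, :]) / h       # M x N matrix
        kernels = np.exp(-0.5 * diffs**2) / (h * np.sqrt(2.0 * np.pi))
        return kernels @ weights                               # density at each grid point

    def most_probable_state(grid, particles, weights, h):
        # The "real" most probable state: the grid point of highest density.
        return grid[np.argmax(kde(grid, particles, weights, h))]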
@@ -34,7 +33,7 @@ By the central limit theorem, multiple recursion of a box filter yields an appro
This process converges quite fast to a reasonably close approximation of the ideal Gaussian.
In addition, a box filter can be computed extremely fast by a computer, due to its intrinsic simplicity.
While the idea to use several box filter passes to approximate a Gaussian has been around for a long time, the application to obtain a fast KDE is new.
-Especially in time critical and time sequential sensor fusion scenarios, the here presented approach outperforms other state of the art solutions, due to a fully linear complexity \landau{N} and a negligible overhead, even for small sample sets.
+Especially in time-critical and time-sequential sensor fusion scenarios, the approach presented here outperforms other state-of-the-art solutions, due to its fully linear complexity and negligible overhead, even for small sample sets.
In addition, it requires only a few elementary operations and is highly parallelizable.
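
A minimal sketch of the underlying idea (not the authors' implementation; histogram size, box width, and pass count are illustrative): bin the weighted samples into a histogram, then smooth it with a few box-filter passes, each computed in O(N) via a running sum, so that by the central limit theorem the repeated box filter approaches a Gaussian kernel:

    import numpy as np

    def box_filter(a, w):
        # Edge-truncated moving average of width w via a prefix sum: O(N) per pass.
        c = np.cumsum(np.concatenate(([0.0], a)))
        idx = np.arange(a.size)
        lo = np.clip(idx - w // 2, 0, a.size)
        hi = np.clip(idx + w // 2 + 1, 0, a.size)
        return (c[hi] - c[lo]) / (hi - lo)

    def fast_kde_1d(samples, weights, bins=256, box_width=9, passes=3):
        # Histogram of the weighted samples, then repeated box filtering.
        dens, edges = np.histogram(samples, bins=bins, weights=weights)
        for _ in range(passes):
            dens = box_filter(dens, box_width)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, dens  # mode estimate: centers[np.argmax(dens)]

In this sketch each pass touches every bin exactly once, which is where the linear complexity and negligible overhead claimed above come from.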