Sensor fusion approaches are often based upon probabilistic descriptions like particle filters, which use samples to represent the distribution of a dynamical system.
To update the system recursively in time, probabilistic sensor models process the noisy measurements and a state transition function provides the system's dynamics.
Therefore, a sample, or particle, represents one possible system state, e.g. the position of a pedestrian within a building.
In most real-world scenarios, one is then interested in finding the most probable state within the state space, to provide the \qq{best estimate} of the underlying problem.
Generally speaking, this amounts to solving the state estimation problem.
In the discrete setting of a sample representation, this is often done by providing a single value, also known as a sample statistic, to serve as a \qq{best guess}.
This value is then calculated by means of simple parametric point estimators, e.g. the weighted average over all samples or the sample with the highest weight, or by assuming other parametric statistics like normal distributions \cite{Fetzer2016OMC}.
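As an illustration, two such point estimators might be sketched as follows; the particle set here is hypothetical synthetic data, not taken from the source:

```python
import numpy as np

# Hypothetical particle set for a 2-D localization problem:
# each particle is one possible pedestrian position, with a weight.
rng = np.random.default_rng(0)
particles = rng.uniform(0.0, 10.0, size=(200, 2))  # candidate positions
weights = rng.random(200)
weights /= weights.sum()                           # normalize to a distribution

# Point estimator 1: the weighted average over all samples.
mean_estimate = weights @ particles

# Point estimator 2: the single sample with the highest weight.
map_estimate = particles[np.argmax(weights)]
```

Both estimators are O(N) in the number of particles, which is why they are the default choice in practice.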
%There must be other methods for this... darn it... but fundamentally, a weighted average is a point estimator, isn't it? (https://www.statlect.com/fundamentals-of-statistics/point-estimation)
%For related work we definitely need sources here. Some also compute https://en.wikipedia.org/wiki/Sample_mean_and_covariance or assume a certain distribution for the sample set and compute its parameters.
While such methods are computationally fast and suitable most of the time, it is not uncommon that they fail to recover the state in more complex scenarios.
Especially time-sequential, non-linear, and non-Gaussian state spaces that depend upon a high number of different sensor types frequently suffer from a multimodal representation of the posterior distribution.
As a result, those techniques are not able to provide an accurate statement about the most probable state, instead causing misleading or false outcomes.
For example, in a localization scenario where a bimodal distribution represents the current posterior, a reliable position estimate is more likely to be at one of the modes, instead of somewhere in between, as provided by a simple weighted-average estimation.
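This failure mode can be made concrete with a small sketch on synthetic bimodal data (the cluster locations and spreads below are illustrative assumptions): the weighted average of samples drawn from two well-separated clusters lies between them, in a region of negligible probability mass.

```python
import numpy as np

# Synthetic bimodal posterior: equally weighted samples around two modes.
rng = np.random.default_rng(1)
samples = np.concatenate([
    rng.normal(loc=0.0, scale=0.2, size=100),   # mode A, near position 0
    rng.normal(loc=10.0, scale=0.2, size=100),  # mode B, near position 10
])
weights = np.full(samples.size, 1.0 / samples.size)

# The weighted average lands near 5.0, far from both actual modes,
# i.e. in a region to which the posterior assigns almost no probability.
mean_estimate = float(weights @ samples)
```

A mode-seeking estimate would instead report a position near 0 or near 10, either of which is a plausible state.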
Additionally, in most practical scenarios the sample size and therefore the resolution is limited, causing the variance of the sample based estimate to be high \cite{Verma2003}.
It is obvious that a computation of the full posterior could solve the above, but finding such an analytical solution is intractable, which is the reason for applying a sample representation in the first place.
By the central limit theorem, multiple recursions of a box filter yield an approximation of a Gaussian.
This process converges quite fast to a reasonably close approximation of the ideal Gaussian.
In addition, a box filter can be computed extremely fast, due to its intrinsic simplicity.
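A minimal sketch of this recursion follows; the cumulative-sum formulation is one common way to obtain a single O(N) box pass and is an illustrative choice here, not a detail taken from the source:

```python
import numpy as np

def box_filter(signal, width):
    """One centered box (moving-average) pass in O(N) via a cumulative sum."""
    pad = width // 2                     # width is assumed odd
    padded = np.pad(signal, (pad + 1, pad))
    csum = np.cumsum(padded)
    # Difference of cumulative sums gives each window sum in O(1).
    return (csum[width:] - csum[:-width]) / width

# Repeatedly box-filtering an impulse approaches a Gaussian bump,
# as predicted by the central limit theorem.
smoothed = np.zeros(101)
smoothed[50] = 1.0
for _ in range(3):                       # three passes already look quite Gaussian
    smoothed = box_filter(smoothed, 9)
```

Each pass costs a constant number of operations per sample regardless of the kernel width, which is what keeps the overall scheme linear in N.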
While the idea of using several box filter passes to approximate a Gaussian has been around for a long time, the application to obtain a fast KDE is new.
Especially in time-critical and time-sequential sensor fusion scenarios, the approach presented here outperforms other state-of-the-art solutions, due to its fully linear complexity \landau{N} and negligible overhead, even for small sample sets.
In addition, it requires only a few elementary operations and is highly parallelizable.