Going thru changes

This commit is contained in:
2018-03-13 15:58:41 +01:00
parent 9f098887db
commit 7c407f950e
7 changed files with 42 additions and 42 deletions


@@ -1,29 +1,32 @@
\section{Experiments}
\subsection{Mean Integrated Squared Error}
We empirically evaluate the feasibility of our BoxKDE method by analyzing its approximation error.
To quantify the deviation of an estimate from the original density, the mean integrated squared error (MISE) is used.
These errors are compared for the KDE and its various approximations.
To match the requirements of our application, a synthetic sample set $\mathcal{X}$ with $N=5000$ samples drawn from the bivariate normal mixture density $f$ given by \eqref{eq:normDist} provides the basis of the comparison.
For each method the estimate is computed and its MISE relative to $f$ is calculated.
The specific structure of the underlying distribution clearly affects the error of the estimate, but only the closeness of our approximation to the KDE is of interest.
Hence, $f$ is of minor importance here and was chosen rather arbitrarily to highlight the behavior of the BoxKDE.
\begin{equation}
\label{eq:normDist}
\begin{split}
\mathcal{X} \sim & ~\G{\VecTwo{0}{0}}{0.5\bm{I}} + \G{\VecTwo{3}{0}}{\bm{I}} + \G{\VecTwo{0}{3}}{\bm{I}} \\
&+ \G{\VecTwo{-3}{0}}{\bm{I}} + \G{\VecTwo{0}{-3}}{\bm{I}}
\end{split}
\end{equation}
%where the majority of the probability mass lies in the range $[-6; 6]^2$.
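For concreteness, drawing such a sample set can be sketched in a few lines of C++; the snippet below is an illustration only, not the implementation used in our experiments, and it assumes equal component weights, which \eqref{eq:normDist} leaves implicit (the function name \texttt{drawMixtureSamples} is likewise ours).
\begin{verbatim}
// Illustrative sketch: draw n samples from the five-component normal
// mixture of the equation above, assuming equal component weights.
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

struct Sample { double x, y; };

std::vector<Sample> drawMixtureSamples(std::size_t n, std::mt19937 &rng) {
    // Component means; the first component has covariance 0.5*I
    // (sigma = sqrt(0.5)), all others have covariance I (sigma = 1).
    const double mx[5] = {0.0, 3.0, 0.0, -3.0,  0.0};
    const double my[5] = {0.0, 0.0, 3.0,  0.0, -3.0};
    const double sd[5] = {std::sqrt(0.5), 1.0, 1.0, 1.0, 1.0};

    std::uniform_int_distribution<int> pick(0, 4);
    std::normal_distribution<double> unit(0.0, 1.0);

    std::vector<Sample> samples;
    samples.reserve(n);
    for (std::size_t i = 0; i < n; ++i) {
        const int c = pick(rng);
        // Isotropic covariance: scale two independent standard normals.
        samples.push_back({mx[c] + sd[c] * unit(rng),
                           my[c] + sd[c] * unit(rng)});
    }
    return samples;
}
\end{verbatim}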
\begin{figure}[t]
\input{gfx/error.tex}
\caption{MISE relative to the ground truth as a function of $h$. While the error curves of the BKDE (red) and of the BoxKDE based on the extended box filter (orange, dotted) closely follow that of the KDE (green), the regular BoxKDE (orange) exhibits noticeable jumps due to rounding.} \label{fig:errorBandwidth}
\end{figure}
Four estimates are computed with varying bandwidths using the KDE, BKDE, BoxKDE, and ExBoxKDE, which uses the extended box filter.
All estimates are calculated at $30\times 30$ equally spaced points.
%Evaluated at $50^2$ points the exact KDE is compared to the BKDE, BoxKDE, and extended box filter approximation, which are evaluated at a smaller grid with $30^2$ points.
The graphs of the MISE between $f$ and the estimates as a function of $h\in[0.15, 1.0]$ are given in \figref{fig:errorBandwidth}.
A minimum error is obtained at $h=0.35$; for larger values, oversmoothing occurs and the modes gradually fuse together.
Both the BKDE and the ExBoxKDE resemble the error curve of the KDE closely and consistently.
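As a reference for how the error measure is obtained, the integrated squared error over the evaluation grid can be approximated by a Riemann sum; the sketch below assumes the estimate and the ground truth $f$ have already been evaluated at the same $30\times 30$ grid points (the function name is illustrative), and averaging this quantity over repeated sample sets yields the MISE.
\begin{verbatim}
// Illustrative sketch: approximate the integrated squared error between a
// density estimate and the ground truth, both evaluated on the same regular
// grid with cell area cellArea. Averaging this quantity over repeated
// sample draws approximates the MISE.
#include <cstddef>
#include <vector>

double integratedSquaredError(const std::vector<double> &estimate,
                              const std::vector<double> &groundTruth,
                              double cellArea) {
    double sum = 0.0;
    for (std::size_t i = 0; i < estimate.size(); ++i) {
        const double d = estimate[i] - groundTruth[i];
        sum += d * d;
    }
    return sum * cellArea;  // Riemann-sum approximation of the integral
}
\end{verbatim}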
@@ -31,7 +34,7 @@ They are rather close to each other, with a tendency to diverge for larger $h$.
In contrast, the error curve of the BoxKDE has noticeable jumps at $h \in \{0.25, 0.40, 0.67, 0.82\}$.
These jumps are caused by rounding the box width given by \eqref{eq:boxidealwidth} to an integer value.
As the extended box filter is able to approximate an exact $\sigma$, such discontinuities do not appear.
Consequently, it reduces the overall error of the approximation, even though only marginally in this scenario.
The global average MISE over all values of $h$ is $0.0049$ for the regular box filter and $0.0047$ for the extended version.
Likewise, the maximum MISE is $0.0093$ and $0.0091$, respectively.
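To see where the plateaus come from, consider the following sketch; it uses the common relation $w_{\mathrm{ideal}}=\sqrt{12\sigma^2/n+1}$ for approximating a Gaussian with standard deviation $\sigma$ by $n$ box passes, which stands in for \eqref{eq:boxidealwidth} here and may differ from it in detail.
\begin{verbatim}
// Illustrative sketch only; the exact width formula is the one referenced
// in the text, here the common relation w_ideal = sqrt(12*sigma^2/n + 1)
// for n box passes is assumed instead. The rounded odd width stays
// constant over whole ranges of sigma (and thus of h), which produces the
// jumps in the MISE curve of the regular BoxKDE.
#include <cmath>

int roundedBoxWidth(double sigma, int passes) {
    const double wIdeal = std::sqrt(12.0 * sigma * sigma / passes + 1.0);
    int w = static_cast<int>(std::round(wIdeal));
    if (w % 2 == 0) {
        ++w;  // keep the width odd so the box stays centered
    }
    return w;  // piecewise constant in sigma -> discontinuities in the error
}
\end{verbatim}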
@@ -44,7 +47,7 @@ However, both cases do not give a deeper insight of the error behavior of our me
\begin{figure}[t]
%\includegraphics[width=\textwidth,height=6cm]{gfx/tmpPerformance.png}
\input{gfx/perf.tex}
\caption{Logarithmic plot of the runtime performance with increasing grid size $G$ and bivariate data. The weighted average estimate (blue) performs fastest, followed by the BoxKDE (orange) approximation, while the BKDE (red) is magnitudes slower, especially for $G<10^3$.}\label{fig:performance}
\end{figure}
% KDE, box filter, extended box as a function of h (figure)
@@ -54,19 +57,18 @@ However, both cases do not give a deeper insight of the error behavior of our me
\subsection{Performance}
In the following, we underpin the promising theoretical linear time complexity of our method with empirical runtime measurements and compare it to other methods.
All tests are performed on an Intel Core \mbox{i5-7600K} CPU at \SI{4.2}{\giga\hertz} with \SI{16}{\giga\byte} of main memory. %, supporting the AVX2 instruction set
We compare our C++ implementation of the BoxKDE approximation as shown in algorithm~\ref{alg:boxKDE} to the R package \texttt{ks}, which provides an FFT-based BKDE implementation built on optimized C functions at its core.
With state estimation problems in mind, we additionally provide a C++ implementation of a weighted average estimator.
An equivalently sized input sample set was used for the weighted average, as its runtime depends on the sample size rather than the grid size.
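The runtimes themselves are obtained with a plain wall-clock measurement; a minimal sketch of such a harness (not the exact benchmarking code behind \figref{fig:performance}, and the function name is illustrative) is given below.
\begin{verbatim}
// Illustrative sketch: measure the wall-clock runtime of one density
// estimation call; repeating and averaging reduces measurement noise.
#include <chrono>
#include <functional>

double measureSeconds(const std::function<void()> &estimator) {
    const auto start = std::chrono::steady_clock::now();
    estimator();  // e.g. one BoxKDE or weighted average evaluation
    const auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(end - start).count();
}
\end{verbatim}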
The results of the performance comparison are presented in \figref{fig:performance}.
% O(N) clearly recognizable for box KDE and weighted average
The linear complexity of the BoxKDE and the weighted average is clearly visible.
% Especially for small G up to 10^3 the box KDE is faster than R and FastKDE, but the WA is considerably faster than all others
Especially for small $G$ of up to $10^4$ grid points, the BoxKDE is much faster than the BKDE.
% With increasingly large G the gap between box KDE and WA grows.
Nevertheless, the simple weighted average approach performs the fastest.
However, this obviously comes with major disadvantages, such as its susceptibility to multimodalities, as discussed in section~\ref{sec:intro}.
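For completeness, the weighted average point estimate reduces to a single pass over the samples; the sketch below assumes per-sample weights as they typically arise in state estimation (uniform weights yield the plain sample mean) and is not the exact implementation used for the measurements.
\begin{verbatim}
// Illustrative sketch of a weighted average point estimate over 2D samples.
// Its cost is O(N) in the sample size and independent of any grid, but a
// multimodal density is collapsed to a single point between the modes.
#include <cstddef>
#include <vector>

struct Point { double x, y; };

Point weightedAverage(const std::vector<Point> &samples,
                      const std::vector<double> &weights) {
    double sx = 0.0, sy = 0.0, sw = 0.0;
    for (std::size_t i = 0; i < samples.size(); ++i) {
        sx += weights[i] * samples[i].x;
        sy += weights[i] * samples[i].y;
        sw += weights[i];
    }
    return {sx / sw, sy / sw};  // assumes a positive total weight
}
\end{verbatim}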
% (This may also be because the binning gets slower with larger G, which I cannot explain! Maybe cache effects)
@@ -78,7 +80,7 @@ This behavior is caused by the underlying FFT algorithm.
% Therefore the runtime slows down abruptly whenever padding reaches a new power of two, otherwise it remains constant.
The FFT approach requires the input size to be rounded up to the next power of two, which causes a constant runtime within those boundaries and a sharp performance deterioration as soon as the grid size exceeds a power of two.
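The effect of this padding can be pictured with the following sketch: every grid size in the interval $(2^{k-1}, 2^k]$ is transformed at size $2^k$, so the runtime is constant within such an interval and jumps at its upper end (the helper function is illustrative, not part of the \texttt{ks} package).
\begin{verbatim}
// Illustrative sketch: the grid size is padded to the next power of two
// before the FFT, so all sizes within (2^(k-1), 2^k] incur the same cost.
#include <cstddef>

std::size_t nextPowerOfTwo(std::size_t n) {
    std::size_t p = 1;
    while (p < n) {
        p <<= 1;  // double until the padded size covers the grid
    }
    return p;
}
\end{verbatim}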
% The cutoff at G=4406^2 is because an out of memory error is triggered for larger Gs.
The BKDE graph terminates at $G\approx 1.9 \cdot 10^7$ because the \texttt{ks} package raises an out-of-memory error for larger $G$.
% The plot for the regular box filter was omitted for the sake of clarity.
% Both the box filter and the extended box filter have very similar runtime behavior and thus a very similar curve.
@@ -87,14 +89,12 @@ Both discussed Gaussian filter approximations, namely box filter and extended bo
While the average runtime over all values of $G$ for the standard box filter is \SI{0.4092}{\second}, the extended one has an average of \SI{0.4169}{\second}.
To keep \figref{fig:performance} legible, we only show the results of the BoxKDE with the regular box filter.
The weighted average has the great advantage of being independent of the dimensionality of the input and can be implemented effortlessly.
In contrast, the computational cost of the BoxKDE approach grows exponentially with the number of dimensions.
However, due to the linear time complexity and the very simple computation scheme, the overall computation time is still sufficiently small for many applications and much lower than that of other methods.
The BoxKDE approach presents a reasonable alternative to the weighted average and is easily integrated into existing systems.
In addition, modern CPUs benefit from the recursive computation scheme of the box filter, as the data exhibits a high degree of spatial locality in memory and the accesses are reliably predictable.
Furthermore, the computation is easily parallelized, as there is no data dependency between the one-dimensional filter passes in algorithm~\ref{alg:boxKDE}.
Hence, the inner loops can be parallelized using threads or SIMD instructions, but the overall speedup depends on the particular architecture and the size of the input.
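To illustrate this access pattern, a single one-dimensional box pass can be sketched as follows; this is not algorithm~\ref{alg:boxKDE} itself, and the clamp-to-edge boundary as well as the per-pass normalization are assumptions made purely for brevity.
\begin{verbatim}
// Illustrative sketch of one 1D box filter pass with odd width w. Each
// output value is derived from its predecessor by one addition and one
// subtraction, giving O(N) work per pass, sequential memory accesses and
// no data dependencies between independent rows or columns. The boundary
// treatment (clamp-to-edge) and the per-pass normalization are simplifying
// assumptions and may differ from the algorithm in the paper.
#include <cstddef>
#include <vector>

void boxPass(const std::vector<double> &in, std::vector<double> &out, int w) {
    if (in.empty() || w <= 0) { out.clear(); return; }
    const int r = w / 2;  // radius of the odd-sized window
    const std::ptrdiff_t n = static_cast<std::ptrdiff_t>(in.size());
    out.assign(in.size(), 0.0);

    auto clamped = [&](std::ptrdiff_t i) {  // clamp-to-edge boundary
        if (i < 0) i = 0;
        if (i >= n) i = n - 1;
        return in[static_cast<std::size_t>(i)];
    };

    double window = 0.0;  // running sum over the current window
    for (std::ptrdiff_t i = -r; i <= r; ++i) window += clamped(i);

    for (std::ptrdiff_t i = 0; i < n; ++i) {
        out[static_cast<std::size_t>(i)] = window / w;
        window += clamped(i + r + 1) - clamped(i - r);  // slide the window
    }
}
\end{verbatim}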
\input{chapters/realworld}