From 0d4cd0ff319a10b9915b205a65708b0e876efa61 Mon Sep 17 00:00:00 2001
From: toni
Date: Sat, 24 Feb 2018 13:38:49 +0100
Subject: [PATCH] fixed related work

---
 tex/chapters/relatedwork.tex | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/tex/chapters/relatedwork.tex b/tex/chapters/relatedwork.tex
index 96d175f..80d0c1c 100644
--- a/tex/chapters/relatedwork.tex
+++ b/tex/chapters/relatedwork.tex
@@ -14,10 +14,10 @@ The selection of a \qq{good} bandwidth is still an open problem and heavily rese
-An extensive overview regarding the topic of automatic bandwith selection is given by \cite{heidenreich2013bandwidth}.
+An extensive overview of automatic bandwidth selection is given by \cite{heidenreich2013bandwidth}.
 %However, the automatic selection of the bandwidth is not subject of this work and we refer to the literature \cite{turlach1993bandwidth}.
-The great flexibility of the KDE renders it very useful for many applications.
-However, this comes at the cost of a relative slow computation speed.
+The great flexibility of the KDE makes it very useful for many applications.
+However, this flexibility comes at the cost of slow computation.
 %
-The complexity of a naive implementation of the KDE is \landau{MN}, given by $M$ evaluations of $N$ data samples.
+The complexity of a naive implementation of the KDE is \landau{MN} for $M$ evaluation points and $N$ data samples.
 %The complexity of a naive implementation of the KDE is \landau{NM} evaluations of the kernel function, given $N$ data samples and $M$ points of the estimate.
 Therefore, a lot of effort was put into reducing the computation time of the KDE.
 Various methods have been proposed, which can be clustered based on different techniques.
@@ -25,7 +25,7 @@ Various methods have been proposed, which can be clustered based on different te
 % k-nearest neighbor searching
 An obvious way to speed up the computation is to reduce the number of evaluated kernel functions.
 One possible optimization is based on k-nearest neighbour search performed on spatial data structures.
-These algorithms reduce the number of evaluated kernels by taking the the spatial distance between clusters of data points into account \cite{gray2003nonparametric}.
+These algorithms reduce the number of evaluated kernels by taking the distance between clusters of data points into account \cite{gray2003nonparametric}.
-% fast multipole method & Fast Gaus Transform
+% fast multipole method & Fast Gauss Transform
-Another approach is to reduce the algorithmic complexity of the sum over Gaussian functions, by employing a specialized variant of the fast multipole method.
+Another approach is to reduce the algorithmic complexity of the sum over Gaussian functions by employing a specialized variant of the fast multipole method.
@@ -33,7 +33,7 @@ The term fast Gauss transform was coined by Greengard \cite{greengard1991fast} w
 % However, the complexity grows exponentially with dimension. \cite{Improved Fast Gauss Transform and Efficient Kernel Density Estimation}
-% FastKDE, passed on ECF and nuFFT
+% FastKDE, based on ECF and nuFFT
-Recent methods based on the \qq{self-consistent} KDE proposed by Bernacchia and Pigolotti \cite{bernacchia2011self} allow to obtain an estimate without any assumptions, i.e. the kernel and bandwidth are both derived during the estimation.
+Recent methods based on the self-consistent KDE proposed by Bernacchia and Pigolotti \cite{bernacchia2011self} make it possible to obtain an estimate without any assumptions, i.e. the kernel and the bandwidth are both derived during the estimation.
 They define a Fourier-based filter on the empirical characteristic function of a given dataset.
-The computation time was further reduced by \etal{O'Brien} using a non-uniform fast Fourier transform (FFT) algorithm to efficiently transform the data into Fourier space \cite{oBrien2016fast}.
+The computation time was further reduced by \etal{O'Brien} using a non-uniform fast Fourier transform (NUFFT) algorithm to efficiently transform the data into Fourier space \cite{oBrien2016fast}.
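For reference, the \landau{MN} cost stated in the first hunk follows directly from writing out the estimator. A minimal sketch in the chapter's notation, assuming the standard univariate KDE with bandwidth $h$ and kernel $K$ (both presumably defined elsewhere in the thesis, not in this patch):

\begin{equation*}
  \hat{f}_h(x_j) = \frac{1}{N h} \sum_{i=1}^{N} K\!\left(\frac{x_j - x_i}{h}\right),
  \qquad j = 1, \dots, M .
\end{equation*}

Each of the $M$ evaluation points requires a sum over all $N$ data samples, hence \landau{MN} kernel evaluations in total for the naive implementation.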