I have implemented an interpolation algorithm for data that is sampled on a uniform grid. There is already `scipy.interpolate.RegularGridInterpolator`, but it tends to be very slow, for two reasons:
- for every evaluation, it has to search the sampling grid (which it treats as potentially nonuniform) to locate the enclosing cell.
- for cubic interpolation, it looks like every evaluation needs to touch a substantial portion of the input array.
My alternative uses convolutions with suitable kernels. This is nothing new; image and video processing software does this whenever you request bicubic resampling. It is available, for example, through OpenCV's image transforms, but that API is not very practical for my purposes.
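To illustrate the idea (this is a sketch, not my actual code): on a uniform grid, each evaluation reduces to a small weighted sum over the 4x4 neighbouring samples, with weights given by the kernel at fixed fractional offsets. Using the classic Keys bicubic kernel:

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    # Keys cubic convolution kernel; a = -0.5 is the common Catmull-Rom variant.
    x = abs(x)
    if x < 1.0:
        return (a + 2.0) * x**3 - (a + 3.0) * x**2 + 1.0
    if x < 2.0:
        return a * (x**3 - 5.0 * x**2 + 8.0 * x - 4.0)
    return 0.0

def interp_point(data, x, y):
    # Evaluate at fractional grid coordinates (x, y); assumes the 4x4
    # kernel support lies fully inside `data` (no boundary handling).
    ix, iy = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - ix, y - iy
    val = 0.0
    for j in range(-1, 3):          # 4 rows of the support
        wy = keys_kernel(fy - j)
        for i in range(-1, 3):      # 4 columns
            val += keys_kernel(fx - i) * wy * data[iy + j, ix + i]
    return val
```

The same loop works for any kernel with compact support; a Lanczos-3 kernel just widens it to 6x6. Tight inner loops like this are exactly what `numba.njit` compiles well.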
I’d like to contribute my code to scipy. Would this, in principle, be welcome? (I’m brand new here; I just created an account to post this.)
Notes
- I use `numba.njit` for the performance-critical parts. I suppose I will need to rewrite those in C?
- I wrote my code as part of my job, so I'll need to align with my employer.
- I’d have to do quite a bit more work on the code to make it suitable for general-purpose use and inclusion into scipy.
- A bicubic kernel is not equivalent to a 2D cubic spline interpolation, but it’s close enough for most applications that just require a continuously differentiable interpolation function.
- I’d like to support a few common kernels (bicubic variants; Lanczos) and allow the user to supply a custom kernel (rough API sketch below).
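For the custom-kernel point, something along these lines is what I have in mind. All names here are hypothetical (`UniformGridInterpolator` does not exist); nothing is final:

```python
import numpy as np

def lanczos3(x):
    # Lanczos kernel with a = 3: sinc(x) * sinc(x/3) on |x| < 3, else 0.
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 3.0, np.sinc(x) * np.sinc(x / 3.0), 0.0)

# Hypothetical usage -- name and signature are illustrative only:
# interp = UniformGridInterpolator(data, kernel=lanczos3, support=3)
# values = interp(points)  # points: (n, 2) array in grid coordinates
```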
Benchmarks
Tested on a laptop i7 CPU; 60x60 input samples; 1000 random evaluation points; scipy 1.14. Time per evaluation:
- My 2D linear interpolation: 9 ns
- `RegularGridInterpolator`, `method='linear'`: 130 ns
- My 2D kernel interpolation with a 4x4 (e.g. bicubic) or 6x6 kernel: 130-150 ns
- `RegularGridInterpolator`, `method='cubic'`: 85 μs (!!)
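For reference, the scipy side of these numbers can be reproduced with something along these lines (my implementation is not shown here):

```python
import numpy as np
from timeit import timeit
from scipy.interpolate import RegularGridInterpolator

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 60)
data = rng.standard_normal((60, 60))          # 60x60 input samples
points = rng.uniform(0.0, 1.0, size=(1000, 2))  # 1000 random evaluation points

for method in ("linear", "cubic"):
    rgi = RegularGridInterpolator((xs, xs), data, method=method)
    t = timeit(lambda: rgi(points), number=10) / 10
    print(f"{method}: {t / len(points) * 1e9:.0f} ns per evaluation")
```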