# algorithm - Inverse convolution of image

I have a source image and a result image. I know that some convolution matrix was applied to the source to produce the result. Can this convolution matrix be computed? Or at least an approximation of it, if not the exact one?

In principle, yes. Just convert both images to frequency space using an FFT and divide the FFT of the result image by that of the source image. Then apply the inverse FFT to get an approximation of the convolution kernel.

To see why this works, note that convolution in the spatial domain corresponds to multiplication in the frequency domain, and so deconvolution similarly corresponds to division in the frequency domain. In ordinary deconvolution, one would divide the FFT of the convolved image by that of the kernel to recover the original image. However, since convolution (like multiplication) is a commutative operation, the roles of the kernel and the source can be arbitrarily exchanged: convolving the source by the kernel is exactly the same as convolving the kernel by the source.
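To make that argument concrete, here is a minimal numpy sketch (a toy example of my own, assuming exact circular convolution and no rounding): generating a blur by multiplying FFTs, then recovering the kernel by dividing them back.

```python
import numpy as np

rng = np.random.default_rng(0)
src = rng.random((16, 16))                 # a random "source image"

kernel = np.zeros((16, 16))                # a small kernel, zero-padded to image size
kernel[:3, :3] = np.array([[0, 1, 0],
                           [1, 4, 1],
                           [0, 1, 0]]) / 8.0

# circular convolution in the spatial domain == multiplication in frequency space
blur = np.real(np.fft.ifft2(np.fft.fft2(src) * np.fft.fft2(kernel)))

# dividing the FFTs and transforming back recovers the kernel, with the roles
# of source and kernel exchanged relative to ordinary deconvolution
recovered = np.real(np.fft.ifft2(np.fft.fft2(blur) / np.fft.fft2(src)))
print(np.allclose(recovered, kernel))  # True
```

With real images, rounding and edge handling make the division only approximate, as discussed below.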

As the other answers note, however, this is unlikely to yield an exact reconstruction of your kernel, for the same reasons as ordinary deconvolution generally won't exactly reconstruct the original image: rounding and clipping will introduce noise into the process, and it's possible for the convolution to completely wipe out some frequencies (by multiplying them with zero), in which case those frequencies cannot be reconstructed.

That said, if your original kernel was of limited size (support), the reconstructed kernel should typically have a single sharp peak around the origin, approximating the original kernel, surrounded by low-level noise. Even if you don't know the exact size of the original kernel, it shouldn't be too hard to extract this peak and discard the rest of the reconstruction.
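As a sketch of that extraction step (numpy; the reconstruction is faked here, and the window radius and cutoff values are assumptions of mine):

```python
import numpy as np

# fake a full-size reconstruction: low-level noise plus a small centered peak
# (stands in for the fftshift-ed output of the deconvolution)
kernel = np.abs(np.random.default_rng(1).normal(0.0, 0.01, (256, 256)))
kernel[127:130, 127:130] = [[0.1, 0.5, 0.1],
                            [0.5, 1.0, 0.5],
                            [0.1, 0.5, 0.1]]

# crop a small window around the center and zero out the low-level noise
h, w = kernel.shape
r = 3                                    # assumed upper bound on the kernel radius
crop = kernel[h//2 - r:h//2 + r + 1, w//2 - r:w//2 + r + 1].copy()
crop[crop < 0.05] = 0.0                  # cutoff chosen above the noise floor
```

Only the cropped, thresholded window is kept as the estimated kernel; the rest of the reconstruction is discarded.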

### Example:

Here's the Lenna test image in grayscale, scaled down to 256×256 pixels and convolved with a 5×5 kernel in GIMP:

I'll use Python with numpy/scipy for the deconvolution:

```python
from scipy import misc
from numpy import fft

# load the source and blurred images as grayscale arrays
# (file names are placeholders; in newer SciPy versions scipy.misc's image
# functions are gone -- use imageio instead)
orig = misc.imread('orig.png', flatten=True)
blur = misc.imread('blur.png', flatten=True)

orig_f = fft.rfft2(orig)
blur_f = fft.rfft2(blur)

kernel_f = blur_f / orig_f         # do the deconvolution
kernel = fft.irfft2(kernel_f)      # inverse Fourier transform
kernel = fft.fftshift(kernel)      # shift origin to center of image
kernel /= kernel.max()             # normalize gray levels
misc.imsave('kernel.png', kernel)  # save reconstructed kernel
```

The resulting 256×256 image of the kernel, and a zoom of the 7×7 pixel region around its center, are shown below:

Comparing the reconstruction with the original kernel, you can see that they look pretty similar; indeed, applying a cutoff anywhere between 0.5 and 0.68 to the reconstruction will recover the original kernel. The faint ripples surrounding the peak in the reconstruction are noise due to rounding and edge effects.

I'm not entirely sure what's causing the cross-shaped artifact that appears in the reconstruction (although I'm sure someone with more experience with these things could tell you), but off the top of my head, my guess would be that it has something to do with the image edges. When I convolved the original image in GIMP, I told it to handle edges by extending the image (essentially copying the outermost pixels), whereas the FFT deconvolution assumes that the image edges wrap around. This may well introduce spurious correlations along the x and y axes in the reconstruction.

Well, an estimate can be produced if an upper bound on the convolution matrix size is known. If the matrix is N×N, select at least N² points and set up a system of linear equations for the convolution coefficients, based on the source and destination data. Because the color components are rounded, the system will generally be inconsistent, but with linear programming you can minimize the total deviation from the expected values by small adjustments to those coefficients.
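As a sketch of that idea (numpy, using least squares rather than linear programming to keep it short; all the names and sizes here are my own):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3                                   # assumed upper bound on the kernel size
src = rng.random((20, 20))
true_k = rng.random((N, N))             # the unknown kernel (for the demo)

# 'valid' output: each output pixel is a dot product of one N x N source patch
# with the kernel, giving one linear equation in the N*N unknown coefficients
H, W = src.shape
rows = [src[i:i + N, j:j + N].ravel()
        for i in range(H - N + 1) for j in range(W - N + 1)]
A = np.array(rows)
b = A @ true_k.ravel()                  # the observed result image, flattened
# (this is the correlation form; a true convolution flips the kernel, which
# only permutes the unknowns)

# with rounded color components the system is inconsistent; least squares
# then minimizes the total squared deviation instead
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(est.reshape(N, N), true_k))  # True
```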

You can try to perform deconvolution with the source image as the kernel. But the results can be unpredictable: deconvolution is a very unstable process due to noise, edge effects, rounding errors, etc.
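The instability comes from dividing by frequency components of the source that are nearly zero; a common mitigation, not from this answer, is Tikhonov/Wiener-style regularized division. A minimal numpy sketch (the smooth test image, noise level, and `eps` are my assumptions):

```python
import numpy as np

# a smooth source image: many of its frequency components are nearly zero,
# which is exactly when plain FFT division blows up
x = np.arange(32)
src = np.exp(-((x[:, None] - 16) ** 2 + (x[None, :] - 16) ** 2) / 50.0)

kernel = np.zeros((32, 32))
kernel[:3, :3] = 1.0 / 9.0               # 3x3 box blur, zero-padded

S = np.fft.fft2(src)
blur = np.real(np.fft.ifft2(S * np.fft.fft2(kernel)))
blur += np.random.default_rng(3).normal(0.0, 1e-3, blur.shape)  # slight noise

# plain division amplifies the noise wherever |S| is tiny...
naive = np.real(np.fft.ifft2(np.fft.fft2(blur) / S))

# ...whereas regularized division damps those frequencies instead
eps = 1e-2
reg = np.real(np.fft.ifft2(np.fft.fft2(blur) * np.conj(S)
                           / (np.abs(S) ** 2 + eps)))

err_naive = np.abs(naive - kernel).max()
err_reg = np.abs(reg - kernel).max()
print(err_reg < err_naive)  # True: the regularized estimate is far more stable
```

The price is bias: frequencies where the source carries almost no energy are simply suppressed rather than reconstructed.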

I have rewritten @Ilmari Karonen's answer in C++ using fftw3, for anyone who might find it handy:

Circular shift function:

```cpp
template<class ty>
void circshift(ty *out, const ty *in, int xdim, int ydim, int xshift, int yshift)
{
    for (int i = 0; i < xdim; i++)
    {
        int ii = (i + xshift) % xdim;
        for (int j = 0; j < ydim; j++)
        {
            int jj = (j + yshift) % ydim;
            out[ii * ydim + jj] = in[i * ydim + j];
        }
    }
}
```

Now the main code:

```cpp
int width = 256;
int height = 256;

// r2c transforms produce only width * (height / 2 + 1) complex values
int complexSize = width * (height / 2 + 1);

MyStringAnsi imageName1 = "C://ka4ag.png";
MyStringAnsi imageName2 = "C://KyPu2.png";

double * in1 = new double[width * height];
fftw_complex * out1 = fftw_alloc_complex(complexSize);

double * in2 = new double[width * height];
fftw_complex * out2 = fftw_alloc_complex(complexSize);

MyUtils::MyImage * im1 = MyUtils::MyImage::Load(imageName1, MyUtils::MyImage::PNG);
MyUtils::MyImage * im2 = MyUtils::MyImage::Load(imageName2, MyUtils::MyImage::PNG);

// map gray levels from [0, 255] to [-1, 1]
for (int i = 0; i < width * height; i++)
{
    in1[i] = ((im1->Get(i).r / (255.0 * 0.5)) - 1.0);
    in2[i] = ((im2->Get(i).r / (255.0 * 0.5)) - 1.0);
}

fftw_plan dft_plan1 = fftw_plan_dft_r2c_2d(width, height, in1, out1, FFTW_ESTIMATE);
fftw_execute(dft_plan1);
fftw_destroy_plan(dft_plan1);

fftw_plan dft_plan2 = fftw_plan_dft_r2c_2d(width, height, in2, out2, FFTW_ESTIMATE);
fftw_execute(dft_plan2);
fftw_destroy_plan(dft_plan2);

fftw_complex * kernel = fftw_alloc_complex(complexSize);

// pointwise division in the frequency domain: kernel = blur / source
// (image 2 is assumed to be the blurred one)
for (int i = 0; i < complexSize; i++)
{
    std::complex<double> c1(out1[i][0], out1[i][1]);
    std::complex<double> c2(out2[i][0], out2[i][1]);

    std::complex<double> div = c2 / c1;

    kernel[i][0] = div.real();
    kernel[i][1] = div.imag();
}

double * kernelOut = new double[width * height];

// note: FFTW's c2r output is unnormalized; the max-normalization below absorbs the scale
fftw_plan dft_planOut = fftw_plan_dft_c2r_2d(width, height, kernel, kernelOut, FFTW_ESTIMATE);
fftw_execute(dft_planOut);
fftw_destroy_plan(dft_planOut);

double * kernelShift = new double[width * height];

// move the kernel peak from the corners to the image center
circshift(kernelShift, kernelOut, width, height, (width / 2), (height / 2));

double maxKernel = kernelShift[0];
for (int i = 0; i < width * height; i++)
{
    if (maxKernel < kernelShift[i]) maxKernel = kernelShift[i];
}

// normalize so the brightest value maps to 1.0
for (int i = 0; i < width * height; i++)
{
    kernelShift[i] /= maxKernel;
}

uint8 * res = new uint8[width * height];
for (int i = 0; i < width * height; i++)
{
    res[i] = static_cast<uint8>((kernelShift[i] + 1.0) * (255.0 * 0.5));
}

// res now holds a result similar to @Ilmari Karonen's, but shifted by +128
```

The code does no memory management, so you must free all the allocated buffers yourself!

This is a classical deconvolution problem. What you called a convolution matrix is usually called the "kernel". The convolution operation is often denoted with a star '*' (not to be confused with multiplication!). In this notation:

```
Result = Source * Kernel
```

The answers above using the FFT are correct, but you can't really use FFT-based deconvolution in the presence of noise. The right way to do it is Richardson-Lucy deconvolution (see https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution).

It is quite straightforward to implement. This answer also provides a sample Matlab implementation: Would Richardson–Lucy deconvolution work for recovering the latent kernel?
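Richardson-Lucy iterates a simple multiplicative update, and since convolution commutes, you can feed it the source image as the PSF to estimate the kernel. A minimal numpy sketch of that idea (this is not the linked Matlab code; the normalization handling, initialization, and all names are my own, and it assumes noiseless circular convolution):

```python
import numpy as np

def fftconv(a, b):
    # circular convolution via the FFT
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def richardson_lucy(observed, psf, iterations):
    # multiplicative Richardson-Lucy update; everything must be non-negative
    psf = psf / psf.sum()                      # RL assumes a normalized PSF
    psf_adj = np.roll(psf[::-1, ::-1], (1, 1), axis=(0, 1))  # adjoint = correlation
    est = np.full(observed.shape, observed.mean())
    for _ in range(iterations):
        ratio = observed / (fftconv(est, psf) + 1e-12)
        est *= fftconv(ratio, psf_adj)
    return est

rng = np.random.default_rng(4)
src = rng.random((16, 16))
kernel = np.zeros((16, 16))
kernel[:2, :2] = 0.25                          # 2x2 box blur, zero-padded
blur = fftconv(src, kernel)

# convolution commutes, so treat the *source image* as the PSF to recover the
# kernel; the estimate comes back scaled by src.sum() because of the
# normalization inside richardson_lucy
est = richardson_lucy(blur, src, iterations=500) / src.sum()
```

Unlike plain FFT division, the iteration never divides by near-zero frequencies, which is why it degrades gracefully in the presence of noise.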