## Laplacian Operator

The Laplacian operator is also a derivative operator used to find edges in an image. The major difference between the Laplacian and other operators such as Prewitt, Sobel, Robinson, and Kirsch is that those are all first order derivative masks, whereas the Laplacian is a second order derivative mask. Within this mask we have two further classifications: the Positive Laplacian operator and the Negative Laplacian operator.

Another difference between the Laplacian and the other operators is that, unlike them, the Laplacian does not extract edges in any particular direction; instead, it extracts edges according to the two classifications just described.

Let us see how the Laplacian operator works.
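As a concrete reference, the two classifications can be written as 3x3 kernels. The sketch below is plain Java (no OpenCV); note that the elements of each kernel sum to zero, a required property of any derivative mask.

```java
public class LaplacianKernels {
    // Positive Laplacian: zeros in the corners, a negative center element.
    static final int[][] POSITIVE = {
        {0,  1, 0},
        {1, -4, 1},
        {0,  1, 0}
    };

    // Negative Laplacian: zeros in the corners, a positive center element.
    static final int[][] NEGATIVE = {
        { 0, -1,  0},
        {-1,  4, -1},
        { 0, -1,  0}
    };

    // A derivative mask must sum to zero so flat regions produce no response.
    static int sum(int[][] k) {
        int s = 0;
        for (int[] row : k) for (int v : row) s += v;
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sum(POSITIVE)); // 0
        System.out.println(sum(NEGATIVE)); // 0
    }
}
```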

## Java DIP - Kirsch Operator

Kirsch compass masks are yet another type of derivative mask used for edge detection. This operator is also known as a direction mask. In this operator we take one mask and rotate it in all eight compass directions to get the edges in eight directions.
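The rotation itself is mechanical: each 45-degree step shifts the eight outer elements of the 3x3 mask one position around the fixed center. A minimal sketch in plain Java (no OpenCV); the north mask coefficients used here are the standard Kirsch values.

```java
public class KirschMasks {
    // Standard Kirsch north mask; the other seven are 45-degree rotations of it.
    static final int[][] NORTH = {
        {-3, -3, -3},
        {-3,  0, -3},
        { 5,  5,  5}
    };

    // Rotate the outer ring of a 3x3 mask one step clockwise (one 45-degree turn).
    static int[][] rotate45(int[][] m) {
        // Ring positions in clockwise order, starting at the top-left corner.
        int[][] pos = {{0,0},{0,1},{0,2},{1,2},{2,2},{2,1},{2,0},{1,0}};
        int[][] r = {{0, 0, 0}, {0, m[1][1], 0}, {0, 0, 0}};
        for (int i = 0; i < 8; i++) {
            int[] from = pos[i], to = pos[(i + 1) % 8];
            r[to[0]][to[1]] = m[from[0]][from[1]];
        }
        return r;
    }

    public static void main(String[] args) {
        int[][] m = NORTH;
        for (int d = 0; d < 8; d++) {
            System.out.println(java.util.Arrays.deepToString(m));
            m = rotate45(m);
        }
        // After eight 45-degree rotations we are back at the north mask.
        System.out.println(java.util.Arrays.deepEquals(m, NORTH)); // true
    }
}
```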

We are going to use the OpenCV function filter2D to apply the Kirsch operator to images. It can be found in the Imgproc package. Its syntax is given below:

`filter2D(src, dst, ddepth, kernel, anchor, delta, BORDER_DEFAULT);`

The function arguments are described below:

src

It is the source image.

dst

It is the destination image.

ddepth

It is the depth of dst. A negative value (such as -1) indicates that the depth is the same as that of the source.

kernel

It is the kernel to be scanned through the image.

anchor

It is the position of the anchor relative to its kernel. The location Point(-1, -1) indicates the center by default.

delta

It is a value to be added to each pixel during the convolution. By default it is 0.

BORDER_DEFAULT

We leave this value at its default.
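To make the parameters concrete, here is a plain-Java sketch of what filter2D computes: a correlation of the kernel with the image (OpenCV's filter2D computes correlation, not true convolution), with out-of-range pixels replicated from the border (the effect of BORDER_DEFAULT) and delta added to every result. The method name and the double[][] image type here are illustrative, not OpenCV's.

```java
public class Filter2DSketch {
    // Correlate a kernel with an image, replicating border pixels and adding delta.
    static double[][] filter2D(double[][] src, double[][] kernel, double delta) {
        int rows = src.length, cols = src[0].length;
        int kr = kernel.length / 2, kc = kernel[0].length / 2; // anchor at center
        double[][] dst = new double[rows][cols];
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                double acc = delta;
                for (int i = 0; i < kernel.length; i++) {
                    for (int j = 0; j < kernel[0].length; j++) {
                        // Clamp coordinates to the image: border replication.
                        int yy = Math.min(Math.max(y + i - kr, 0), rows - 1);
                        int xx = Math.min(Math.max(x + j - kc, 0), cols - 1);
                        acc += kernel[i][j] * src[yy][xx];
                    }
                }
                dst[y][x] = acc;
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        double[][] img = {{1, 2}, {3, 4}};
        double[][] identity = {{0, 0, 0}, {0, 1, 0}, {0, 0, 0}};
        // The identity kernel with delta 10 just shifts every pixel by 10.
        System.out.println(filter2D(img, identity, 10)[1][1]); // 14.0
    }
}
```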

Apart from the filter2D() method, there are other methods provided by the Imgproc class. They are described briefly:

cvtColor(Mat src, Mat dst, int code, int dstCn)

It converts an image from one color space to another.

dilate(Mat src, Mat dst, Mat kernel)

It dilates an image by using a specific structuring element.

equalizeHist(Mat src, Mat dst)

It equalizes the histogram of a grayscale image.

filter2D(Mat src, Mat dst, int ddepth, Mat kernel, Point anchor, double delta)

It convolves an image with the kernel.

GaussianBlur(Mat src, Mat dst, Size ksize, double sigmaX)

It blurs an image using a Gaussian filter.

integral(Mat src, Mat sum)

It calculates the integral of an image.
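The integral (summed-area) image makes the sum over any rectangular region a constant-time lookup. Below is a minimal plain-Java sketch of what integral computes; like OpenCV's output, the table has one extra row and column of zeros, so the rectangle sum is obtained from four corner lookups. The helper names are illustrative.

```java
public class IntegralImage {
    // Compute the summed-area table with a leading row and column of zeros,
    // matching the (rows+1) x (cols+1) layout OpenCV's integral() produces.
    static long[][] integral(int[][] img) {
        int rows = img.length, cols = img[0].length;
        long[][] sum = new long[rows + 1][cols + 1];
        for (int y = 1; y <= rows; y++)
            for (int x = 1; x <= cols; x++)
                sum[y][x] = img[y - 1][x - 1]
                          + sum[y - 1][x] + sum[y][x - 1] - sum[y - 1][x - 1];
        return sum;
    }

    // Sum of the rectangle with corners (r1, c1) and (r2, c2), inclusive.
    static long rectSum(long[][] sum, int r1, int c1, int r2, int c2) {
        return sum[r2 + 1][c2 + 1] - sum[r1][c2 + 1] - sum[r2 + 1][c1] + sum[r1][c1];
    }

    public static void main(String[] args) {
        int[][] img = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
        // Sum of the whole image is 45; the bottom-right 2x2 block sums to 28.
        System.out.println(rectSum(integral(img), 0, 0, 2, 2)); // 45
        System.out.println(rectSum(integral(img), 1, 1, 2, 2)); // 28
    }
}
```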

## Java DIP - Robinson Operator

Robinson compass masks are yet another type of derivative mask used for edge detection. This operator is also known as a direction mask. In this operator we take one mask and rotate it in all eight major compass directions to get the edges in eight directions.
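Robinson masks are built from the fixed Sobel-style 1-2-1 weights. As a quick plain-Java illustration, the mask below (one of the eight, whose direction label varies by convention) responds strongly on a vertical dark-to-bright edge and not at all on a flat region. The helper names are illustrative.

```java
public class RobinsonMask {
    // One of the eight Robinson compass masks, built from the fixed 1-2-1 weights.
    static final int[][] MASK = {
        {-1, 0, 1},
        {-2, 0, 2},
        {-1, 0, 1}
    };

    // Response of the mask centered at (y, x); the image must be large enough.
    static int respond(int[][] img, int y, int x) {
        int acc = 0;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                acc += MASK[i][j] * img[y + i - 1][x + j - 1];
        return acc;
    }

    public static void main(String[] args) {
        // Dark left half (0), bright right half (100): a vertical edge.
        int[][] edge = {
            {0, 0, 100, 100},
            {0, 0, 100, 100},
            {0, 0, 100, 100}
        };
        System.out.println(respond(edge, 1, 1)); // 400: strong response on the edge

        int[][] flat = {
            {50, 50, 50},
            {50, 50, 50},
            {50, 50, 50}
        };
        System.out.println(respond(flat, 1, 1)); // 0: no response on a flat region
    }
}
```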

We are going to use the OpenCV function filter2D to apply the Robinson operator to images. It can be found in the Imgproc package. Its syntax is given below:

`filter2D(src, dst, ddepth, kernel, anchor, delta, BORDER_DEFAULT);`

The function arguments are described below:

src

It is the source image.

dst

It is the destination image.

ddepth

It is the depth of dst. A negative value (such as -1) indicates that the depth is the same as that of the source.

kernel

It is the kernel to be scanned through the image.

anchor

It is the position of the anchor relative to its kernel. The location Point(-1, -1) indicates the center by default.

delta

It is a value to be added to each pixel during the convolution. By default it is 0.

BORDER_DEFAULT

We leave this value at its default.

Apart from the filter2D method, there are other methods provided by the Imgproc class. They are described briefly:

cvtColor(Mat src, Mat dst, int code, int dstCn)

It converts an image from one color space to another.

dilate(Mat src, Mat dst, Mat kernel)

It dilates an image by using a specific structuring element.

equalizeHist(Mat src, Mat dst)

It equalizes the histogram of a grayscale image.

filter2D(Mat src, Mat dst, int ddepth, Mat kernel, Point anchor, double delta)

It convolves an image with the kernel.

GaussianBlur(Mat src, Mat dst, Size ksize, double sigmaX)

It blurs an image using a Gaussian filter.

integral(Mat src, Mat sum)

It calculates the integral of an image.

## Java DIP - Laplacian Operator

The Laplacian operator is also a derivative operator used to find edges in an image. The major difference between the Laplacian and other operators such as Prewitt, Sobel, Robinson, and Kirsch is that those are all first order derivative masks, whereas the Laplacian is a second order derivative mask.
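Because it is a second order derivative, the Laplacian responds to changes in the gradient rather than to the gradient itself: it is zero on flat regions and on perfectly linear ramps, and fires where intensity changes abruptly. A small plain-Java check of that property (the kernel is the standard Positive Laplacian; the helper name is illustrative):

```java
public class LaplacianDemo {
    // Positive Laplacian kernel: a second order derivative mask.
    static final int[][] LAPLACIAN = {
        {0,  1, 0},
        {1, -4, 1},
        {0,  1, 0}
    };

    // Kernel response centered at (y, x); the image must be large enough.
    static int respond(int[][] img, int y, int x) {
        int acc = 0;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                acc += LAPLACIAN[i][j] * img[y + i - 1][x + j - 1];
        return acc;
    }

    public static void main(String[] args) {
        // A linear ramp: first derivative constant, second derivative zero.
        int[][] ramp = {
            {10, 20, 30},
            {10, 20, 30},
            {10, 20, 30}
        };
        System.out.println(respond(ramp, 1, 1)); // 0

        // A bright spot on a dark background: strong second-derivative response.
        int[][] spot = {
            {0,   0, 0},
            {0, 100, 0},
            {0,   0, 0}
        };
        System.out.println(respond(spot, 1, 1)); // -400
    }
}
```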

We use the OpenCV function filter2D to apply the Laplacian operator to images. It can be found in the Imgproc package. Its syntax is given below:

`filter2D(src, dst, ddepth, kernel, anchor, delta, BORDER_DEFAULT);`

The function arguments are described below:

src

It is the source image.

dst

It is the destination image.

ddepth

It is the depth of dst. A negative value (such as -1) indicates that the depth is the same as that of the source.

kernel

It is the kernel to be scanned through the image.

anchor

It is the position of the anchor relative to its kernel. The location Point(-1, -1) indicates the center by default.

delta

It is a value to be added to each pixel during the convolution. By default it is 0.

BORDER_DEFAULT

We leave this value at its default.

Apart from the filter2D() method, there are other methods provided by the Imgproc class. They are described briefly:

cvtColor(Mat src, Mat dst, int code, int dstCn)

It converts an image from one color space to another.

dilate(Mat src, Mat dst, Mat kernel)

It dilates an image by using a specific structuring element.

equalizeHist(Mat src, Mat dst)

It equalizes the histogram of a grayscale image.

filter2D(Mat src, Mat dst, int ddepth, Mat kernel, Point anchor, double delta)

It convolves an image with the kernel.

GaussianBlur(Mat src, Mat dst, Size ksize, double sigmaX)

It blurs an image using a Gaussian filter.

integral(Mat src, Mat sum)

It calculates the integral of an image.

## Difference with Prewitt Operator

The major difference is that in the Sobel operator the mask coefficients are not fixed; they can be adjusted according to our requirements, as long as they do not violate any property of derivative masks.

This mask works exactly like the Prewitt operator's vertical mask, with only one difference: it has -2 and 2 values in the center of the first and third columns. When applied to an image, this mask highlights vertical edges.

When we apply this mask to an image, it makes the vertical edges prominent. It simply works like a first order derivative, calculating the difference of pixel intensities in an edge region.

As the center column consists of zeros, it does not include the original values of the image; rather, it calculates the difference of the pixel values to the right and left of the edge. Also, the center values of the first and third columns are -2 and 2 respectively.

This gives more weight to the pixel values around the edge region, which increases the edge intensity and makes it enhanced compared to the original image.

The above mask finds edges in the horizontal direction, because its row of zeros lies in the horizontal direction. When you convolve this mask with an image, it makes the horizontal edges prominent. The only difference from the Prewitt horizontal mask is that it has -2 and 2 as the center elements of the first and third rows.

This mask makes the horizontal edges in an image prominent. It works on the same principle as the mask above, calculating the difference between the pixel intensities on either side of a particular edge. As the center row of the mask consists of zeros, it does not include the original values of the edge in the image; rather, it calculates the difference of the pixel intensities above and below that edge, thus amplifying the sudden change of intensity and making the edge more visible.
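The effect of the extra center weight can be verified numerically. In this plain-Java sketch (helper names illustrative), the Sobel vertical mask produces a larger response on the same vertical edge than the Prewitt vertical mask:

```java
public class SobelVsPrewitt {
    static final int[][] PREWITT_V = {
        {-1, 0, 1},
        {-1, 0, 1},
        {-1, 0, 1}
    };
    static final int[][] SOBEL_V = {
        {-1, 0, 1},
        {-2, 0, 2},
        {-1, 0, 1}
    };

    // Response of a 3x3 mask centered at (y, x).
    static int respond(int[][] mask, int[][] img, int y, int x) {
        int acc = 0;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                acc += mask[i][j] * img[y + i - 1][x + j - 1];
        return acc;
    }

    public static void main(String[] args) {
        // Vertical dark-to-bright edge.
        int[][] edge = {
            {0, 0, 100},
            {0, 0, 100},
            {0, 0, 100}
        };
        System.out.println(respond(PREWITT_V, edge, 1, 1)); // 300
        System.out.println(respond(SOBEL_V, edge, 1, 1));   // 400
    }
}
```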

Now it is time to see these masks in action.

Following is a sample picture to which we will apply the above two masks, one at a time.

After applying the vertical mask to the above sample image, the following image is obtained.

After applying the horizontal mask to the above sample image, the following image is obtained.

As you can see, in the first picture, to which we applied the vertical mask, all the vertical edges are more visible than in the original image. Similarly, in the second picture, where we applied the horizontal mask, all the horizontal edges are visible as a result.

So in this way we can detect both horizontal and vertical edges in an image. Also, if you compare the result of the Sobel operator with that of the Prewitt operator, you will find that the Sobel operator finds more edges, or makes edges more visible, than the Prewitt operator.

This is because the Sobel operator allots more weight to the pixel intensities around the edges.

Now we can also see that the more weight we apply in the mask, the more edges it finds for us. Also, as mentioned at the start of the tutorial, there are no fixed coefficients in the Sobel operator, so here is another weighted operator:

-1 0 1
-5 0 5
-1 0 1

If you compare the result of this mask with that of the Prewitt vertical mask, it is clear that this mask will give out more edges, simply because we have allotted more weight to it.
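This can be checked numerically: on the same vertical edge, the ±5-weighted mask responds more strongly than both the Prewitt (300) and standard Sobel (400) vertical masks. Plain Java, helper names illustrative:

```java
public class WeightedMask {
    // A Sobel-style vertical mask with extra weight in the center row.
    static final int[][] WEIGHTED_V = {
        {-1, 0, 1},
        {-5, 0, 5},
        {-1, 0, 1}
    };

    // Response of a 3x3 mask centered at (y, x).
    static int respond(int[][] mask, int[][] img, int y, int x) {
        int acc = 0;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                acc += mask[i][j] * img[y + i - 1][x + j - 1];
        return acc;
    }

    public static void main(String[] args) {
        // Vertical dark-to-bright edge.
        int[][] edge = {
            {0, 0, 100},
            {0, 0, 100},
            {0, 0, 100}
        };
        // 700, versus 300 for Prewitt and 400 for the standard Sobel vertical mask.
        System.out.println(respond(WEIGHTED_V, edge, 1, 1)); // 700
    }
}
```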

The Kirsch compass mask is also a derivative mask used for finding edges. Like the Robinson compass masks, it finds edges in all eight directions of a compass. The only difference between the Robinson and Kirsch compass masks is that Robinson uses a fixed standard mask, whereas in Kirsch we can change the mask according to our own requirements.

With the help of Kirsch compass masks we can find edges in the following eight directions: north, north-west, west, south-west, south, south-east, east, and north-east.

We take a standard mask that follows all the properties of a derivative mask and then rotate it to find the edges.

For example, let us look at the following mask, which is in the north direction, and then rotate it to make all the direction masks.

-3 -3 -3
-3  0 -3
 5  5  5

As you can see, all the directions are covered, and each mask gives you the edges of its own direction. To help you better understand the concept of these masks, we will apply them to a real image. Suppose we have a sample picture from which we have to find all the edges. Here is our sample picture:

Now we apply all the above filters to this image, and we get the following results.

As you can see, by applying all the above masks you get edges in every direction. The result also depends on the image: if an image has no edges in the north-east direction, for example, then that mask will be ineffective.

We are going to perform a comparison between blurring masks and derivative masks.

A blurring mask has the following properties: all of its values are positive and they sum to one, the edge content of the image is reduced, and the larger the mask, the greater the blurring.

A derivative mask has the following properties: it contains both positive and negative values whose sum is zero, the edge content of the image is increased, and the more weight the mask carries, the stronger the edge response.

The relationship between a blurring mask and a derivative mask can be defined simply: a blurring mask behaves as a low pass filter, whereas a derivative mask behaves as a high pass filter.

The high pass frequency components denote edges, whereas the low pass frequency components denote smooth regions.

This is the common example of a low pass filter: ones are placed inside a region of the frequency domain and zeros outside it, and we get a blurred image. As the region of ones shrinks (that is, as the cut-off frequency decreases), more high frequency components are removed, so the blurring increases and the edge content is reduced.

This is a common example of a high pass filter: when 0 is placed inside (and 1 outside), the low frequencies are removed and we get edges, which gives us a sketched image. An ideal low pass filter in the frequency domain is given by H(u, v) = 1 if D(u, v) <= D0 and H(u, v) = 0 if D(u, v) > D0, where D(u, v) is the distance of the point (u, v) from the center of the frequency plane and D0 is the cut-off frequency.

The ideal low pass filter can be graphically represented as a cylinder of unit height: the filter is 1 inside a circle of radius D0 centered at the origin of the frequency plane and 0 outside it.
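The transfer function can be sketched directly: H(u, v) is 1 wherever the distance from the center of the frequency plane is within the cut-off D0, and 0 elsewhere. Below is a plain-Java sketch (the sizes and helper names are illustrative); note that a smaller D0 passes fewer frequency components, which means stronger blurring.

```java
public class IdealLowPass {
    // Build the ideal low pass transfer function H(u, v) for an M x N spectrum:
    // 1 inside a circle of radius d0 around the center, 0 outside it.
    static double[][] mask(int m, int n, double d0) {
        double[][] h = new double[m][n];
        for (int u = 0; u < m; u++) {
            for (int v = 0; v < n; v++) {
                double du = u - m / 2.0, dv = v - n / 2.0;
                h[u][v] = Math.hypot(du, dv) <= d0 ? 1.0 : 0.0;
            }
        }
        return h;
    }

    // Count how many frequency components the filter passes.
    static int passed(double[][] h) {
        int count = 0;
        for (double[] row : h) for (double v : row) if (v == 1.0) count++;
        return count;
    }

    public static void main(String[] args) {
        // A smaller cut-off keeps fewer frequency components: stronger blurring.
        System.out.println(passed(mask(64, 64, 4)) < passed(mask(64, 64, 16))); // true
        // The center (DC component) is always kept.
        System.out.println(mask(64, 64, 4)[32][32]); // 1.0
    }
}
```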

Now let us apply this filter to an actual image and see what we get.

In the same way, an ideal high pass filter can be applied to an image. But obviously the results will be different, as the low pass filter reduces the edge content while the high pass filter increases it.