
OpenCV part-2 Blog

The beauty of OpenCV Vol-2:

In this article, I would like to talk about some more operations we can perform using OpenCV. In the first part of this series, I covered the various operations we can perform on images with OpenCV. In this second part, I would like to share some more of OpenCV, such as color filtering, blurring and smoothing, morphological transformations, Canny edge detection and image gradients. So without wasting any time, let's get started.

***Important: here's the link to the first part of this article: https://tdbtech.blogspot.com/2019/02/opencv-part-1-blog.html

Color Filtering in OpenCV:

Here, we are going to cover some color filtering operations using OpenCV, revisiting the bitwise operations, and we will pick out a specific color so that only it is shown.
Have you ever wondered how the green-screen area is removed in VFX or superhero movies? In the same way, we will filter out a certain region and replace it with something else.
First of all, we need to convert the colors into HSV, which stands for Hue, Saturation, Value. This makes it much easier to target a specific color: Hue represents the color itself, Saturation represents the strength or purity of that color, and Value represents its brightness.
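As a quick aside (my addition, not part of the original tutorial): if you want to work out an HSV range for a color of your own, one handy trick from the OpenCV documentation is to convert a single BGR pixel to HSV and build a range around its hue. A minimal sketch:

import cv2
import numpy as np

# a single pixel of pure red in BGR (OpenCV's default channel order)
red_bgr = np.uint8([[[0, 0, 255]]])
red_hsv = cv2.cvtColor(red_bgr, cv2.COLOR_BGR2HSV)
print(red_hsv)   # [[[  0 255 255]]] -> hue 0, full saturation and value

# rough rule of thumb: hue +/- 10 with fairly wide saturation/value ranges
# (note: red wraps around the hue circle, so very red shades also live near hue 170-179)
hue = int(red_hsv[0][0][0])
lower = np.array([max(hue - 10, 0), 100, 100])
upper = np.array([min(hue + 10, 179), 255, 255])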
So here's the code:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)   # 0 for your webcam, or pass the path to a video file as shown in the previous blog

while(1):
    _, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)   # convert the frame from BGR to HSV

    # range of HSV values we want to keep
    lower_red = np.array([30, 150, 50])
    upper_red = np.array([255, 255, 180])

    mask = cv2.inRange(hsv, lower_red, upper_red)    # white where the pixel is in range, black elsewhere
    res = cv2.bitwise_and(frame, frame, mask=mask)   # keep the original colors only where the mask is white

    cv2.imshow('original', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('res', res)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:   # Esc key
        break

cv2.destroyAllWindows()
cap.release()

Here we have set the range of values to 30-255, 150-255 and 50-180. This range is meant to pick out red, but feel free to use your own. The idea behind using HSV is that we want a range of similar colors rather than one exact value. A typical red in a real frame also contains a little green and blue, so we have to allow some of each while still demanding strong red, which means we also pick up lower-light mixes of those colors.
First, we convert the frame into HSV, which should be straightforward by now. Then we specify a range of values for the color red. Next, we create a mask with cv2.inRange, which is binary: every pixel is either white (inside the range) or black (outside it). Finally, we restore the redness with the bitwise AND of the frame with itself, applied only where the mask is white; everything outside the mask is turned black. Here's the result:


This is just an example, with red as the target color.
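To make the "either white or black" idea of the mask concrete, here is a tiny standalone sketch (my own illustration, not from the original post) that runs cv2.inRange on three hand-made HSV pixels:

import cv2
import numpy as np

# three hand-made HSV pixels: one inside the range used above, two outside
hsv_pixels = np.uint8([[[40, 200, 100],     # inside 30-255, 150-255, 50-180
                        [10, 200, 100],     # hue too low
                        [40, 100, 100]]])   # saturation too low

lower_red = np.array([30, 150, 50])
upper_red = np.array([255, 255, 180])

mask = cv2.inRange(hsv_pixels, lower_red, upper_red)
print(mask)   # [[255   0   0]] -> 255 (white) where the pixel is in range, 0 (black) elsewhere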


Blurring and smoothing in OpenCV:

As you can see, there are lots of dots where we would have preferred solid red. They are nothing but noise in our filter. We can use blurring and smoothing to remove this.
Here's the code:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)   # 0 for your webcam, or the path to a video file as mentioned above

while(1):
    _, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    lower_red = np.array([30, 150, 50])
    upper_red = np.array([255, 255, 180])

    mask = cv2.inRange(hsv, lower_red, upper_red)
    res = cv2.bitwise_and(frame, frame, mask=mask)

    # simple averaging: a 15x15 kernel of ones divided by 225 (the number of pixels in the block)
    kernel = np.ones((15, 15), np.float32) / 225
    smoothed = cv2.filter2D(res, -1, kernel)

    blur = cv2.GaussianBlur(res, (15, 15), 0)   # Gaussian-weighted average
    median = cv2.medianBlur(res, 15)            # median of each 15x15 neighbourhood

    cv2.imshow('original', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('smooth', smoothed)
    cv2.imshow('gaussian', blur)
    cv2.imshow('median', median)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
cap.release()
Here, we have applied simple averaging smoothing: each pixel is replaced by the average of the block of pixels around it. In our case the block is a 15x15 square, so the average is taken over 225 pixels, which is why the kernel of ones is divided by 225.
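As a side check (my addition, not from the original post), this hand-rolled averaging kernel does the same job as OpenCV's built-in box filter cv2.blur, so you can verify the kernel with a quick comparison; the image path below is just a placeholder:

import cv2
import numpy as np

img = cv2.imread('some_image.jpg')   # hypothetical path: use any image you have on disk

kernel = np.ones((15, 15), np.float32) / 225   # 15 x 15 = 225 pixels per block
smoothed = cv2.filter2D(img, -1, kernel)       # convolve with the averaging kernel
box = cv2.blur(img, (15, 15))                  # OpenCV's built-in box (averaging) filter

# the two results should be practically identical (up to rounding)
print(np.max(cv2.absdiff(smoothed, box)))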
Here's the result of all the types of blurring and smoothing used above:

The screenshots may not make the difference obvious, but if you run it yourself you will clearly see how each filter behaves. Just try it.

Morphological transformations using OpenCV:

First, let me tell you what a morphological transformation is. Morphological transformations are simple operations based on the shape of the image, and they are normally performed on binary images. They need two inputs: the original image and a structuring element, or kernel. The two basic morphological operators are erosion and dilation. Erosion erodes away the boundaries of the foreground object: the kernel, say 6x6 pixels, slides over the image, and a pixel stays white only if all the pixels under the kernel are white; otherwise it becomes black. Dilation is the opposite: if at least one pixel under the kernel is white, the pixel becomes white. Let's get our hands on the code:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while(1):
    _, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    lower_red = np.array([30, 150, 50])
    upper_red = np.array([255, 255, 180])

    mask = cv2.inRange(hsv, lower_red, upper_red)
    res = cv2.bitwise_and(frame, frame, mask=mask)

    kernel = np.ones((6, 6), np.uint8)                  # 6x6 structuring element
    erosion = cv2.erode(mask, kernel, iterations=1)     # shrinks the white regions
    dilation = cv2.dilate(mask, kernel, iterations=1)   # grows the white regions

    cv2.imshow('original', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('erode', erosion)
    cv2.imshow('dilate', dilation)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
cap.release()

The code above should not need any extra explanation by now.
Now we will look at two variants of these morphological operations: opening and closing. Opening (erosion followed by dilation) is used to remove false positives, i.e. small specks of white noise in the background. Closing (dilation followed by erosion) is used to remove false negatives, i.e. small black holes inside the object, like the black pixels we can still see within the object in the image above. In both cases the goal is to clean the noise out of the mask.

Now, for the code, we just need to swap these two lines into the loop above, in place of the erosion and dilation lines (and show the results with cv2.imshow):
opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # erosion followed by dilation: removes white specks
closing = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # dilation followed by erosion: fills black holes
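If you would rather see opening and closing in isolation, here is a small self-contained sketch (my own illustration, not from the original post) that builds a noisy binary image with NumPy and cleans it up:

import cv2
import numpy as np

# a synthetic binary "mask": a white square on black, plus some fake noise
img = np.zeros((200, 200), np.uint8)
cv2.rectangle(img, (60, 60), (140, 140), 255, -1)   # the "object"
img[20, 20] = 255                                   # a false positive: white speck in the background
img[100, 100] = 0                                   # a false negative: black hole inside the object

kernel = np.ones((6, 6), np.uint8)
opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)    # removes the white speck
closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)   # fills the black hole

print(opening[20, 20], closing[100, 100])   # expected: 0 255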

Here's the result:


Canny Edge Detection and Gradients in OpenCV:

Here, we will be covering two main topics: image gradients and Canny edge detection. As the name suggests, Canny edge detection is used to detect the edges in a frame, while an image gradient measures the directional change of intensity in the image.
Here's the code for image gradients:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(1):
    _, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    lower_red = np.array([30, 150, 50])
    upper_red = np.array([255, 255, 180])

    mask = cv2.inRange(hsv, lower_red, upper_red)
    res = cv2.bitwise_and(frame, frame, mask=mask)

    # cv2.CV_64F is the output data type; 64-bit floats keep the negative gradients
    laplacian = cv2.Laplacian(frame, cv2.CV_64F)
    # Sobel derivatives in x and y; ksize is the kernel size and must be odd (1, 3, 5 or 7)
    sobelx = cv2.Sobel(frame, cv2.CV_64F, 1, 0, ksize=5)
    sobely = cv2.Sobel(frame, cv2.CV_64F, 0, 1, ksize=5)

    cv2.imshow('original', frame)
    cv2.imshow('laplacian', laplacian)
    cv2.imshow('xcor', sobelx)
    cv2.imshow('ycor', sobely)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
cap.release()
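One caveat worth mentioning (my addition, not from the original post): cv2.imshow interprets floating-point images as having values in the range 0 to 1, so the CV_64F gradient windows can look washed out. A common workaround is to take the absolute value and convert back to 8-bit before displaying. A minimal standalone sketch, assuming you have some image on disk (the path is a placeholder):

import cv2
import numpy as np

img = cv2.imread('some_image.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical path: use any image

sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=5)   # x-gradient as 64-bit floats (can be negative)
sobelx_display = cv2.convertScaleAbs(sobelx)         # |value| clipped to 0-255 and cast to uint8

cv2.imshow('xcor (8-bit)', sobelx_display)
cv2.waitKey(0)
cv2.destroyAllWindows()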

The output should be like this:
   

Here's the code for Canny edge detection:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(1):
    _, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    lower_red = np.array([30, 150, 50])
    upper_red = np.array([255, 255, 180])

    mask = cv2.inRange(hsv, lower_red, upper_red)
    res = cv2.bitwise_and(frame, frame, mask=mask)

    canny = cv2.Canny(frame, 100, 200)   # 100 and 200 are the lower and upper hysteresis thresholds

    cv2.imshow('original', frame)
    cv2.imshow('canny', canny)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
cap.release()

Here's the output of the code written above:


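Picking the two Canny thresholds by hand can be fiddly. A popular heuristic (not part of the original post, often called "auto-Canny") is to derive them from the median pixel intensity of the image; a sketch, again with a placeholder image path:

import cv2
import numpy as np

img = cv2.imread('some_image.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical path

# centre the thresholds around the median intensity
sigma = 0.33
median = np.median(img)
lower = int(max(0, (1.0 - sigma) * median))
upper = int(min(255, (1.0 + sigma) * median))

edges = cv2.Canny(img, lower, upper)
cv2.imshow('auto canny', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()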
That's really good, isn't it? Let's have some more fun with OpenCV in the last article, where we will discuss GrabCut foreground extraction, MOG background subtraction, face and eye detection and many more. So hang on until I post the final article. Thank you.
Here's the link to the final article: https://tdbtech.blogspot.com/2019/02/opencv-part-3-blog.html
