OpenCV is now available as a framework for iOS. Just drag and drop it into your project. It supports video capture too. See the article and get the example project here: opencv framework for ios

For the sake of transparency, I wrote this article and it is hosted on my company's website.

iPhone and OpenCV - Stack Overflow

iphone opencv

Scene Text Detection Module in OpenCV 3

There are multiple ways to go about detecting text in an image.

I recommend looking at this question here, as it may answer your case as well. Although the code there is not in Python, it can easily be translated from C++ to Python (just look at the API and convert the methods; I did it myself when I tried that code for a separate problem of my own). The solutions there may not fit your case exactly, but I recommend trying them out.

If I were to go about this I would do the following process:

Prep your image: if all the images you want to edit are roughly like the one you provided, where the design consists of a range of gray colors and the text is always black, I would first white out all content that is not black (or already white). Doing so leaves only the black text.

# must import if working with opencv in python
import numpy as np
import cv2

# keeps only the pixels whose HSV value (brightness) falls within
# [lower_val, upper_val]; with a low range this isolates the dark text
# and removes the gray design
def remove_gray(img, lower_val, upper_val):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower_bound = np.array([0, 0, lower_val])
    upper_bound = np.array([255, 255, upper_val])
    mask = cv2.inRange(hsv, lower_bound, upper_bound)
    return cv2.bitwise_and(img, img, mask=mask)

Now that all you have left is the black text, the goal is to get those boxes. As stated before, there are different ways of going about this.

The typical way to find text areas: you can find text regions by using the stroke width transform, as depicted in "Detecting Text in Natural Scenes with Stroke Width Transform" by Boris Epshtein, Eyal Ofek, and Yonatan Wexler. To be honest, if it is as fast and reliable as I believe, then this method is more efficient than my code below. You can still use the code above to remove the blueprint design, though, and that may help the overall performance of the SWT algorithm.

Here is a C library that implements their algorithm, but it is described as very raw and its documentation as incomplete. Obviously, a wrapper will be needed to use this library with Python, and at the moment I do not see an official one offered.

The library I linked is CCV. It is a library meant to be used in your applications, not to recreate algorithms. So this is a tool to be used, which goes against the OP's wish to build it from "first principles", as stated in the comments. Still, it is useful to know it exists if you don't want to code the algorithm yourself.

If you have metadata for each image, say in an XML file, that states how many rooms are labeled in each image, then you can read that XML file, get the number of labels in the image, and store that number in a variable, say num_of_labels. Now put your image through a while loop that erodes at a set rate you specify, finding external contours in the image on each pass and stopping once you have the same number of external contours as num_of_labels. Then simply find each contour's bounding box and you are done.

# erodes image based on given kernel size (erosion = expands black areas)
def erode(img, kern_size=3):
    retval, img = cv2.threshold(img, 254.0, 255.0, cv2.THRESH_BINARY) # threshold to deal only with black and white
    kern = np.ones((kern_size, kern_size), np.uint8) # kernel for erosion, based on the given kernel size
    eroded = cv2.erode(img, kern, iterations=1) # erode the image to blobbify black areas
    y, x = eroded.shape # image shape, used to draw a 1px white border and avoid problems with findContours
    cv2.rectangle(eroded, (0, 0), (x - 1, y - 1), 255, 1)
    return eroded

# finds contours of eroded image (OpenCV 3.x findContours returns three values)
def prep(img, kern_size=3):
    img = erode(img, kern_size)
    retval, img = cv2.threshold(img, 200.0, 255.0, cv2.THRESH_BINARY_INV) # invert colors for findContours
    return cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # find contours of image

# given img & number of desired blobs, returns contours of blobs
def blobbify(img, num_of_labels, kern_size=3, dilation_rate=10):
    processed_img, contours, hierarchy = prep(img.copy(), kern_size) # erode img and check current contour count
    previous = (processed_img, contours, hierarchy)
    while len(contours) > num_of_labels:
        kern_size += dilation_rate # grow the kernel to blobbify further (keep kern_size odd)
        previous = (processed_img, contours, hierarchy)
        processed_img, contours, hierarchy = prep(img.copy(), kern_size) # erode img and check the contour count again
    if len(contours) < num_of_labels:
        return previous # the last erosion merged too much; fall back to the previous pass
    return (processed_img, contours, hierarchy)

# finds bounding boxes of all contours
def bounding_box(contours):
    bBox = []
    for curve in contours:
        box = cv2.boundingRect(curve)
        bBox.append(box)
    return bBox

The resulting boxes from the above method will have space around the labels, and this may include part of the original design if the boxes are applied to the original image. To avoid this, make regions of interest from your newly found boxes and trim the white space, then save each ROI's shape as your new box, as in the sketch below.
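
For illustration, here is a minimal sketch of that trimming step; the helper name trim_roi and the white-background assumption are mine (it reuses the np import from above):

# trims surrounding white space from a bounding box; assumes a grayscale
# image with a white background, and box = (x, y, w, h) from cv2.boundingRect
def trim_roi(gray, box):
    x, y, w, h = box
    roi = gray[y:y+h, x:x+w]
    ys, xs = np.where(roi < 255)  # locations of non-white pixels
    if len(xs) == 0:
        return box  # nothing but white space; leave the box alone
    return (x + xs.min(), y + ys.min(),
            xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)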

Perhaps you have no way of knowing how many labels will be in the image. If this is the case, then I recommend playing around with erosion values until you find the best one to suit your case and get the desired blobs.

Or you could run find-contours on the remaining content after removing the design, and combine bounding boxes into one rectangle based on their distance from each other (see the sketch below).
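
And a rough sketch of that merging idea; the gap threshold and the helper name are my own choices:

# merges any two boxes whose gap is below max_gap, repeating until stable;
# boxes is a list of (x, y, w, h) tuples
def merge_close_boxes(boxes, max_gap=10):
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        out = []
        while boxes:
            x, y, w, h = boxes.pop()
            i = 0
            while i < len(boxes):
                x2, y2, w2, h2 = boxes[i]
                # gap along each axis; negative values mean the boxes overlap
                gap_x = max(x, x2) - min(x + w, x2 + w2)
                gap_y = max(y, y2) - min(y + h, y2 + h2)
                if gap_x < max_gap and gap_y < max_gap:
                    # replace the pair with their union and keep scanning
                    nx, ny = min(x, x2), min(y, y2)
                    w = max(x + w, x2 + w2) - nx
                    h = max(y + h, y2 + h2) - ny
                    x, y = nx, ny
                    boxes.pop(i)
                    merged = True
                else:
                    i += 1
            out.append((x, y, w, h))
        boxes = out
    return boxes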

Once you have found your boxes, simply use them with respect to the original image and you will be done.

As mentioned in the comments on your question, there already exists a means of scene text detection (not document text detection) in OpenCV 3. I understand you do not have the ability to switch versions, but for those with the same question who are not limited to an older OpenCV version, I decided to include this at the end. Documentation for the scene text detection can be found with a simple Google search.

The OpenCV module for text detection also comes with text recognition that implements Tesseract, a free open-source text recognition engine. The downfall of Tesseract, and therefore of OpenCV's scene text recognition module, is that it is not as refined as commercial applications and is time consuming to use, which hurts its performance. But it's free, so it's the best we have without paying money if you want text recognition as well.

Honestly, I lack the experience and expertise in both OpenCV and image processing to provide a detailed way of implementing their text detection module. The same goes for the SWT algorithm. I just got into this stuff these past few months, but as I learn more I will edit this answer.

Detect text area in an image using python and opencv - Stack Overflow

python opencv image-processing ocr

There's already an example of how to do rectangle detection in OpenCV (look in samples/squares.c), and it's quite simple, actually.

Here's the rough algorithm they use:

0. rectangles <- {}
1. image <- load image
2. for every channel:
2.1  image_canny <- apply canny edge detector to this channel
2.2  for threshold in bunch_of_increasing_thresholds:
2.2.1   image_thresholds[threshold] <- apply threshold to this channel
2.3  for each contour found in {image_canny} U image_thresholds:
2.3.1   Approximate contour with polygons
2.3.2   if the approximation has four corners and the angles are close to 90 degrees.
2.3.2.1    rectangles <- rectangles U {contour}

Not an exact transliteration of what they are doing, but it should help you.
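
If you want something runnable, here is a rough Python sketch of the same idea; it is my own loose translation (close in spirit to OpenCV's squares sample), not the actual samples/squares.c source:

import cv2
import numpy as np

def angle_cos(p0, p1, p2):
    # cosine of the angle at vertex p1 (0 for a perfect right angle)
    d1 = (p0 - p1).astype(np.float64)
    d2 = (p2 - p1).astype(np.float64)
    return abs(np.dot(d1, d2) / np.sqrt(np.dot(d1, d1) * np.dot(d2, d2)))

def find_rectangles(img):
    rectangles = []
    for channel in cv2.split(img):
        # Canny edges plus a few fixed thresholds on each channel
        binaries = [cv2.Canny(channel, 50, 150)]
        for t in range(50, 255, 50):
            binaries.append(cv2.threshold(channel, t, 255, cv2.THRESH_BINARY)[1])
        for binary in binaries:
            # [-2] keeps this working across OpenCV versions returning 2 or 3 values
            contours = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
            for cnt in contours:
                approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
                if len(approx) == 4 and cv2.contourArea(approx) > 100 and cv2.isContourConvex(approx):
                    pts = approx.reshape(4, 2)
                    # all four angles close to 90 degrees (cosine close to 0)
                    if max(angle_cos(pts[i], pts[(i + 1) % 4], pts[(i + 2) % 4])
                           for i in range(4)) < 0.3:
                        rectangles.append(approx)
    return rectangles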

I am doing a similar project. I'm new to OpenCV so could you please post the source code to do these steps?

image processing - OpenCV Object Detection - Center Point - Stack Over...

image-processing opencv object-detection

First, check out /samples/c/squares.c in your OpenCV distribution. This example provides a square detector, and it should be a pretty good start on how to detect corner-like features. Then, take a look at OpenCV's feature-oriented functions like cvCornerHarris() and cvGoodFeaturesToTrack().

The above methods can return many corner-like features - most will not be the "true corners" you are looking for. In my application, I had to detect squares that had been rotated or skewed (due to perspective). My detection pipeline consisted of:

  • Find "rectangles" which were structures that: had polygonalized contours possessing 4 points, were of sufficient area, had adjacent edges were ~90 degrees, had distance between "opposite" vertices was of sufficient size, etc.

Step 7 was necessary because a slightly noisy image can yield many structures that appear rectangular after polygonalization. In my application, I also had to deal with square-like structures that appeared within, or overlapped the desired square. I found the contour's area property and center of gravity to be helpful in discerning the proper rectangle.
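
Since code examples were asked for in the comments, here is a minimal Python sketch using the modern equivalent of cvGoodFeaturesToTrack; the file name and parameter values are placeholders to tune:

import cv2
import numpy as np

img = cv2.imread('squares.jpg')  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detect up to 50 corner-like features; quality level and minimum
# distance between corners usually need tuning per image
corners = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)

for cx, cy in corners.reshape(-1, 2):
    cv2.circle(img, (int(cx), int(cy)), 4, (0, 0, 255), -1)

cv2.imshow('corners', img)
cv2.waitKey(0)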

I need a little help with step 7: how do I use cvCornerHarris with my example? See the edited post. Can you help me?

Is cvSmooth something like a Gaussian blur? Do you dilate the result from cvCanny? How do you approximate contours with, let's say, 5 corners (a deformed square because of shadows etc.) or squares with a little ridge? Your approach is pretty much what I want to do, but I am struggling very hard. Can you provide some code examples? That would be very helpful.

How to find corners on a Image using OpenCv - Stack Overflow

image-processing opencv

The example you tried to adapt is for the new python interface for OpenCV 2.0. This is probably the source of the confusion between the prefixed and non-prefixed function names (cv.cvSetData() versus cv.SetData()).

OpenCV 2.0 now ships with two sets of python bindings:

  • opencv.{cv,highgui,ml} — the older, SWIG-based interface.
  • The new interface, a python C extension (cv.pyd), which wraps all the OpenCV functionalities (including the highgui and ml modules.)

The reason behind the error message is that the SWIG wrapper does not handle conversion from a python string to a plain-old C buffer. However, the SWIG wrapper comes with the opencv.adaptors module, which is designed to support conversions from numpy and PIL images to OpenCV.

The following (tested) code should solve your original problem (conversion from PIL to OpenCV), using the SWIG interface:

# PIL to OpenCV using the SWIG wrapper
from opencv import cv, adaptors, highgui
import PIL

pil_img = PIL.Image.open(filename)

cv_img = adaptors.PIL2Ipl(pil_img)

highgui.cvNamedWindow("pil2ipl")
highgui.cvShowImage("pil2ipl", cv_img)

However, this does not solve the fact that the cv.cvSetData() function will always fail (with the current SWIG wrapper implementation). You could then use the new-style wrapper, which allows you to use the cv.SetData() function as you would expect:

# PIL to OpenCV using the new wrapper
import cv
import PIL

pil_img = PIL.Image.open(filename)       

cv_img = cv.CreateImageHeader(pil_img.size, cv.IPL_DEPTH_8U, 3)  # RGB image
cv.SetData(cv_img, pil_img.tostring(), pil_img.size[0]*3)

cv.NamedWindow("pil2ipl")
cv.ShowImage("pil2ipl", cv_img)

A third approach would be to switch your OpenCV python interface to the ctypes-based wrapper. It comes with utility functions for explicit data conversion between e.g. python strings and C buffers. A quick look on google code search seems to indicate that this is a working method.

Concerning the third parameter of the cvSetData() function: it is not the size of the image buffer, but the image step. The step is the number of bytes in one row of your image, which is pixel_depth * number_of_channels * image_width. The pixel_depth parameter is the size in bytes of the data associated with one channel. In your example, the step would simply be the image width (only one channel, one byte per pixel, so 1 * 1 * width).

@sevas: I have not accepted your answer (yet), because I am using version 2.0 of OpenCV. The recipe on the page I linked to did not work at all until I changed cv.CreateImageHeader to cvCreateImage and cv.SetData to cvSetData, so I am still confused about that. I am going to try your approach with ctypes-opencv and if that works I will post my findings here.

@scrible: I added information about the two concurrent sets of bindings shipping with OpenCV 2.0. I will probably continue to look for a better solution though.

@scrible : I updated the answer using the latest information I could find (specifically, the adaptators module and the two sets of python bindings.)

For the third argument, does it need to be a multiple of 4? On this site: opencv.willowgarage.com/wiki/PythonInterface, it says something like: "We think that SetData must have a step that is a multiple of 4".

I am trying to use the first method you are showing here, except that I am putting the number 1 as the third parameter of the CreateImageHeader() function, and the image that shows up in the OpenCV window is all scrambled. I tried leaving the parameter as 3 too; still the same. My loaded picture is a PNG. Any idea what could be wrong, please?

python - How do I create an OpenCV image from a PIL image? - Stack Ove...

python image-processing opencv python-imaging-library

Here is my real-world example of trying out OCR on my old power meter. I would like to use your OpenCV code so that OpenCV does the automatic cropping of the image, and I'll write the image cleaning scripts.

  • The first image is the original (cropped power meter numbers)
  • The second image is slightly cleaned up in GIMP, giving around 50% OCR accuracy in Tesseract

Did you clean the image using GIMP? I need a way to clean the image using the mobile device (opencv/ios)

This cleanup was done manually in GIMP; it was the fastest way to see how good Tesseract can be if I give it a clean image to process. Now there is much more legwork to clean the image with ImageMagick and scripts. I'll also try the Scan Tailor package, which I see has some great options for cleaning images.

Can you share the sample code for doing all this?

@valentt: can you share the code for cleaning the image using opencv?

@ParthDoshi image cleanup was done manually just for feasibility test... and we didn't go further with the project.

ios - Using tesseract to recognize license plates - Stack Overflow

ios objective-c opencv image-processing tesseract

It's really confusing to have both the SWIG and the new Python bindings. For example, in OpenCV 2.0, CMake accepts both BUILD_SWIG_PYTHON_SUPPORT and BUILD_NEW_PYTHON_SUPPORT. But anyway, I kinda figured out most of the pitfalls.

In the case of using "import cv" (the new Python binding), one more step is needed:

cv.SetData(cv_img, pil_img.tostring(), pil_img.size[0]*3)
cv.CvtColor(cv_img, cv_img, cv.CV_RGB2BGR)

The conversion is necessary for RGB images because the channel order is different in PIL and IplImage. The same applies when going from Ipl to PIL.

But if you use opencv.adaptors, it's already taken care of. You can look into the details in adaptors.py if interested.

python - How do I create an OpenCV image from a PIL image? - Stack Ove...

python image-processing opencv python-imaging-library

Take a look at mean-shift color segmentation (there's an example included with OpenCV in the samples directory). You can use this to separate your image into 2 classes (plant and soil) and use that to further process your data.

As for measurement, you may want to ignore occlusion effects and camera calibration initially and just look at the portion of the image area that belongs to the plant class.

If you want to get down to measuring individual leaves, you might take a "tracking" approach where you use temporal as well as spatial information in the image. The temporal information may be the location and size of the leaf in the previous image. There are probably lots of techniques you could apply, but I'd start simple if I were you and see how far that gets you.
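
To make the segmentation step concrete, here is a minimal Python sketch of the mean-shift idea; the file name, the radii, and the green range are my own placeholder values to tune:

import cv2

img = cv2.imread('plant.jpg')  # placeholder file name

# mean-shift filtering flattens color regions; the spatial radius (21) and
# color radius (51) need tuning, and large spatial radii run slowly
shifted = cv2.pyrMeanShiftFiltering(img, 21, 51)

# crude plant/soil split: keep roughly-green pixels of the flattened image
hsv = cv2.cvtColor(shifted, cv2.COLOR_BGR2HSV)
plant_mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))

# fraction of the frame covered by the plant class
coverage = cv2.countNonZero(plant_mask) / float(plant_mask.size)
print('plant coverage: %.1f%%' % (100 * coverage))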

Thanks for the feedback. I updated my answer based on your input. It seems to work, though I had to play around a little with the mean-shift segmentation parameters to get good results (I had to increase the spatial window radius by a fair amount, which makes the algorithm run fairly slowly).

image processing - Optimal approach for detecting leaf like shapes in ...

image-processing opencv computer-vision

Convert your 2d point into a homogenous point (give it a third coordinate equal to 1) and then multiply by the inverse of your camera intrinsics matrix. For example

cv::Matx31f hom_pt(point_in_image.x, point_in_image.y, 1);
hom_pt = camera_intrinsics_mat.inv()*hom_pt; //put in world coordinates

cv::Point3f origin(0,0,0);
cv::Point3f direction(hom_pt(0),hom_pt(1),hom_pt(2));

//To get a unit vector, direction just needs to be normalized
direction *= 1/cv::norm(direction);

origin and direction now define the ray in world space corresponding to that image point. Note that here the origin is centered on the camera; you can use your camera pose to transform to a different origin. Distortion coefficients map from your actual camera to the pinhole camera model and should be used at the very beginning to find your actual 2d coordinate. The steps then are: undistort the 2d point, convert it to a homogeneous point, multiply by the inverse of the camera intrinsics, and normalize the resulting direction.
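
For completeness, here is a hedged Python version of those steps; cv2.undistortPoints applies both the undistortion and the inverse intrinsics when no rectification or projection matrix is passed. The intrinsics and coefficients below are made-up examples:

import cv2
import numpy as np

# example intrinsics and distortion coefficients (replace with your calibration)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([0.1, -0.05, 0.0, 0.0])

point_in_image = np.array([[[400.0, 300.0]]])  # shape (1, 1, 2) as cv2 expects

# maps to ideal pinhole coordinates, i.e. K^-1 is already applied
x, y = cv2.undistortPoints(point_in_image, K, dist).reshape(2)

direction = np.array([x, y, 1.0])
direction /= np.linalg.norm(direction)  # unit ray in camera coordinates
origin = np.zeros(3)                    # ray origin at the camera center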

@IanMedeiros do you mean to make it a unit vector or to make the 3rd element 1? The vector defined by those points is correct with or without any kind of normalizing, after multiplying by the inverse of the camera matrix the 3rd element can be interpreted as a z coordinate.

Actually, I've upvoted your answer but now I'm not sure if it's correct. I think that you need to convert the 2D coordinates from pixels to world coordinates using the intrinsic matrix before doing the homogenization that you are talking about. Then you can transform it using the camera pose information to get the ray direction.

@IanMedeiros I intended camera_mat to be the intrinsics, I updated the answer to make that more clear. Is that what you are talking about?

The question states that he wants unit vectors.

c++ - In OpenCV, converting 2d image point to 3d world unit vector - S...

c++ opencv computer-vision

1) If you can properly segment the image into foreground and background, then you can easily set a bounding box around the foreground. Graph cuts are very powerful methods of segmenting images, and it appears that OpenCV provides easy-to-use implementations of them. You provide some brush strokes which cover "foreground" and "background" pixels, and the image is converted into a digraph which is sliced optimally to split the two (see the grabCut sketch after this list).

2) If you decide to continue down the edge detection route, then consider using Mathematical Morphology to "clean up" the lines you detect before trying to fit a bounding box or contour around the object.

3) You could train across a dataset containing TVs and use the Viola-Jones algorithm for object detection. Traditionally it is used for face detection, but you can adapt it for TVs given enough data. For example, you could script downloading images of living rooms with TVs as your positive class and living rooms without TVs as your negative class.

4) You could perform image registration using cross correlation (there is a nice MATLAB example that demonstrates this).

As for the template TV image that would be slid across the search image, you could obtain a bunch of pictures of TVs and create "Eigenscreens", similar to how Eigenfaces are used for facial recognition, to generate an average TV image.

5) OpenCV has plenty of fun tools for describing shape and structure features, which appears to be mainly what you're interested in. Worth a look if you haven't seen them already.
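
To give approach 1 a concrete shape, here is a minimal cv2.grabCut sketch, initialized with a rectangle instead of brush strokes; the file name and rectangle are placeholders:

import cv2
import numpy as np

img = cv2.imread('living_room.jpg')  # placeholder file name
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

rect = (50, 50, 400, 300)  # rough box around the object of interest
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# keep definite and probable foreground, then box it
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
ys, xs = np.where(fg > 0)
x, y, w, h = xs.min(), ys.min(), xs.max() - xs.min() + 1, ys.max() - ys.min() + 1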

First, let me thank you for your extensive answer. Currently I'm using some of the tools in 5) to do the job with average effectiveness, but now I can see four more options to investigate. Adapting a face detection method was something I had thought about but have not tried. I'll go through this carefully over the next weeks and eventually I will mark this as the correct answer.

If you want immediate results for 1), check out the script here. It should take you minimal time to get it up and running: github.com/Itseez/opencv/blob/master/samples/python2/grabcut.py

c++ - How can I detect TV Screen from an Image with OpenCV or Another ...

c++ image opencv image-processing

Preprocess the image using cv::inRange() with the necessary color bounds to isolate red. You may want to transform to a color-space like HSV or YCbCr for more stable color bounds because chrominance and luminance are better separated. You can use cvtColor() for this. Check out my answer here for a good example of using inRange() with createTrackbar().

So, the basic template would be:

Mat redColorOnly;
inRange(src, Scalar(lowBlue, lowGreen, lowRed), Scalar(highBlue, highGreen, highRed), redColorOnly);
detectSquares(redColorOnly);

EDIT : Just use the trackbars to determine the color range you want to isolate, and then use the color intervals you find that work. You don't have to constantly use the trackbars.

EXAMPLE : So, for a complete example of the template, here you go.

I created a simple (and ideal) image in GIMP, shown below:

Then I created this program to filter all but the red squares:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace std;
using namespace cv;

Mat redFilter(const Mat& src)
{
    assert(src.type() == CV_8UC3);

    Mat redOnly;
    inRange(src, Scalar(0, 0, 0), Scalar(0, 0, 255), redOnly);

    return redOnly;
}

int main(int argc, char** argv)
{
    Mat input = imread("colored_squares.png");

    imshow("input", input);
    waitKey();

    Mat redOnly = redFilter(input);

    imshow("redOnly", redOnly);
    waitKey();

    // detect squares after filtering...

    return 0;
}

NOTE : You will not be able to use these exact same filter intervals for your real imagery; I just suggest you tune the intervals with trackbars to see what is acceptable.

Voila! Only the red square remains :)
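
If you later switch to HSV as suggested at the top, note that red wraps around the hue axis, so you need two intervals. A minimal sketch, in Python for brevity (inRange and cvtColor behave the same in C++, and the bounds are starting values to tune):

import cv2
import numpy as np

img = cv2.imread('colored_squares.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# red sits at both ends of OpenCV's hue range (0-180), so take
# a slice near 0 and a slice near 180 and OR them together
lower = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
upper = cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
red_only = cv2.bitwise_or(lower, upper)

cv2.imshow('redOnly (HSV)', red_only)
cv2.waitKey()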

Thank you, but I will use this program in real-time processing, so I can't use a trackbar. The idea is to detect a red number inside a polygon. Do you have any idea how I can do that?

Can you give me a tutorial or some code to help me, please?

Why not upgrade to OpenCV 2.3.1? What about this example doesn't work for you? inRange is available in OpenCV 2.1, so are Mat objects...

#include <opencv/cv.h> or #include <opencv2/*/*.hpp>?

Detect RGB color interval with OpenCV and C++ - Stack Overflow

opencv

Create a graph with a null renderer. Also look at the sample grabber example in the DirectShow SDK; it shows how to grab a frame from a graph. You can then pass the frame on to OpenCV for processing.

Thank you, I'll examine the sample grabber example. "Create a graph with a null renderer" - I don't get how I should do that, can you explain please?

Well, when you normally render a graph, using say a simple RenderFile command, you usually get the following graph: file reader -> decoder -> video renderer. You can alternatively create the graph manually to use a null renderer: reader -> decoder -> null renderer. A null renderer silently discards samples so that nothing shows up: no ActiveMovie window.

c++ - DirectShow and openCV. Read video file and process - Stack Overf...

c++ opencv directshow

Posting as an answer just to show the result of the OpenCV text detection example.

Now you need to apply text recognition, using for example OCRHMMDecoder

You'll find a sample here

How to translate this image processing from Matlab to OpenCV? - Stack ...

matlab opencv computer-vision matlab-cvst opencv3.0

You just have to change the input with "v4l2-ctl"! For example, I use Python on my Raspberry Pi to capture the video stream from S-Video with OpenCV:

import cv, os

print "Initializing video capture from /dev/video0 ..."
capture = cv.CaptureFromCAM(0)
cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, 720)
cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, 576)

print "Configuring video capture (V4L2) ..."
os.system("v4l2-ctl -d /dev/video0 -i 5 -s 4 --set-fmt-video=width=720,height=576,pixelformat=4")

v4l2 - OpenCV is able to change to composite input? - Stack Overflow

opencv v4l2 v4l

First off, that example only shows you how to draw contours with the simple approximation. If you were to show that image now, you will only get what you see on the right hand side of that figure. If you want to get the image on the left hand side, you need to detect the full contour. Specifically, you need to replace the cv2.CHAIN_APPROX_SIMPLE flag with cv2.CHAIN_APPROX_NONE. Take a look at the OpenCV doc on findContours for more details: http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#findcontours

In addition, your code draws the contours onto the image, but it doesn't display the results. You'll need to call cv2.imshow for that.

# Your code
import numpy as np
import cv2

im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,0)

# Detect contours using both methods on the same image
contours1, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
contours2, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)

# Copy over the original image to separate variables
img1 = im.copy()
img2 = im.copy()

# Draw both contours onto the separate images
cv2.drawContours(img1, contours1, -1, (255,0,0), 3)
cv2.drawContours(img2, contours2, -1, (255,0,0), 3)

Now, to get the figure you see above, there are two ways you can do this:

  • Create an image that stores both of these results together side by side, then show this combined image.
  • Use matplotlib, combined with subplot and imshow so that you can display two images in one window.

Simply stack the two images side by side, then show the image after:

out = np.hstack([img1, img2])

# Now show the image
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()

I stack them horizontally so that they are a combined image, then show this with cv2.imshow.

You can use matplotlib:

import matplotlib.pyplot as plt

# Spawn a new figure
plt.figure()
# Show the first image on the left column
plt.subplot(1,2,1)
plt.imshow(img1[:,:,::-1])
# Turn off axis numbering
plt.axis('off')

# Show the second image on the right column
plt.subplot(1,2,2)
plt.imshow(img2[:,:,::-1])
# Turn off the axis numbering
plt.axis('off')

# Show the figure
plt.show()

This should display both images in separate subfigures within an overall figure window. If you take a look at how I'm calling imshow here, you'll see that I am reversing the channel order: OpenCV reads images in BGR format, while matplotlib expects RGB. If you want to display OpenCV images with matplotlib, you'll need to reverse the channels so the images are in RGB format (as they should be for display).

To address your question in your comments, you would take which contour structure you want (contours1 or contours2) and search the contour points. contours is a list of all possible contours, and within each contour is a 3D matrix that is shaped in a N x 1 x 2 format. N would be the total number of points that represent the contour. I'm going to remove the singleton second dimension so we can get this to be a N x 2 matrix. Also, let's use the full representation of the contours for now:

points = contours1[0].ravel().reshape((len(contours1[0]),2))

I am going to assume that your image only has one object, hence my indexing into contours1 with index 0. I unravel the matrix so that it becomes a single row vector, then reshape the matrix so that it becomes N x 2. Next, we can find the minimum point by:

min_x = np.argmin(points[:,0])
min_point = points[min_x,:]

np.argmin finds the location of the smallest value in an array that you supply. In this case, we want to operate along the x coordinate, or the columns. Once we find this location, we simply index into our 2D contour point array and extract out the contour point.

Oh my pleasure :) I didn't mean to steal it from runDOSrun, but in my defence, I was working on this answer while he posted. Good luck BTW!

I may sound impolite asking a final question, because this website says I must wait 90 minutes before asking again. I want to print the coordinates of the pixel which is the closest to the Y axis (meaning its x coordinate is the smallest) and that belongs to the contours. Maybe this is too simple for you? I am just at beginner level.

@Kabyle - My pleasure :) I like answering OpenCV questions because I'm still a bit new to it and if I can answer a question, it will flex my muscles. BTW, make sure you read the beginning of my answer again. I've changed a few things.

How do I display the contours of an image using OpenCV Python? - Stack...

python opencv image-processing computer-vision

I use makefiles in my projects to install OpenCV inside Python virtualenv. Below is boilerplate example. It requires that you already have OpenCV bindings present for your system Python (/usr/bin/python) which you can get using something like yum install opencv-python or apt-get install python-opencv.

Make first queries the system Python's cv2 module and retrieves the location of the installed library file. Then it copies cv2.so into the virtualenv directory.

VENV_LIB = venv/lib/python2.7
VENV_CV2 = $(VENV_LIB)/cv2.so

# Find cv2 library for the global Python installation.
GLOBAL_CV2 := $(shell /usr/bin/python -c 'import cv2; print(cv2)' | awk '{print $$4}' | sed s:"['>]":"":g)

# Link global cv2 library file inside the virtual environment.
$(VENV_CV2): $(GLOBAL_CV2) venv
    cp $(GLOBAL_CV2) $@

venv: requirements.txt
    test -d venv || virtualenv venv
    . venv/bin/activate && pip install -r requirements.txt

test: $(VENV_CV2)
    . venv/bin/activate && python -c 'import cv2; print(cv2)'

clean:
    rm -rf venv

(You can copy-paste above snippet into a Makefile, but make sure to replace indentations with tab characters by running sed -i s:' ':'\t':g Makefile or similar.)

Now you can run the template:

echo "numpy==1.9.1" > requirements.txt
make
make test

Note that instead of symbolic link, we actually copy the .so file in order to avoid problem noted here: https://stackoverflow.com/a/19138136/1510289

linux - OpenCV and python/virtualenv? - Stack Overflow

python linux opencv virtualenv

I never solved this in Python; instead I switched to C++, where you get more OpenCV examples and don't have to work through a wrapper with less documentation.

And actually OpenCV-Python is a complete port of OpenCV-C++. Therefore, everything you can do in C++ can be done in Python as well, except for some performance issues. Due to the poorly documented opencv-py 2.4.x, you can hardly find anything you need in the documentation. What's for sure is that there is no DescriptorMatcher class, but there IS a class like FlannBasedMatcher in Python, which inherits from DescriptorMatcher and has add/train/match methods. You can try using this (see the sketch below).
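
Here is a minimal sketch of that add/train/match workflow, written against OpenCV 2.4's cv2.SIFT (newer versions spell it cv2.SIFT_create); the file names are placeholders:

import cv2

sift = cv2.SIFT()  # cv2.SIFT_create() in newer OpenCV versions
FLANN_INDEX_KDTREE = 1
matcher = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=5),
                                dict(checks=50))

# index descriptors from several training images
for name in ['train1.png', 'train2.png']:
    img = cv2.imread(name, 0)
    kp, des = sift.detectAndCompute(img, None)
    matcher.add([des])
matcher.train()

# match a query image against everything that was added
query = cv2.imread('query.png', 0)
kp, des = sift.detectAndCompute(query, None)
matches = matcher.knnMatch(des, k=2)

good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])  # pair[0].imgIdx tells which training image it hit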

Where needed, you can utilize C++ tutorials and translate them into Python if you're code savvy... Python is just too beautiful to leave...

python - OpenCV feature matching for multiple images - Stack Overflow

python opencv sift flann

It seems you are staying within OpenCV, so things are easy. If OpenCV is compiled properly, it is capable of delegating I/O and coding/decoding to other libraries (QuickTime among others), but ffmpeg is the best choice. You open, read and decode everything using the OpenCV API, which gives you the video frame by frame.

Make sure your OpenCV is compiled with ffmpeg support and then read the OpenCV tutorial on how to read/write AVI files. It's really easy.

Getting OpenCV to be built with ffmpeg support might be hard though. You might want to switch to an older version of OpenCV if you can't get ffmpeg running with the current one.

Personally, I would not spend time trying to read the video yourself; delegate the task to OpenCV. That's how it is supposed to be used (see the sketch below).
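
For reference, a minimal sketch of that frame-by-frame loop, in Python for brevity (the C++ cv::VideoCapture API mirrors it method for method, and the file name is a placeholder):

import cv2

cap = cv2.VideoCapture('input.avi')  # requires ffmpeg support compiled in

while True:
    ok, frame = cap.read()  # decode the next frame
    if not ok:
        break               # end of stream or decode failure
    cv2.imshow('frame', frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()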

I think OpenCV 2.1's ffmpeg support is quite broken as of now (an API change in ffmpeg). I had no opportunity to check the latest release, so I'm curious whether it has been fixed yet.

How to read .avi files C++ - Stack Overflow

c++ file-upload file opencv avi