Andres Vázquez Aviña | digital lab analyst | everis Mexico

3D Reconstruction with OpenCV and Python

Learn how OpenCV supports 3D reconstructions, including a sample app that moves a robotic arm.

OpenCV is a library for real-time computer vision. It offers extremely powerful functions that make it easy to process images and extract information from them. In this post, we will review some of the functions we used to perform a 3D reconstruction from an image in order to build an autonomous robotic arm.

OpenCV uses a pinhole camera model. This model projects 3D points onto the image plane using a perspective transformation.
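To make this concrete, here is a minimal sketch of that projection in NumPy; the camera matrix, pose, and 3D point below are made-up values for illustration only:

    import numpy as np

    # Hypothetical intrinsic camera matrix (focal lengths fx, fy; principal point cx, cy)
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # Hypothetical pose: no rotation, camera 5 units away along Z
    R = np.eye(3)
    t = np.array([[0.0], [0.0], [5.0]])

    # A 3D point in world coordinates
    X = np.array([[1.0], [2.0], [0.0]])

    # Perspective projection: x ~ K (R X + t), then divide by the last coordinate
    x = K @ (R @ X + t)
    u, v = (x[:2] / x[2]).ravel()
    print(u, v)  # pixel coordinates of the projected point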

OpenCV includes several functions that help us accomplish our goal. They calibrate the camera with a chessboard pattern, so the first step is to photograph a chessboard. We took several photographs in order to improve our calibration.

[Code: chessboard corner detection and camera calibration setup]
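The original snippet was embedded as an image; below is a minimal sketch of what that calibration code typically looks like, assuming a board with 9x6 inner corners and photos matching left*.jpg (both assumptions on our part):

    import glob
    import cv2
    import numpy as np

    # Termination criteria for the iterative corner refinement
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

    # Prepare the object points: (0,0,0), (1,0,0), ..., (8,5,0) for a 9x6 board
    objp = np.zeros((9 * 6, 3), np.float32)
    objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

    objpoints = []  # 3D points in real-world space
    imgpoints = []  # 2D points in the image plane

    for fname in glob.glob('left*.jpg'):
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Find the inner corners of the chessboard
        ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
        if ret:
            objpoints.append(objp)
            # Refine the corner locations to sub-pixel accuracy
            corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
            imgpoints.append(corners2)
            # Draw the detected corners for visual inspection
            cv2.drawChessboardCorners(img, (9, 6), corners2, ret)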

So, we have seen a fragment of code. Now let’s see what it does.

In the first lines, we import the necessary libraries. Next, we define the termination criteria for OpenCV's iterative algorithms; we use these criteria later in the cornerSubPix() function.

The next variables are for preparing the object points. We then define some empty arrays in order to store the object points and image points from all of our chessboard images.

Then, we open all of our images in grayscale and pass them to the function findChessboardCorners(). We give this function three arguments. The first is the image, which must be an 8-bit grayscale image. The second is the number of inner corners per chessboard, which must be a tuple (points_per_row, points_per_column). The third is an optional output array for the corners; we pass None and take the detected corners from the return value instead.

The next step is to refine the corner locations with the cornerSubPix() function. Then we draw the corners on the chessboard. The next image shows the chessboard corners we found, drawn on one of the calibration photos.

[Image: detected chessboard corners drawn on a calibration photo]

We have now collected our object points and image points, so we are ready to proceed to the camera calibration using the function calibrateCamera(). This function returns all the parameters we need for the 3D reconstruction, such as the camera matrix, the distortion coefficients, and the rotation and translation vectors.
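A minimal sketch of that call, assuming the objpoints and imgpoints lists from the sketch above and that gray still holds the last grayscale image:

    # Returns the RMS reprojection error, camera matrix, distortion coefficients,
    # and one rotation and translation vector per calibration image
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        objpoints, imgpoints, gray.shape[::-1], None, None)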

[Code: corner refinement, pose estimation with solvePnPRansac, and box projection]

We now have the camera parameters, and will use them for the 3D reconstruction.

In the first line, we refine the corner locations once more to improve their accuracy, and then we apply the camera parameters obtained previously. With the function solvePnPRansac(), OpenCV finds the object pose from 3D-2D point correspondences, using an iterative method to estimate the parameters of the mathematical model from a set of observed data that contains outliers. In the third line, we define the corner points of a box so we can project it into the picture. The final step is to project those points and draw them using the draw() function.
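That snippet was also embedded as an image; the sketch below follows the steps just described, assuming mtx and dist come from the calibration above and that draw() renders the projected box edges (its body here is our own illustration, not the original code):

    # 3D corners of a box sitting on the chessboard (side length of 3 squares)
    axis = np.float32([[0, 0, 0], [0, 3, 0], [3, 3, 0], [3, 0, 0],
                       [0, 0, -3], [0, 3, -3], [3, 3, -3], [3, 0, -3]])

    def draw(img, imgpts):
        # Draw the bottom face, the vertical edges, and the top face of the box
        imgpts = np.int32(imgpts).reshape(-1, 2)
        img = cv2.drawContours(img, [imgpts[:4]], -1, (0, 255, 0), -3)
        for i, j in zip(range(4), range(4, 8)):
            img = cv2.line(img, tuple(map(int, imgpts[i])),
                           tuple(map(int, imgpts[j])), (255, 0, 0), 3)
        img = cv2.drawContours(img, [imgpts[4:]], -1, (0, 0, 255), 3)
        return img

    # Refine the corner locations, then recover the board pose with RANSAC
    corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    ret, rvecs, tvecs, inliers = cv2.solvePnPRansac(objp, corners2, mtx, dist)

    # Project the 3D box points onto the image plane and draw them
    imgpts, jac = cv2.projectPoints(axis, rvecs, tvecs, mtx, dist)
    img = draw(img, imgpts)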

[Image: input photo (left) and the same photo with the projected axes drawn on it (right)]

In the figure above, the left side shows the input image and the right side shows the same image with the projected axes added.

Case: Autonomous Robotic Arm

With these 2D-3D projections, we can recover the spatial coordinates of an object from an image. We chose to implement this OpenCV pipeline in order to build an autonomous robotic arm. To make the arm autonomous, we need an object detection model that lets it identify and locate the objects to be picked up.
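As an illustration of how a detection in the image can be mapped back to world coordinates, here is a hypothetical helper (our own sketch, not from the original code) that back-projects a pixel onto the Z = 0 plane of the chessboard, using the calibration results:

    import numpy as np
    import cv2

    def pixel_to_plane(u, v, mtx, rvec, tvec):
        # Back-project pixel (u, v) onto the Z = 0 plane of the chessboard frame
        R, _ = cv2.Rodrigues(rvec)
        lhs = np.linalg.inv(R) @ np.linalg.inv(mtx) @ np.array([[u], [v], [1.0]])
        rhs = np.linalg.inv(R) @ tvec.reshape(3, 1)
        s = rhs[2, 0] / lhs[2, 0]       # scale that lands the ray on Z = 0
        return (s * lhs - rhs).ravel()  # (X, Y, ~0) in chessboard coordinates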

For this purpose, we use the Mask R-CNN model for object detection and instance segmentation, implemented on Keras and TensorFlow. Please visit the model's GitHub repository for more specific information.
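Assuming the widely used Matterport implementation of Mask R-CNN and pre-trained COCO weights (both assumptions on our part), running a detection looks roughly like this:

    import cv2
    from mrcnn.config import Config
    from mrcnn import model as modellib

    class InferenceConfig(Config):
        # Hypothetical inference settings for the 80-class COCO model
        NAME = "robot_arm_inference"
        NUM_CLASSES = 1 + 80
        GPU_COUNT = 1
        IMAGES_PER_GPU = 1

    model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir="logs")
    model.load_weights("mask_rcnn_coco.h5", by_name=True)

    # Load a test image (placeholder path) and convert BGR -> RGB
    image = cv2.cvtColor(cv2.imread("test.jpg"), cv2.COLOR_BGR2RGB)

    # r holds bounding boxes, class ids, scores, and segmentation masks
    results = model.detect([image], verbose=0)
    r = results[0]  # r['rois'], r['class_ids'], r['scores'], r['masks']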

In addition to the Mask R-CNN model, we use an Arduino Mega for the arm control: we know the positions of the objects, and now we need to move the arm to grasp them. So we wrote an Arduino function to control each of the four arm motors.

Each motor has a potentiometer working as a goniometer. This works by computing the relationship between the voltage at the potentiometer and the motor's angular position. It is very easy to obtain this relationship with a linear regression, which can be done in Excel or OriginLab. Once we have the relationship, we use it in the Arduino code. So let's prepare a piece of code:

[Code: Arduino function for driving one of the arm motors]

The previous image shows one of the functions we used in the Arduino program. Each motor has a function like this.
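As an aside, the voltage-to-angle fit mentioned above does not have to be done in a spreadsheet; purely as an illustration, with made-up readings, the same linear regression can be computed in Python:

    import numpy as np

    # Hypothetical calibration samples: potentiometer voltages and measured arm angles
    voltages = np.array([0.45, 0.90, 1.35, 1.80, 2.25])
    angles = np.array([0.0, 30.0, 60.0, 90.0, 120.0])

    # Fit angle = slope * voltage + intercept
    slope, intercept = np.polyfit(voltages, angles, 1)
    print(slope, intercept)  # constants to hard-code into the Arduino sketch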

[Image: setup of the robotic arm, the chessboard, and the webcam]

The image above shows our setup of the robotic arm and the chessboard. We used a webcam positioned in front of the arm to conduct the camera calibration and the 3D reconstruction.

