The main difficulty in a multi-camera vision system lies in unifying the coordinate systems of the cameras. Such systems fall into two categories: those in which the cameras' fields of view do not overlap, and those in which they do.
Systems without overlapping fields of view are mainly used for high-precision positioning and measurement over a large format, while systems with overlapping fields of view are mainly used for scene stitching and similar tasks.
1. Unify coordinates using a large calibration board
Scheme introduction:
This method uses a large calibration board to unify the coordinates of the cameras. The large board carries several small calibration boards whose relative positions are known, and each camera images one small board. The intrinsic and extrinsic parameters of each camera can then be calibrated from its small board, and each camera's coordinates can be converted into that board's coordinate system, thereby unifying the coordinates of all the cameras.
System structure:
Each camera takes Mark images at its position, and the Mark coordinates are obtained through image processing.
The picture above shows a single calibration board. The large calibration board consists of several single boards; their size and number are determined by the actual measurement task.
Schematic diagram of the combination of multiple calibration boards:
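As a minimal sketch of this idea (the names and the homography-based formulation are illustrative assumptions, not from the original text): a point measured in one camera is first mapped onto that camera's small board, then shifted by the board's known offset within the large board:

```python
# Sketch of method 1: unifying camera coordinates via a large calibration
# board. H_cam_to_board and board_offset_xy are assumed inputs.
import numpy as np

def pixel_to_global(pixel_xy, H_cam_to_board, board_offset_xy):
    """Map a pixel in one camera into the shared (large-board) frame.

    H_cam_to_board : 3x3 homography from this camera's image plane to its
                     own small calibration board (from per-camera calibration).
    board_offset_xy: known (x, y) position of this small board's origin
                     inside the large board, in world units (e.g. mm).
    """
    p = np.array([pixel_xy[0], pixel_xy[1], 1.0])
    q = H_cam_to_board @ p
    q /= q[2]                                   # dehomogenize: small-board coords
    return q[:2] + np.asarray(board_offset_xy)  # shift into the global frame
```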
Case analysis:
(1) Detection target analysis
The product to be measured has several measurement indicators, as shown in the figure below.
(2) Image acquisition
Four cameras are used to complete all the measurements; the captured images are shown in the figure below.
(3) Testing process
First use the two perpendicular edges in each image to calculate their intersection point; the four intersection points so obtained can then be used to calculate the values of L1 and L2, as shown in the figure below (taking the camera in the lower right corner as an example).
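A small sketch of this step (the edge-line fitting itself is assumed already done, and all values are placeholders): each fitted edge is written in homogeneous form ax + by + c = 0, the corner is the cross product of the two lines, and L1/L2 are distances between corners:

```python
# Sketch: intersect the two fitted edge lines of each corner image, then take
# distances between the four corner points to obtain L1 and L2.
import numpy as np

def line_intersection(l1, l2):
    """Intersect two lines given in homogeneous form (a, b, c): ax+by+c=0."""
    p = np.cross(l1, l2)          # homogeneous intersection point
    return p[:2] / p[2]

# Hypothetical fitted lines for one camera (already in the unified frame):
vertical   = np.array([1.0, 0.0, -120.0])   # x = 120
horizontal = np.array([0.0, 1.0, -80.0])    # y = 80
corner = line_intersection(vertical, horizontal)

# With the four corners c1..c4 from the four cameras:
# L1 = np.linalg.norm(c1 - c2); L2 = np.linalg.norm(c1 - c3)
```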
Precautions: This method requires the coordinates of the multiple cameras to be unified into one coordinate system, and each individual camera must also be calibrated on its own to guarantee accuracy. The number of reference points should be chosen according to the detection requirements, but it affects the measurement time and must be weighed appropriately.
Application field: Detection of panel sizes of mobile phones and tablets.
2. Unify coordinates using relative motion
Scheme introduction: This method uses the relative motion between the camera and the measured object to unify the camera coordinates. As long as one of the two, camera or measured object, is fixed, the coordinates at each position can be recorded and then unified through mathematical operations. Usually the cameras are fixed and the measured object is moved by a robot arm or another motion device, and the camera coordinate systems are then unified to the origin of the robot's coordinate system.
Overall structure:
Method introduction: Mark points in the images taken by the cameras are located to calculate the deviation of the measured object from its standard position, including the angular deviation and the displacement deviation, which finally determine the angle and distance the mechanical device must move. The positioning system for assembling a mobile phone touch screen onto a phone case is used here to introduce the algorithm principle. This system positions with multiple cameras rather than one, so that the touch screen and the case are assembled accurately. The cameras are divided into two groups of two: one group photographs the phone case (group 1) and the other photographs the touch screen (group 2). The calibration methods of the two groups are the same; the following describes the calibration of the two cameras that photograph the phone case. The cameras are fixed while the robot holds the phone case and moves it. The two cameras photograph the two positioning holes, which are identified by template matching, as shown in the figure below:
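A hedged sketch of the hole-locating step, using OpenCV's matchTemplate as one possible implementation (the original text does not name a library):

```python
# Sketch: locate a positioning hole (Mark) by normalized template matching.
import cv2

def find_mark(image_gray, template_gray):
    """Return the (row, column) center of the best template match."""
    res = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)           # best match, top-left (x, y)
    h, w = template_gray.shape
    return (max_loc[1] + h / 2.0, max_loc[0] + w / 2.0)  # center as (row, col)
```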
(1) Obtain the coordinates used for calibration by moving the robot (three-point linear calibration method)
Group 1 Camera 1:
Move the Mark point into the camera's field of view and determine the initial position; obtain the center coordinates of the initial Mark point, Point11 (cRow11, cColumn11). After the robot moves a fixed distance (5 mm) along the X direction, obtain the Mark center Point12 (cRow12, cColumn12); after the robot moves a fixed distance (6 mm) along the Y direction, obtain the Mark center Point13 (cRow13, cColumn13). At the same time, the robot's spatial coordinates at these three positions are recorded: Robot11 (X1[0], Y1[0]), Robot12 (X1[1], Y1[1]) and Robot13 (X1[2], Y1[2]).
Group 1 Camera 2:
Same as above: move the same Mark point into this camera's field of view and obtain, through the same operations, Point21 (cRow21, cColumn21), Point22 (cRow22, cColumn22), Point23 (cRow23, cColumn23), and Robot21 (X2[0], Y2[0]), Robot22 (X2[1], Y2[1]), Robot23 (X2[2], Y2[2]).
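With three image points and three robot positions per camera, the image-to-robot affine transform can be solved exactly. A sketch in pure NumPy (the commented call reuses the text's variable names as placeholders):

```python
# Sketch of the three-point linear calibration: solve the affine transform
# that maps the three image centers (row, col) to the three robot positions.
import numpy as np

def three_point_calibration(img_pts, robot_pts):
    """img_pts, robot_pts: 3x2 arrays of corresponding points.

    Returns a 2x3 affine matrix A with  robot = A @ [row, col, 1].
    """
    img_pts = np.asarray(img_pts, dtype=float)
    robot_pts = np.asarray(robot_pts, dtype=float)
    M = np.hstack([img_pts, np.ones((3, 1))])       # 3x3 design matrix
    return np.linalg.solve(M, robot_pts).T          # exact for 3 points, 2x3

# With the notation of the text:
# A1 = three_point_calibration(
#         [(cRow11, cColumn11), (cRow12, cColumn12), (cRow13, cColumn13)],
#         [(X1[0], Y1[0]), (X1[1], Y1[1]), (X1[2], Y1[2])])
```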
(2) Determination of the conversion ratio between image distance and actual distance:
The proportional relationship can be calculated through simple mathematical operations, since the robot's travel distances are known.
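For instance, a minimal sketch of that computation, using the known robot travel distances from step (1) (5 mm in X, 6 mm in Y):

```python
# Sketch: mm-per-pixel scale from a known robot move and the corresponding
# pixel displacement of the Mark center.
import math

def pixel_scale(p_before, p_after, robot_distance_mm):
    pixel_distance = math.hypot(p_after[0] - p_before[0],
                                p_after[1] - p_before[1])
    return robot_distance_mm / pixel_distance      # mm per pixel

# e.g. scale_x = pixel_scale((cRow11, cColumn11), (cRow12, cColumn12), 5.0)
```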
(3) Determination of the rotation center:
Since three points determine a circle, the robot is rotated three times at the initial position to obtain the coordinates of Robot31, Robot32 and Robot33, expressed in the robot coordinate system. From these three points, the coordinates of the rotation center can be calculated.
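A sketch of this computation as the circumcenter of the three robot positions (assuming they are not collinear):

```python
# Sketch: rotation center as the circumcenter of Robot31..Robot33
# (three points determine a circle).
import numpy as np

def circumcenter(p1, p2, p3):
    """Center of the circle through three 2-D points (not collinear)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]], dtype=float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(a, b)                   # (cx, cy)
```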
(4) Standard line slope:
A point must be selected in the field of view of each of the two cameras as the start and end points of the standard line, and then the slope of this standard line in the robot coordinate system is found. Adjust the robot to a suitable position and define this as the standard position. The two cameras of group 1 then image two different Mark points at this initial position. The center coordinates of the two Marks, Point10 (Row10, Column10) and Point20 (Row20, Column20), are found by template matching, and Point10 and Point20 are taken as the start and end points of the standard line. The coordinates of Point10 in the robot coordinate system are found as shown in the figure below, where XOY is the robot coordinate system and X1O1Y1 is the image coordinate system of group 1 camera 1.
Through the point-to-line distance, the actual lengths d14, d15 and d16 can be obtained. Since the same Mark point is used during the movement, the values of d1, d2 and d3 are the same in the fields of view of camera 1 and camera 2. Therefore, the actual coordinates of Point10 in the robot coordinate system are: Point10X = X1[1] + d16 + d2, Point10Y = Y1[1] + (d1 - d15). The actual coordinates of Point20 in the robot coordinate system are obtained in the same way:
Point20X = X2[1] + d26 + d2, Point20Y = Y2[1] + (d1 - d25).
The slope of the standard line is then:
K = (Point20Y - Point10Y) / (Point20X - Point10X)
  = (Y2[1] - Y1[1] - d25 + d15) / (X2[1] - X1[1] + d26 - d16)
After each positioning, the measured line is compared with the slope of this standard line to obtain the angle between them, and the rotation correction is then performed.
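As a small illustration (the function name is hypothetical), the correction angle follows from the arctangents of the two slopes, valid for non-vertical lines:

```python
# Sketch: signed rotation correction from measured slope k and standard slope K.
import math

def correction_angle_deg(k_measured, K_standard):
    """Angle (degrees) between the measured line and the standard line."""
    return math.degrees(math.atan(k_measured) - math.atan(K_standard))
```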
Note: Since Point10 and Point20 may fall at different positions, the above formulas change accordingly when calculating their actual coordinates, but the principle is the same.
Positioning after calibration: Each time a detection is performed, the obtained result is compared with the standard line, giving the angle to the standard line and the position deviation relative to the standard position; correction is then carried out according to the obtained angle and position deviation. During correction, rotate first and then translate, and a closed-loop feedback system should correct in real time.
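A minimal sketch of the rotate-then-translate order, assuming the rotation center and correction angle from the steps above (one iteration of the closed loop):

```python
# Sketch: rotate the part about the calibrated rotation center first, then
# compute the remaining translation to the standard position.
import numpy as np

def correction(measured_pt, standard_pt, center, angle_rad):
    """Rotate measured_pt about center by -angle_rad, then return the
    residual translation (dx, dy) that moves it onto standard_pt."""
    center = np.asarray(center, dtype=float)
    c, s = np.cos(-angle_rad), np.sin(-angle_rad)
    R = np.array([[c, -s], [s, c]])
    rotated = R @ (np.asarray(measured_pt, dtype=float) - center) + center
    return np.asarray(standard_pt, dtype=float) - rotated
```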
Application field: Assembly positioning of touch screens and cases of mobile phones or tablets.
1. Stitch images using the calibration method
Scheme introduction:
For some large-format objects, multiple images can be taken, each covering a different part of the object. If the cameras are calibrated and their relationship to a common world coordinate system is known, precise measurements can be made across the different images. The images can even be stitched into one large image covering the entire object by rectifying each image to the same measurement plane; on the resulting image, measurements can be performed directly in the world coordinate system. Image stitching diagram:
Installation: Two or more cameras must be installed on a stable platform, with each image covering part of the whole scene. The camera orientations can be arbitrary; they do not need to be parallel or perpendicular to the object surface. Adjust the camera focal lengths, lighting and overlapping areas so that a large reference object can cover the entire field of view. To ensure that the images can be stitched into one large image, there must be a small overlapping area between them. The overlap can be very small, since it only ensures that there are no gaps in the stitched result. The figure below is a schematic diagram of the overlapping area.
Calibration: The calibration can be divided into two steps. First, determine the internal parameters of each camera; each camera can be calibrated separately to find its intrinsics. Second, determine the external parameters of all cameras. Because all images must ultimately be converted into one world coordinate system, a large calibration object is needed that appears in all images; it can be composed of multiple calibration plates, their number matching the number of cameras. The picture below shows the calibration images taken by the two cameras. Note: to determine a camera's external parameters, a single calibration image per camera is sufficient. The calibration object must not be moved while the cameras take their calibration images; ideally, these images are acquired simultaneously.
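A hedged sketch of the second step with OpenCV (a library choice assumed here, not named in the original); the intrinsics K and dist are taken from a per-camera cv2.calibrateCamera run in the first step:

```python
# Sketch: extrinsics of one camera from a single image of the large
# calibrator, whose points are expressed in the shared world frame.
import cv2
import numpy as np

def camera_extrinsics(world_pts, img_pts, K, dist):
    """Pose (R, t) of one camera w.r.t. the shared world coordinate system."""
    ok, rvec, tvec = cv2.solvePnP(np.asarray(world_pts, dtype=np.float32),
                                  np.asarray(img_pts, dtype=np.float32),
                                  K, dist)
    R, _ = cv2.Rodrigues(rvec)     # rotation vector -> 3x3 rotation matrix
    return R, tvec
```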
Stitching single images into a large image: First, each image must be rectified; this converts the images into a common coordinate system so that they match correctly. After obtaining all the maps required for rectification, each pair of images taken by the two cameras can be rectified and efficiently stitched. The stitched result consists of the two rectified images, each occupying part of the result. The figure below shows the rectified images and the stitching result.
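A minimal sketch of this rectify-and-paste step; H1 and H2 are assumed to be the image-to-measurement-plane homographies (in output pixels) derived from the calibration above:

```python
# Sketch of calibrated stitching: rectify each image into the common
# measurement plane, then paste both into one large result image.
import cv2
import numpy as np

def stitch_calibrated(img1, img2, H1, H2, out_size):
    """H1, H2: 3x3 homographies; out_size: (width, height) of the result."""
    canvas = np.zeros((out_size[1], out_size[0], 3), dtype=np.uint8)
    for img, H in ((img1, H1), (img2, H2)):
        warped = cv2.warpPerspective(img, H, out_size)
        mask = warped.any(axis=2)         # pixels actually covered by warp
        canvas[mask] = warped[mask]       # later image wins in the overlap
    return canvas
```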
Application field: Detection of surface quality of LCD panels
2. Stitch images by the non-calibration method
Introduction: Compared with the first three methods, this method has lower accuracy and is suited to applications that do not require high-precision stitching. Its advantage is that no camera calibration is required, and the individual images can be arranged automatically.
Rules for shooting the single images:
(1) Adjacent images must overlap.
(2) The overlapping areas between images must have reasonably distinctive features, which ensures a more accurate automatic matching process. If the features within some overlapping regions are weak, this can be overcome by defining appropriate image pairs; if the whole object has few distinctive features, the overlapping areas should be larger.
(3) The scale ratios of overlapping images must be approximately equal; in general, the difference in scale must not exceed 5-10%.
(4) The brightness of the images should be similar, at least in the overlap areas; otherwise the seams between images will be clearly visible in the result, as shown in the figure below.
Defining overlapping image pairs: The overlapping image pairs must be defined, and the transformation between each pair is determined by matching; the matching process is applied only to these defined pairs.
If there are many images to stitch, or many images per row of the mosaic, it is important to arrange the configuration of the image pairs carefully; otherwise some images may not be matched exactly. This is mainly because noise-induced errors in the point coordinates prevent an exact computation of the transformation between images, and these errors propagate from one image to the next.
Extracting feature points: The number of extracted feature points affects both the running time and the matching result. The more feature points, the slower the matching process; but too few feature points increase the chance of incorrect results.
Matching feature points in the overlapping areas and determining the transformation between images: The most important task in stitching is the matching process between image pairs. Matched image pairs may differ by arbitrary translation and rotation; the only requirement is that they have approximately the same scale. If prior information about the translation and rotation between images is available, it can be used to restrict the search area, which speeds up matching and makes the algorithm more robust. The feature-matching process is shown in the figure below.
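A hedged sketch of this step using ORB features and a partial affine model in OpenCV (the feature type and library are assumptions; the model enforces the equal-scale requirement above):

```python
# Sketch: extract keypoints, match an overlapping pair, and estimate the
# rotation + translation + uniform scale between the two images.
import cv2
import numpy as np

def pair_transform(img1, img2, n_features=1000):
    orb = cv2.ORB_create(nfeatures=n_features)    # more points = slower match
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    # Partial affine = rotation + translation + uniform scale only.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M                                      # 2x3 transform img1 -> img2
```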
Generating the stitched image: Once the transformations between image pairs are known, the stitching function can be called to generate the result. The picture below shows a stitched image.
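For illustration, a simple compositing sketch using the 2x3 transform M estimated in the previous sketch (grayscale images assumed; a real stitcher blends the seam instead of overwriting):

```python
# Sketch: warp one image into the other's frame on a canvas large enough
# to hold both, then overwrite where the warp has content.
import cv2
import numpy as np

def stitch_pair(img1, img2, M, canvas_size):
    """canvas_size: (width, height) of the output mosaic."""
    canvas = np.zeros((canvas_size[1], canvas_size[0]), dtype=img2.dtype)
    canvas[:img2.shape[0], :img2.shape[1]] = img2  # reference image as-is
    warped = cv2.warpAffine(img1, M, canvas_size)  # img1 into img2's frame
    canvas[warped > 0] = warped[warped > 0]        # simple overwrite blend
    return canvas
```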
Note on spherical stitching: the above method is only applicable when the camera rotates about its optical center or zooms. If the camera motion involves translation, or the rotation is not strictly about the optical center, the stitching results will be inaccurate and cannot be used for precise measurement.
Application fields: (1) Stitching of street scenes. (2) Production of electronic maps. (3) Stitching of medical images.