In image measurement and machine vision applications, a geometric model of camera imaging must be established in order to determine the relationship between the three-dimensional position of a point on an object's surface and its corresponding point in the image. The parameters of this geometric model are the camera parameters. In most cases these parameters must be obtained through experiments and computation, and the process of solving for them is called camera calibration. Whether in image measurement or in machine vision, calibrating the camera parameters is a critical step: the accuracy of the calibration results and the stability of the calibration algorithm directly affect the accuracy of everything the camera is subsequently used for. Good calibration is therefore the prerequisite for all follow-up work, and improving calibration accuracy is an ongoing focus of research.
Camera calibration methods fall into three categories: traditional camera calibration methods, active-vision-based calibration methods, and camera self-calibration methods.
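For reference, here is a minimal sketch of the traditional, calibration-target-based approach using OpenCV's cv2.calibrateCamera. The chessboard pattern size, square size, and image folder are assumptions for illustration, not values from this article.

```python
import glob

import cv2
import numpy as np

# Assumed chessboard: 9x6 inner corners, 25 mm squares; adjust for your target.
pattern_size = (9, 6)
square_size = 25.0  # mm

# 3D coordinates of the corners in the board's own frame (all on the Z = 0 plane).
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.jpg"):  # assumed image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solve for the intrinsic matrix K, distortion coefficients, and per-view extrinsics (R, T).
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection error:", ret)
print("camera matrix K:\n", K)
```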
The world coordinate system is the absolute coordinate system of the whole setup: before a user coordinate system is established, the coordinates of all points on the screen are determined with respect to its origin.
It is generally defined to coincide with the coordinate system of the first camera, with its origin at that camera's optical center. The projection matrix of this camera is then P = [I | 0], i.e., the first camera is neither translated nor rotated with respect to the world coordinate system, and the poses of the other cameras are expressed as rotations and translations relative to the world coordinate system defined by this reference camera. The purpose of this convention is to keep the computation simple; the world coordinate system can of course be defined differently, but the calculations become more complicated.
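A short sketch of this convention, assuming a known intrinsic matrix K and an illustrative pose for a second camera (both are made-up example values):

```python
import numpy as np

# Assumed intrinsic matrix K (e.g. obtained from calibration).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# The first camera defines the world frame: no rotation, no translation, so P1 = K [I | 0].
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

# A second camera is described by its rotation R and translation t relative to camera 1.
R = np.eye(3)                        # assumed pose for illustration
t = np.array([[0.1], [0.0], [0.0]])  # 10 cm baseline along the x axis
P2 = K @ np.hstack([R, t])

# Projecting a world point (in homogeneous coordinates) with each camera:
X = np.array([0.0, 0.0, 2.0, 1.0])   # a point 2 m in front of camera 1
u1 = P1 @ X; u1 /= u1[2]
u2 = P2 @ X; u2 /= u2[2]
print(u1[:2], u2[:2])
```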
Machine vision uses machines in place of human eyes to make measurements and judgments.
A machine vision system uses a machine vision product to convert the observed target into an image signal, which is transmitted to a dedicated image processing system. Based on the pixel distribution, brightness, color and other information, the system converts the signal into digital form, performs various operations on it to extract the features of the target, and then controls the equipment on site according to the result of that discrimination.
Cameras are divided into two types according to their image sensor: CCD and CMOS.
Four coordinate systems are commonly used in the field of computer vision: the pixel coordinate system, the image coordinate system, the camera coordinate system, and the world coordinate system.
1. Pixel coordinate system:
The origin of the pixel coordinate system u-v is O0, at the top-left corner of the image; the u axis runs along the columns (to the right), the v axis runs along the rows (downward), and the units are pixels.
In the visual processing library OpenCV, u corresponds to x and v corresponds to y;
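A small illustration of this correspondence in OpenCV/NumPy; the image size and pixel values below are arbitrary:

```python
import cv2
import numpy as np

img = np.zeros((480, 640, 3), np.uint8)  # 640x480 image: 480 rows (v/y), 640 columns (u/x)

u, v = 100, 50  # pixel coordinates: u along the columns, v along the rows

# OpenCV drawing functions take points as (x, y) = (u, v) ...
cv2.circle(img, (u, v), radius=3, color=(0, 0, 255), thickness=-1)

# ... while NumPy array access is row-first, i.e. img[v, u].
print(img[v, u])  # [  0   0 255]
```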
2. Image coordinate system:
The origin of the image coordinate system x-y is O1, which lies at the center of the image (the principal point of the pixel coordinate system).
As shown in the figure:
Assume (u0, v0) are the coordinates of O1 in the u-v coordinate system, and dx and dy are the physical size of each pixel along the horizontal axis x and the vertical axis y, respectively;
Then the relationship between the image coordinate system and the pixel coordinate system is:
u = x/dx + u0,  v = y/dy + v0
3. Assuming the unit of the physical coordinate system is millimeters, the unit of dx is millimeters/pixel, so the unit of x/dx is pixels, the same as the unit of u.
For convenience,
write the above formula in matrix form, using homogeneous coordinates:
[u]   [1/dx   0    u0]   [x]
[v] = [  0   1/dy  v0] * [y]
[1]   [  0    0     1]   [1]
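As a sketch, the conversion in both directions can be computed with this matrix; the values of u0, v0, dx, dy below are assumed for illustration:

```python
import numpy as np

# Assumed example values: principal point (u0, v0) in pixels, pixel size dx, dy in mm/pixel.
u0, v0 = 320.0, 240.0
dx, dy = 0.005, 0.005

# Matrix form of the image-to-pixel relationship (homogeneous coordinates).
A = np.array([[1 / dx, 0,      u0],
              [0,      1 / dy, v0],
              [0,      0,       1]])

x, y = 0.25, -0.10            # a point in the image coordinate system (mm)
u, v, _ = A @ np.array([x, y, 1.0])
print(u, v)                   # the same point in pixel coordinates

# Inverse: pixel coordinates back to image coordinates.
x_back, y_back, _ = np.linalg.inv(A) @ np.array([u, v, 1.0])
print(x_back, y_back)
```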
4. Camera coordinate system:
As shown in the figure:
O is the camera's optical center,
Zc is the optical axis of the camera, which is perpendicular to the image plane;
OO1 is the focal length f of the camera;
5. The relationship between the camera coordinate system and the image coordinate system:
As shown in the figure, by similar triangles a point (Xc, Yc, Zc) in the camera coordinate system projects onto the image plane at x = f·Xc/Zc, y = f·Yc/Zc.
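A minimal sketch of this perspective projection, with an assumed focal length:

```python
import numpy as np

f = 0.008  # assumed focal length OO1 (8 mm), for illustration only

def camera_to_image(Pc, f):
    """Project a point (Xc, Yc, Zc) in the camera frame onto the image plane
    by similar triangles: x = f*Xc/Zc, y = f*Yc/Zc."""
    Xc, Yc, Zc = Pc
    return np.array([f * Xc / Zc, f * Yc / Zc])

Pc = np.array([0.1, 0.05, 2.0])   # a point 2 m along the optical axis Zc
print(camera_to_image(Pc, f))     # its image-plane coordinates (x, y), in the same units as f
```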
6. World coordinate system:
The world coordinate system is introduced to describe the position of the camera.
A rotation in any number of dimensions can be expressed as the product of an appropriate square (rotation) matrix and the coordinate vector.
The translation vector is the offset between the first coordinate origin and the second coordinate origin;
Transforming between the world coordinate system and the camera coordinate system involves two important parameters:
the rotation matrix R and the translation vector T.
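A small sketch of the world-to-camera transformation under one common convention (Pc = R·Pw + T); the values of R and T below are assumed for illustration:

```python
import numpy as np

# Assumed example extrinsics: a rotation R (here about the Z axis) and a translation T.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
T = np.array([0.5, 0.0, 1.0])   # offset between the two coordinate origins

Pw = np.array([1.0, 2.0, 3.0])  # a point in the world coordinate system

# World -> camera: rotate by R, then shift by the translation vector T.
Pc = R @ Pw + T
print(Pc)
```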