
Wednesday, January 12, 2022

DATA PROCESSING AND CALIBRATION OF A CROSS-PATTERN STRIPE PROJECTOR

 

2 PHYSICAL SETUP FOR DENSE 3-D SURFACE ACQUISITION

2.1 Classification of Triangulation based Sensors

Dense surface acquisition is one of the most challenging tasks in computer vision. Active research over the last two decades has led to a variety of high-speed and high-precision sensors, including stereo camera systems, laser range finders and stripe projection systems.

The class of triangulation based sensors observes the object from at least two different angles. In order to obtain three-dimensional measurements, point correspondences have to be established, allowing the 3-D shape to be reconstructed in a manner analogous to human stereo vision.

The family of triangulating sensors can be further subdivided into active and passive triangulation systems. Active triangulation systems illuminate the scene rather than relying on natural or uncontrolled lighting.

A stereo camera is the prime example of passive optical triangulation. For stereo vision, two or more cameras are used to view a scene. Determining the correspondences between the left and right views by means of image matching, however, is a slow process. For faithful 3-D reconstruction of objects, passive stereo vision techniques depend heavily on cooperative surfaces, chiefly on the presence of surface texture. The fact that most industrial parts lack this feature reduces the usefulness of passive stereo in an industrial context.

However, since the underlying measurement principle is strictly photogrammetric, these methods enjoy all the advantages of photogrammetric systems, such as the possibility of multiple observations (geometrically constrained matching), self-calibration, robustness and self-diagnosis.

To overcome the need for cooperative surfaces and to speed up the evaluation steps, active triangulation systems project specific light patterns onto the object. The light patterns are distorted by the object surface. These distorted patterns are observed by at least one camera and then used to reconstruct the object's surface.

Some of the most widely used active triangulation techniques are:

• Light dot range finders: A single laser point is projected onto the surface and observed by a camera. If the position and orientation of the light source are known, a single 3-D point can be computed by intersection. For dense surface measurement, the light-dot must scan the surface.

• Light stripe range finders: A single laser line is projected onto the surface and observed by a camera. If the position and orientation of the light plane are known, a whole profile of 3-D points can be computed by intersection. For dense surface measurement, the stripe must scan the surface.

• LCD shutter devices: A light projector shining through a computer controlled LCD screen is used to illuminate a surface in much the same way as in light-stripe range finder systems. The LCD effectively allows the surface to be scanned without needing to move the object. LCD shutter devices are suitable for measuring stationary objects, and are generally faster than light stripe systems for this type of task. Depth of field issues, however, make LCD devices less robust than light stripe devices.

• Moiré devices: A set of fringe patterns is projected using an interference technique. The contours of the fringes are used to reconstruct the object’s surface. Moiré devices are appropriate for the precise acquisition of smooth surfaces with few discontinuities.

 

2.2 Sensor Architecture

We use an LCD-type projector (ABW LCD 640 Cross, Wolf (1996)) for our experiments. The line pattern is generated by switching lines on a two-dimensional LCD illuminated from behind. This type of projector has the advantage of having no moving parts.

While normal LCD stripe projectors use two glass plates with precisely aligned conducting stripes (Figure 1 (b)), a cross-pattern projector has one of the glass plates turned by 90 degrees (Figure 1 (c)). Since all stripes can be switched individually, arbitrary vertical and horizontal stripe patterns can be generated (although arbitrary 2-D patterns cannot, since the projected 2-D pattern always results from an XOR of the two line patterns).
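As an illustration (a minimal NumPy sketch with hypothetical line-state vectors), the projected 2-D pattern follows from the two 640-line patterns like this:

```python
import numpy as np

# Hypothetical on/off states of the 640 vertical and 640 horizontal lines.
vertical = np.zeros(640, dtype=bool)
horizontal = np.zeros(640, dtype=bool)
vertical[::6] = True        # e.g. every 6th vertical line switched on
horizontal[100:200] = True  # e.g. a band of horizontal lines switched on

# The projected 2-D pattern is the XOR of the two line patterns:
# a position is bright wherever exactly one of the two lines is on.
pattern = np.logical_xor.outer(horizontal, vertical)  # shape (640, 640)
```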

In the context of a photogrammetric evaluation, this means that the projector can be modeled as an inverse camera delivering 2D “image” coordinates.

Our projector features an LCD with 640×640 lines, a line spacing of 90 µm and a 400 W halogen light source. According to the specification, patterns can be switched in 14 milliseconds, making it feasible to acquire images in video real time. Nevertheless, using high quality cameras, we found that the latency to completely replace one pattern by another is about 50 ms. Commands and pattern sequences can be sent to the projector via an RS-232 interface.

In our previous experiments, we used a standard video camera (SONY XC75) with a 1/2” imager and approximately 8 µm pixel size, grabbed with an ELTEC frame grabber at 748×576 pixels. This camera has been replaced by a pair of high quality digital cameras (Basler A113) with 2/3” imagers, 6.7 µm pixel size at 1300×1000 pixels and 12 mm Schneider-Kreuznach lenses.

Projector and camera were mounted on a stable aluminum profile with a fixed, but unknown, relative orientation (Figure 2).

3 DATA ACQUISITION AND PROCESSING

3.1 Introduction to Coded Light Techniques

In light sectioning using light stripe range finders, a single line is projected onto the object (Figure 1 (a)) and then observed by a camera from another viewpoint. Tracking the bright line appearing in the image will yield all parallaxes and thus object depth. To recover the object's 3-D geometry, many lines have to be projected under different angles (Figure 1 (b)), which can be accomplished either by projecting an array of parallel lines simultaneously or by projecting different lines in temporal succession.

The first method has the disadvantage that the lines have to be projected close together for a dense reconstruction. In this case, a correspondence problem arises, especially at steep surfaces or jump edges. Projecting in temporal succession means that for n different light section angles, n images have to be acquired, where n may be on the order of several hundred to several thousand.

This problem can be elegantly solved by the use of coded light techniques, which require only on the order of ld n (i.e. log₂ n) images to resolve n different angles (see Figure 4 for n = 8). Rather than projecting a single line per image, a binary pattern is used. This technique was initially proposed by Altschuler (Altschuler et al. 1979, Stahs and Wahl 1990).

 

3.2 Solutions to the Correspondence Problem – Phase Shift Processing

3.2.1 Projection of Gray Code

For structured light analysis, projecting a Gray code is superior to a binary code projection (see Figure 5, top). On the one hand, successive numbers of the Gray code differ in exactly one bit. Thus, a decoding error, which is most likely to occur at locations where one bit switches, introduces a misplacement of at most one resolution unit. On the other hand, the width of the bright and dark lines in the finest-resolution pattern is twice that of the binary code. This facilitates analysis, especially at steep object surfaces where the code appears compressed.
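As a short sketch, the standard binary-reflected Gray code assumed here (the construction is not spelled out above) can be encoded and decoded as follows:

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code: successive values differ in one bit."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code by cascading XORs of the shifted value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Example: stripe indices 0..7 need ld 8 = 3 bit patterns (cf. Figure 4).
codes = [gray_encode(i) for i in range(8)]  # [0, 1, 3, 2, 6, 7, 5, 4]
assert all(gray_decode(c) == i for i, c in enumerate(codes))
```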

3.2.2 Combination of Gray Code and Phase Shift

To obtain a resolution beyond the number of lines which can be switched by the projector, phase shifting can be applied. This uses the on/off intensity pattern generated by the switched projector lines as an approximation of a sine wave. The pattern is then shifted in steps of π/2 for a total of N = 4 pattern positions. Approximating the sampled values f(φi) at a certain fixed position (Figure 5, bottom) by

f(φ) = A + C cos(φ + φ0),

the phase offset follows from the four samples as

φ0 = arctan((f(φ3) − f(φ1)) / (f(φ0) − f(φ2))),

and propagating an image noise of σg through this expression gives σφ0 = σg / (√2 C). Thus, assuming σg = 2 and C = 25 (which corresponds to a modulation of 50), σφ0 = 0.057 is obtained. Since 4 projector lines are used for the range [0, 2π] (Figure 5), this transforms to 0.036 or 1/28 of the line width.
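A minimal sketch of this four-step evaluation (NumPy; the arrays f0…f3 are hypothetical names for the four exposures shifted by 0, π/2, π and 3π/2):

```python
import numpy as np

def phase_from_four_steps(f0, f1, f2, f3):
    """Per-pixel phase from four pi/2-shifted images.

    With f_i = A + C*cos(phi0 + i*pi/2):
        f3 - f1 = 2*C*sin(phi0),   f0 - f2 = 2*C*cos(phi0),
    so the additive term A cancels out.
    """
    phi0 = np.arctan2(f3 - f1, f0 - f2)            # wrapped phase in (-pi, pi]
    modulation = 0.5 * np.hypot(f3 - f1, f0 - f2)  # amplitude C per pixel
    return phi0, modulation
```

Pixels with low modulation can be rejected outright, since the phase estimate degrades as C decreases (cf. σφ0 = σg / (√2 C)).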

Thus, from theoretical analysis, the accuracies which can be obtained by phase shifting are comparable to those in photogrammetry.

One problem using phase shift measurements is that small errors in phase measurement near the changeover from 2π to 0 can cause large measurement errors with a magnitude of about 1 period or 4 projector lines (Figure 6 (b)).

This problem can be solved by an oversampling technique. If the resolution of the projected Gray code is half the length of a period, we can specify valid ranges for the phase measurements observed at a specific pixel position. This allows us to detect and even correct gross measurement errors (Figure 6 (c)), as sketched below.
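Under the stated assumptions (one phase period spans 4 projector lines, and the Gray code resolves half a period, i.e. 2 lines), the correction might be sketched as follows; the function name and interface are illustrative:

```python
import numpy as np

TWO_PI = 2.0 * np.pi
PERIOD = 4.0    # projector lines per phase period (Figure 5)
CODE_RES = 2.0  # projector lines per Gray code cell (half a period)

def absolute_stripe(phase, gray_index):
    """Combine the wrapped phase with the decoded Gray code index.

    The coarse position is gray_index * CODE_RES lines. Choosing the
    number of whole periods that brings the fine (phase) position
    closest to the coarse one detects and corrects the errors of about
    one period that occur near the 2*pi -> 0 changeover (Figure 6).
    """
    fine = (phase % TWO_PI) / TWO_PI * PERIOD  # position within one period
    coarse = gray_index * CODE_RES             # coarse absolute position
    k = np.round((coarse - fine) / PERIOD)     # number of whole periods
    return k * PERIOD + fine                   # absolute projector line
```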

However, it has to be taken into account that non-uniform object surface properties, such as a sharp change from black to white, result in systematic measurement errors. Also, since the camera pixels effectively integrate over a certain area of the stripe code, the above error estimate only holds if the camera resolution is sufficiently higher than the projector resolution.

Another problem arises if we want to combine measurements from different cameras using the same projector. The phase shift method yields, for each camera pixel, the corresponding projector stripe number with sub-stripe accuracy. This means there is no direct link between image coordinates acquired by different cameras, although such a link could improve the accuracy and reliability of the resulting surface points.

 

3.3 Solutions to the Correspondence Problem – Line Shift Processing

3.3.1 Requirement Analysis

Based on the experience gained with our previous system, we performed a detailed analysis of all requirements for the design of our new method. We identified the following requirements, which to date no single system has met in full:

• Ability for automation: The system should be able to adapt automatically to different objects without user interaction.

• Efficiency: Dense surface measurements of some 100,000 surface points should be feasible within a couple of seconds.

• Versatility: The method should be applicable to a wide range of sensor configurations, particularly the combination of multiple cameras with a calibrated or uncalibrated projector.

• Insensitivity to reflectance properties of object surface: Changes in surface reflectance should not cause systematic errors in range measurement.

• Accuracy information for 3-D coordinates: In addition to the object point coordinates, statistical measures for the quality of point determination should be available and used by further processing steps.

• Consistency and reliability: The system should prefer consistent and reliable points over complete coverage of the object’s surface, especially in the presence of unfavorable lighting conditions and reflective surfaces.

 

3.3.2 Line Shift Processing

To meet the formulated requirements, we have developed a new method, called line shift processing, to solve the correspondence problem fast and precisely.

As before, efficiency is achieved by making use of the highly parallel nature of our projection unit. However, inherent problems in phase shift measurements made us create a new pattern design. We project a sequence of parallel lines, generated by illuminating every nth projector line. For our experiments, we have chosen n = 6.

The evaluation of the so-called line shift images is performed similarly to that of images obtained with a light stripe range finder (Section 3.3.3). Six images for the x and six images for the y coordinates are needed to exploit the full resolution provided by the projector.

After the line centers have been detected, the Gray code sequence is used to resolve ambiguities and uniquely determine the projector line number. An oversampling technique, similar to the one used in phase shift processing, is used to make the ambiguity resolution more robust.

In the next step, we intersect the lines joining the detected stripe centers to obtain camera coordinates with sub-pixel accuracy for each projector coordinate.

The transition from camera images to projector images is one of the major differences between the two methods.

Performing the same steps for an arbitrary number of camera/projector combinations immediately gives us not only the correspondences between image points of a single camera/projector combination but also corresponding points between any of the cameras linked by a common projector.

 

3.3.3 Locating the Stripe Centers

Considerable research has been carried out, mainly in the computer vision community, on determining the center of a light stripe efficiently and with sub-pixel accuracy. Trucco et al. (1998) compare five major algorithms with respect to the bias introduced by the peak detector and evaluate their theoretical and practical behavior under both ideal and noisy conditions.

All of the considered algorithms determine the peak position by fitting a 1-D curve to a small neighborhood of the maximum of the stripe cross-section, assuming a Gaussian intensity profile.


In summary, the results were that all but some small center-of-mass filters are reasonably unbiased, and that a Gaussian approximation developed by those authors as well as two detectors developed by Blais and Rioux (1986) performed well even under severe noise conditions.

Taking these results into account, we implemented our peak detection algorithm based on the detector of Blais and Rioux. After removing the effects of ambient lighting, the images are convolved with a fourth- or eighth-order linear filter; the fourth-order operator is

g(i) = f(i−2) + f(i−1) − f(i+1) − f(i+2).

We estimate the peak center position by linear interpolation at the positions where the convolved image changes sign. In this way, sub-pixel accuracy estimates of the peak center position are obtained.
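A sketch of this step, assuming the fourth-order operator given above (amplitude thresholding and border handling are omitted for brevity):

```python
import numpy as np

def stripe_centers(profile):
    """Sub-pixel stripe centers along one image row.

    profile: 1-D intensity array with ambient light already removed.
    """
    # Fourth-order operator g(i) = f(i-2)+f(i-1)-f(i+1)-f(i+2);
    # np.convolve flips its kernel, so pass the coefficients reversed.
    kernel = np.array([1.0, 1.0, 0.0, -1.0, -1.0])
    g = np.convolve(profile, kernel[::-1], mode="same")

    centers = []
    for i in range(2, len(g) - 3):
        # The response crosses zero (negative to positive) at a peak.
        if g[i] < 0.0 <= g[i + 1]:
            # Linear interpolation of the zero crossing between i and i+1.
            centers.append(i + g[i] / (g[i] - g[i + 1]))
    return np.array(centers)
```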

 

3.3.4 Correction for Reflectance Discontinuities

Reflectance discontinuities on the object surface are a major source of errors for most 3-D sensors based on the evaluation of structured light patterns. In previous work, Curless and Levoy (1995) introduced space-time analysis to address this problem for a laser stripe range finder.

Implementing space-time analysis for a stripe projector, however, means establishing correspondences between camera coordinates and sub-stripe projector coordinates. As a consequence, we would lose the direct link between different cameras seeing the same projector pattern.

In contrast to laser stripe range finders, our system allows us to acquire an “all-white” image of the work space. The information gathered from this image and from a second image containing only the effects of ambient lighting allows us to normalize the stripe images. This step considerably reduces the influence of reflectance discontinuities on the object surface. For example, Figure 8 shows the effect of a strong surface intensity variation on the results of phase and line shift processing. In this example, the word “Intensity” is printed in black letters on a white, planar surface. As a result, phase shift processing yields height variations of up to 800 µm. Using line shift processing with intensity normalization, this is reduced to 120 µm.
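A minimal sketch of such a normalization (NumPy; the array names are illustrative):

```python
import numpy as np

def normalize_stripe_image(stripe, white, ambient, eps=1e-6):
    """Normalize a stripe image using the 'all-white' and ambient images.

    The result approximates the projected pattern alone, in [0, 1],
    largely independent of the local surface reflectance.
    """
    signal = stripe.astype(np.float64) - ambient
    dynamic = np.maximum(white.astype(np.float64) - ambient, eps)
    return np.clip(signal / dynamic, 0.0, 1.0)
```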

3.3.5 Improvement of Accuracy, Reliability and Flexibility by Use of Multiple Cameras

Accuracy and reliability can be significantly improved if image data from multiple cameras is available, all sharing the same projector.

Because line shift processing directly links projector lines to camera coordinates, corresponding pairs of image points between different cameras, as well as pairings of image points between the cameras and the projector, can easily be found. Each pair contributes four observations to solve for the three unknown object point coordinates. It is obvious that a larger number of observations, corresponding to a higher redundancy, yields more accurate results.

The data obtained by multiple cameras can also be used to enforce explicit consistency tests. Specular reflections, as they are likely to occur on the surfaces of machined metal parts, cause spurious 3-D point measurements. The consistency tests are based on the observation that specular reflections are viewpoint dependent. If the cameras view the object from different angles, we can compute the deviation between object points computed from different pairings of image points. This allows us to either discard a single observation or discard all observations for the corresponding surface point.

If multiple cameras are available, we can also omit the projector's contribution to the observations and use it merely as an aid to establish point correspondences between the cameras. This variant makes it possible to use projection devices that have not been specifically designed for measurement purposes, in the same way as our projector is used in an uncalibrated setup. A particularly attractive solution combines a standard video beamer with a stereo camera pair to collect surface information of large-scale objects for virtual reality applications.

It has to be noted that in a setup where only the cameras deliver image point observations, errors due to reflectance discontinuities on the object surface are inherently accounted for.

 

3.4 Computation of Object Point Coordinates

3-D point determination is carried out using a forward intersection based on the extrinsic and intrinsic parameters obtained from the bundle adjustment. Forward intersection uses a minimum of four observed image coordinates to estimate three world coordinates. Using more than one camera and a calibrated projector, redundancy, and thereby accuracy and reliability, can be significantly improved. The same holds true for a combination of more than two cameras and an uncalibrated projector.

In every case, object point coordinates are obtained by inverting a 3×3 normal equation matrix; its inverse afterwards contains the covariance information.
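A sketch of such a computation under common assumptions (a linear, DLT-style forward intersection with known 3×4 projection matrices; the actual adjustment may be parametrized differently):

```python
import numpy as np

def forward_intersection(points_2d, proj_mats):
    """Least-squares intersection of two or more image rays.

    points_2d: list of (x, y) observations; proj_mats: matching list of
    3x4 projection matrices. Each observation contributes two equations
    linear in the object point X:
        (x * P[2] - P[0]) . (X, 1) = 0
        (y * P[2] - P[1]) . (X, 1) = 0
    """
    rows, rhs = [], []
    for (x, y), P in zip(points_2d, proj_mats):
        for row in (x * P[2] - P[0], y * P[2] - P[1]):
            rows.append(row[:3])
            rhs.append(-row[3])
    A, b = np.asarray(rows), np.asarray(rhs)
    Q = np.linalg.inv(A.T @ A)   # inverse of the 3x3 normal matrix
    X = Q @ (A.T @ b)            # estimated object point
    r = A @ X - b                # residuals
    dof = len(b) - 3
    sigma0_sq = (r @ r) / dof if dof > 0 else float("nan")
    return X, sigma0_sq * Q      # point and its covariance matrix
```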


4 SYSTEM CALIBRATION

4.1 Direct Calibration vs. Model Based Calibration

There are two basic approaches for the calibration of an optical 3-D system.

Direct calibration uses an arbitrary calibration function (usually a polynomial) to describe the mapping from observations to three-dimensional coordinates. The parameters of this function are obtained by measuring a large number of well-known points throughout the measuring volume. An immediate advantage is that no care has to be taken to model any particular phenomenon, since every source of error is implicitly handled by the computed coefficients.

However, direct calibration requires a highly accurate calibration normal. Especially for sensors with a large measurement volume, this requirement complicates the calibration procedure or even makes it impossible. Moreover, since the calibration function acts as a black box, there is no information about the quality of the measurements.

In model based calibration, the parameters of a geometric model of the sensor, the so-called intrinsic and extrinsic parameters, are determined. The model describes how points in 3-D space are projected onto the image plane, taking imperfect cameras and lenses into account. Techniques exist in photogrammetry to estimate these parameters simultaneously during measurement tasks; most commonly, however, a specially designed test object is used to compute the desired quantities from a few calibration measurements. Since any short-term geometrically stable object can be used for calibration, there is no need for an accurate calibration normal. Nonetheless, if absolute measurements are required, at least one accurate distance (e.g. from a scale bar) is needed to fix the scale.

On the down side, highly accurate measurements require complicated sensor models and some effects in the image formation process might remain uncorrected.

In a previous paper (Brenner et al., 1999) we compared polynomial depth calibration, a standard direct calibration technique, with model-based photogrammetric calibration. We concluded that both calibration procedures yield comparable accuracies. However, in our opinion it is advantageous to obtain the model parameters explicitly. The fact that the model parameters hold for the entire measurement volume of the sensor avoids problems with measurements lying outside the volume originally covered by calibration. In addition, the residuals and the covariance matrix give a clear diagnosis for the determination of the calibration parameters and object point coordinates. Finally, point correspondences established between multiple cameras can increase redundancy and thus give more accurate results.

4.2 Photogrammetric Calibration

Since the projector is able to project horizontal and vertical patterns, it can be modeled as an inverse camera. Two-dimensional image coordinates can be obtained by phase shift and line shift processing. Thus, the projector can be calibrated using a planar test field and a convergent setup.

The test field we use consists of an aluminum plate, onto which we fixed a sheet of self-adhesive paper showing white dots on a black background. Five of these targets are surrounded by white rings, which allow the orientation of the test field to be determined. In the next step, all visible target points are measured and identified fully automatically. Image coordinates for the camera are then obtained by computing the weighted centroid. Finally, the corresponding projector coordinates are computed with sub-pixel accuracy by sampling at the centroid positions.

At present, we export these measurements and compute the bundle solution for the calibration parameters externally using the “Australis” software package from the Department of Geomatics of the University of Melbourne.

We use a camera model with 10 parameters, namely the focal length c, the principal point offsets ∆x and ∆y, K1, K2 and K3 for radial symmetric distortion, P1 and P2 for decentering distortion, and finally B1 and B2 for scale and shear (Fraser, 1997).
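As an illustrative sketch only (sign and normalization conventions vary between implementations, and the exact Australis formulation may differ), the corrections can be applied to a measured image point as follows:

```python
def correct_image_point(x, y, params):
    """Apply the 10-parameter correction to an image point (x, y).

    params: (c, dx, dy, K1, K2, K3, P1, P2, B1, B2). The focal length c
    enters the collinearity equations rather than the correction terms;
    the affinity terms B1, B2 act on x only, as in Fraser (1997).
    """
    c, dx, dy, K1, K2, K3, P1, P2, B1, B2 = params
    xb, yb = x - dx, y - dy                 # refer to the principal point
    r2 = xb * xb + yb * yb
    dr = K1 * r2 + K2 * r2**2 + K3 * r2**3  # radial symmetric distortion
    x_corr = (xb * (1.0 + dr)
              + P1 * (r2 + 2.0 * xb * xb) + 2.0 * P2 * xb * yb  # decentering
              + B1 * xb + B2 * yb)          # scale and shear
    y_corr = (yb * (1.0 + dr)
              + P2 * (r2 + 2.0 * yb * yb) + 2.0 * P1 * xb * yb)
    return x_corr, y_corr
```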



Typical standard deviations are about 1/25 to 1/30 of the pixel size for the image point coordinates and about 5 µm for the object space coordinates within a measurement area of about 25×25 cm², which corresponds to a triangulation accuracy of 1/50,000.

 

5 EXPERIMENTS

This section reports some results and compares them with those of our previous setup. To make the numbers comparable, we used only part of the image, located at the left edge of the sensor and seeing roughly the same area of the measurement volume as in our previous tests. This prevents the new system from benefiting from the higher resolution of our new cameras.

Accuracy assessment was performed using a granite reference plane as a calibration normal. This plane is certified to have an overall surface flatness of 7 µm. Since the surface is black, we covered it with a self-adhesive white foil to obtain high signal modulation. The reference plane was positioned in front of the sensor both perpendicular to the optical axis and at various angles. The measurement area was about 20 cm × 20 cm.

Images were taken with a single camera in combination with our calibrated projector. After a dense cloud of measured 3-D points is obtained, a plane is fitted to the data, and the minimum, maximum and standard deviation of the point distances from the plane are determined.
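A minimal sketch of this evaluation (orthogonal least-squares plane fit via SVD; the interface is illustrative):

```python
import numpy as np

def plane_fit_statistics(points):
    """Fit a plane to an (N, 3) point cloud and report the minimum,
    maximum and standard deviation of the signed point-plane distances.
    """
    centroid = points.mean(axis=0)
    # The right singular vector belonging to the smallest singular value
    # is the normal of the best-fitting (total least squares) plane.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    d = (points - centroid) @ normal  # signed deviations from the plane
    return d.min(), d.max(), d.std(ddof=1)
```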

Table 1 shows the results of this step for our previous and current setups. From this table, we can see that our new approach has halved the standard deviations compared to the previous method. It is also remarkable that the maximum deviation was reduced by a factor of three.

This translates to approximately 1:10,000 relative accuracy.


One reason for the improved accuracy lies in our digital cameras, which deliver pixel-synchronous data without the errors induced by a camera/framegrabber combination. The second reason is a modified camera model, which gives better results especially at large radial distances. Third, we perform stricter consistency tests during data processing. Last but not least, our new method gives more accurate results than phase shift processing.

 

6 CONCLUSIONS AND FUTURE WORK

We have addressed several issues concerning the calibration and data processing of a coded light measurement system. We enhanced our hardware setup by two digital cameras and identified conventional phase shift processing as a significant source of errors if objects show large variations in surface reflectance.

To further improve our system, we developed an alternative processing scheme, called line shift processing, and evaluated its behavior under different conditions. The accuracy of the new method has been investigated by measuring a highly accurate reference plane. As a result, we can state that under ideal conditions, accuracy has been improved by a factor of two compared to our previous system.

Moreover, line shift processing has another immediate advantage: since integer-valued projector coordinates are linked to sub-pixel camera coordinates, correspondences between multiple cameras in a single-projector, multiple-camera setup can easily be established. These correspondences can considerably improve accuracy and allow consistency criteria to be defined in order to reject spurious measurements. Additionally, the projector can be used solely as an aid to establish point correspondences. In this scenario, the intrinsic parameters of the projector have no influence on the triangulation results, and even off-the-shelf video projectors can be used.

Calibration was achieved by adapting a standard camera model to our stripe projector, modeled as an inverse camera. The calibration method provides a high level of accuracy and is general enough to be applied to other sensor setups, for example to large measurement volumes using standard video projectors. Automatic target point measurement and identification has been the first step towards a fully automatic calibration procedure. In the near future, we will achieve full automation by using a computer-controlled pan-tilt unit.

In spite of the promising results we obtained, further work is necessary to evaluate the performance of line shift processing in the context of more complex measurement tasks.

