Sunday, November 14, 2010

The Wild, Wild West of 3D Metrology

Optical metrology system suppliers are often present at
manufacturing trade shows. Casual discussions with the personnel at
the booths, though, may reveal an astonishing discovery: the stated
accuracies of the optical metrology systems seem to sit in the wild, wild
west. No clarification on how numbers are achieved, or under which
circumstances, or on which types of surfaces, or even at which sigma values the
numbers may be quoted. Even the metrology system providers
that do supply thorough information on accuracy often reference
“ideal” test artifacts such as matte white surfaces in dark environments.
Even at a trade show, where people come to learn about measurement systems, this
is not fair to customers. But back in the real world, in your
environment, where it really matters, be especially sure to cross
your t’s and dot your i’s. Assuming the wrong numbers, even a
little bit, is very, very bad because measurement systems have to be a small
fraction of actual manufacturing blueprint tolerances…so any unclear
factors have severe consequences for applicability to a process.

Why is this noteworthy? Well, for one thing, the cameras and lensing
determine the noise/accuracy floor of the system. Once this floor is
determined, software factors such as image processing algorithms, data
extraction, etc. set the operating parameters. But here’s where things get
interesting (and challenging): environmental factors such as vibration,
lighting, and heat, and application factors such as the parts being measured,
the camera’s field of view relative to part size, and the method used to
collect/finalize the data work to compromise this theoretical floor.

The challenge for optical metrology systems providers is that they have
to work with customers’ requirements/environments, whereas the challenge for
companies that purchase optical measurement systems is that their requirements
can change.

What, then, is the solution?

The solution is for metrology systems suppliers to GUARANTEE that their
measurement systems will work in the customer environments, as installed, for
the intended purpose. Whereas this may leave some ambiguity later if the
customer’s requirements change, at very least the system is guaranteed to work
at least once. If it doesn’t work, no payment. This means that the
company using the measurement system should verify the results, and if the
system does not meet specification, request a refund. This
should be part of the discussion from the beginning. No performance, no payment.

Some metrology systems suppliers reference established NIST, ISO, ANSI,
VDI/VDE, etc. standards when quoting accuracy. In many/most cases,
these standards are useless for an application. VDI/VDE 2634, for
example, allows measuring matte white tooling balls that are large relative
to the camera’s field of view. This is irrelevant for many
applications. Who manufactures matte white tooling balls? Probably
not your company. Your company should ask the metrology system provider to
verify accuracy on your component. These standards are fine for
traceability but say nothing about applicability. For a
measurement system to be effective, you need both.

If anything, these “idealized” test procedures will show the
metrology system at its best - but in reality, we want to know how the metrology
system will perform at its worst - when things go wrong in a production
environment. This indicator of robustness is what separates production
metrology equipment from a laboratory experiment. If you are a lab, then
you can carefully set up each experiment/test/analysis. But, if you are
producing parts, and rely on your measurement system to provide an accurate
description of your process, the laboratory is a far world away.

One quick way to confirm a system’s performance is to perform a repeatability
study. Simply measure the exact same part 10 times in the exact same
way. Compare runs 2 through 10 to run 1. The largest deviation
should be less than the stated value at 2 sigma. Then, perform an accuracy
study. Simply measure the exact same part 10 times, but change as many
things as possible between each measurement (the part’s location in the work
area, the vibration in the workspace, lighting brightness, lighting changes,
camera angles, calibration routines, etc.). Again, compare runs 2 through
10 to run 1. And, again, the largest deviation should be less than the
stated value at 2 sigma.
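The pass/fail check shared by both studies above can be sketched in a few lines. This is a minimal illustration, not a vendor procedure: each run is reduced to a single scalar (say, a measured diameter in mm), and both the ten readings and the 0.050 mm stated accuracy are made-up numbers.

```python
# Sketch of the repeatability/accuracy check: compare runs 2..N to run 1,
# and require the largest deviation to stay under the stated 2-sigma value.
# All numbers below are hypothetical.

def max_deviation_from_first(runs):
    """Largest absolute deviation of runs 2..N from run 1."""
    baseline = runs[0]
    return max(abs(r - baseline) for r in runs[1:])

stated_accuracy_2sigma = 0.050  # hypothetical 2-sigma spec, mm

repeatability_runs = [25.003, 25.001, 25.004, 25.002, 25.000,
                      25.003, 25.005, 25.002, 25.001, 25.004]

dev = max_deviation_from_first(repeatability_runs)
print(f"max deviation: {dev:.3f} mm -> "
      f"{'PASS' if dev < stated_accuracy_2sigma else 'FAIL'}")
```

For the accuracy study, the same function applies; only the data-collection discipline changes (perturb location, lighting, angles, and calibration between runs).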

For further information on accuracy studies, read our blog entry on
measuring surface plates, and also return to this blog - we’ll be continuing to
discuss this topic in the future. The important thing to take
away is not to idealize the experiment, but to push the metrology system to
its limits to better understand it. Then, spec the equipment at a
reasonable indication of its performance. This may mean specifying it
differently than what the manufacturer states.

If applied with perspective, a metrology system won’t be a wildcard in your
production environment, and won’t force your environment into the wild,
wild west.

Point Clouds Are Dumb

Let’s face it. Point clouds and mesh files are dumb.
Although the 3D scanner industry has been promoting them for decades as emerging
inspection formats, the fact is, they are inefficient, unintelligent,
processor-intensive, bulky, and contain too much data that you don’t need and
too little information that you do.

This doesn’t mean that the people who use point clouds are dumb. They
are, in fact, some of the smartest people we know. But here’s our “point:”
in the early days of CAD, the files were also dumb. After the entities
were created, their construction information was lost. As the technology
matured, parametric design became an efficient, useful practice. In the
future, the same thing will happen to inspection data. Each measured point
will become parameterized with the information that created it.

This parameterization will include all measurement system characteristics,
and may also include pre-fetched characteristics enabling a more elegant link to
CAD, CAM, CAE, and multiphysics simulations, as well as SPC and other management
tools. Until this happens ubiquitously, choose your point cloud
utilization carefully.

Let’s look at a few reasons why point clouds/meshes should be approached with caution:

1) Unless you’re measuring a “profile of a surface,” they don’t contain
the information you need.
Holes? Edges? Spheres, slots,
sheet metal information, cylinders? Forget it. Not only is this data
not directly contained in the cloud/mesh, it may very well be corrupted.
And, even if you’re measuring a profile of a surface, the point
clouds/meshes are usually overly dense, or overly subsampled - but rarely
“just right.”

2) Point clouds/meshes contain massive amounts of information that still
has to be parsed by the inspection software in order to perform alignments,
inspect features, tolerance, and report.
The problem is, this parsing
is often performed arbitrarily, and therefore, sub-optimally. You should
not let an arbitrary hole-fitting algorithm decide how to optimize the
fit. Rather, the hole fit should take place at a stage when the systematic
errors can still be traced.

3) Point clouds/meshes often contain corrupted data. Data
inside of holes, in corners, and in (for example) dark/shiny areas are often
susceptible to bad data caused by the optical path (reflections inside the hole,
reflections in corners, signal-to-noise nonuniformities in dark areas,
saturation in shiny areas, etc.). Yet, these errors are passed through to
the point cloud, and then onto the inspection. However, by the time the
data reaches the inspection, the knowledge of which areas contained systematic
errors is lost.

4) Inspection should take place from the camera data (or CMM generation
parameters, or tracker inner workings, etc…), not the point cloud.
Why? The camera is the device that generates the
data. Therefore, the inspection should be computed off of camera
parameters (pixel sensitivities, feature/target/fringe uncertainties,
triangulation, redundancy, etc.). Inspecting off of a “dumb” data format,
after all of the key data characteristics have been thrown away, does not make
sense, and most certainly hurts the system’s accuracy/traceability.

5) The inspection files are huge. Even with 2010’s powerful
computers, some clouds/meshes are too computationally/memory intensive to
process at full density. Therefore, data is, once again, arbitrarily thrown
away. This is especially true as higher-pixel cameras are required to meet
accuracy, cycle time, or other specific project requirements.
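A back-of-envelope calculation shows why full-density clouds strain computation and memory. The camera size and per-point precision below are illustrative assumptions, not figures from any particular system.

```python
# Rough raw size of a point cloud stored as double-precision XYZ triplets.
# Numbers are illustrative only.

def cloud_size_bytes(num_points, floats_per_point=3, bytes_per_float=8):
    """Raw storage for num_points XYZ points (no normals, colors, or mesh)."""
    return num_points * floats_per_point * bytes_per_float

# e.g., an 11-megapixel camera returning one XYZ point per pixel:
points = 11_000_000
size_gb = cloud_size_bytes(points) / 1e9
print(f"{points:,} points -> about {size_gb:.2f} GB of raw XYZ data")
```

And that is before normals, color, multiple scan positions, or the mesh connectivity are added on top.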

6) Missing/insufficient data is not apparent until it’s too late.
Often, long after the scanning/data collection process is
complete, and the point cloud is generated, the realization takes place that
there is data missing in a key area. Rather than update on-the-fly, often,
the entire file has to be re-generated (re-aligned, meshed, filtered, exported,
etc.), and the inspection re-generated from scratch.

Now, let’s look at when point clouds/meshes are useful:

1) Reverse engineering - When accuracy is not crucial but complete
coverage of the surface is important, and the data is going to be artistically
interpreted, re-drawn, and optimized into another process, point clouds/meshes
are optimal.

2) Lower accuracy inspection - If highest accuracy is not crucial,
point clouds are just fine. This is, obviously, relative and dependent on
the kinds of products you manufacture, and their associated tolerances. In
many cases, the point cloud can address 90% of the inspection, and the remaining
10% can be handled with other methods.

3) Engineering/Development - If the engineering/development team is
still identifying unresolved problems, or looking for problems/opportunities
that they are not yet aware of, the point cloud/mesh can be a powerful
investigative file format.

4) Slower cycle times - If cycle times are not crucial,
point clouds may be fine. Note that the mere act of scanning the surface
and generating a point cloud/mesh may very well be faster than most competing
technologies, which in many situations is a win.

5) User-driven inspection/programming - If an expert operator
oversees every aspect of the data handling and inspection, and is there to
babysit and scrutinize every data selection, feature fit, etc., a point
cloud/mesh can provide very close results to an optimal system.

The most important thing to remember about point clouds and meshes is: use
your own judgment. Investigate the pros/cons to decide if they are the
right format for you. Don’t assume that just because they are growing in
popularity, that they are the catch-all approach to inspection. But
do know that, if applied correctly, they can be valuable! Unless the
measurements are parameterized, ordinary “dumb” point clouds must be
approached with measure - which is, of course, the point!

How Expensive Are Your Cameras?

When you purchased your optical metrology system (such as a “scanner” or
“photogrammetry system”), how much did the cameras cost? Did your supplier
offer different performance packages? Was there a “lower resolution” model
and a “higher resolution” model? What was the price difference between the
low-end system and the high-end system? We can tell you what it should
have been, all else considered equal:

1 megapixel camera: about $3k
4 megapixel camera: about $7500
11 megapixel camera: about $11k
16 megapixel camera: about $15k

These are reasonable single-camera prices for computer-linked,
industrial cameras with large pixels (between 7um and 12um), at least 12 bit
converters, high clock speed (up to GigE), reliable data transfer (buffered),
and rugged (industrial) design. If you want to verify this yourselves,
contact your local high-end industrial camera supplier.

Why is this important? Because some metrology system providers mark up
cameras stratospherically. They assume you are willing to pay big bucks
for “performance,” completely ignoring the fact that the industrial camera
marketplace has already priced this “performance.” In fact, camera swaps
are commonplace, and often require little or no modification to the rest of
the metrology system, software algorithms, and packaging.

In triangulation-based systems, pixels are important in 2 ways:

1) pixels = Z accuracy. Since a triangulation-based system derives
its X,Y (and therefore Z as a projected angle of the triangulation)
accuracy from sub-pixel marking of features (fringe, target, etc.) in the
image, higher pixels = higher accuracy, all else considered equal.

2) pixels = X,Y resolution per field of view. Upgrading to more pixels
means a larger image can be collected while maintaining the same pixel density
in the image.
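The arithmetic behind point 2 is simple: at a fixed field of view, more pixels across the sensor means a finer footprint per pixel on the part (or, equivalently, the same footprint over a larger field of view). The field-of-view and pixel counts below are hypothetical round numbers, not specs of any real camera.

```python
# Ground-sample arithmetic for a triangulation-based optical system.
# Camera values are hypothetical.

def ground_sample_mm(fov_width_mm, pixels_across):
    """Size of one pixel's footprint on the part, in mm."""
    return fov_width_mm / pixels_across

# Same 400 mm field of view, two sensor options:
low_res  = ground_sample_mm(400.0, 1200)   # ~1 MP class sensor width
high_res = ground_sample_mm(400.0, 4000)   # ~16 MP class sensor width
print(f"1 MP class: {low_res:.3f} mm/px   16 MP class: {high_res:.3f} mm/px")
```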

But it goes much, much deeper than this. Cameras also = freedom.
Why add scans when you can add cameras? Using a single “scanner” to
measure 360 degrees around an object usually means mounting it on a robot and
placing the part on a turntable. But what if, instead, you could set up a
camera “network” around the part? The software price stays the same, and
you simply add cameras/projectors to accomplish the measurement. This
means no robot/turntable is necessary, and even if you need ten 4-megapixel
cameras, this is only a $75k upgrade. What if you could get rid of all of
the clumsy, multi-scan referencing by using many large-pixel cameras?
What if you could accomplish your measurement tasks quickly and inexpensively?

What if…?

Your 3D Metrology System is Not Normal

We’re in the infant stages of 3D metrology. How do we know?
Because almost every analysis, reporting and processing software on the market
today is using Gaussian fitting algorithms:

How round is the circle? Perform a Gaussian fit (i.e. assume the errors are normally-distributed).
How flat is the plane? Perform a Gaussian fit (i.e. assume the errors are normally-distributed).
Surface profile? Perform a Gaussian fit (i.e. assume the errors are normally-distributed).

The problem is, data is almost never Gaussian. Even if you are an
expert in kurtosis and skew, if you’re using a Gaussian fitting
algorithm on a data set that isn’t Gaussian, you’re getting the wrong
result. What we really want to know is, “after the systematics are
removed, what is the Gaussian component?”

The deeper problem is that the systematics are very difficult to
pinpoint. They can elude simple analysis tools, often requiring
investigation of complex/nonlinear interactions, and application of specialized
optimizations. Systematics can pop up in both contact and non-contact
metrology systems, and are the direct result of the underlying physics of the
measurement system, the design trade-offs, and their application.

3D metrologists spend countless hours performing accuracy study after
accuracy study, looking for the answer to the question: “How accurate is my
measurement system?” But the elusive answer is only as revealing as
the toolset used to analyze the data.

In an optics system, this translates to a wide range of gremlins: multipath,
calibration-induced, signal-to-noise, movement/stability, thermal, incident,
algorithmic, global, merging, depth of field, diffractive, saturation,
filtering, and many others. Even the best-designed system must yield to
this reality, and this provides a tremendous opportunity for innovation and
improvement. And this can only come through specialization.

For this very reason, as metrology systems become increasingly
specialized for optimal performance in specific environments (such as
production environments), assumptive Gaussian approaches become more
obsolete still, because these systems are truly controlled, and therefore can
benefit, in a very comprehensive and predictable way, from robust
corrections of systematic errors. In fact, for this benefit alone,
metrology systems should be designed for their specific environments.

So what are Gaussian algorithms useful for? They’re
useful for evaluating outliers, and for communicating results in a language that
many people understand. They’re useful for evaluating based on the mean,
and for analyzing against an expected result. They’re useful for
determining whether an expected result (i.e. randomness) exists. And, most
of all, they’re useful, after systematic errors have been removed,
for quantifying background noise in a measurement system.

But what are the real-world symptoms and implications of
Gaussian-only data analysis?
- Playing “Whac-A-Mole” with observations even in so-called high-end metrology systems to remove systematic errors (such as focus, shadows, or probe offset). Many who
use these systems have had to dump out data for no apparent reason to
make them “work”. Some people call this “filtering.”
- The algorithms that use Gaussian assumptions are easier to code, much
faster to execute, and often inherited from legacy products. When the code that many of
us use today was originally written, computers were not fast enough to
practically use “robust” algorithms in all situations. In
addition, there have been advances in the design of robust algorithms
over the past 10 years that have not yet made it into commercial software.
Computers are fast enough now, many of the algorithmic approaches have been
solved, and there is no reason to put up with it anymore(!)
- Any
algorithm that is minimizing the square of the error is assuming that the pdf
(probability density function) of the data is Gaussian. These are commonly
referred to as “best fit” or “least-squares” algorithms.
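That last point can be made concrete with a toy example. For fitting a single constant value, the least-squares answer is the mean, which one non-Gaussian outlier drags away; a robust fit (here the median, which minimizes absolute rather than squared error) barely moves. The readings below are fabricated, and real robust fitting of circles or planes is considerably more involved than this sketch.

```python
# Toy comparison: least-squares vs. robust fit of a constant value.
# One outlier (e.g., a multipath artifact) contaminates the data set.

def least_squares_fit(values):
    """Minimizes the sum of squared errors -> the mean."""
    return sum(values) / len(values)

def robust_fit(values):
    """Minimizes the sum of absolute errors -> the median."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

# Nine good readings plus one outlier (all values made up):
readings = [10.01, 9.99, 10.02, 10.00, 9.98, 10.01, 10.00, 9.99, 10.02, 14.50]
print(f"least squares: {least_squares_fit(readings):.3f}")   # dragged by outlier
print(f"robust:        {robust_fit(readings):.3f}")          # essentially unmoved
```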

So whose job is it to adopt these new robust algorithms, and where?
- At the 3D metrology hardware/system manufacturer end,
the systematic errors should be removed, and a clean data set (no artifacts, no
extraneous data) fed into the analysis package. It should not be
the job of the downstream software to decide how the hardware’s data gets
processed, and which data gets used or unused. If anything, the usage of
the data should be an attribute.
- At the analysis software end, the
fitting and reporting algorithms should be robust enough to handle (that is,
without blowing up), in a way that correctly utilizes the data’s
characteristics, any sub-optimal data that was passed through from the hardware.

In the future, all analysis software packages used for 3D metrology will
employ new “robust” algorithms that will work optimally even when the data set
is not Gaussian, leaving the algorithms widely employed
today behind.
Gaussian analysis will become a subset tool of these algorithms. And,
while this new approach may be “non-normal,” it will most definitely
be an improvement!

For more light reading on the subject, perform a search for “robust statistics.”

Commercial, Off-The-Shelf Hardware

The next great change in the 3D metrology industry is going to come from
commercial, off-the-shelf hardware.  No longer will customers spend tens of
thousands of dollars extra for a “higher-performance” model, when that
“higher-performance” model is really just a higher-megapixel camera, brighter
laser, brighter light, smaller box, or beefed-up design.  If these
“upgrades” only cost the 3D metrology companies perhaps a few thousand dollars,
why should you, the customer, spend tens of thousands?  Why should the 3D
metrology company dictate the “value” of the upgrade, when the marketplace has
already done so at the hardware component level? 

A parallel to this is computers.  When computer hardware was very
expensive, it made sense to place the emphasis of a system’s design on the
hardware.  But now, hardware is a commodity.  We don’t pick our
computers based on their absolute performance, we pick them based on the
software we’re going to run.  The same is true for cameras, lasers, and
light sources - to an increasing degree, they are commodity items.  This
means that you, the customer, should not pay big bucks for them.  You
should pay for the software to run them, but if you need higher performance
(more megapixels, brighter laser, more light, etc.), you should pay only an
incremental price for that performance, dictated by the hardware marketplace.

To look at this another way, if a customer needs the higher performance, but
the 3D metrology company has priced (i.e. upcharged) it out of reach, the
customer is forced to purchase a less-than optimal solution.  This does not
benefit the customer (who needs the better hardware) OR the 3D metrology company (who is
not supplying an optimal solution).  In the future, 3D metrology companies
will become software companies, and allow the customer to pick whatever hardware
they need to accomplish the job. 

Now here’s a secret: 3D metrology companies already ARE software
companies.  They’re just not letting you know it yet.  They’re
purchasing off-the-shelf cameras, light sources, lasers, power supplies, and
cables, and assembling them into branded packages.  Then, they’re selling
you the software to run them.  No 3D metrology company is making their own
cameras or light sources.  They’re simply re-packaging them as
components of engineered products.  Then, they’re hooking them up to their software.

What does this mean to you, the customer? It means great things,
because, soon, you will be able to get exactly the performance you need, without
the major hardware markups.  You will be able to buy one software license,
and then pick the hardware performance that’s right for you.  Just like
with regular computer software.  And the 3D metrology companies will help
you engineer the exact performance you need, without marking up the hardware.

Yet another advantage of this is supportability.  When the hardware
becomes decoupled from the small, volatile 3D metrology companies and becomes
sourced off-the-shelf, your manufacturing environments are no longer held
hostage by these 3D metrology companies.  You can purchase redundant
equipment inexpensively, and deal with large camera manufacturers, laser
suppliers, etc. for replacement.  Your risk for deploying 3D metrology
systems will diminish, and even if a 3D metrology company goes out of business,
fails to support the product, or gets bought out, you can always purchase replacements off the shelf.

Under this model, any hardware component that truly IS custom-made will be
blueprinted, and easy to replicate, along with information on the original component suppliers.

Sound good?  It does to us, too.

Technology Makes Things Simpler For All

What is technology?  Why is it important? 

These are abstract questions, and become even more so when applied to an
obscure field such as 3D metrology.  But they also sit at the root of our
[metrologists'] very reason for being, so we have to answer/face them. 

In short, we think that technology is a means to convert a complicated
workflow into a simple one.  Think of products such as guns,
cars, and computers.  In the early days, an expert spent long hours
accomplishing a task, such as learning to shoot and reload a gun
quickly/accurately, preparing a car for a long trip, or formatting an I/O
structure for a computer database.  Over time, through technology, the task
became effortless, so that almost anyone could fire hundreds of rounds quickly
(machine gun), drive across country (modern vehicles), and load/share
information with colleagues (modern software).  These advancements
literally changed the respective lives of countless persons, and inspired yet
further advancements. 

For a metrologist, a simple workflow can be elusive.  For those that
have been battered by primitive technology, wrong algorithms for the job, or
inefficient use of hardware/environment, achieving our daily goals sometimes
means dozens of hours of work, thousands of mouse clicks, and impeccable
attention to detail.  But soon, technology will make all of this easy —
this is our assertion. 

Some day, one mouse click will solve all of our problems, and all of the
nuances of our thought will be translated into a complete, traceable, controlled
process - in seconds.  Don’t believe us?  Look to the history of technology.

Is your metrology headed in the right (technology-driven) direction?

How To Design A Metrology Test Artifact

Everyone who works in the manufacturing world is accountable to the
overarching watchdog of traceability.  It comes in different forms: ANSI,
ISO, NIST, and others, but the concept is the same: the manufacturing
process should be controlled, through the use of metrology, to
within published guidelines.  It is common to talk about a 10:1
rule in the 3D measurement world, whereby if a manufactured component has
a blueprint tolerance of 0.010″, the measurement system used to verify the
component should be accurate to within 0.001″ at some standard
deviation (usually 2 or 3 sigma for normally distributed data sets). 

By the same logic, when we confirm that the measurement system is accurate to
0.001″, we should use a test artifact that is certified and known to be within
0.0001″, for example.  Therein lies the design.  A test artifact that
is stable to within 0.0001″ over applicable size ranges, various temperature
ranges, with inherent physical properties capable of exposing small variances in
the measurement system takes some serious thought and planning. 
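The traceability chain described above is just repeated division: each verification stage should be roughly ten times tighter than the stage it checks. The 10:1 ratio and 0.010″ tolerance come straight from the text; the function name is our own.

```python
# The 10:1 rule as arithmetic: part tolerance -> measurement system
# accuracy -> test artifact certification.

def required_accuracy(tolerance_in, ratio=10.0):
    """Accuracy needed to verify a given tolerance at the stated ratio."""
    return tolerance_in / ratio

blueprint_tol = 0.010                             # part tolerance, inches
system_acc    = required_accuracy(blueprint_tol)  # measurement system: 0.001"
artifact_cert = required_accuracy(system_acc)     # test artifact: 0.0001"
print(f"system: {system_acc:.4f} in, artifact: {artifact_cert:.5f} in")
```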

We all have an arsenal of readily-available tools to accomplish this: surface
plates, tooling balls, temperature-controlled rooms, and in some cases,
reference measurement systems such as CMMs and interferometers, but the key is
to understand the limitations of the materials, techniques, thermals, and even
the warm-up periods for the metrology equipment under test.  Things like
adhesives, body temperatures of the operators, residual stresses, gravity, and
human handling characteristics can impact the validity of a metrology
study.  We as metrologists know that extra time spent in setup and planning
can make or break our results, so why not spend the extra time when testing our
own equipment? 

If we think of the metrology artifact as “measuring” our metrology system,
then we’re thinking the right way, and we can pass this traceability through to
the important stuff - the products we’re measuring!

Cycle Times and Speed of 3D Metrology Systems

One of the most confusing aspects of a measurement system’s performance
is its “speed”.  How should it be spec’d? The time to collect data
after everything has been set up in its favor?  The per-measurement cycle
time when placed on a robot system (after everything has been set up in its
favor)?  The time to set up, calibrate, prep, configure, collect, merge,
view, edit, post-process, report, export, use, pack-down, and exit for a skilled
(or unskilled) operator?  The number of factors that influence a
metrology system’s “speed” in any given environment are massive. 

However, it’s not really that confusing when influences of cycle time
are viewed as trade-offs.  In the real world, we think speed should be
spec’d as something like “the time required to perform the complete inspection,
from power up of equipment through export of the final report for
an unskilled operator.”  That’s right: unskilled operator.  That
will force us all to carefully consider what we’re saying.  The trade-offs
involved when spec’ing like this are evident:

1) The more a measurement system can assist with setup, positioning, monitoring, compensating, and self-regulating, the less an unskilled operator has to know
2) The less prep required, the less an unskilled operator has to know
3) The less finicky a measurement system is in an environment, or on a measured surface, the less an unskilled operator has to know
4) The fewer tasks required, the less an unskilled operator has to know
5) The fewer actual measurements required, the less an unskilled operator has to know
6) The more controlled the measurement from a metrology point of view, the less an unskilled operator has to know

See a trend here?  The operator’s impact on the overall cycle time can
be enormous.  By placing the restriction that the measurements must be
carried out by an unskilled operator, a completely different set of rules apply
in the overall system design.  A metrology system that doesn’t require
a skilled operator to perform optimally is a tool for that
operator.  A metrology system that requires a skilled operator to perform
optimally is a career path for that operator. 

This gets back to one of the most important rules of product development:
know thy end user.  Does the end user want to become an expert?  Or
does the end user want to use the system to complete another (more important)
job?  We think, in almost every case, the second scenario applies.

Therefore, for every end user, the same result will be obtained in the same
amount of time.  Now it makes sense to talk about speed.

How Mature is Your Metrology?

Believe it or not, one of the biggest compliments we have ever heard is that
our product is “mature.”  This is a bigger compliment than almost anything
we can think of.  Since we build specialized 3D metrology systems for
specific applications, that stamp of approval from a customer means we’ve
crossed our “t’s” and dotted our “i’s” to produce something that not only a
customer can relate to as a success, but also provides the comfort and peace of
mind of a well-executed solution. 

Every technology we develop serves a specific purpose, and often a
combination of technologies come together to form an integrated solution. 
One of the challenges of this type of business is that the difficult, advanced,
or high-tech aspects of the solution must fade invisibly into the
background so the end user doesn’t perceive them as difficult, high-tech,
or advanced.  This can be accomplished in a variety of ways, from GUI
(graphical user interface) design, to logic structures that intelligently
execute the commands seamlessly within the environment. 

However, the nuances of the real world turn these human-machine interface
designs and logic structures into the crux of the problem: how can our metrology
system perform, in a way that is completely intuitive to the end user, all of
the functions required, and as efficiently as possible?  One
could argue that the underlying technology is what makes a product “mature,” but
we assert that the human-machine interface is just as important.  This
includes obvious things like reduction of mouse clicks, adaptive settings
relative to the environment, hardware implementation, and a well-defined picture
of the psychology of the end user.  But it actually goes much deeper than
this.  What we really want to do is design the software to be an embodiment
of the end user’s trade knowledge. 

After all, the trade knowledge was there long before the software.  It,
in fact, is what is mature.  The software is simply an extension of what
has already existed. 

So, we ask: is your 3D metrology solution “mature?”  Does its language,
execution, and pace fit completely or near-completely within the cumulative
experience of each specific end user?  It should.

Metrology is Green

Let’s face it.  Green is “in.”  Green cars, houses, energy,
packaging, transportation, companies, technologies.  So, we ask, “What’s
more green than metrology?”

Good metrology practice and application of the data into manufacturing
process ensures less waste.  That’s green. 

Metrology systems don’t consume very much energy relative to the energy
required to manufacture a product (most products, that is).  That’s green, too.

Catching a production error by using a metrology system ensures less
re-work.  That’s green, too. 

Using metrology to monitor and control a process to achieve optimal operation
is green, too. 

If a process is controlled, then defective parts are less likely to leave the
manufacturing facility on trucks, railways, or planes, which saves even more
energy and cost.  That’s also green. 

Using metrology proactively to design more efficient, lighter, stronger, and
“green” products is the cornerstone of being green. 

If metrology is used to improve quality, then failures down the road are less
likely to occur, resulting in less need to make replacement parts or
repairs.  That’s green. 

All of the above adds up to cost and time savings, which impacts the bottom
line, improves profit, and puts more “green” in your company’s pocket.

A Broad Overview on Optical (Photogrammetry) Targets

Here comes an esoteric discussion on none other than…optical
targets.  Optical targets help us correlate measurement systems, provide
fixed references directly at the part surface, and act as direct measurements of
hard-fixed points.  They have thickness, they have specularity,
dispersion, edge contrast, absorption, spectral shift, and possibly even
translucency.  They come in various sizes, surface types, prices, and
methods of manufacture.  They are important in achieving absolute
accuracy, and it is important to quantify their impact on the underlying
metrology.  Yes, these absolute references become a fundamental link in our
error budget.  So let’s get started. 

Optical targets work by creating a reference point, often at the part surface
or at a tooling point location, for a camera to observe as a known location
within the measurement system.  They are often round, but can exist as
other shapes.  The known shape combined with the masked area on the target
(the “dot”) provides a basis for any number of methods to extract the center and
edge parameters.  Books have been written on this extraction, and new
methods are continually being developed.  The ability of these various
algorithms to work in real-world scenarios ultimately depends on the
illumination of the target as seen by the camera, the size of the target
relative to the sensor size of the camera, the quality of the target as
determined by its light scattering or focusing characteristics, the contrast at
the target edge, uniformity of the target’s surface, and degree to which the
target matches the “ideal” shape it is intended to represent.  In general,
a high-quality target should return the same, similar, or at the very least a
predictable light/edge response when viewed normal to the target as when viewed
at fairly steep angles to it.  This single criterion puts many targets
in the category of “unacceptable” for some accuracy budgets, so choose wisely.
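
To make the extraction step concrete, here is a minimal sketch (our own
illustration, not any particular vendor’s algorithm) of one of the simplest
methods, a grey-value weighted centroid, assuming the target appears as a
bright dot in a numpy image array:

```python
import numpy as np

def target_centroid(image, threshold):
    """Grey-value weighted centroid of a bright circular target.

    `image` is a 2-D numpy array of pixel intensities; pixels below
    `threshold` are treated as background.  Returns (row, col) with
    sub-pixel resolution, or None if no pixel clears the threshold.
    """
    weights = np.where(image >= threshold, image.astype(float), 0.0)
    total = weights.sum()
    if total == 0.0:
        return None
    rows, cols = np.indices(image.shape)
    return (rows * weights).sum() / total, (cols * weights).sum() / total

# A synthetic 5x5 "dot" centered at (2, 2):
img = np.zeros((5, 5))
img[1:4, 1:4] = 100.0
img[2, 2] = 200.0
print(target_centroid(img, threshold=50.0))  # -> (2.0, 2.0)
```

Real extractors go much further (edge modeling, ellipse fitting, illumination
compensation), which is exactly where the target qualities listed above begin
to matter.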

Once the target’s optical characteristics are determined, they can
be weighted against the precision of the observation, and this overall
uncertainty can be fed into the error budget and calculation of its 3D
position.  After the target’s center point
is determined, an offset correction may be necessary to resolve the 3D
coordinate into a known reference, or a net measured surface.  This offset
measurement is usually assumed to be a fixed value for a given target type, and
is determined via repeated measurement with a micrometer or similar thickness
measurement device.  The offset is often applied along the target’s measured
normal vector as determined by the shape of its edge (such as the plane of the
circular “dot”).
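
As a small illustration of that offset step, assuming the fitted center, the
unit normal of the target plane, and a micrometer-measured thickness are
already in hand (the function name here is our own):

```python
import numpy as np

def offset_to_surface(center, normal, thickness):
    """Project a target's fitted 3-D center back onto the part surface
    by stepping the measured thickness along the target-plane normal
    (assumed to point away from the part)."""
    center = np.asarray(center, dtype=float)
    normal = np.asarray(normal, dtype=float)
    normal = normal / np.linalg.norm(normal)  # guard against non-unit input
    return center - thickness * normal

# A 0.1 mm thick target whose face was measured at z = 2.1 mm, facing +z,
# resolves to a surface point at z = 2.0 mm:
print(offset_to_surface([10.0, 5.0, 2.1], [0.0, 0.0, 1.0], 0.1))
```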

Traditional “dot” targets can be thought of as 1-bit targets, whereas “coded”
targets can range from 1 to n bits in addition to the center bit.  The
additional bit codes provide information such as unique identifiers, and
generally surround the center “dot” of the target with some pattern unique to
the specific decoding mechanism.  They are particularly useful for
disambiguating target locations in uncalibrated or unknown camera states. 
A well-designed coding mechanism will allow for some leeway in the observance of
the code, so that the decoder will not incorrectly identify a code in the real
world.  The same light response, masking, and observation characteristics
that apply to the center “dot” target also apply to the code, so again, choose
wisely.  An important characteristic of the decoder (whether for 1-bit “dot”
or multi-bit targets) is that it successfully handles shadows, partial
obstructions, and other real-world observation and illumination conditions.
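
One common trick behind such decoders: a ring code read from an arbitrary
start angle is just a rotation of the stored bit pattern, so a
rotation-invariant canonical form can serve as the identifier.  A minimal
sketch (our own illustration, not a specific commercial coding scheme):

```python
def canonical_code(bits):
    """Rotation-invariant identifier for a circular bit code: the
    lexicographically smallest rotation of the bit sequence."""
    n = len(bits)
    rotations = (tuple(bits[i:]) + tuple(bits[:i]) for i in range(n))
    return min(rotations)

# Two reads of the same 8-bit ring code from different start angles
# decode to the same identifier:
print(canonical_code([1, 0, 1, 1, 0, 0, 0, 0]) ==
      canonical_code([0, 0, 0, 1, 0, 1, 1, 0]))  # -> True
```

A production scheme would add error-detection bits on top of this, which is
what provides the “leeway” mentioned above.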

Targets are often considered disposable, and therefore can exist on removable
adhesive.  This adhesive also factors into the error budget, and the
adhesive’s performance on various materials, with various surface finishes, at
various temperatures can affect the target’s usefulness.  On some surfaces,
the adhesive acts as FOD or a corrosive agent, so always make sure the target is
compatible with its application.  Further, when used in “extreme”
temperature, humidity, pressure, electrical, or mechanical ranges, both the
target and any adhesives present must be rated or tested prior to use to verify
stability.  Some targets are manufactured for specialized
environments, so try to find one that matches your needs. 

Targets applied long-term, such as on tooling points or scale bars, should be
mechanically coupled to the substrate material.  This is because
in field use, artifacts such as scale bars are subject to human handling, and
often undergo thermal cycles while traveling as checked luggage, in hot cars,
in snow-covered delivery vehicles, etc.  This stress may loosen or move the
adhesive, and affect the certified scale value.  One method to mechanically
fix the target is to drill a small hole at several locations on the outside edge
of the target and into the substrate material (away from the center “dot” or
code), and use a temperature and material-compatible glue to permanently affix
the target to the bulk material.  Low-TCE (thermal coefficient of expansion)
scale bar targets should be affixed with low-TCE glue or epoxy, while high-TCE
scale bar targets should be affixed with high-TCE glue or epoxy. 

Once a target has been characterized, it can serve as a feedback loop into a
metrology system design to optimize camera selection, define performance
specifications, select between lens manufacturers, conduct ambient light or
motion testing, and generally study system repeatability, accuracy, and
stability.

Optical targets are often used, sometimes abused, occasionally understood,
yet always in our tool bag.  Whether used for photogrammetric bundling,
single-observation locating, alignment into coordinate systems, correlation in
other devices, or random fitting into a scene, performing the work up-front to
quantify your targets will return both peace of mind and that much more
success in your metrology projects.  Once you understand your targets,
they’ll become your go-to “reference.”

Accuracy Testing with Precision Surface Plates

One of the most fundamental 3D metrology tests out there is the plane
test.  It is simple to conduct, revealing, practical, easy to analyze, and
correlates directly to the real world.  Here’s how it works:

1) Get a surface plate that covers as much area as you have room for. 
Granite plates are great.  Metal plates are magnificent.  Ceramic
plates are spectacular.  You get the picture.  There are hundreds of
suppliers out there.  Make sure you get one that is certified.  Grade
B, Grade A, and Grade AA are common terms you might encounter–they refer to how
flat a surface is using a series of standard tests.  You might also
encounter the term “surface finish,” but we’re really concerned with the
“flatness” here.  You’ll see why.

2) Measure the surface plate.  Collect as many points as you can. 
Keep every point.  Don’t throw away any points, or smooth the data. 
You’ll see why.

3) Using measurement software, fit a plane through the raw data.  There
are many software programs that can do this, but make sure you use every
point.  If you collected points on a corner or edge of the surface plate,
delete or ignore them because we’re only interested in flatness for the purpose
of this study.

4) Look for the following values on the plane fit: 1-sigma value, RMS value,
max deviation, min deviation.  The 1-sigma and RMS values are generally
similar for near-Gaussian error distributions.  The max and min deviations
represent a total bandwidth.  Most measurement systems are specified using
1-sigma, 2-sigma, or (max-min)/2 as their reference.  To get the 2-sigma
value, double the 1-sigma value.  Do these numbers 1) match what the
supplier is quoting for their accuracy? 2) meet the specifications for your
application?  Remember that the measurement tolerance has to be much
smaller than the manufacturing tolerance.  If your measurement software has
the ability to perform a “color map” relative to the plane, this “color map” is
useful for looking at how the errors are distributed throughout the plane. 
The numbers revealed by this test demonstrate how “globally accurate” the
measurement system is.  All metrology suppliers should quote a “global
accuracy” on their systems, and specify whether it is a 1-sigma, 2-sigma, or
(max-min)/2 accuracy.
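
The fit and the step-4 statistics can be sketched in a few lines of Python
with numpy (a rough illustration, not any particular measurement package):

```python
import numpy as np

def plane_fit_stats(points):
    """Least-squares plane through an (N, 3) point cloud via SVD,
    returning the 1-sigma, RMS, max, and min signed deviations."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # normal of the best-fit plane through the centroid.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    dev = (pts - centroid) @ normal  # signed point-to-plane deviations
    return {
        "sigma": dev.std(),               # deviations have zero mean here,
        "rms": np.sqrt((dev**2).mean()),  # so sigma and RMS coincide
        "max": dev.max(),
        "min": dev.min(),
    }

# Four corners of a perfectly flat plate at z = 2 (all stats ~ 0):
print(plane_fit_stats([[0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 2]]))
```

Running the same fit on a 1/10-area subset of the points gives the
“background noise” figure described in step 5.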

5) Now, repeat steps 3 and 4, except this time fit a small plane through the
data.  1/10th of the total plane area would be a reasonable
size.  This time, instead of “global accuracy”, these numbers are
showing you the “background noise” of the measurement system.  The
“background noise” might be much smaller than the “global accuracy,” and is
useful for determining how small features such as circles, radii, and other
localized inspections will be influenced by random noise.

6) Consider the results.  Both “global accuracy” and “background noise”
provide useful information.  It is possible for a system to have a large
“global accuracy” number but a small “local accuracy” (background noise)
number.  It is also possible for a system to have “global accuracy” and “local
accuracy” numbers that are similar.  Consider how each of these numbers affects your
inspection requirements.  Now consider that the reason we wanted to collect
a lot of points, and not smooth or filter the data is because both the local and
global accuracy numbers are important.

7) Consider the surface you just measured.  If it was granite, there
might be internal/local reflections due to the quartz in the granite that might
act as a source of local inaccuracy.  If it was metal, there might have
been specular or direct reflections that influenced the local accuracy.  If
it was ceramic, there might have been light penetration beneath the
surface.  Every material is different.  Do not dismiss this as
trivial!  Find out why these materials responded the way they did, because
they are going to influence your measurements!

8) Next, were the lights on or off during the measurement?  What kind
of lighting was present?  This is important for optical systems.  Was
the room free from vibration?  Do different operators conducting the same
test yield the same results?  All of these questions are significant
because they impact the outcome of the results.  Conduct this “plane test”
under varying conditions to determine how the measurement system is influenced
by its environment.

9) Finally, do not stop there!  Make a plane out of the actual material
you plan to measure.  If you are measuring titanium, machine a titanium
plane.  If you are working with carbon fiber, construct a carbon fiber
plane.  It is imperative that you test on your actual material, because
each material can respond differently to optical measurement in
particular.  Further yet, measure these materials in the real-world
environments in which they will be measured.  Look for global and local
accuracy in each case.

10) After this plane test has been conducted, try measuring the same plane,
but moving the measurement equipment.  Try measuring at the far extents of
the measurement system, close in to the measurement system, tilt the plane
relative to the measurement system, and even try it upside down or facing into a
wall, floor, or corner.  All of these results combine to indicate a “true”
system performance.

You will be amazed by what a plane measurement can reveal.  You’ll
quickly be able to determine whether your measurement system is within
specifications, and how much “random noise” there is relative to “global
inaccuracy”.  Testing different materials in different environments will
also start to reveal the limits of the equipment.  Remember, our goal is
not to perform the “perfect measurement.”  It is to better understand our
equipment, and how it can be used within our organization to improve our
processes.

Why a 3D Metrology Blog?

Why not?  Seriously, this industry is changing quickly.  New
technologies, new applications, new computing power, a growing knowledge base,
new companies, merging companies, new business models, new support channels.

We remember when it took 10 hours to load a reasonable CAD file.  We
remember when standards that are common today were still emerging.  We’ve
seen, tested, and used some great and some not-so-great technologies. 
We’ve seen the shifts in the way metrology is perceived, accepted, interpreted,
and used.

We’re all metrologists.  If you’ve ever used a tape measure, you’re a
metrologist.  If you’ve ever timed an egg, you’re a metrologist.  If
you’ve ever eyeballed the right size clothing, you’re a metrologist.  And
if you’ve ever perceived the impact of a public policy or legal document, you’re
a metrologist.  But despite the vast nature of this word, to us it means
something like “the professional use, through the application
of ethics, science, technology, and good engineering practice, of a 3D
coordinate measurement device for the benefit of humankind.”

3D metrology has an impact on the safety of our planes, the efficiency of our
cars, the profitability of our companies, and the perception of our
brands.  Metrology is often associated with “quality,” but that is another
matter.  For now, check back occasionally to read our postings.  Not
every day, because we don’t write that much.  But maybe once a
month or few.