
TECHNICAL PAPERS

Flow measurements

K. D. Jensen

Dantec Dynamics Inc.; 200 Williams Dr; Ramsey, NJ 07446; USA; Kim.Jensen@dantecdynamics.com

ABSTRACT

This paper reviews several techniques for flow measurement, including some of the most recent developments in the field. Discussion of the methods includes basic theory and implementation in research instrumentation. The intent of this review is to provide enough detail to enable a novice user to make an informed decision in selecting the proper equipment to solve a particular flow measurement problem.

Keywords: Flow measurements, hot-wire anemometry, Laser-Doppler Anemometry (LDA), particle image velocimetry

Introduction

Almost all industrial, man-made flows are turbulent. Almost all naturally occurring flows on earth, in the oceans, and in the atmosphere are turbulent. Therefore, the accurate measurement and calculation of turbulent flows have wide-ranging application and significance.

In general, turbulent motion is three-dimensional, vortical, and diffusive, making the governing Navier-Stokes equations very hard (or impossible) to solve in most real applications; hence the need to measure flow.

Early turbulence research was complemented by experimental methods that included pressure measurements and the point measurement technique of Hot Wire Anemometry (HWA). Particular difficulties in using these intrusive methods arise in reversing flows, vortices, and highly turbulent flows. In addition, intrusive probes are subject to non-linearity (they require calibration), sensitivity to multi-variable effects (temperature, humidity, etc.), and breakage, among other problems.

With the development of lasers in the mid-1960s, non-intrusive flow measurement became practical. Soon after the introduction of gas lasers, Laser-Doppler Anemometry (LDA) was developed by Yeh and Cummins. This was one of the most significant advances for fluid diagnostics, since it provided a nearly ideal transducer. Specifically, the output is exactly linear, no calibrations are required, the output has low noise and high frequency response, and velocity is measured independently of other flow variables. During the last three decades, the LDA technique has witnessed significant advances in optical methods such as fibers, as well as advanced signal processing techniques and software development. In addition, the LDA method has been extended to the Phase Doppler technique for the measurement of particle and bubble size along with velocity.

The rapid development of laser and camera technology opened the possibility of qualitative (flow visualization) and, later, quantitative whole-flow-field measurement. Particle Image Velocimetry (PIV) has become one of the most popular instruments for flow measurements in numerous applications. Modern developments in camera and laser technology, as well as PIV software, continue to improve the performance of PIV systems and their applicability to difficult flow measurements. In addition to instantaneous measurement of the flow, time-resolved measurement is now possible with high-frequency lasers and high-frame-rate cameras. Particle and bubble sizing is also possible with a modified PIV system that includes a second camera. Planar Laser Induced Fluorescence (PLIF) is now available as a standard measurement system for concentration and temperature. PLIF systems are especially useful for heat transfer and mixing studies.

Point Versus Full Field Velocity Measurement Techniques: Advantages and Limitations

Hot Wire Anemometry (HWA), Laser Doppler Anemometry (LDA), and Particle Image Velocimetry (PIV) are currently the most commonly used and commercially available diagnostic techniques to measure fluid flow velocity. The great majority of the HWA systems in use employ the Constant Temperature Anemometry (CTA) implementation. A quick comparison of the key transducer properties of each technique is shown in Table 1 with expanded details on spatial resolution, temporal resolution and calibration provided in the following sections.

Spatial Resolution

High spatial resolution is a must for any advanced flow diagnostic tool. In particular, the spatial resolution of a sensor should be small compared to the flow scale, or eddy size, of interest. For turbulent flows, accurate measurement of turbulence requires that scales as small as 2 to 3 times the Kolmogorov scale be resolved. Typical CTA sensors are a few microns in diameter, and a few millimeters in length, providing sufficiently high spatial resolution for most applications. Their small size and fast response make them the diagnostic of choice for turbulence measurements.

The LDA measurement volume is defined as the fringe pattern formed at the crossing point of two focused laser beams. Typical dimensions are 100 microns for the diameter and 1 mm for the length. Smaller measurement volumes can be achieved by using beam expansion, larger beam separation on the front lens, and shorter focal length lenses. However, fewer fringes in the measurement volume increase the uncertainty of Doppler frequency estimation.

The PIV sensor is a subsection of the image, called an "interrogation region". Typical dimensions are 32 by 32 pixels, which corresponds to a sensor measuring 3 mm by 3 mm by the light sheet thickness (~1 mm) when an area of 10 cm by 10 cm is imaged using a digital camera with a pixel format of 1000 by 1000. Spatial resolution on the order of a few micrometers has been reported by Meinhart et al. (1999), who developed a micron-resolution PIV system using an oil immersion microscope lens. What makes PIV most interesting is the ability of the technique to measure hundreds or thousands of flow vectors simultaneously.
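As a quick check of the numbers above, the following sketch (plain Python, purely illustrative) reproduces the arithmetic that converts an interrogation-window size in pixels into a physical sensor size.

```python
# Quick check of the interrogation-region size quoted above (illustrative values):
imaged_width_mm = 100.0          # 10 cm field of view
pixels = 1000                    # camera pixels across that width
window_px = 32                   # interrogation-region size in pixels

mm_per_px = imaged_width_mm / pixels
print(f"{window_px} px -> {window_px * mm_per_px:.1f} mm per side "
      f"(times the ~1 mm light-sheet thickness)")
```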

Temporal Resolution

Due to the high-gain amplifiers incorporated into the Wheatstone bridge, CTA systems offer a very high frequency response, reaching into the hundreds of kHz range. This makes CTA an ideal instrument for the measurement of spectral content in most flows. A CTA sensor provides an analog signal, which is sampled using A/D converters at an appropriate rate obeying the Nyquist sampling criterion.

Commercial LDA signal processors can deal with data rates in the hundreds of kHz range, although in practice, due to measurement volume size and seeding concentration requirements, validated data rates are typically in the tens of kHz, or kHz, range. This update rate of velocity information is sufficient to recover the frequency content of many flows.

The PIV sensor, however, is quite limited temporally, due to the framing rate of the cameras and the pulsing frequency of the light sources used. Most common cross-correlation video cameras in use today operate at 30 Hz. These are used with dual-cavity Nd:YAG lasers, with each laser cavity pulsing at 15 Hz. Hence, these systems sample the images at 30 Hz and the velocity field at 15 Hz. High-framing-rate CCD cameras are available that have framing rates in the tens of kHz range, albeit with lower pixel resolution. Copper vapor lasers offer pulsing rates up to the 50 kHz range, with energy per pulse around a fraction of one mJ. Hence, in principle, a high-framing-rate PIV system with this camera and laser combination is possible. But, in practice, due to the low laser energy and the limited spatial resolution of the camera, such a system is suitable for a limited range of applications.

Recently, however, CMOS-based digital cameras have been commercialized with framing rates of 1 kHz and pixel resolution of 1k x 1k. Combined with Nd:YAG lasers capable of pulsing at several kHz with energies of 10 to 20 mJ/pulse, this latter system brings us one step closer to measuring complex, three-dimensional turbulent flow fields globally with high spatial and temporal resolution. The vast amount of data acquired using a high-framing-rate PIV system, however, still limits its common use, due to the computational resources needed to process the image information in nearly real time, as is done with current low-framing-rate commercial PIV systems.

Summary of Transducer Comparison

In summary, the point measurement techniques of CTA and LDA offer good spatial and temporal response. This makes them ideal for measurements of both time-independent flow statistics, such as moments of velocity (mean, rms, etc.), and time-dependent flow statistics, such as flow spectra and correlation functions at a point. Although rakes of these sensors can be built, multi-point measurements are limited mainly by cost.

The primary strength of the global PIV technique is its ability to measure flow velocity at many locations simultaneously, making it a unique diagnostic tool for measuring three-dimensional flow structures and transient phenomena. However, since the temporal sampling rate is typically 15 Hz with today's commonly used 30 Hz cross-correlation cameras, the PIV technique is normally used to measure instantaneous velocity fields from which time-independent statistical information can be derived. Cost and processing speed are currently the main factors limiting the temporal sampling rate of PIV.

Intrusive and Non-Intrusive Measurement Techniques

Most emphasis in recent times has been on the development of non-intrusive flow measurement techniques for measuring vector as well as scalar quantities in the flow. These techniques have been mostly optically based, but when fluid opacity prohibits optical access, other techniques are available. A quick overview of several of these non-intrusive measurement techniques is given in the next few sections for completeness. More extensive discussion of these techniques can be found in the references cited.

Particle Tracking Velocimetry (PTV) and Laser Speckle Velocimetry (LSV)

Just like PIV, PTV and LSV measure instantaneous flow fields by recording images of suspended seeding particles in the flow at successive instants in time. An important difference among the three techniques is the typical seeding density that each can handle: PTV is appropriate for "low" seeding density experiments, PIV for "medium" seeding density, and LSV for "high" seeding density. The issue of flow seeding is discussed later in the paper.

Historically, the LSV and PIV techniques have evolved separately from the PTV technique. In LSV and PIV, fluid velocity information in an "interrogation region" is obtained from many tracer particles, and it is obtained as the most probable statistical value. The results are obtained and presented on a regularly spaced grid. In PIV, a typical interrogation region may contain images of 10-20 particles. In LSV, larger numbers of particles in the interrogation region scatter light, which interferes to form speckles. Correlation of either particle images or speckles can be done using identical techniques, and results in the local displacement of the fluid. Hence, LSV and PIV are essentially the same technique, used with different seeding densities of particles. In the rest of the paper, the acronym PIV will be used to refer to either technique.

In PTV, the acquired data is a time sequence of individual tracer particles in the flow. In order to be able to track individual particles from frame to frame, the seeding density needs to be small. Unlike PIV, PTV results in sparse velocity information at random locations. Guezennec et al. (1994) have developed an automated three-dimensional particle tracking velocimetry system that provides time-resolved measurements in a volume.

Image Correlation Velocimetry (ICV)

Tokumaru and Dimotakis (1995) introduced image correlation velocimetry (ICV) with the purpose of measuring imaged fluid motions without the requirement for discrete particles in the flow. Schlieren-based image correlation velocimetry was recently implemented by Kegrise and Settles (2000) to measure the mean velocity field of an axisymmetric turbulent free-convection jet. Also recently, Papadopoulos (2000) demonstrated a shadow image velocimetry (SIV) technique, which combined shadowgraphy with PIV to determine the temperature field of a flickering diffusion flame. Image correlation was also used by Bivolaru et al. (1999) to improve the quantitative evaluation of Mie and Rayleigh scattering signals obtained from a supersonic jet using a Fabry-Perot interferometer. Although such developments are novel, we are still far from being able to fully characterize a flow by complete simultaneous measurements of density, temperature, pressure, and flow velocity.

Doppler Ultrasonic Velocimetry

The Doppler ultrasound technique was originally applied in the medical field and dates back more than 30 years. The use of pulsed emissions has extended this technique to other fields and has opened the way to new measuring techniques in fluid dynamics.

In pulsed Doppler ultrasound, an emitter periodically sends short ultrasonic bursts and a receiver continuously collects echoes issuing from seeding particles present in the path of the ultrasonic beam. By measuring the frequency shift between the emitted ultrasound and the echoes scattered back by the particles carried with the fluid, the relative motion is measured.

Instruments such as Met-Flow's UVP provide velocity information along a line or a grid depending on set-up.

Doppler ultrasonic flow meters may find use in applications where other techniques fail to produce results. Such applications include liquid slurries, aerated liquids, or flows where neither optical nor physical access can be provided.

Multi Hole Pressure Probes

Five-Hole and Seven-Hole Probes

5- or 7-hole probes comprise a single cylindrical body with five or seven holes at its tip, as shown in Figure 1. These holes communicate internally with pressure-measuring devices. If the probe is placed in an air stream, the pressures recorded can be converted to a directional air velocity. Over the past years these probes have been miniaturized down to diameters as small as 1.5 mm, and automated methods of calibration have been developed.


In recent years the multi-hole technology has been expanded to incorporate embedded-pressure-sensor probes that can measure the three components of velocity, as well as static and dynamic pressure, with a frequency response higher than 1000 Hz.

Omni Directional Probes

5- or 7-hole probes provide three-dimensional velocity information within a typical 60-70 degree cone around the probe axis. The 18-hole Omniprobe overcomes this angularity limitation by incorporating a spherical probe tip. By employing 18 pressure ports distributed over the spherical surface, the Omniprobe can accurately measure flow velocity and direction from practically any direction.

Multi Hole Probe Calibration

Probe calibration is essential to proper operation of multi hole probes. The calibration defines a relationship between the measured probe port pressures and the actual velocity vector.

Multi-hole probes are calibrated by placing them in a uniform flow field where velocity magnitude and direction, density, temperature, and static pressure are well defined. A multi-hole probe is typically calibrated by positioning the probe in over 2000 different orientations with respect to the main flow direction. At each orientation, the probe port pressures and the free stream dynamic pressure are recorded with respect to the velocity vector to generate a calibration data set.

Constant Temperature Anemometer (CTA)

The hot-wire anemometer was introduced in its original form in the first half of the 20th century. A major breakthrough was made in the fifties, when it became commercially available in the presently used constant temperature operational mode (CTA). Since then it has been a fundamental tool for turbulence studies.

The measurement of the instantaneous flow velocity is based upon the heat transfer between the sensing element, for example a thin electrically heated wire or a metal film, and the surrounding fluid medium. The rate of heat loss depends on the excess temperature of the sensing element, its physical properties and geometrical configuration, and the properties of the moving fluid.

Implementation of the Constant Temperature Anemometer

The velocity is measured by its cooling effect on a heated sensor. A feedback loop in the electronics keeps the sensor temperature constant under all flow conditions. The voltage drop across the sensor thus becomes a direct measure of the power dissipated by the sensor, and the anemometer output therefore represents the instantaneous velocity in the flow. Sensors are normally thin wires with diameters down to a few micrometers. The small thermal inertia of the sensor, in combination with very high servo-loop amplification, makes it possible for the CTA to follow flow fluctuations up to several hundred kHz.

The relationship between the fluid velocity and the heat loss of a cylindrical wire is based on the assumption that the fluid is incompressible and that the flow around the wire is potential. When a current is passed through the wire, heat is generated (I²Rw). At equilibrium, the heat generated is balanced by the heat loss (primarily convective) to the surroundings. If the velocity changes, the convective heat transfer coefficient will also change, resulting in a wire temperature change that will eventually reach a new equilibrium with the surroundings.

The heat transfer equations governing the behavior of the heated sensor include fluid properties (heat conductivity, viscosity, density, concentration, etc.), as well as temperature loading, sensor geometry, and flow direction with respect to the sensor. This multiple dependency of the CTA makes it possible to measure other fluid parameters, such as concentration, and provides the ability to decompose the velocity vector into its components.

Governing Equation:

Consider a thin heated wire mounted to supports and exposed to a velocity U. The instantaneous energy balance on the wire is dQi/dt = W - Q, where

W = I²Rw = power generated by Joule heating, with Rw = Rw(Tw)

Q = heat transferred to the surroundings

Qi = CwTw = thermal energy stored in the wire

Cw = heat capacity of the wire

Tw = wire temperature

The voltage-velocity relation follows a power law, which makes it strongly non-linear but results in a nearly constant relative sensitivity over a very large velocity range. The CTA therefore covers velocities from a few cm/s to well above the speed of sound.

The complexity of the transfer function makes it mandatory to calibrate the anemometer before use. In two- and three-dimensional flow measurements, directional calibrations must be performed in addition to the velocity calibration.

The CTA anemometer is ideally suited for the measurement of time series in one-, two- or three-dimensional flows, where high temporal and spatial resolution is required. It requires no special preparation of the fluid (seeding or the like) and can measure in places not readily accessible thanks to small probe dimensions.

CTA Sensors and Their Characteristics

Anemometer probes are available with four types of sensors: miniature wires, gold-plated wires, fiber-films, and film-sensors. Wires are normally 5 µm in diameter and 1.2 mm long, suspended between two needle-shaped prongs. Gold-plated wires have the same active length but are copper- and gold-plated at the ends to a total length of 3 mm in order to minimize prong interference. Fiber-sensors are quartz fibers, normally 70 µm in diameter and with 1.2 mm active length, covered by a nickel thin-film, which in turn is protected by a quartz coating. Fiber-sensors are mounted on prongs in the same arrays as wires. Film-sensors consist of nickel thin-films deposited on the tips of aerodynamically shaped bodies, wedges or cones. CTA anemometer probes may be divided into the four categories below.

Wire Sensors

Miniature wires:

First choice for applications in air flows with turbulence intensities up to 5-10%. They have the highest frequency response. Repairable.

Gold-plated wires:

For applications in airflows with turbulence intensities up to 20-25%. Repairable.

Fiber-Film Sensors

Thin-quartz coating:

For applications in air. Frequency response is inferior to wires. They are more rugged than wire sensors and can be used in less clean air. Repairable.

Heavy-quartz coating:

For applications in water. Repairable.

Film-Sensors

Thin-quartz coating:

For applications in air at moderate-to-low fluctuation frequencies. They are the most rugged CTA probe type and can be used in less clean air than fiber-sensors. Not repairable.

Heavy-quartz coating:

For applications in water. They are more rugged than fiber-sensors. Not repairable.

CTA Sensor Arrays

Probes are available in one-, two- and three-dimensional versions as single-, dual- and triple-sensor probes, referring to the number of sensors. Since the sensors (wires or fiber-films) respond to both the magnitude and direction of the velocity vector, information about both can be obtained only when two or more sensors are placed at different angles to the flow vector.

Split-fiber and triple-split fiber probes are special designs, where two or three thin-film sensors are placed in parallel on the surface of a quartz cylinder. They may supplement X-probes in two-dimensional flows, when the flow vector exceeds an angle of ±45°.

Single-Sensor Normal Probes

For one-dimensional, uni-directional flows. They are available with different prong geometries, which allows the probe to be mounted correctly with the sensor perpendicular to and the prongs parallel with the flow.

Single-sensor slanted probes (45° between sensor and probe axis): For three-dimensional stationary flows where the velocity vector stays within a cone of 90°. Spatial resolution 0.8x0.8x0.8 mm (standard probe). Must be rotated during measurement.

Dual-Sensor Probes

X-probes:

For two-dimensional flows, where the velocity vector stays within ±45° of the probe axis.

Split-fiber probes:

For two-dimensional flows, where the velocity vector stays within ±90° of the probe axis. The cross-wise spatial resolution is 0.2 mm, which makes them better than X-probes in shear layers.

Triple-Sensor Probes

Tri-axial probes:

For 3-dimensional flows, where the velocity vector stays inside a cone of 70° opening angle around the probe axis, corresponding to a turbulence intensity of 15%. The spatial resolution is defined by a sphere of 1.3 mm diameter.

Triple-split film probes:

For fully reversing two-dimensional flows. Acceptance angle is ±180°.

Constant Temperature Anemometer Calibration

The CTA voltage output, E, has a non-linear relation to the input cooling velocity impinging on the sensor, U. Even though analytical treatment of the heat balance on the sensor shows that the transfer function has a power-law relationship of the form E² = A + B·Uⁿ (King's law), with constants A, B, and n, it is most often modeled using a fourth-order polynomial relation. Moreover, any flow variable which affects the heat transfer from the heated CTA sensor, such as the fluid density and temperature, affects the sensor response. Hence, CTA sensors need to be calibrated for their velocity response before use. In many cases, they also need to be calibrated for angular response. A common case such as gradually increasing ambient temperature can be dealt with either by performing a range of velocity calibrations for the range of ambient temperatures, or by performing analytical temperature corrections on the measured voltages. Jørgensen (Dantec Dynamics A/S) describes a temperature correction method which makes it possible to reduce the influence of temperature variations on the measured velocity to typically less than ±0.1%/°C.
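As an illustration of this calibration step, the sketch below (Python with NumPy, using hypothetical calibration data and an assumed exponent n = 0.45) fits both a King's-law curve and the fourth-order polynomial transfer function U = f(E), and then converts one sampled voltage back to velocity.

```python
import numpy as np

# Hypothetical calibration pairs: known velocities U [m/s] vs. measured bridge
# voltages E [V]. Real values come from a dedicated calibration facility.
U_cal = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 30.0])
E_cal = np.array([1.41, 1.52, 1.66, 1.90, 2.12, 2.38, 2.55])

# King's law E^2 = A + B*U^n, here with the exponent fixed at an assumed
# n = 0.45 so that A and B follow from a linear least-squares fit.
n = 0.45
B, A = np.polyfit(U_cal**n, E_cal**2, 1)

# Fourth-order polynomial transfer function U = f(E), the form most often used
# in practice; fitted once, then applied to every sampled voltage.
poly = np.polynomial.Polynomial.fit(E_cal, U_cal, deg=4)

E_sample = 1.95                                  # one sampled anemometer voltage [V]
U_kings = ((E_sample**2 - A) / B)**(1.0 / n)     # invert King's law
print(f"King's law estimate: {U_kings:.2f} m/s, polynomial: {poly(E_sample):.2f} m/s")
```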

In addition to the King's law response of the hot-wire probe to a velocity normal to the wire, multi-wire probes must also undergo a yaw calibration to take into account the cosine angular response of the wire(s) to the cooling velocity.

X-Probe Directional Calibration

Most CTA probe sensors have a pronounced directional sensitivity, especially cylindrical sensors with a large length-to-diameter ratio. An infinitely long cylindrical sensor has a cosine relation between the effective cooling velocity and the actual flow velocity. Finite sensors have a tangential cooling contribution added to the cosine contribution, defined by the yaw factor k or k².

The effective cooling velocity can be described as:

Ueff² = Un² + k²·Ut² = U0²·(cos²a + k²·sin²a)

Where

Ueff = Effective cooling velocity

U0 = Actual velocity

a = angle between sensor normal and the actual velocity

k = Yaw factor

Un = Velocity component perpendicular to the sensor

Ut = Velocity component parallel with the sensor

The yaw factor depends to some extent on the angle a and the velocity U0. Yaw calibration should therefore be performed at the velocity expected during measurements, and a k-value should be selected that best optimizes the directional response within the expected angular range.

X-array probes typically have configuration defaults for the yaw factors (k1² and k2²). It is, however, recommended to perform individual directional calibrations to obtain maximum accuracy, especially in highly turbulent flows.

Directional calibrations require the probe to be placed in a rotating holder in a plane flow field of constant velocity. The probe is then rotated through a set of yaw angles, and related values of angle and probe voltages are recorded.

In each angular position the cooling velocity equations for the two sensors form an equation system with two unknowns, namely k1 and k2. By solving the equations, values of k1 and k2 are obtained for each angle. The average k1 and k2 are then used as yaw constants for the probe when velocity decomposition is carried out in conversion/reduction procedures.

Tri-Axial Probe Directional Calibration

Tri-axial probes are used in 3-D flows and the directional calibration includes the pitch factor in addition to the yaw factor. The pitch factor defines how the sensor response varies, when the actual velocity vector moves out of the wire-prong plane.

The pitch factor h is defined by means of the expression:

Ueff² = Un² + h²·Ubn² = U0²·(cos²q + h²·sin²q)

Where

Ueff = Effective cooling velocity

U0 = Actual velocity

q = angle between sensor-prong plane and actual velocity vector

h = Pitch factor

Un = Velocity component perpendicular to the sensor (in the sensor/prong plane)

Ubn = Velocity component perpendicular to the sensor/prong plane

Tri-axial probes normally have configuration defaults for both yaw and pitch factors (k² and h²). It is, however, recommended to perform individual directional calibrations to obtain maximum accuracy, especially in highly turbulent flows.

Directional calibrations require that the probe is placed in a rotating holder in a plane flow field of constant velocity. The probe is then set to a fixed inclination angle and rotated through 360º around the probe axis, while related values of roll angle and probe voltages are recorded.

In each angular position the cooling velocity equation for each sensor forms an equation system with two unknowns, namely k² and h². By solving the equations, values of ki² and hi² (i = 1, 2, 3) are obtained for each angle. The average ki² and hi² are then used as yaw and pitch constants for the probe.

Dealing With Multiple Wire Probes

Dual-Sensor (X-Array) Probes.

Dual-sensor probes (X-probes) have the sensor plane in the xy-plane of the probe coordinate system. The angle between the x-axis and wire 1 is denoted a(x,1), and the angle between the x-axis and wire 2 is denoted a(x,2).

Velocity Decomposition:

In StreamWare the following equations are used for calculating the velocity components (W is zero or can be neglected) on the basis of the calibration velocities and the yaw factors k1 and k2 for the wires. The following steps are carried out:

Calibration velocity (U1cal,U2cal) to Wire coordinate system (U1,U2):

These two equations are solved with respect to U1 and U2 and inserted in the Wire (U1,U2) to Probe (U,V) coordinate equations:
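A minimal sketch of this decomposition is given below (Python with NumPy). The specific yaw relations in the docstring follow the commonly published form for a 45° X-probe and should be regarded as an assumption; the yaw factors and calibration velocities are illustrative values only.

```python
import numpy as np

def xprobe_decompose(U1cal, U2cal, k1_sq=0.0225, k2_sq=0.0225):
    """Decompose X-probe calibration velocities into probe coordinates (U, V).

    Assumes the commonly published form of the yaw relations for a 45-degree
    X-probe (wire-coordinate velocities U1, U2):
        k1^2*U1^2 + U2^2 = 0.5*(1 + k1^2)*U1cal^2
        U1^2 + k2^2*U2^2 = 0.5*(1 + k2^2)*U2cal^2
    """
    # Solve the 2x2 linear system in the squared wire-coordinate velocities.
    A = np.array([[k1_sq, 1.0],
                  [1.0, k2_sq]])
    b = np.array([0.5 * (1 + k1_sq) * U1cal**2,
                  0.5 * (1 + k2_sq) * U2cal**2])
    U1_sq, U2_sq = np.linalg.solve(A, b)
    U1, U2 = np.sqrt(U1_sq), np.sqrt(U2_sq)

    # Wire coordinates to probe coordinates for wires at +/- 45 degrees.
    U = (U1 + U2) / np.sqrt(2)
    V = (U1 - U2) / np.sqrt(2)
    return U, V

print(xprobe_decompose(10.2, 9.7))
```

With equal calibration velocities on both wires the sketch returns V = 0 and U equal to the calibration velocity, as expected for flow along the probe axis.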

Tri-Axial Probes:

The probe is placed in a three-dimensional flow with the probe axis aligned with the main flow vector.

The velocity components U, V and W in the probe coordinate system (x,y,z) are given by:

Where U1, U2 and U3 are calculated from the general set of equations:

Where U1eff , U2eff and U3eff are the effective cooling velocities acting on the three sensors and ki and hi are the yaw and pitch factors, respectively.

As tri-axial probes are calibrated with the velocity in the direction of the probe axis, Ueff is replaced by Ucal·(1 + ki² + hi²)^0.5·cos 54.736°, where 54.736° is the angle between the velocity and the wires:

With the default values k² = 0.0225 and h² = 1.04, the solution for U1, U2 and U3 becomes:
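The algebra above amounts to solving a linear system in the squared wire-coordinate velocities. The sketch below (Python with NumPy) illustrates this using the default factors quoted in the text; the particular cyclic arrangement of the coefficients is an assumption for illustration and must in practice follow the probe geometry documented by the manufacturer.

```python
import numpy as np

# Default directional factors quoted in the text for a tri-axial probe.
K_SQ, H_SQ = 0.0225, 1.04

# One row per sensor: coefficients multiplying (U1^2, U2^2, U3^2) in that
# sensor's cooling-velocity relation. The cyclic arrangement below is an
# assumption for illustration only.
A = np.array([[K_SQ, 1.0,  H_SQ],
              [H_SQ, K_SQ, 1.0 ],
              [1.0,  H_SQ, K_SQ]])

def triaxial_wire_velocities(Ucal):
    """Return (U1, U2, U3) from the three calibration velocities."""
    Ucal = np.asarray(Ucal, dtype=float)
    # Right-hand side: Ueff_i replaced by Ucal_i*(1 + k^2 + h^2)^0.5*cos(54.736 deg),
    # squared, as stated in the text.
    c = (1.0 + K_SQ + H_SQ) * np.cos(np.radians(54.736))**2
    U_sq = np.linalg.solve(A, c * Ucal**2)
    return np.sqrt(U_sq)

print(triaxial_wire_velocities([12.1, 11.8, 12.4]))
```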

Commercial Hot-Wire Anemometers

Modern Constant Temperature Anemometer systems typically comprise three elements:

1. A computer with an Analog to Digital converter and data acquisition software for sampling the analog signal from the Constant Temperature Anemometer and performing some degree of digital signal processing.

2. The Constant Temperature Anemometer, which should provide the user with a range of configuration options. It should be possible to modify the resistor configuration for the bridge top, since this configuration influences the maximum heat transfer from a probe for a given application and, moreover, the frequency response of the system. Signal conditioning electronics should be present. Typical signal conditioning options are: signal amplification (gain), typically used in conjunction with turbulence measurements and to match the CTA signal to the digital sampling unit; high- and low-pass filters, to prevent aliasing when digitally sampling the analog CTA signal and to filter out DC signal components to allow high-gain turbulence measurements; and input offset, to eliminate V0 (the anemometer output signal at zero velocity).

3. A calibration facility that covers a wide velocity range. A water calibration system typically has a velocity range from a few cm/s to 10 m/s. A gas calibration system typically spans from a few cm/s at the lower end up to Mach 1.

Laser Doppler Anemometry (LDA)

Laser Doppler Anemometry is a non-intrusive technique used to measure the velocity of particles suspended in a flow. If these particles are small, on the order of microns, they can be assumed to be good flow tracers following the flow, and thus their velocity corresponds to the fluid velocity. Important characteristics of the LDA technique, listed in the following section, make it an ideal tool for dynamic flow measurements and turbulence characterization.

Characteristics of LDA

Laser anemometers offer unique advantages in comparison with other fluid flow instrumentation:

Non-Contact Optical Measurement

Laser anemometers probe the flow with focused laser beams and can sense the velocity without disturbing the flow in the measuring volume. The only necessary conditions are a transparent medium with a suitable concentration of tracer particles (or seeding) and optical access to the flow through windows, or via a submerged optical probe. In the latter case the submerged probe will of course disturb the flow to some extent, but since the measurement takes place some distance away from the probe itself, this disturbance can normally be ignored.

No Calibration - No Drift

The laser anemometer has a unique intrinsic response to fluid velocity - absolute linearity. The measurement is based on the stability and linearity of optical electromagnetic waves, which for most practical purposes can be considered unaffected by other physical parameters such as temperature and pressure.

Well-Defined Directional Response

The quantity measured by the laser Doppler method is the projection of the velocity vector on the measuring direction defined by the optical system (a true cosine response). The angular response is thus unambiguously defined.

High Spatial and Temporal Resolution

The optics of the laser anemometer are able to define a very small measuring volume, thus providing good spatial resolution and yielding a local measurement of the Eulerian velocity. The small measuring volume, in combination with fast signal processing electronics, also permits high-bandwidth, time-resolved measurements of fluctuating velocities, providing excellent temporal resolution. Usually the temporal resolution is limited by the concentration of seeding rather than by the measuring equipment itself.

Multi-Component Bi-Directional Measurements

Combinations of laser anemometer systems with component separation based on color, polarization or frequency shift allow one-, two- or three-component LDA systems to be put together based on common optical modules. Acousto-optical frequency shift allows measurement of reversing flow velocities.

Principles of LDA

Laser Beam

The special properties of the gas laser that make it so well suited for the measurement of many mechanical properties are its spatial and temporal coherence. At all cross sections along the laser beam, the intensity has a Gaussian distribution, and the width of the beam is usually defined by the edge intensity being 1/e² (≈13%) of the core intensity. At one point the cross section attains its smallest value, and the laser beam is uniquely described by the size and position of this so-called beam waist.

With a known wavelength λ of the laser light, the laser beam is uniquely described by the size d0 and position of the beam waist, as shown in Figure 10. With z describing the distance from the beam waist, the beam diameter d(z) and the wave front radius of curvature R(z) follow the standard Gaussian beam relations.


The beam divergence α is much smaller than indicated in Figure 10, and visually the laser beam appears to be straight and of constant thickness. It is important to understand, however, that this is not the case, since measurements should take place in the beam waist to get optimal performance from any LDA equipment. This is because the wave fronts are straight in the beam waist and curved elsewhere. According to the Gaussian beam relations, the wave front radius approaches infinity as z approaches zero, meaning that the wave fronts are approximately straight in the immediate vicinity of the beam waist, thus letting us apply the theory of plane waves and greatly simplify calculations.
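For orientation, the sketch below (Python with NumPy) evaluates the standard textbook Gaussian-beam relations for an assumed wavelength and waist diameter; the formulas and numbers are not quoted from this paper and are used purely for illustration.

```python
import numpy as np

wavelength = 632.8e-9          # He-Ne wavelength [m] (illustrative value)
d0 = 1.0e-3                    # beam-waist diameter [m] (illustrative value)

def beam_diameter(z):
    # Standard Gaussian-beam diameter as a function of distance z from the waist.
    return d0 * np.sqrt(1.0 + (4.0 * wavelength * z / (np.pi * d0**2))**2)

def wavefront_radius(z):
    # Wave-front radius of curvature; tends to infinity as z approaches zero.
    return z * (1.0 + (np.pi * d0**2 / (4.0 * wavelength * z))**2)

alpha = 4.0 * wavelength / (np.pi * d0)    # far-field full-angle divergence [rad]

for z in (0.01, 0.1, 1.0):                 # distance from the waist [m]
    print(f"z={z:5.2f} m  d={beam_diameter(z)*1e3:6.3f} mm  R={wavefront_radius(z):9.2f} m")
print(f"divergence ≈ {alpha*1e3:.2f} mrad")
```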

Doppler Effect

Laser Doppler Anemometry utilizes the Doppler effect to measure instantaneous particle velocities. When particles suspended in a flow are illuminated with a laser beam, the frequency of the light scattered (and/or refracted) from the particles is different from that of the incident beam. This difference in frequency, called the Doppler shift, is linearly proportional to the particle velocity.

The principle is illustrated in Figure 11 where the vector U represents the particle velocity, and the unit vectors ei and es describe the direction of incoming and scattered light respectively. According to the Lorenz-Mie scattering theory, the light is scattered in all directions at once, but we consider only the light reflected in the direction of the LDA receiver. The incoming light has the velocity c and the frequency fi, but due to the particle movement, the seeding particle "sees" a different frequency, fs, which is scattered towards the receiver. From the receiver's point of view, the seeding particle acts as a moving transmitter, and the movement introduces additional Doppler-shift in the frequency of the light reaching the receiver. Using Doppler-theory, the frequency of the light reaching the receiver can be calculated as:

Even for supersonic flows the seeding particle velocity |U| is much lower than the speed of light, meaning that |U/c|<<1. Taking advantage of this, the above expression can be linearized to:

With the particle velocity U being the only unknown parameter, the particle velocity can in principle be determined from measurements of the Doppler shift Δf.


In practice this frequency change can only be measured directly for very high particle velocities (using a Fabry-Perot interferometer). This is why, in the commonly employed fringe mode, the LDA is implemented by splitting a laser beam so that two beams intersect at a common point and the light scattered from the two intersecting laser beams is mixed, as illustrated in Figure 12. In this way both incoming laser beams are scattered towards the receiver, but with slightly different frequencies due to the different angles of the two laser beams. When two wave trains of slightly different frequency are superimposed, we get the well-known phenomenon of a beat frequency due to the two waves intermittently interfering with each other constructively and destructively. The beat frequency corresponds to the difference between the two wave frequencies, and since the two incoming waves originate from the same laser, they also have the same frequency, f1 = f2 = fI, where the subscript "I" refers to the incident light:

fD = f2 - f1 = (2·sin(θ/2)/λ)·|U|·cos φ = (2·sin(θ/2)/λ)·ux

where θ is the angle between the incoming laser beams and φ is the angle between the velocity vector U and the direction of measurement. Note that the unit vector es has dropped out of the calculation, meaning that the position of the receiver has no direct influence on the frequency measured. (According to the Lorenz-Mie light scattering theory, the position of the receiver will, however, have considerable influence on signal strength.) The beat frequency, also called the Doppler frequency fD, is much lower than the frequency of the light itself, and it can be measured as fluctuations in the intensity of the light reflected from the seeding particle. As shown in the equation above, the Doppler frequency is directly proportional to the x-component of the particle velocity, which can thus be calculated directly from fD:

ux = λ·fD / (2·sin(θ/2))


Further discussion of LDA theory and different modes of operation may be found in the classic texts of Durst et al. (1976) and Watrasiewicz and Rudd (1976), and in the more recent book by Albrecht, Borys, Damaschke and Tropea, Laser Doppler and Phase Doppler Measurement Techniques, Springer Verlag (2003), ISBN 3-540-67838-7.

Calibration

The LDA measurement principle is given by the relation Vx = df·fD, where Vx is the component of velocity in the plane of the laser beams and perpendicular to their bisector, df is the distance between fringes, and fD is the Doppler frequency. The fringe spacing is a function of the distance between the two beams on the front lens and the focal length of the lens, given by the relation df = λ/[2·sin(θ/2)], where λ is the laser wavelength and θ is the beam crossing angle. Since df is a constant for a given optical system, there is a linear relation between the Doppler frequency and velocity. The calibration factor, i.e. the fringe spacing, is constant, calculable from the optical parameters, and mostly unaffected by other changing variables in the experiment. Hence, the LDA requires no physical calibration prior to use.
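A small worked example of this relation (Python with NumPy, illustrative optical parameters only) is shown below.

```python
import numpy as np

# Illustrative optical parameters (assumed values, not from the paper):
wavelength = 514.5e-9        # Ar-ion green line [m]
theta = np.radians(11.4)     # full beam-crossing angle
f_doppler = 3.2e6            # measured Doppler frequency [Hz]

d_f = wavelength / (2.0 * np.sin(theta / 2.0))   # fringe spacing [m]
v_x = d_f * f_doppler                            # velocity component Vx [m/s]

print(f"fringe spacing = {d_f*1e6:.2f} um, Vx = {v_x:.2f} m/s")
```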

Implementation

The Fringe Model

Although the above description of LDA is accurate, it may be intuitively difficult to quantify. To handle this, the fringe model is commonly used in LDA as a reasonably simple visualization producing the correct results.

When two coherent laser beams intersect, they will interfere in the volume of the intersection. If the beams intersect in their respective beam waists, the wave fronts are approximately plane, and consequently the interference will produce parallel planes of light and darkness, as shown in Figure 13. The interference planes are known as fringes, and the distance df between them depends on the wavelength and the angle between the incident beams: df = λ/[2·sin(θ/2)].


The fringes are oriented normal to the x-axis, so the intensity of light reflected from a particle moving through the measuring volume will vary with a frequency proportional to the x-component, ux, of the particle velocity: fD = ux/df = 2·ux·sin(θ/2)/λ.

If the two laser beams do not intersect at the beam waists but elsewhere in the beams, the wave fronts will be curved rather than plane, and as a result the fringe spacing will not be constant but depend on the position within the intersection volume. As a consequence, the measured Doppler frequency will also depend on the particle position, and as such it will no longer be directly proportional to the particle velocity, hence resulting in a velocity bias.

Measuring Volume

Measurements take place in the intersection between the two incident focused laser beams, and the measuring volume is defined as the volume within which the modulation depth is higher than e⁻² times the peak core value. Due to the Gaussian intensity distribution in the beams, the measuring volume is an ellipsoid, as indicated in Figure 14, with dimensions dx, dy and dz, where F is the lens focal length, E is the beam expansion (see Figure 15), and DL is the initial beam thickness (at e⁻² intensity).



Equally important are the fringe separation and the number of fringes in the measurement volume, both of which follow from the crossing angle and the focused beam diameter.

This number of fringes applies to a seeding particle moving straight through the center of the measuring volume along the x-axis. If the particle passes through the outskirts of the measuring volume, it will cross fewer fringes, and consequently there will be fewer periods in the recorded signal from which to estimate the Doppler frequency. To get good results from LDA equipment, one should ensure a sufficiently high number of fringes in the measuring volume. Typical LDA set-ups produce between 10 and 100 fringes, but in some cases reasonable results may be obtained with fewer, depending on the electronics or technique used to determine the frequency. The key issue here is the number of periods produced in the oscillating intensity of the reflected light, and while modern processors using FFT technology can estimate particle velocity from as few as one period, the accuracy will improve with more periods.
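The sketch below (Python with NumPy) estimates the measuring-volume dimensions, fringe spacing and fringe count for an assumed set of optical parameters, using the standard Gaussian-optics expressions; both the exact form of the formulas and the numbers are assumptions for illustration, not values taken from this paper.

```python
import numpy as np

# Assumed optical parameters of a hypothetical backscatter probe:
wavelength = 514.5e-9      # [m]
F = 0.310                  # front-lens focal length [m]
E = 1.98                   # beam expansion ratio
D_L = 2.2e-3               # initial beam diameter at e^-2 intensity [m]
beam_sep = 38e-3 * E       # beam separation on the front lens [m]

theta = 2.0 * np.arctan(beam_sep / (2.0 * F))          # beam-crossing angle
d_focus = 4.0 * F * wavelength / (np.pi * E * D_L)     # focused waist diameter

dx = d_focus / np.cos(theta / 2.0)   # measuring-volume width
dy = d_focus                         # measuring-volume height
dz = d_focus / np.sin(theta / 2.0)   # measuring-volume length (major axis)

d_f = wavelength / (2.0 * np.sin(theta / 2.0))         # fringe spacing
n_fringes = dx / d_f                                   # fringes crossed along x

print(f"dx={dx*1e6:.0f} um, dy={dy*1e6:.0f} um, dz={dz*1e3:.2f} mm, "
      f"df={d_f*1e6:.2f} um, N={n_fringes:.0f} fringes")
```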

Backscatter Versus Forward Scatter

A typical LDA setup in the so-called backscatter mode is shown in Figure 16. The figure also shows the important components of a modern commercial LDA system. The majority of light from commonly used seeding particles is scattered in a direction away from the transmitting laser, and in the early days of LDA forward scattering was thus commonly used, meaning that the receiving optics were positioned opposite the transmitting aperture (consult the text by van de Hulst, 1981, for a discussion of light scattering).


Figure 17 shows the Mie scattering function for three different particle sizes: from the left, size < λ; center, size = λ; and right, size > λ.


Most LDA measurements are performed using seeding particles that are larger than the wavelength of the laser light. Figure 17 illustrates that a much smaller amount of light is scattered back towards the transmitter, but advances in technology have made it possible to make reliable measurements even on these faint signals, and today backward scatter is the usual choice in LDA. This so-called backscatter LDA allows for the integration of transmitting and receiving optics in a common housing (as seen in Figure 16), saving the user a lot of tedious and time-consuming work aligning separate units.

Forward scattering LDA is, however, not completely obsolete, since in some cases the improved signal-to-noise ratio makes forward scatter the only way to obtain measurements at all. Experiments requiring forward scatter might include:

High speed flows, requiring very small seeding particles, which stay in the measuring volume for a very short time, and thus receive and scatter a very limited number of photons.

Transient phenomena which require high data-rates in order to collect a reasonable amount of data over a very short period of time.

Very low turbulence intensities, where the turbulent fluctuations might drown in noise, if measured with backscatter LDA.

Forward and backscattering are identified by the position of the receiving aperture relative to the transmitting optics. Another option is off-axis scattering, where the receiver looks at the measuring volume at an angle. Like forward scattering, this approach requires a separate receiver, and thus involves careful alignment of the different units, but it helps to mitigate an intrinsic problem present in both forward and backscatter LDA. As indicated in Figure 14, the measuring volume is an ellipsoid, and usually the major axis dz is much bigger than the two minor axes dx and dy, rendering the measuring volume more or less "cigar-shaped". This makes forward and backscattering LDA sensitive to velocity gradients within the measuring volume, and in many cases also disturbs measurements near surfaces due to reflection of the laser beams.

Figure 18 illustrates how off-axis scattering reduces the effective size of the measuring volume. Seeding particles passing through either end of the measuring volume will be ignored, since they are out of focus, and as such contribute to background noise rather than to the actual signal. This reduces the sensitivity to velocity gradients within the measuring volume, and the off-axis position of the receiver automatically reduces problems with reflection. These properties make off-axis scattering LDA very efficient, for example, in boundary layer or near-surface measurements.


Optics

In modern LDA equipment the light from the beam splitter and the Bragg cell is sent through optical fibers, as is the light scattered back from seeding particles. This reduces the size and the weight of the probe itself, making the equipment flexible and easier to use in practical measurements. A photograph of a pair of commercially available LDA probes is shown in Figure 19. The laser, beam splitter, Bragg cell and photodetector (receiver) can be installed stationary and out of the way, while the LDA-probe can be traversed between different measuring positions.


It is normally desired to make the measuring volume as small as possible, which, according to the formulas governing the measurement volume, means that the beam waist should be small. The laser wavelength λ is a fixed parameter, and the focal length F is normally limited by the geometry of the model being investigated. Some lasers allow for adjustment of the beam waist position, but the beam waist diameter DL is normally fixed. This leaves beam expansion as the only remaining way to reduce the size of the measuring volume. When no beam expander is installed, E = 1.

A beam expander is a combination of lenses in front of, or replacing, the front lens of a conventional LDA system. It converts the beams exiting the optical system into beams of greater width. At the same time the spacing between the two laser beams is increased, since the beam expander also increases the aperture. Provided the focal length F remains unchanged, the larger beam spacing will thus increase the angle θ between the two beams. According to the formulas governing the measurement volume, this will further reduce the size of the measuring volume.

In agreement with the fundamental principles of wave theory, a larger aperture is able to focus a beam to a smaller spot size and hence generate greater light intensity from the scattering particles. At the same time the greater receiver aperture is able to pick up more of the reflected light. As a result the benefits of the beam expander are threefold:

  • Reduce the size of the measuring volume at a given measuring distance.

  • Improve signal-to-noise ratio at a given measuring distance, or

  • Reach greater measuring distances without sacrificing signal-to-noise.

Frequency Shift

A drawback of the LDA technique described so far is that negative velocities ux < 0 will produce negative frequencies fD < 0. However, the receiver cannot distinguish between positive and negative frequencies, and as such there will be a directional ambiguity in the measured velocities.

To handle this problem, a Bragg cell is introduced in the path of one of the laser beams (as shown in Figure 12). The Bragg cell shown in Figure 20 is a block of glass. On one side, an electro-mechanical transducer driven by an oscillator produces an acoustic wave propagating through the block generating a periodic moving pattern of high and low density. The opposite side of the block is shaped to minimize reflection of the acoustic wave and is attached to a material absorbing the acoustic energy.


The incident light beam hits a series of traveling wave fronts, which act as a thick diffraction grating. Interference of the light scattered by each acoustic wave front causes intensity maxima to be emitted in a series of directions. By adjusting the acoustic signal intensity and the tilt angle θB of the Bragg cell, the intensity balance between the direct beam and the first order of diffraction can be adjusted. In modern LDA equipment this is exploited by using the Bragg cell itself as the beam splitter. Not only does this eliminate the need for a separate beam splitter, but it also improves the overall efficiency of the light transmitting optics, since more than 90% of the lasing energy can be made to reach the measuring volume, effectively increasing the signal strength.

The Bragg cell adds a fixed frequency shift f0 to the diffracted beam, which results in a measured frequency of a moving particle of

fD = f0 + (2·sin(θ/2)/λ)·ux

and as long as the particle velocity does not introduce a negative frequency shift numerically larger than f0, the Bragg cell will thus ensure a measurable positive Doppler frequency fD. In other words, the frequency shift f0 allows measurement of velocities down to

ux > -f0·λ / (2·sin(θ/2))

without directional ambiguity. Typical values might be λ = 500 nm, f0 = 40 MHz and θ = 20°, allowing measurement of negative velocity components down to ux > -57.6 m/s. Upwards, the maximum measurable velocity is limited only by the response time of the photomultiplier and the signal-conditioning electronics. In modern commercial LDA equipment, this maximum is well into the supersonic velocity regime.
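The quoted figure of -57.6 m/s follows directly from the shift frequency and the fringe spacing; a quick check (Python with NumPy):

```python
import numpy as np

# Check of the worked example in the text: the most negative measurable
# velocity with a 40 MHz pre-shift.
wavelength = 500e-9
f0 = 40e6
theta = np.radians(20.0)

u_min = -f0 * wavelength / (2.0 * np.sin(theta / 2.0))
print(f"u_min ≈ {u_min:.1f} m/s")   # ≈ -57.6 m/s, as quoted in the text
```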

Signal Processing

The primary result of a laser anemometer measurement is a current pulse from the photodetector. This current contains the frequency information relating to the velocity to be measured. The photocurrent also contains noise, with sources for this noise being:

  • Photodetection shot noise

  • Secondary electronic noise

  • Thermal noise from preamplifier circuit

  • Higher order laser modes (optical noise)

  • Light scattered from outside the measurement volume, dirt, scratched windows, ambient light, multiple particles, etc.

  • Unwanted reflections (windows, lenses, mirrors, etc).

The primary source of noise is the photodetection shot noise, which is a fundamental property of the detection process. The interaction between the optical field and the photo-sensitive material is a quantum process, which unavoidably impresses a certain amount of fluctuation on the mean photocurrent. In addition there is mean photocurrent and shot noise from undesired light reaching the photodetector. Much of the design effort for the optical system is aimed at reducing the amount of unwanted reflected laser light or ambient light reaching the detector.

A laser anemometer is most advantageously operated under such circumstances that the shot noise in the signal is the predominant noise source. This shot noise limited performance can be obtained by proper selection of laser power, seeding particle size and optical system parameters. In addition, noise should be minimized by selecting only the minimum bandwidth needed for measuring the desired velocity range by setting low-pass and high-pass filters in the signal processor input. Very important for the quality of the signal, and the performance of the signal processor, is the number of seeding particles present simultaneously in the measuring volume. If on average much less than one particle is present in the volume, we speak of a burst-type Doppler signal. Typical Doppler burst signals are shown in Figure 22.


Figure 23 shows the filtered signal, which is actually input to the signal processor. The DC-part, which was removed by the high-pass filter, is known as the Doppler Pedestal. The envelope of the Doppler modulated current reflects the Gaussian intensity distribution in the measuring volume.


If more particles are present in the measuring volume simultaneously, we speak of a multi-particle signal. The detector current is the sum of the current bursts from each individual particle within the illuminated region. Since the particles are located randomly in space, the individual current contributions are added with random phases, and the resulting Doppler signal envelope and phase will fluctuate. Most LDA processors are designed for single-particle bursts, and with a multi-particle signal they will normally estimate the velocity as a weighted average of the particles within the measuring volume. One should be aware, however, that the random phase fluctuations of the multi-particle LDA signal add a phase noise to the detected Doppler frequency, which is very difficult to remove.

To better estimate the Doppler frequency of noisy signals, frequency domain processing techniques are used. With the advent of fast digital electronics, the Fast Fourier Transform of digitized Doppler signals can now be performed at a very high rate (100's of kHz). The power spectrum S of a discretized Doppler signal x is given by

where N is the number of discrete samples and k = -N, -N+1, …, N-1. The Doppler frequency is given by the peak of the spectrum.
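As a sketch of this frequency-domain processing, the following Python/NumPy snippet builds a synthetic, noisy Doppler burst and recovers its frequency from the peak of the FFT power spectrum; all signal parameters are illustrative.

```python
import numpy as np

# Synthetic burst: a 2 MHz Doppler signal with a Gaussian envelope plus noise,
# sampled at 20 MHz; the spectral peak recovers the Doppler frequency.
fs, f_doppler, n = 20e6, 2.0e6, 256
t = np.arange(n) / fs
burst = (np.exp(-((t - t.mean()) / (n / 6 / fs))**2)
         * np.cos(2 * np.pi * f_doppler * t)
         + 0.05 * np.random.randn(n))

spectrum = np.abs(np.fft.rfft(burst))**2          # power spectrum of the burst
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
peak = np.argmax(spectrum[1:]) + 1                # skip the DC bin
print(f"estimated Doppler frequency: {freqs[peak]/1e6:.2f} MHz")
```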

Data Analysis

In LDA there are two major problems faced when making a statistical analysis of the measurement data: velocity bias and the random arrival of seeding particles in the measuring volume. While velocity bias is the predominant problem for simple statistics, such as mean and rms values, the random sampling is the main problem for statistical quantities that depend on the timing of events, such as spectra and correlation functions (see Lee & Sung, 1994).

Table 2 illustrates the calculation of moments on the basis of measurements received from the processor. The velocity data coming from the processor consist of N validated bursts, collected during the time T, in a flow with integral time scale tI. For each burst, the arrival time ai and the transit time ti of the seeding particle are recorded along with the non-cartesian velocity components (ui, vi, wi). The different topics involved in the analysis are described in more detail in the open literature and are touched upon briefly in the following section.

Making Measurements

Dealing With Multiple Probes (3D Setup)

The non-cartesian velocity components (u1, u2, u3) are transformed to cartesian coordinates (u, v, w) using the transformation matrix C:

A typical 3-D LDA setup requiring coordinate transformation is shown in Figure 24, where 3-dimensional velocity measurements are performed with a 2-D probe positioned at off-axis angle a1 and a 1-D probe positioned at off-axis angle a2. The transformation for this case is:


Buchhave, in DISA Information No. 29, demonstrates that to obtain good accuracy for the transformed V and W velocity components, the angle between the two incident fringe sets should be no less than 19 degrees.
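The sketch below (Python with NumPy) illustrates the idea behind the transformation matrix C: each channel measures the projection of the cartesian velocity onto its measuring direction, so C is the inverse of the matrix of those direction cosines. The geometry and angles used are assumptions for illustration, not the specific arrangement of Figure 24.

```python
import numpy as np

# Assumed off-axis angles of the 2-D and 1-D probes (illustrative only).
a1, a2 = np.radians(30.0), np.radians(40.0)

# Each row is the measuring direction (direction cosines) of one channel, so
# that u_meas = M @ u_cart. The geometry below is an assumed example.
M = np.array([
    [1.0, 0.0, 0.0],                       # channel 1: along x (main flow)
    [0.0, np.cos(a1), -np.sin(a1)],        # channel 2: y-z plane, tilted by a1
    [0.0, np.cos(a2),  np.sin(a2)],        # channel 3: y-z plane, tilted by -a2
])
C = np.linalg.inv(M)                       # transformation matrix of the text

u_meas = np.array([10.0, 2.5, 1.8])        # (u1, u2, u3) from the processors
u, v, w = C @ u_meas
print(f"u={u:.2f}, v={v:.2f}, w={w:.2f} m/s")
```

In this assumed geometry the angle between the two inclined fringe sets is a1 + a2 = 70°, comfortably above the 19° minimum noted above.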

Calculating Moments

Moments are the simplest form of statistics that can be calculated for a set of data. The calculations are based on individual samples, and the possible relations between samples are ignored as is the timing of events. This leads to moments sometimes being referred to as one-time statistics, since samples are treated one at a time.

Table 2 lists the formulas used to estimate the moments. The table operates with velocity components xi and yi, but these are just examples and could of course be any velocity components, cartesian or not. They could even be samples of an external signal representing pressure, temperature or something else.

George (1978) gives a good account of the basic uncertainty principles governing the statistics of correlated time series with emphasis on the differences between equal-time and Poisson sampling (LDA measurements fall in the latter). Statistics treated by George include the mean, variance, autocorrelation and power spectra. A recent publication by Benedict & Gould (1996) gives methods of determining uncertainties for higher order moments.

Velocity Bias and Weighting Factor

Even for incompressible flows where the seeding particles are statistically uniformly distributed, the sampling process is not independent of the process being sampled (that is, the velocity field). Measurements have shown that the particle arrival rate and the flow field are strongly correlated (McLaughlin & Tiedermann, 1973; Erdmann & Gellert, 1976). During periods of higher velocity, a larger volume of fluid is swept through the measuring volume, and consequently a greater number of velocity samples will be recorded. As a direct result, an attempt to evaluate the statistics of the flow field using arithmetic averaging will bias the results in favor of the higher velocities.

There are several ways to deal with this issue:

Ensure statistically independent samples - the time between bursts must exceed the integral time scale of the flow field by at least a factor of two. Then the weighting factor corresponds to the arithmetic mean, hi = 1/N. Statistically independent samples can be obtained by using a very low concentration of seeding particles in the fluid.

Use dead-time mode - The dead-time is a specified period of time after each detected Doppler burst, during which further bursts will be ignored. Setting the dead-time equal to two times the integral time scale will ensure statistically independent samples, while the integral time scale itself can be estimated from a previous series of velocity samples recorded with the dead-time feature switched off.

Use bias correction - If one plans to calculate correlations and spectra on the basis of the measurements performed, the resolution achievable will be greatly reduced by the low data rates required to ensure statistically independent samples. To improve the resolution of the spectra, a higher data rate is needed, which, as explained above, will bias the estimated average velocity. To correct this velocity bias, a non-uniform weighting factor is introduced:

The bias-free method of performing the statistical averages on individual realizations uses transit-time, ti, weighting (see George, 1975). Additional information on the transit-time weighting method can be found in George (1978), Buchhave, George & Lumley (1979), Buchhave (1979) and Benedict & Gould (1999). In the literature, transit time is sometimes referred to as residence time.
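
A minimal sketch of transit-time (residence-time) weighted moments is given below, assuming the processor delivers velocity samples ui together with transit times ti and using the commonly quoted weighting factor ηi = ti / Σ tj; the sample values are illustrative.

    import numpy as np

    # Transit-time (residence-time) weighted one-time statistics: each validated
    # burst i contributes with weight eta_i = t_i / sum_j t_j, where t_i is the
    # recorded transit time. Values below are illustrative.

    def weighted_moments(u, t):
        u = np.asarray(u, dtype=float)
        eta = np.asarray(t, dtype=float)
        eta = eta / eta.sum()                 # weighting factors
        mean = np.sum(eta * u)                # bias-corrected mean
        var = np.sum(eta * (u - mean)**2)     # bias-corrected variance
        return mean, np.sqrt(var)             # mean and rms

    u = np.array([10.2, 11.5, 9.8, 12.1])            # velocity samples [m/s]
    t = np.array([2.1e-6, 1.7e-6, 2.4e-6, 1.5e-6])   # transit times [s]
    print(weighted_moments(u, t))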

Particle Image Velocimetry (PIV)

Principles of PIV

PIV is a velocimetry technique based on images of tracer "seeding" particles suspended in the flow. In an ideal situation, these particles should be perfect flow tracers, homogeneously distributed in the flow, and their presence should not alter the flow properties. In that case, local fluid velocity can be measured by measuring the fluid displacement (see Figure 25) from multiple particle images and dividing that displacement by the time interval between the exposures. To get an accurate instantaneous flow velocity, the time between exposures should be small compared to the time scales in the flow, and the spatial resolution of the PIV sensor should be small compared to the length scales in the flow.


Principles of PIV have been covered in many papers, including Willert and Gharib (1991), Adrian (1991) and Lourenco et al. (1989). A more recent book by Raffel, Willert and Kompenhans (1998) is an excellent source of information on various aspects of PIV.

The principle layout of a modern PIV system is shown in Figure 26. The PIV measurement includes illuminating a cross section of the seeded flow field, typically by a pulsing light sheet, recording multiple images of the seeding particles in the flow using a camera located perpendicular to the light sheet, and analyzing the images for displacement information.


The recorded images are divided into small sub-regions called interrogation regions, the dimensions of which determine the spatial resolution of the measurement. The interrogation regions can be adjacent to each other, or more commonly, have partial overlap with their neighbors. The shape of the interrogation regions can deviate from square to better accommodate flow gradients. In addition, interrogation areas A and B, corresponding to two different exposures, may be shifted by several pixels to remove a mean dominant flow direction (DC offset) and thus improve the evaluation of small fluctuating velocity components about the mean.

The peak of the correlation function gives the displacement information. For double or multiple exposed single images, an auto-correlation analysis is performed. For single exposed double images, a cross-correlation analysis gives the displacement information.

Image Processing

If multiple images of the seeding particles are captured on a single frame (as seen in the photograph of Figure 27), then the displacements can be calculated by auto-correlation analysis.


This analysis technique has been developed for photography-based PIV, since it is not possible to advance the film fast enough between the two exposures. The auto-correlation function of a double exposed image has a central peak, and two symmetric side peaks, as shown in Figure 28. This poses two problems: (1) although the particle displacement is known, there is an ambiguity in the flow direction, (2) for very small displacements, the side peaks can partially overlap with the central peak, limiting the measurable velocity range.


In order to overcome the directional ambiguity problem, image shifting techniques using rotating mirrors (Landreth et al., 1988a) and electro-optical techniques (Landreth & Adrian, 1988b; Lourenco, 1993) have been developed. To leave enough room for the added image shift, larger interrogation regions are used for auto-correlation analysis. By displacing the second image at least as much as the largest negative displacement, the directional ambiguity is removed. This is analogous to frequency shifting in LDA systems to make them directionally sensitive. Image shifting by rotating mirrors does, however, introduce a parallax error in the resulting velocity estimates. This error has been documented to be as large as 11%.

The preferred method in PIV is to capture two images on two separate frames, and perform cross-correlation analysis. This cross-correlation function has a single peak, providing the magnitude and direction of the flow without ambiguity.

Common particles need to exist in the interrogation regions being correlated; otherwise only random correlation, or noise, will exist. The PIV measurement accuracy and dynamic range increase with increasing time difference Δt between the pulses. However, as Δt increases, the likelihood of having common particles in the interrogation region decreases and the measurement noise goes up. A good rule of thumb is to ensure that, within the time Δt, the in-plane velocity components Vx and Vy carry the particles no more than a third of the interrogation-region dimensions, and the out-of-plane velocity component Vz carries the particles no more than a third of the light-sheet thickness.
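
A short worked sketch of this rule of thumb follows; all numbers (pixel pitch, magnification, sheet thickness, expected velocities) are illustrative assumptions.

    # One-third rule of thumb for the pulse separation: within Δt the in-plane
    # displacement should stay below a third of the interrogation-region side,
    # and the out-of-plane displacement below a third of the sheet thickness.
    # All values are illustrative assumptions.

    ia_side_pixels   = 32         # interrogation-region side [pixels]
    pixel_pitch      = 6.7e-6     # CCD pixel pitch [m]
    magnification    = 0.1        # image size / object size
    sheet_thickness  = 1.0e-3     # light-sheet thickness [m]
    v_inplane_max    = 5.0        # expected maximum in-plane velocity [m/s]
    v_outofplane_max = 1.0        # expected maximum out-of-plane velocity [m/s]

    ia_side_object = ia_side_pixels * pixel_pitch / magnification
    dt_inplane  = (ia_side_object / 3.0) / v_inplane_max
    dt_outplane = (sheet_thickness / 3.0) / v_outofplane_max
    print("choose Dt <=", min(dt_inplane, dt_outplane), "s")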

Most commonly used interrogation region dimensions are 64 by 64 pixels for auto-correlation, and 32 by 32 pixels for cross-correlation analysis. Since the maximum particle displacement is about a third of these dimensions, to achieve reasonable accuracy and dynamic range for PIV measurements, it is necessary to be able to measure the particle displacements with sub-pixel accuracy.

FFT techniques are used for the calculation of the correlation functions. Since the images are digitized, the correlation values are found at integer pixel positions, with an uncertainty of ±0.5 pixels. Different techniques, such as centroids, Gaussian fits and parabolic fits, have been used to estimate the location of the correlation peak. Using 8-bit digital cameras, peak estimation accuracies of between 0.1 and 0.01 pixels can be obtained. In order for sub-pixel interpolation techniques to work properly, it is necessary for particle images to occupy multiple (2-4) pixels. If the particle images are too small, i.e. around one pixel, these sub-pixel estimators do not work properly, since the neighboring values are noisy. In such a case, slight defocusing of the image improves the accuracy.
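
The following sketch shows one common way of implementing these two steps: FFT-based cross-correlation of an interrogation-region pair followed by a three-point Gaussian sub-pixel fit. It assumes the correlation peak is not on the border of the region and that the values next to the peak are positive; function names are illustrative.

    import numpy as np

    def displacement(ia_a, ia_b):
        """Displacement (in pixels) of interrogation region B relative to A:
        FFT-based cross-correlation plus a three-point Gaussian sub-pixel fit.
        Assumes the peak is not on the border and its neighbours are positive."""
        a = ia_a.astype(float) - ia_a.mean()
        b = ia_b.astype(float) - ia_b.mean()
        corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
        corr = np.fft.fftshift(corr)          # zero displacement at the centre
        j, i = np.unravel_index(np.argmax(corr), corr.shape)

        def gauss_offset(cm, c0, cp):         # three-point Gaussian peak fit
            lm, l0, lp = np.log(cm), np.log(c0), np.log(cp)
            return (lm - lp) / (2.0 * (lm - 2.0 * l0 + lp))

        dx = (i - a.shape[1] // 2) + gauss_offset(corr[j, i-1], corr[j, i], corr[j, i+1])
        dy = (j - a.shape[0] // 2) + gauss_offset(corr[j-1, i], corr[j, i], corr[j+1, i])
        return dx, dy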

Digital windowing and filtering techniques can be used in PIV systems to improve the results. Top-hat windows with zero padding, or Gaussian windows, applied to the interrogation regions are effective in reducing the cyclic noise inherent in the FFT calculation. A Gaussian window also improves measurement accuracy in cases where particle images straddle the boundaries of the interrogation regions. A spatial-frequency high-pass filter can reduce the effect of low-frequency distortions from optics, cameras, or background light variations. A spatial-frequency low-pass filter can reduce the high-frequency noise generated by the camera, and ensure that the sub-pixel interpolation algorithm can still work in cases where the particles in the image map are less than 2 pixels in diameter. A typical image-to-vector processing sequence is shown in Figure 30.


The spatial resolution of PIV can be increased using multi-pass correlation approaches. By offsetting the interrogation region by a value equal to the local integer displacement, a higher signal-to-noise ratio can be achieved in the correlation function, since the probability of matching particle pairs is maximized. This idea has led to implementations such as adaptive correlation (Adaptive Correlation, Dantec Product Information, 2000), super-resolution PIV (Keane et al., 1995) and hybrid PTV (Cowen & Monosmith, 1997).

PIV Calibration

The PIV measurement is based on the simple relation V = d/Δt. Seeding particle velocities, which approximate the flow field velocity, are given by the particle displacement obtained from particle images at two (or more) consecutive instants, divided by the time interval between the images. Hence, the PIV technique also has a linear calibration response between the primary measured quantity, i.e. particle displacement, and particle velocity. Since the displacement is calculated from images, commonly using correlation techniques, PIV calibration involves measuring the magnification factor for the images. In the case of 3-D stereoscopic PIV, calibration includes documenting the perspective distortion of target images obtained at different locations by the two cameras situated off-normal to the target.
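
A minimal sketch of this linear calibration relation is given below; the pixel pitch, magnification factor and pulse separation are illustrative assumptions.

    # Linear PIV calibration, V = d / Δt: a pixel displacement is converted to a
    # physical displacement with the measured magnification factor and divided
    # by the pulse separation. All numbers are illustrative assumptions.

    pixel_pitch   = 6.7e-6    # metres per pixel on the CCD
    magnification = 0.1       # image size / object size, from the calibration target
    dt            = 1.0e-4    # pulse separation [s]

    def velocity(displacement_pixels):
        displacement_object = displacement_pixels * pixel_pitch / magnification
        return displacement_object / dt       # velocity [m/s]

    print(velocity(8.3))      # an 8.3-pixel displacement corresponds to about 5.6 m/s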

Implementation

The vast majority of modern PIV systems consist of dual-cavity Nd:Yag lasers, cross-correlation cameras, and processing using software or dedicated FFT-based correlation hardware. This section reviews the typical implementation of these components in the measurement process (as shown in the flowchart of Figure 31) and addresses concerns regarding proper seeding, light delivery and imaging.


Light Sources and Delivery

In PIV, lasers are used only as a source of bright illumination and are not a strict requirement. Flash lamps and other white-light sources can also be used, and some facilities prefer these non-laser light sources because of safety issues. However, white light cannot be collimated as well as coherent laser light, and the use of such sources in PIV is not widespread.

PIV image acquisition should be completed using short light pulses to prevent streaking. Hence, pulsed lasers are naturally well suited for PIV work. However, since many labs already have existing continuous wave (CW) lasers, these lasers have been adapted for PIV use, especially for liquid applications.

Dual-cavity Nd:Yag lasers, also called PIV lasers, are the standard laser configuration for modern PIV systems. Nd:Yag lasers emit infrared radiation whose frequency can be doubled to give the 532-nm green wavelength. Mini lasers are available with pulse energies up to 200 mJ, and larger lasers provide up to 1 J per pulse. Typical pulse duration is around 10 nanoseconds, and the pulse repetition rate is typically 5-30 Hz. In order to achieve a wide range of pulse separations, two laser cavities are used to generate a combined beam. The control signals required for PIV Nd:Yag lasers are shown in Figure 32.


Argon-Ion lasers are CW gas lasers whose emission is composed of multiple wavelengths in the green-blue-violet range. Air-cooled models emitting up to 300 mW and water-cooled models emitting up to 10 W are quite common in labs for LDA use. They can also be used for PIV experiments in low-speed liquid applications, in conjunction with shutters or rotating mirrors.

Copper-vapor lasers are pulsed metal-vapor lasers that emit green (510 nm) and yellow (578 nm) light. Since the repetition rates can reach up to 50 kHz, the energy per pulse is a few mJ or less. They are used with high-framing-rate cameras for flow visualization and some PIV applications.

Ruby lasers have been used in PIV because of their high-energy output, but their 694-nm wavelength is at the red end of the visible range, where typical CCD chips and photographic film are not very sensitive.

Fiber optics are commonly used for delivering Ar-Ion beams conveniently and safely. Single mode polarization preserving fibers can be used for delivering up to 1 Watt of input power, whereas multi-mode fibers can accept up to 10 Watts. Although use of multi-mode fibers produces non-uniform intensity in the light sheet, they have been used in some PIV applications.

The short duration high power beams from pulsed Nd:Yag lasers can instantly damage optical fibers. Hence, alternate light guiding mechanisms have been developed, consisting of a series of interconnecting hollow tubes, and flexible joints where high-power mirrors are mounted. Light sheet optics located at the end of the arm can be oriented at any angle, and extended up to 1.8 m. Such a mechanism can transmit up to 500mJ of pulsed laser radiation with 90% transmission efficiency at 532 nm, offering a unique solution for safe delivery of high-powered pulsed laser beams.

The main component of light sheet optics is a cylindrical lens. To generate a light sheet from a laser beam with small diameter and divergence, such as one from an Ar-Ion laser, using a single cylindrical lens can be sufficient. For Nd:Yag lasers, one or more additional cylindrical lenses are used, to focus the light sheet to a desired thickness, and height. For light sheet optics designed for high power lasers, a diverging lens with a negative focal length is used first to avoid focal lines.

Image Recorders

Cross-correlation cameras are the preferred method of sampling data for PIV. The cross-correlation cameras use high-performance progressive-scan interline CCD chips. Such chips include m × n light sensitive cells and an equal number of storage cells (blind cells). Figure 33 is a schematic illustration of the light-sensitive pixels and storage cell layout for these cameras.


The first laser pulse is timed to expose the first frame, which is transferred from the light-sensitive cells to the storage cells immediately after the laser pulse. The second laser pulse is then fired to expose the second frame. The storage cells now contain the first camera frame of the pair with information about the initial positions of seeding particles. The light-sensitive cells contain the second camera frame, which has information on the final positions of the seeding particles. These two frames are then transferred sequentially to the camera outputs for acquisition and cross-correlation processing.

Cross-correlation CCD cameras are available with resolutions up to 2K by 2K pixels and framing rates up to 30 Hz. 8-bit cameras are sufficient for most purposes. However, 12-bit cameras are also becoming common, especially for applications such as Planar Laser-Induced Fluorescence (PLIF) where extra sensitivity and dynamic range are required.

Flow fields with velocities ranging from microns per second to supersonic speeds can be studied, since inter-frame time separations down to a few hundred nanoseconds can be obtained. One interesting feature of these cameras is that they can be asynchronously reset. This is particularly useful in conjunction with special triggering options for synchronizing the measurements to external events, such as rotating machinery.

PIV Data Processing

Typically, the "raw" PIV data obtained from the cross-correlation of two images need to be validated, and optionally smoothed, before statistical values or various derived quantities are computed.

The following are various data validation techniques that are commonly used:

(i) Correlation peak-height validation is based on the heights of the peaks in the correlation plane. If P1 is the highest peak and P2 is the second-highest peak, the most common approach, called the detectability criterion, validates vectors for which P1/P2 > k, where k is typically around 1.2 (Keane & Adrian, 1992).

(ii) Velocity-range validation rejects vectors whose magnitude or components fall outside a given range. Normally the user has an idea of the range of velocities in the flow, and this information is used as the validation criterion:

Vmin < |V| < Vmax (length)

Vx,min < |Vx| < Vx,max (x-component)

Vy,min < |Vy| < Vy,max (y-component)

Hence, if the vector does not satisfy the required relations above, it is rejected.

(iii) Moving-average validation is a special case of the general class of iterative filtered validation described by Host-Madsen and McCluskey (1994). Since the vector field is oversampled by the PIV technique, there is a correlation between neighboring vectors, and the change from one vector to its neighbor is small. If a vector deviates too much from its neighbors, it is likely an "outlier". Hence, in this technique, the average of the vectors neighboring a given vector is calculated and compared with the vector itself. If the difference is larger than a certain acceptance factor, that vector is rejected. The rejected vector may be substituted by a local average of its neighbors.

The moving-average filter substitutes each vector with the uniformly weighted average of the vectors in a neighborhood of specified size m × n, where m and n are odd numbers of vector cells symmetrically located around each vector. This filter removes high-frequency jitter from the PIV results.
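
A sketch of such a moving-average validation, with substitution of rejected vectors by the local average, is shown below; the neighborhood size, the acceptance criterion and its normalization are illustrative choices, not the specific algorithm of the references above.

    import numpy as np

    # Moving-average validation on a 2-D vector field: each vector is compared
    # with the mean of its m x n neighbourhood and, when the deviation exceeds
    # an acceptance threshold, it is rejected and substituted by that mean.
    # The normalization of the acceptance criterion is an illustrative choice.
    # The peak-height test P1/P2 > k of item (i) would be applied per
    # interrogation region before this step.

    def moving_average_validate(u, v, m=3, n=3, accept=0.15):
        u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
        pad = ((m // 2, m // 2), (n // 2, n // 2))

        def local_mean(f):                    # uniformly weighted m x n average
            fp = np.pad(f, pad, mode="edge")
            out = np.zeros_like(f)
            for i in range(m):
                for j in range(n):
                    out += fp[i:i + f.shape[0], j:j + f.shape[1]]
            return out / (m * n)

        um, vm = local_mean(u), local_mean(v)
        scale = np.sqrt(um**2 + vm**2).max()             # reference magnitude
        bad = np.hypot(u - um, v - vm) > accept * scale  # outlier detection
        return np.where(bad, um, u), np.where(bad, vm, v), bad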

After validation and optional filtering, the user is ready to calculate derived quantities and statistical values. Following are the commonly calculated derived quantities from PIV data:

(i) Vorticity is the curl of the three-dimensional velocity vector. From 2-D PIV data, only the component of the vorticity vector normal to the light sheet can be calculated (a minimal sketch follows this list).

(ii) Streamlines are curves parallel to the direction of the flow. They are defined by the relation Vx dy = Vy dx and represent the path that a particle would follow if the flow field were steady. Hence, the streamlines calculated from PIV measurements are only correct if the flow is two-dimensional in the plane of the light sheet.
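
As referenced in item (i), a minimal sketch of the out-of-plane vorticity computed from a 2-D vector field by central differences is given below; the regular grid spacings dx, dy are assumptions.

    import numpy as np

    # Out-of-plane vorticity, omega_z = dVy/dx - dVx/dy, from a 2-D PIV vector
    # field on a regular grid with spacings dx, dy (assumed), using central
    # differences via np.gradient.

    def vorticity_z(vx, vy, dx, dy):
        dvy_dx = np.gradient(vy, dx, axis=1)   # x varies along columns
        dvx_dy = np.gradient(vx, dy, axis=0)   # y varies along rows
        return dvy_dx - dvx_dy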

The commonly employed statistical properties that are calculated from PIV measurements include the mean of each velocity component, the standard deviation of the mean, and the covariance coefficient.
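
A short sketch of such ensemble statistics, assuming N instantaneous vector maps stored as arrays of shape (N, rows, cols), could look as follows.

    import numpy as np

    # Per-point ensemble statistics from N instantaneous vector maps, stored as
    # arrays vx, vy of shape (N, rows, cols): mean, standard deviation and the
    # covariance of the fluctuating components.

    def ensemble_stats(vx, vy):
        vx_mean, vy_mean = vx.mean(axis=0), vy.mean(axis=0)
        vx_std,  vy_std  = vx.std(axis=0, ddof=1), vy.std(axis=0, ddof=1)
        cov_uv = ((vx - vx_mean) * (vy - vy_mean)).mean(axis=0)
        return vx_mean, vy_mean, vx_std, vy_std, cov_uv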

Stereoscopic (3-D) PIV

Conventional 2-D planar PIV measures projections of the 3-D velocity vectors onto the 2-D plane defined by the light sheet; it cannot measure the third velocity component, normal to the light sheet. In fact, if that normal component is large, planar PIV can give wrong results even for the in-plane velocity components, due to "parallax" error. This error grows from the center toward the edges of the image. In these situations, the problem is normally minimized by using a long-focal-length lens, so that the distance from the camera to the measurement plane is large compared to the imaged area.

Since there are many applications where it is important to measure the third component of velocity, normal to the light sheet, various approaches have been proposed to recover it. The most common technique, described here, is called "Stereoscopic PIV" and involves using an additional camera and viewing the flow from two different angles. It is based on the same principle as human stereo eyesight: the two eyes see slightly different images of the objects around us, and the differences between the images are interpreted by the brain as three-dimensional perception.

Stereoscopic Imaging Basics and the Scheimpflug Condition

Stereoscopic PIV is a planar PIV technique for measuring all three components of velocity. Instead of a single camera normal to the light sheet, two cameras are used, each looking at the same flow field from a different angle. Due to the different orientations, each camera records a different image. The 3-D displacements, and hence the velocities in the plane, can be derived by properly calibrating the camera views of a target and combining the 2-D results from each camera.

The principle of stereo PIV is indicated in Figure 34. When each camera views the measurement volume illuminated by the light sheet at an angle, the CCD chip in each camera needs to be tilted so that the entire field of view of the camera can be focused. In fact, for each camera to be properly focused, the object (light sheet), camera lens, and image (CCD chip) planes should all intersect along a common line (Prasad & Jensen, 1995). This is called the Scheimpflug condition.


When the Scheimpflug condition is satisfied, a perspective distortion is introduced into the two images as a side effect. Hence, the magnification factor is not constant across the image any more, and needs to be evaluated via calibration.

Calibration and 3-D Reconstruction

In order to reconstruct the true 3-D (X, Y, Z) displacements from the two 2-D (x, y) displacements observed by the two cameras, a numerical model is necessary that describes how each of the two cameras images the flow field onto its CCD chip. Using the camera imaging models, four equations (which may be linear or non-linear) with three unknowns are obtained.

Instead of a theoretical model that requires careful measurements of distances, angles, etc., an experimental calibration approach is preferred. The experimental calibration estimates the model parameters based on the images of a calibration target as recorded by each camera.

A linear imaging model that works well in most cases is the pinhole camera model, which is based on geometrical optics and leads to the Direct Linear Transform (DLT) equations relating the image coordinates x, y to the object coordinates X, Y, Z. This physics-based model cannot describe non-linear phenomena such as lens distortion.
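
Since the DLT equations themselves are not reproduced here, the sketch below assumes the standard pinhole/DLT form, x = (A11X + A12Y + A13Z + A14)/(A31X + A32Y + A33Z + A34), and shows how the four image equations from two calibrated cameras can be solved for (X, Y, Z) in the least-squares sense; function names are illustrative.

    import numpy as np

    # Assumed pinhole/DLT mapping for one camera with 3x4 matrix A (obtained from
    # calibration): with P = (X, Y, Z, 1), the image coordinates follow
    # x = (A[0] . P) / (A[2] . P) and y = (A[1] . P) / (A[2] . P).

    def project(A, XYZ):
        P = np.append(XYZ, 1.0)
        s = A @ P
        return s[0] / s[2], s[1] / s[2]

    def reconstruct(A1, xy1, A2, xy2):
        """Recover (X, Y, Z) from the image coordinates seen by two calibrated
        cameras: the four linear DLT equations are stacked and solved in the
        least-squares sense (the 'four equations, three unknowns' step)."""
        rows = []
        for A, (x, y) in ((A1, xy1), (A2, xy2)):
            rows.append(x * A[2] - A[0])      # x * (A[2].P) - (A[0].P) = 0
            rows.append(y * A[2] - A[1])      # y * (A[2].P) - (A[1].P) = 0
        M = np.vstack(rows)                   # 4 x 4, acting on (X, Y, Z, 1)
        XYZ, *_ = np.linalg.lstsq(M[:, :3], -M[:, 3], rcond=None)
        return XYZ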

In experiments involving significant lens distortion, refraction, etc., higher-order non-linear imaging models can be used (Soloff et al., 1997). These models are based not on a physical mapping of the geometry, but rather on a least-squares fit of image-object coordinate pairs using adjustable parameters. Imaging parameters such as image magnification and focal length do not need to be determined, and the higher-order terms can compensate for non-linear effects.

Seeding Particles

Rather than relying on naturally existing particles, it is common practice to add particles to the flow to have control over their size, distribution, and concentration. In general, these particles should be small enough to be good flow tracers, and large enough to scatter sufficient light for imaging. They should also be non-toxic, non-corrosive, and chemically inert if possible. Melling (1997) reviews a wide variety of tracer materials that have been used in liquid and gas PIV experiments. Methods of generating seeding particles and introducing them into the flow are also discussed.

The choice of seeding depends on a number of parameters. Primarily, the seeding material should be chosen considering the flow that is to be measured and the laser available. In general, seeding particles should be chosen as large as possible in order to scatter the most light, but the particle size is limited, since particles that are too large will not follow the flow properly. In general, the maximum allowable particle size decreases with increasing flow velocity, turbulence and velocity gradients.

Ideally, the seeding material should also be chosen so that the seeding particles are neutrally buoyant in the carrier fluid, but in many flows this is a secondary consideration. Finally, consideration should be given to how the flow is seeded.

Water flow experiments are often run in a closed circuit, making it easier to control the seeding. Possibilities for liquid applications include silver-coated hollow glass spheres, polystyrene, polymers, titanium dioxide (TiO2), aluminum oxide (Al2O3), conifer pollen, and hydrogen bubbles. Furthermore, fluorescent dyes are used in conjunction with polystyrene or polymer particles to generate particles that absorb the incident laser radiation and emit in another wavelength band. A common dye for Nd:Yag lasers operating at 532 nm is Rhodamine B, which when excited emits at wavelengths above 560 nm. Hence, for applications where many reflections exist from geometric boundaries (for example stirred tanks), the use of fluorescent particles is greatly advantageous.

Air flows, on the other hand, are often not recirculated and thus require the seeding to be generated at the inlet and disposed of at the outlet. For gas-flow applications, theatrical smoke, different kinds of atomized oil, water, titanium dioxide (TiO2) and aluminum oxide (Al2O3) have been used. Typical theatrical smoke generators are inexpensive and generate plenty of particles. Oil can be atomized using devices such as a Laskin nozzle, generating particles in the micron to sub-micron range, which are particularly useful for high-speed applications. Titanium dioxide (TiO2) and aluminum oxide (Al2O3) are useful for high-temperature applications such as combustion and flame measurements.

The natural concentration of very small particles is often much greater than that of particles in the useful range. In some cases, most often when measuring in liquids, this causes an undesirable shot noise level due to the incoherent signals from the many small particles. In general, it is strongly recommended, whenever possible, to control the size and concentration of the seeding particles by filtering the fluid and subsequently adding seeding particles of known size.

Presented at ENCIT2004 – 10th Brazilian Congress of Thermal Sciences and Engineering, Nov. 29 -- Dec. 03, 2004, Rio de Janeiro, RJ, Brazil.

Technical Editor: Atila P. Silva Freire.

References

  • Adrian, R. J. (1991) "Particle Imaging Techniques for Experimental Fluid Mechanics", Annual Review of Fluid Mechanics, Vol. 23, p 261-304
  • Bakker, A. K., Myers, J., Ward, R. W., and Lee, C. K. (1996), "The Laminar and Turbulent Flow Pattern of a Pitched Blade Turbine", Trans. I. Chem. E., 74, 485-491
  • Benedict, L. H., and Gould, R. D., "Towards better uncertainty estimates for turbulence statistics," Experiments in Fluids, Vol. 22, 1996, pp. 129-136.
  • Benedict, L. H., and Gould, R. D., 1999, "Understanding biases in the near-field region of LDA two-point correlation measurements," Experiments in Fluids, Vol. 26, pp. 381-388.
  • Bivolaru, D., Ötügen, M. V., Tzes, A., and Papadopoulos, G., 1999, "Image Processing for Interferometric Mie and Rayleigh Scattering Velocity Measurements," AIAA Journal, Vol. 37, No. 6, pp. 688-694.
  • Buchhave, P., 1979, "The measurement of turbulence with the burst-like Laser-Doppler-Anemometer-Errors and Correction Methods," Tech. Report TRL-106, State University of New York at Buffalo.
  • Buchhave, P., George, W. K., and Lumley, J. L., 1979, "The measurement of turbulence with the laser-Doppler anemometer," Annual Review of Fluid Mechanics, Vol. 11, pp. 443-503.
  • Cowen, E. A., and Monosmith, S. G. (1997) "A Hybrid Digital Particle Tracking Velocimetry Technique", Experiments in Fluids 22, p 199-211.
  • Durst, F., Melling, A., and Whitelaw, J. H., 1976, "Principles and Practice of Laser-Doppler Anemometry," Academic Press.
  • Durst, F., and Zare, M., (1975), Laser-Doppler Measurements in Two-Phase Flows. Proceedings of the LDA-Symposium, Copenhagen, pp. 403-429.
  • Erdmann, J. C., and Gellert, R. I., 1976, "Particle arrival statistics in Laser Anemometry of turbulent flow," Applied Physics Letters, Vol. 29, pp. 408-411.
  • George, W. K., 1976, "Limitations to measuring accuracy inherent in the laser Doppler signal," in The Accuracy of Flow Measurements by Laser Doppler Methods.
  • George, W. K., 1978, "Processing of random signals," Proceedings of the Dynamic flow Conference, Skovlunde, Denmark.
  • Guezennec, Y. G., Brodkey, R. S., Trigue, N. T., and Kent, J. C., (1994) "Algorithms for Fully Automated Three-Dimensional Particle Tracking Velocimetry," Experiments in Fluids 17, p 209-219.
  • Host-Madsen, A. and McCluskey, D.R. (1994) "On the Accuracy and Reliability of PIV Measurements", Proceedings of the Seventh International Symposium on Applications of Laser Techniques to Flow Measurements, Lisbon
  • Keane, R. D., and Adrian, R. J. (1992) "Theory of Cross-correlation Analysis of PIV Images", Applied Scientific Research, Vol. 49, p 191-215
  • Keane, R. D., Adrian, R. J., Zhang, Y. (1995) "Super-resolution Particle Image Velocimetry", Measurement Science and Technology, 6, p 754-768
  • Kegrise, M. A., and Settles, G.S., 2000, "Schlieren Image-Correlation Velocimetry and its Application to Free-Convection Flows", 9th international Symposium on Flow Visualization, Carlomagno, G. M., and Grant, I., eds., Henriot-Watt Univ.,Edinburgh, pp. 380:1-13.
  • Landreth, C. C., Adrian, R. J., Yao, C. S. (1988a) "Double-pulsed Particle Image Velocimeter with Directional Resolution for Complex Flows", Experiments in Fluids 6, p 119-128
  • Landreth, C. C., Adrian, R. J. (1988b) "Electro-optical Image Shifting for Particle Image Velocimetry", Applied Optics 27, p 4216-4220
  • Lourenco, L. M., Krothopalli, A., and Smith, C. A. (1989), "Particle Image Velocimetry", Advances in Fluid Mechanics Measurements, Springer-Verlag, Berlin, 127
  • Lourenco, L. M. (1993) "Velocity Bias Technique for Particle Image Velocimetry Measurements of High Speed Flows", Applied Optics 32, p 2159-2162
  • McLaughlin, D. K. and Tiedermann, W. G. , Jr. 1973 "Biasing Correction for Individual Realization of Laser Anemometer Measurements in Turbulent Flow," Physics of Fluids, 16,12, pp 2082-2088.
  • Melling, A. (1997) "Tracer Particles and Seeding for Particle Image velocimetry", Measurement Science and Technology, Vol. 8 No. 12, p 1406-1416
  • Meinhart, C. D., Werely, S. T., Santiago, J. G. 1999 "PIV Measurements of a Microchannel Flow", Experiments in Fluids, Vol. 27, pp. 414-419.
  • Papadopoulos, G., 2000, "Inferring Temperature by Means of a Novel Shadow Image Velocimetry Technique," Journal of Thermophysics and Heat Transfer, Vol. 14, No. 4, pp. 593-600.
  • Prasad, A. K., Jensen, K. (1995); "Scheimpflug Stereo Camera for Particle Image Velocimetry in Liquid Flows"; Applied Optics, Vol.34, No.30, p 7092-7099
  • Raffel, M., Willert, C., and Kompenhans, J. (1998), "Particle Image Velocimetry, A Practical Guide", Springer-Verlag, Berlin, Heidelberg
  • Sheng, J., Meng, Hui, and Fox, Rodney, O. (1998), "Validation of CFD Simulations of a Stirred Tank Using Particle Image Velocimetry Data", The Canadian Journal of Chemical Engineering, Vol.76, June
  • Soloff, S. M., Adrian, R. J., and Liu, Z. C. (1997), "Distortion Compensation for Generalized Stereoscopic Particle Image Velocimetry", Measurement Science and Technology, 8, p 1441-1454
  • Tokumaru, P. T., and Dimotakis, P. E. (1995) "Image Correlation Velocimetry," Experiments in Fluids 19(1), p 1-15
  • Tropea,C. (1995) "Laser Doppler anemometry: recent developments and future challenges". Meas. Sci. Technol. 6: 605-619.
  • Van den Hulst, H. C., 1981, "Light scattering by small particles," Dover Publications.
  • Watrasiewics, B. M., and Rudd, M. J., 1976, "Laser Doppler Measurements," Butterworths & Co. Ltd.
  • Willert, C.E., Gharib, M. (1991) "Digital Particle Image Velocimetry", Experiments in Fluids 10, p 181-193
  • Takeda, Y. (1995) "Velocity Profile Measurement by Ultrasonic Doppler Method", Experimental Thermal and Fluid Science
