Wesleyan University
The Honors College

Experimental Study of Lagrangian Velocity and Energy Statistics in Inhomogeneous Turbulence

by
Surendra Bahadur Kunwar
Class of 2010

A thesis submitted to the faculty of Wesleyan University in partial fulfillment of the requirements for the Degree of Bachelor of Arts with Departmental Honors in Physics

Middletown, Connecticut
April, 2010
Abstract
Lagrangian velocity and energy statistics are studied in inhomogeneous tur-
bulence. An Rλ = 285 flow between two oscillating grids, with regions of nearly
homogeneous and highly inhomogeneous turbulence, is studied. Large data sets of
three-dimensional tracer particle velocities have been collected using stereoscopic
high speed cameras with real-time image compression technology. Lagrangian
structure functions conditioned on the instantaneous large scale velocity are mea-
sured in both homogeneous and inhomogeneous regions of the flow to assess the
effects of the large scales on the small scales in turbulence. At all scales, the
structure functions depend strongly on the large scale velocity, the dependence
showing clear signatures of inhomogeneity near the oscillating grids. But even in
the homogeneous region in the center, a strong dependence on the large scale ve-
locity remains at all scales. The conditional structure function measurements are
powerful tools in assessing the effects of inhomogeneity and intermittency of the
large scales on the small scales in turbulence. There is a bias present in the mea-
surement of Lagrangian structure functions at all timescales due to the finiteness
of the measurement volume. A method we developed to estimate such bias for
different timescales in our data is described. Our method extrapolates the structure functions computed for different artificial volumes, yielding estimates of the error at different timescales. Analysis of structure functions is limited
to the timescales whose errors do not exceed 17%. Components of the turbulent
kinetic energy budget are estimated to identify the chief agents of turbulent energy
transport using Lagrangian energy measurements. Our estimation of the terms
in the turbulent energy equation identifies pressure transport as being significant,
along with the velocity transport term, in turbulent energy transport. These en-
ergy measurements provide a basis for further studying Lagrangian and Eulerian
energy transport in turbulent systems. Lastly, we study the average decay of the
kinetic energy of a particle as it enters a measurement volume in the shape of a
slab or a cube. We see that the bias due to trajectory length affects the energy
measurement. When this sample bias is removed, particles lose two-thirds of their
energy during their residence time.
Acknowledgment
At the beginning of my second semester at Wesleyan, when I couldn’t get into a class that I wanted, my faculty advisor Prof. Greg Voth showed me around the lab and suggested that I do research with him. Having been fascinated by
the stories of scientists since childhood, I had no hesitation in trying out research.
More than three years later, I look back and appreciate that Prof. Voth has
guided me in research, academics, career and life in general. I will always be
deeply grateful to him for his steady and encouraging efforts to develop me as a
physicist, both in the lab and in the classroom. I have no doubt that the opportunity that Prof. Voth provided me to be his research student for three summers was indispensable in the making of this thesis.
All the data that I used for analysis have been produced by Dan Blum, a
PhD student of Prof. Voth. Without Dan’s diligence and helpfulness, it would
be unlikely for me to achieve what I have done today. I sincerely thank him for
everything, including information and nicely formatted figures from a recently
published paper.
My parents have always strived to give me the best possible education, and
this has been instrumental in building my academic foundation. Without my
family’s love, dreams and endeavors, I would not have been able to undertake this thesis project successfully.
During my years at Voth lab, I have had the pleasure of interacting with, or at
least knowing, superb graduate and undergraduate students. Some were involved
in building components of the experiment that I have studied. Many contributed
to building the solid tank and oscillating grid system, and the image compression
circuit is the fruit of the hard work of a long list of advisees of Prof. Voth. They all
contributed to my research, and I am extremely thankful for their efforts. I am
also deeply indebted to the Physics Department at Wesleyan University for being
warm and family-like in nurturing young students like me. Also, all my labmates,
my friends and my hallmates have been greatly supportive and enthusiastic about
my thesis and research. It has been a pleasure talking about my research with
them, and I have always been positively driven by their interest in what I am
doing.
Contents
1 Theories
1.1 Fluids and Navier Stokes equation
1.2 Reynolds number
1.3 Lagrangian Statistics and its Importance
1.4 LVSF (Lagrangian Velocity Structure Function)
1.5 Kolmogorov Theory
1.6 Kolmogorov's predictions for second order LVSFs
1.7 Turbulence and Research
2 The Experiment
2.1 Apparatus
2.2 Demands of Lagrangian statistics
2.3 Detection
2.3.1 Laser
2.3.2 Cameras
2.3.3 Image compression circuit
2.4 Stereomatching: the final step
3 Measurement Volume Bias in Lagrangian Statistics
3.1 The possible sources of bias
3.2 Estimating the possible bias in structure functions
3.2.1 Structure function vs volume
3.2.2 Particle density vs volume
3.2.3 Structure function vs volume again
3.3 Extrapolating the Structure Function
3.4 Comparison of errors
4 Conditional structure functions and their significance
4.1 Eulerian Structure functions
4.2 Determining the large and small scale velocities
4.3 Conditioned structure functions
4.3.1 Eulerian structure functions
4.3.2 Lagrangian structure functions
4.4 Effects of Inhomogeneity
4.5 Other Causes of Dependence
4.5.1 Kinematic Correlation
4.5.2 Reynolds number
4.6 Conclusion
5.1 Decomposition of quantities
5.1.1 Velocity decomposition
5.2 Energy Equation
5.2.1 Reynolds equation
5.2.4 Energy budget for the center of the tank
5.3 Energy decay
5.4 Conclusion
6 Conclusion
List of Figures
2.1 A diagram of the experiment containing the tank illuminated by the laser beam, the cameras and the grids.
2.2 Flowchart showing the determination of 3D velocities from camera images of tracer particles.
3.1 Second order LVSF vs timescale for different artificial measurement volumes, with radii ranging from 0.5 cm to 4.5 cm in increments of 0.5 cm.
3.2 Comparing the average particle density at each measurement volume.
3.3 Second order LVSF vs timescale for different artificial measurement volumes, with radii ranging from 0.8 cm to 2 cm.
3.4 Second order LVSF vs measurement volume radius r for different timescales up to the inertial subrange.
3.5 Second order LVSF and its functional fit vs measurement volume radius r for distinct timescales up to the inertial subrange.
3.6 Second order LVSF (both the data and the extrapolation) vs timescale.
4.1 Eulerian second order conditional structure function versus large scale velocity at the center of the tank.
4.2 Lagrangian second order conditional structure function versus timescale for different instantaneous velocities at the center of the tank.
4.3 Lagrangian second order conditional structure function versus large scale velocity, for different timescales at the center of the tank.
4.4 Lagrangian second order conditional structure function versus large scale velocity, for different timescales in the near grid region.
4.5 Eulerian second order conditional structure function versus large scale velocity; comparing the dependence in different Reynolds number turbulence.
4.6 Second order conditional structure function versus large scale velocity magnitude at the center of the tank.
5.1 The axial component of velocity against time on the centerline of a turbulent jet (figure taken from the experiment of Tong and Warhaft (1995) as published by Pope [1]).
5.2 The mean kinetic energy (KE) of a particle vs time (as a multiple of the Kolmogorov timescale) at different detection volumes (slab and cube) in the center of the tank.
5.3 Mean KE vs time (as a multiple of the Kolmogorov timescale) along a particle trajectory for cubes of different sizes but with a common center at the homogeneous region of the flow.
List of Tables
3.1 Comparison of the error estimates obtained by our method and the one Berg et al. developed, for different timescales.
4.1 Comparing the standard deviation of different components of particle velocity at the center of the tank.
5.1 The estimated values of the production term, the dissipation term, and the velocity transport term in the turbulent kinetic energy equation for the center of the tank.
Chapter 1

Theories
Turbulence is a common phenomenon in our everyday life. River water flowing at high speed, air in the wake of a speeding car, and smoke emitted from a power plant chimney are all turbulent. Turbulence can be a nuisance at times. Turbulence in
the air is a concern during flights, and turbulent water can be very destructive
during natural calamities like floods. On many occasions, turbulence is desirable.
Turbulent fluids are very efficient at mixing different components in a solution.
This mixing property also makes it useful in combustion in engines, as the fuel
needs to mix well with oxygen. Besides having such industrial applications, tur-
bulence comes into play in a host of other processes like cloud formation and
transport of pollutants.
Broadly speaking, the goal of my research is to better understand turbulence.
Before describing turbulence as my field of research and before explaining my spe-
cific area of interest in turbulence, I will introduce some basic physics concepts
relevant to the field.
1.1 Fluids and Navier Stokes equation
Before quantifying turbulence, it is important to understand what a fluid is.
Anything that flows is a fluid. This definition includes all liquids and gases.
However, not all fluids have the same macroscopic properties of compressibility
and viscosity. As long as the fluid velocities remain low compared with the speed
of sound, a fluid is incompressible, so air and water are incompressible in a wide
range of circumstances. Also, they have constant viscosity and so belong to the
class of fluids called Newtonian fluids. Let’s denote the velocity field of a fluid by
U(r, t), where r is the position vector and t is time. Then, the assumption that
the density of air and water do not change means the divergence of their velocity
field is zero:
∇ ·U(r, t) = 0 (1.1)
Eq 1.1 is actually a result of the principle of conservation of mass applied to a
constant density fluid [1]. The behavior of all Newtonian fluids can be described
by the Navier Stokes equation, which is really a momentum conservation equation
in fluids:
DU/Dt = −(1/ρ)∇p + ν∇²U    (1.2)

where DU/Dt = ∂U/∂t + (U · ∇)U is the material derivative; p(r, t) is the pressure field; ρ is the fluid density; and ν is the kinematic viscosity of the fluid. Sometimes, it
is convenient to write equations in tensor notation. The advantage of expressing
Eqs 1.1 and 1.2 in tensor notation is clear when one derives other equations
from them. Below are the tensor versions of the Navier Stokes equation and the
’incompressibility’ equation:
DU_j/Dt = −(1/ρ) ∂p/∂x_j + ν ∂²U_j/∂x_i∂x_i    (1.3)

∂U_i/∂x_i = 0    (1.4)
Perhaps, it is also insightful to present the non-dimensional version of Eq 1.2 to
see a powerful property of the Navier Stokes equation. This needs the introduction
of the Reynolds number first.
1.2 Reynolds number
A widely used characteristic of fluid flows is Reynolds number, defined as
Re = UL/ν    (1.5)
where U and L are the characteristic speed and lengthscale of the flow respectively,
and ν is the kinematic viscosity of the fluid. Reynolds number is seen as an
indicator of the intensity of turbulence taking place in a flow: the higher the
Reynolds number, the more intense the turbulence. Most flows show an onset
of turbulence at Reynolds number of a few thousand. It should be noted that
Reynolds number is dimensionless. Usually, turbulence researchers use the Taylor-
scale Reynolds number

Rλ = √(15 Re)    (1.6)
With the knowledge of Reynolds number, it is now possible to understand the
non-dimensional form of Navier Stokes equation (Eq 1.2):
∂U*/∂t* + (U* · ∇*)U* = −∇*p* + (1/Re)∇*²U*    (1.7)

where the starred variables are non-dimensional: U* = U/U0, x* = x/L, t* = tU0/L, and p* = p/(ρU0²).
From Eq 1.7, it is clear that water and air will behave in the same way if they
have identical Reynolds number and flow geometry. This is a big advantage as it
makes possible the comparison of experiments using air with those using water.
1.3 Lagrangian Statistics and its Importance
Before discussing the theories of turbulence, it is important to understand La-
grangian analysis: the study of the properties of a fluid element as it moves around
in the fluid. This is different from the Eulerian concept, which involves monitoring
the behavior of the fluid at fixed positions in space. Temporal evolution of the
properties of fluid elements could be studied in two ways. One is keeping track
of a specific fluid element as it moves to different positions with time. Another
is just recording the events happening at a fixed point in the fluid with time.
The second method has a drawback, which will be clear once energy cascade is
discussed in Section 1.5.
It is important to understand why Lagrangian statistics should be explored. In
other words, why should it matter what a fluid particle does along its trajectory with time? We want to understand whether the properties that a particle
picks up at a particular locality of the fluid have any correlation with its properties
some time later during its motion. For instance, our experiment is a system where
kinetic energy is passed on to the fluid at an inhomogeneous region of the fluid.
The particles then move to the homogeneous region of the fluid in the course of
time. We want to understand whether the properties of the particle at the inho-
mogeneous origin have any influence on its behavior in the homogeneous region or
vice versa. The right way to do this is by tracking the particle and studying the
correlation of its properties, say velocity values, at different times. So the whole
point of Lagrangian analysis of turbulence is to figure out whether fluid particles
‘remember’ their kinetic properties from some past time during their motion in a
trajectory.
There are various Lagrangian quantities that can be measured; one Lagrangian
entity has been widely measured in different experiments. It is introduced in the
next section.
1.4 LVSF (Lagrangian Velocity Structure Function)
Suppose we are following a specific fluid element during its motion with velocity U, where U(t) = u(t)i + v(t)j + w(t)k. Then we can define the pth order Lagrangian Velocity Structure Function (LVSF) to be

Sp(τ) = ⟨(uτ)^p⟩ = ⟨(u(t+τ) − u(t))^p⟩    (1.8)

where u(t) is a component of particle velocity at time t, τ is the time lag, and p is the order of the function. With this, the second order LVSF (in the x-component of velocity) is

S2(τ) = ⟨(u(t+τ) − u(t))²⟩    (1.9)
It is worth going back to the issue of why Lagrangian study is important. Using
the second order Lagrangian velocity structure function as an example, we see that
the structure function is nothing but the ‘correlation’ between the velocities of an
average particle some timescale τ apart. In other words, it explores if the velocity
of a particle at a time t is ‘correlated’ with its velocity at time t + τ.
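To make the definition concrete, here is a minimal MATLAB sketch of how Eq 1.9 could be evaluated from tracked trajectories. The variable names (tracks, lags) are hypothetical and this is not the analysis code used for this thesis.

function S2 = lvsf2(tracks, lags)
    % Second order Lagrangian velocity structure function, Eq 1.9:
    % S2(tau) = <(u(t+tau) - u(t))^2>, averaged over all particles and times.
    % tracks{k}: one velocity component along trajectory k, sampled at a fixed
    % frame rate; lags: vector of time lags expressed in frames.
    S2 = zeros(size(lags));
    counts = zeros(size(lags));
    for k = 1:numel(tracks)
        u = tracks{k}(:);
        for j = 1:numel(lags)
            L = lags(j);                        % time lag in frames
            if numel(u) > L
                du = u(1+L:end) - u(1:end-L);   % velocity increments at this lag
                S2(j) = S2(j) + sum(du.^2);
                counts(j) = counts(j) + numel(du);
            end
        end
    end
    S2 = S2 ./ max(counts, 1);                  % ensemble average over all pairs
end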
1.5 Kolmogorov Theory
The first concept to have profound significance in the development of theories
about turbulence was that of energy cascade by Richardson in 1921 [1]. He pro-
posed the idea that energy introduced through large scale eddies in turbulence is
handed down to smaller and smaller eddies. The cascading of energy terminates
as the eddies reach the smallest size possible, and energy is dissipated then. This
powerful idea of cascading of energy and scale of motion inspired Kolmogorov to
hypothesize three statements about homogeneous and isotropic turbulence. Col-
lectively called K41, they were:
1. Local Isotropy: In a very high Reynolds number turbulence, all small scale
motions are statistically isotropic.
2. First Similarity Hypothesis: In a very high Reynolds number turbulence,
all small scale motions are universal and are determined by the mean energy
dissipation rate ε and the viscosity ν.
3. Second Similarity Hypothesis: In a very high Reynolds number turbu-
lence, the inertial range statistics depend only on the mean energy dissipa-
tion rate and are independent of the viscosity of the fluid.
Energy cascade dictates that at any given time, eddies of different scales are
present in a fully developed turbulence. A small scale eddy could be swept by
a large scale motion at a fixed point in a fluid. If we are keeping track of the
properties of a fluid at a fixed position, the properties of large scales motions would
suddenly dominate at that point. This would result in a mix-up of properties of
different scales. In order to avoid this confusion due to the sweeping action by
large scales, fluid elements are tracked wherever they go in the Lagrangian study
of fluids.
1.6 Kolmogorov’s predictions for second order LVSFs
With the above three hypotheses in mind, and using dimensional analysis, we
can infer the following behavior of the second order LVSFs in different length
scales. For a very small time lag τ , it is possible to write Eq 1.9 as
S2(τ) = ⟨(du/dt)²⟩ τ²    (1.10)

So we expect the second order LVSF to be quadratic in the time lag τ for very small values of τ. The acceleration variance ⟨(du/dt)²⟩ has been
determined for fully developed turbulence by Bodenschatz et al. [2]. In the inertial
subrange of energy cascade, when there is neither production nor dissipation of
energy, Kolmogorov’s prediction for second order LVSFs using Second Similarity
Hypothesis and dimensional analysis is
S2(τ) = C_p^(L) ε τ    (1.11)

where C_p^(L) is the Lagrangian Kolmogorov constant (whose value is estimated to
be about 6 in literature), and ε is the mean energy dissipation rate of the turbulent
system. Thus, in the inertial range, where the energy is just passed from large
scale eddies to smaller ones, the second order LVSF is linear with τ . We can also
expand the second order LVSF algebraically:
S2(τ) = ⟨(u(t+τ) − u(t))²⟩ = ⟨u²(t+τ)⟩ + ⟨u²(t)⟩ − 2⟨u(t+τ)u(t)⟩    (1.12)

Since the flow is considered to be statistically stationary, we can assume that ⟨u²(t+τ)⟩ = ⟨u²(t)⟩ for any time lag τ. Also, at large time lags, there is no
correlation between the velocities. This makes the correlation term (the third
term in Eq 1.12) zero. Thus the second order LVSF has a constant value for large
values of τ :
S2(τ) = 2⟨u²(t)⟩    (1.13)
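As a rough illustration of these three limiting behaviors, the following MATLAB sketch evaluates and plots the dissipative, inertial, and large-lag forms of S2(τ). The acceleration and velocity variances used here are placeholder values, not measurements from this experiment; only ε is the value quoted later in Chapter 2.

% Illustrative values only, used to compare the three limiting forms above.
a2   = 1e-3;        % assumed acceleration variance <(du/dt)^2> [m^2 s^-4]
epsl = 2.46e-3;     % mean energy dissipation rate [m^2 s^-3] (value used in Ch. 2)
C0   = 6;           % Lagrangian Kolmogorov constant, literature estimate
u2   = 1e-3;        % assumed single-component velocity variance <u^2> [m^2 s^-2]
tau  = logspace(-3, 1, 200);              % time lags [s]
S2_small = a2 * tau.^2;                   % Eq 1.10: quadratic at small tau
S2_inert = C0 * epsl * tau;               % Eq 1.11: linear in the inertial range
S2_large = 2 * u2 * ones(size(tau));      % Eq 1.13: plateau at large tau
loglog(tau, S2_small, tau, S2_inert, tau, S2_large)
xlabel('\tau (s)'), ylabel('S_2(\tau) (m^2 s^{-2})')
legend('dissipative', 'inertial', 'large-lag plateau')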
1.7 Turbulence and Research
A clear understanding of the Lagrangian framework is essential to correctly
model the behaviors of particles in turbulent systems. In the last decade or so, the
trend in turbulence research has been to explore Lagrangian statistics in different
flows, which was not possible earlier. As mentioned in section 1.3, Lagrangian
study of turbulent fluids requires tracking of particles ‘uninterruptedly’. High
speed cameras taking hundreds of pictures of particles every second produce an enormous amount of data, saturating ordinary computer memory in a few seconds. Fortunately, a system capable of extracting only useful information
from the camera images (thereby needing significantly less memory) was devel-
oped at the Voth lab in Wesleyan University. Many graduate students worked
with Professor Greg Voth (at different times) in building this image compression
system. This allowed even simple desktop computers to store information about
the tracked particles continuously for hours, without worrying about memory lim-
itations.
When I joined the Voth lab in the Spring of 2007, thanks to the hard work of PhD student Dan Blum, particle tracking data from a few runs of the experiment were already available. Dan was focusing on the Eulerian perspective of turbu-
lence (which will be explained in due course). In the beginning, I worked on 3D
calibration and particle tracking. Later, I focussed on the Lagrangian analysis of
our data. Since then, I have devoted myself to making various Lagrangian mea-
surements, primarily the Lagrangian structure functions in our experiment.
From the very definition of Lagrangian statistics, it is easy to guess that my
goal is to study whether fluid particles at any given time exhibit the kinetic prop-
erties they gained at another location in an earlier time. We use a flow between
two oscillating grids, which is largely inhomogeneous near the grids and relatively
homogeneous in the center. Particles are set in motion near the grids, and move
to different parts, including to the center, with time. In such a non-uniform tur-
bulence system, it is interesting to investigate to what extent a particle retains
the velocity information from its past along its trajectory.
In looking for the signs of ‘memory’ of particles in our experiment, I stud-
ied the behavior of Lagrangian structure functions when they are conditioned on
large scale velocities. The results have already been published in a paper, and I
consider them my major contribution. Measurement volume bias has long been
considered an issue with Lagrangian statistics, as we will see in Chapter 3. I
developed a method to estimate how big the error due to the bias is at different
timescales in second order LVSFs. Thus we were able to leave out the results
that would have a high percentage of error by our estimation. The error estimates that we had already calculated seem to agree roughly with a recently published
paper by Berg et al. [3]. In the final stages of my research with Prof. Voth, I
also studied the transport of energy in a Lagrangian context in our flow. Here
again, my interest was to study what factors caused the transport of energy from
the inhomogeneous and energetic parts to a quiescent homogenous region of our
experiment. I suggested that some agents were more important than others in
transporting turbulent kinetic energy.
My goal will be to convey those three important results, their consequences
and their accompanying theories in separate chapters. But first, I will describe
the experiment and calculate a few quantities to highlight the importance of some special equipment and materials in our experiment. The final chapters will talk
about the results.
Chapter 2
The Experiment
When I started research in January 2007, the experimental setup was complete
and it had already gone through a few successful runs of the experiment. The basic
premise on which the experiment was built was keeping track of fluid elements in
a turbulent flow so that important information like the position and the velocity
of the elements is recorded accurately and in sufficient amount. From there,
various statistics like the mean velocity and Lagrangian structure functions are
calculated. This fundamental need to track fluid elements to study their behavior
as they move around in a fluid is experimentally challenging in many ways. I will
mainly describe the apparatus that was used to meet those challenges, sometimes
with the help of dimensional analysis and arithmetic.
2.1 Apparatus

2.1.1 Sequence of events
Figure 2.1: The experiment containing a tank, two cameras and two grids (figure
adapted from Blum et al. [4]). The green cylinder is the expanded laser beam.
The main body of the apparatus consists of a transparent tank containing wa-
ter with tracer particles in it. Two oscillating grids in the tank bring turbulent
motion in the water. An expanded laser beam illuminates a region in the tank.
Two cameras take pictures of the laser-illuminated part of the tank. The camera
images go through an image compression circuit, which extracts only useful in-
formation from the images. This information from the circuit is then sent to the
computer, where the particles are identified and their 3D positions and velocities
determined. Data files in hard disks store all the information about the detected
particles. Computer programs access these files and help understand the behav-
ior of all detected tracer particles along their trajectories. For instance, I access
the 3D velocity information from the data files to generate Lagrangian structure
functions.
2.1.2.1 Tank
We use a large transparent tank to hold water in our experiment. It has
an octagonal cross-section of width 1 m. The height of the tank is 1.5 m. It
is made of Plexiglas, which is strong and light. Plexiglas is also unaffected by moisture, making it all the more reliable for our experiment. But most importantly,
it is incredibly transparent, which is a necessity in particle tracking. The tank
rests on a table about 4 feet high, and it can hold approximately 1100 liters (300
gallons) of water. Due to the large volume of water being used, any leakage of water
from the tank can cause damage to other equipment in the lab. Thus, equipment
in the lab is not left on the floor, to keep lab items safe in the event of flooding.
2.1.2.2 Grids
Two identical grids are used inside the tank to generate turbulence in the
water. Each grid is octagonal, to match the cross section of the tank. The
mesh size of the grids is 8 cm. The grids have 36% solidity. The distance between
the upper grid and the top part of the tank is the same as that between the lower
grid and the bottom. The two grids are 56.2 cm apart at all times. Four rods
pass through both the grids and are eventually connected to a motor. The 11 kW
motor drives the four rods in an identical manner, causing the grid-pair to oscillate
in phase. The stroke of the oscillation is 12 cm peak-to-peak. The oscillation of
the grids can be controlled; while taking the data that I analyzed for this thesis,
the frequency of the grids’ oscillation was 5 Hz.
2.1.2.3 Water
As mentioned in Chapter 1, water is an incompressible fluid. For this
and a variety of other properties, water was used as the fluid to be studied in the
experiment. Water in the tank needs to be clear and free of impurities like foreign
particles that can be mistaken for tracer particles. The presence of air bubbles in
water creates the same confusion, just on a bigger scale. So water is filtered and
degassed before being pumped into the tank. About 1100 liters (300 gallons) of
water is contained in the tank during a run of the experiment. In order to avoid
changes in the viscosity or the index of refraction, water in the tank is maintained
within ±0.1 °C of the initial temperature of about 22 °C.
2.1.2.4 Tracer particles
Tiny polystyrene tracer particles are seeded in the water inside the tank. Each
such particle is 136 µm in diameter and is neutrally buoyant. The size and the
buoyancy of the particles make them ideal for the experiment, as they are being
used to represent fluid elements. The particles were added until an optimum
particle density of about 50 per frame was achieved in each camera. With this
particle density, the amount of data per frame was maximized while at the same
time, the error in tracking was minimized.
2.2 Demands of Lagrangian statistics
Here I quantify the smallest length and time scales in our flow and identify
the camera frame rate necessary to resolve these scales.
Kolmogorov’s First Similarity hypothesis (see Chapter 1) states that a quantity
should only depend on the mean energy dissipation rate ε and the kinematic
viscosity of the fluid ν at the smallest scale during energy cascade. By means
of dimensional analysis, we find that the smallest length scale η of the flow is
uniquely determined by

η = (ν³/ε)^(1/4) = 142 µm
where ε = 0.00246 m²s⁻³ is determined from the Eulerian structure functions as described by Blum et al. [4]. ν = 10⁻⁶ m²s⁻¹ is a property of water. We can see
that the diameter of the tracer particles (136 µm) is slightly less than the smallest
length scale of the flow. This consolidates our belief that the tracer particles
can represent the motion of fluid elements correctly. Similarly, we can uniquely
determine the smallest time scale of our flow using ε and ν:

τη = (ν/ε)^(1/2) = 20 milliseconds
This is again a good value, when it is compared with the duration between two
successive frames of the camera. When our camera runs at a speed of 500 Hz, the
time between two frames is 2 milliseconds. This is a tenth of the smallest timescale
of the flow (we are resolving down to one tenth of the Kolmogorov timescale). Thus,
a frame rate of 500 Hz allows us to do particle tracking comfortably in all time
scales in our experiment.
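The two estimates above follow directly from dimensional analysis; here is a minimal MATLAB sketch of the arithmetic, using the values of ν and ε quoted above.

% Minimal sketch of the dimensional-analysis estimates above.
nu   = 1e-6;       % kinematic viscosity of water [m^2/s]
epsl = 2.46e-3;    % mean energy dissipation rate [m^2/s^3], from Blum et al. [4]
eta      = (nu^3 / epsl)^(1/4);   % Kolmogorov length scale, about 142 micrometers
tau_eta  = sqrt(nu / epsl);       % Kolmogorov time scale, about 20 ms
frame_dt = 1/500;                 % time between camera frames at 500 Hz [s]
fprintf('eta = %.0f um, tau_eta = %.1f ms, frames per tau_eta = %.1f\n', ...
        eta*1e6, tau_eta*1e3, tau_eta/frame_dt)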
2.3 Detection
We want to track and record the position and the brightness of tracer particles
when water is turbulent. This is accomplished using special equipment such as high speed cameras, a laser, and an image compression system.
2.3.1 Laser
The tracer particles are so tiny that they have to be illuminated by a laser so
that the cameras can detect them. We used a 532 nm pulsed Nd:YAG laser for this
purpose. On average, the laser generates 50 W of power with pulses only 200 ns
in duration. In order to track particles over a reasonable duration, the detection
volume of cameras cannot be very small. So the laser beam was expanded to
create an illumination volume of dimensions 7 cm × 4 cm × 5 cm in the tank.
The expanded beam can be directed to different places in the tank depending on
whether we want to study the center or the region near the grids.
2.3.2 Cameras
Two cameras took pictures of a fraction of the illuminated region of the tank.
The tracer particles looked bright in the camera images owing to laser illumina-
tion. Both the cameras were Basler A504K video cameras that could produce
images with 1280 × 1024 pixel resolution at a speed of 500 frames per second. As
was shown in Section 2.2, a frame rate of 500 Hz is desirable. However, it posed a
big problem: data was produced by each camera at a rate of 625 megabytes per
second. This means the 4 Gigabyte Random Access Memory (RAM) of the com-
puter connected to the cameras could hold data only for about 7 seconds. After
this, the cameras would have to be stopped for 7 minutes so that the enormous
amount of data could be downloaded to a computer hard disk from the RAM.
The larger the number of data points, the better the statistical analysis. This is
true for Lagrangian statistics too. From the definition of Lagrangian structure
functions, it is easy to see that the more particles we detect, and
the longer the particle trajectories are, the better the measurement. Stopping a
run every 7 seconds to download data would only cut short the trajectories in
view.
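A quick sketch of this data-rate arithmetic (assuming one byte per pixel, which reproduces the numbers quoted above):

% Sketch of the camera data-rate arithmetic (assuming 1 byte per pixel).
pixels_per_frame = 1280 * 1024;                 % sensor resolution
fps              = 500;                         % frame rate [Hz]
bytes_per_sec    = pixels_per_frame * fps;      % about 625 MB/s per camera
ram_bytes        = 4 * 2^30;                    % 4 gigabytes of RAM
fill_time        = ram_bytes / bytes_per_sec;   % about 6.6 s, i.e. roughly 7 s
fprintf('%.0f MB/s, RAM full after %.1f s\n', bytes_per_sec/2^20, fill_time)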
2.3.3 Image compression circuit
The computer memory limitation just discussed was tackled using an image com-
pression circuit [4]. The image compression circuit was placed between each cam-
era and the computer connected to the camera. From each camera image, the
circuit selects pixels that contain brightness over a certain threshold level and
ignores the rest of the image in real time. This is done to pick only useful in-
formation, that is the particle center position and brightness of every detected
particle. The discarded background information would otherwise take up a huge amount of storage on the computer. With this method, data files were compressed by a factor of 100 to 1000 relative to their original size. Thus, it was possible to take data for hours and store all
the information in an ordinary hard disk without stopping the experiment.
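The following MATLAB sketch illustrates the idea of the compression step in software; it is not the logic of the actual circuit. It assumes the Image Processing Toolbox for blob labeling, and the function and variable names are hypothetical.

function [centers, brightness] = compress_frame(img, threshold)
    % Keep only pixels brighter than the threshold and reduce each detected
    % particle to a brightness-weighted centroid and its total brightness.
    mask = img > threshold;                     % bright pixels only
    cc   = bwconncomp(mask);                    % group them into particle blobs
    centers    = zeros(cc.NumObjects, 2);
    brightness = zeros(cc.NumObjects, 1);
    for k = 1:cc.NumObjects
        idx = cc.PixelIdxList{k};
        [row, col] = ind2sub(size(img), idx);
        w = double(img(idx));                   % pixel brightness as weights
        centers(k, :) = [sum(w .* col), sum(w .* row)] / sum(w);
        brightness(k) = sum(w);
    end
end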
2.4 Stereomatching: the final step
Data files from the image compression circuit contain the 2D position (of
the center) and brightness of every particle tracked by each camera. Using the
information from the image compression circuit, the 3D position of each particle
is found by the process of stereomatching. The 3D position and the magnification
of each camera have to be known for stereomatching. We obtained an accuracy of
about 11 µm in the stereomatching process. To obtain this level of accuracy, it is
necessary to have very good calibration of the camera position parameters. We start from camera position parameters obtained by a traditional calibration. Then, using those parameters, we stereomatch known pairs of particles from the two cameras. We run a nonlinear
optimization to minimize the error in stereomatching and find the optimal camera
position parameters.
With the camera positions known, it is possible to calculate the 3D position of
a particle from the 2D positions as seen from the perspectives of several cameras.
The pixel coordinates of each particle seen on a camera indicate that the particle
must lie somewhere along a specific line in 3D space. If the minimum distance
between lines from multiple cameras is within a certain tolerance, then we have
the 3D position of a particle. Thus, we obtain a list of the 3D positions of each
particle that came into the field of view as a function of time. From position and
time information, the 3D velocity can be found. This is the ultimate objective of particle tracking. We use computer languages (mainly MATLAB) to analyze this data in various ways. This process is shown in a flowchart in Figure 2.2.
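The geometric core of stereomatching, finding the closest approach of the two camera rays, can be sketched as follows. This is a generic construction, not the lab's actual stereomatching code, and the function name is hypothetical.

function [X, miss] = intersect_rays(p1, d1, p2, d2)
    % p1, p2: a point on each camera ray (3x1); d1, d2: unit direction vectors.
    % Returns the midpoint of closest approach and the closest-approach distance.
    a = d1' * d1;  b = d1' * d2;  c = d2' * d2;
    w = p1 - p2;
    d = d1' * w;   e = d2' * w;
    denom = a * c - b^2;                 % near zero only for parallel rays
    s = (b * e - c * d) / denom;         % parameter along ray 1
    t = (a * e - b * d) / denom;         % parameter along ray 2
    q1 = p1 + s * d1;  q2 = p2 + t * d2; % closest points on the two rays
    X    = (q1 + q2) / 2;                % candidate 3D particle position
    miss = norm(q1 - q2);                % compare against the matching tolerance
end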
Figure 2.2: Flowchart showing the determination of 3D velocities from camera images
of tracer particles
Chapter 3

Measurement Volume Bias in Lagrangian Statistics

3.1 The possible sources of bias
From the description of the experiment in Chapter 2, it is easy to see that we
use a large volume of water and numerous tracer particles in the tank. However,
the cameras can only capture a limited region, called the measurement volume,
of the tank if they are to produce images with desirable resolution. As a result,
the particles that lie outside the measurement volume cannot be detected.
Not all tracer particles in the measurement volume have the same speed. Some
have speeds high enough to move through the measurement volume in a very brief
duration. Others are slow enough to linger for a relatively long time. The longest
duration for which a tracer particle has been detected in our experiment is about
3 seconds. When making measurements, we usually take the ensemble average
of a quantity we measure for all detected particles. Since the slow particles stay
in the detection volume for longer durations than their fast counterparts, it is
possible that the data obtained from the cameras is mostly of the slow particles
and not of the high-speed ones. Hence there is a possible bias towards slow moving particles in the measured quantity. This causes a statistic like the LVSF (Lagrangian Velocity Structure Function), which is a velocity ensemble average quantity, to be an underestimate of the actual value.
However, there is another possibility that we need to consider. The probability
of a particle entering the detection volume depends on its velocity [5]. A high
speed particle is more likely to enter the detection volume than a slower one. This
should lead to velocity measurements biased towards fast-moving particles.
Thus we have two hypotheses here, leading to opposite conclusions. Buchhave
et al. [6] resolved this confusion by showing that as long as measurement is made
for the entire time the particle is in view, the two velocity bias effects mentioned
earlier negate each other exactly. Thus, if we had to measure a single-time ve-
locity statistic like the mean velocity of a particle, we would simply add up the
velocities of the particle at all times inside the detection volume (and of course,
divide the sum by total number of data points).
Structure function, however, is not as simple as a single-time quantity like
the mean velocity. By definition, structure function is a two-time quantity. Per-
haps, this is best explained by invoking the mathematical definition of a pth order
structure function presented earlier as Eq 1.8 in Chapter 1:

Sp(τ) = ⟨(uτ)^p⟩ = ⟨(u(t+τ) − u(t))^p⟩
where u(t) is a component of particle velocity at time t, τ is the time lag, and p is
the order of the function. We need velocity information from two different times,
some timelag apart, for every particle in order to obtain the value of structure
function for every time lag. For small time lags, it is possible to use all the
velocity pair-values of particles that lie inside the measurement volume. However,
it is also possible that the timelag τ is sufficiently large that a particle can traverse
the observation volume in time t less than τ . In such cases (t < τ), the particle
in view will not contribute to the measurement of structure function for timelags
greater than τ . Thus the very nature of the structure function prevents a bias-
free evaluation; this bias in the measurement of time-delay statistics is called the
‘measurement volume bias’.
3.2 Estimating the possible bias in structure functions
Quantifying the error due to measurement volume bias is very important as
it might be a significant percentage of the quantity we are measuring. Data with
only a small percentage of this error can be used for reliable Lagrangian analysis. However, if this error is significant, correction is necessary in our
estimation of quantities. Berg et al. [3] have developed a theoretical method of
estimating the error in LVSFs due to measurement volume bias, building on the
ideas first presented by Ott and Mann [7]. With their method, one can estimate
the percentage of error in a pth order LVSF occurring at any timescale τ for an
observation sphere of radius r. We use a different approach to estimating the
bias error in second order LVSFs in our experiment, following a method that we
independently developed before Berg et al. [3] published their results. Later we
compare the results from the two different methods and see that they roughly
agree.
3.2.1 Structure function vs volume
Our first step in studying detection volume effects on second order LVSFs was
examining how the second order LVSF varied with different observation volumes.
The detection region for the cameras was kept constant for the entire run of the
experiment. Instead, we artificially changed the measurement volume, starting
from a very small sphere (radius 0.5 cm) and going up to a sphere (radius 4.5 cm)
comparable in size to the actual observation volume of the experiment. The origin
of the concentric artificial spheres was very close to the center of the tank. For
each artificial volume, we calculated the second order Structure function. When
we plotted all the curves on the same axes, Fig 3.1 was obtained.
In Fig 3.1, the top red dotted curve represents the second order LVSF for all data points. Each of the other 9 solid curves is the structure function for a particular volume. One feature of this multiple plot is the coincidence of all the curves at very small timelags (τ/τη < 2). This agrees with Buchhave et al. [6]. For small
Figure 3.1: Second order LVSF vs timescale for different artificial measurement vol-
umes. The radii of volumes range from 0.5 cm to 4.5 cm; each volume is represented
by a color as indicated in the legend (which lists colors and their corresponding radius
value); the top dotted red curve labeled ‘Run’ is the structure function obtained from
all trajectories available with no artificial volume restriction.
timelags, almost all particles in view will be contributing to structure function;
the residence time of the particles cannot be smaller than a certain timescale
on average, and so no bias occurs. When the time intervals get larger, some
particles in the detection volume with small residence time will not contribute
to the structure function (but this is less likely to happen in a bigger volume).
Thus, the structure function plots get progressively higher for bigger measurement
volumes when τ/τη > 2. We also see that the curves for the large volumes (radius > 3
cm) and the one for the run (all detected trajectories) coincide at all timescales.
This opens up the possibility that at large volumes, there might not be many
extra particles (and hence extra velocity information) for each additional volume.
If we are not tracking more particles in proportion with additional volume, then
every new plot of the structure function is going to look the same as the previous
one. Eventually, we will not be able to extrapolate a structure function in a large
radius limit, and estimation of the finite volume error would not be possible. So
we investigated the (tracked) particle density at each artificial volume.
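A minimal sketch of the artificial-volume procedure just described, assuming per-trajectory position and velocity arrays (hypothetical names) and reusing the lvsf2 helper sketched in Chapter 1: each trajectory is clipped to the pieces that stay inside a sphere of radius r_vol, and the second order LVSF is recomputed from those pieces.

function S2 = lvsf2_in_sphere(pos, vel, x0, r_vol, lags)
    % pos{k}: N_k x 3 positions along trajectory k, vel{k}: matching velocities,
    % x0: 1x3 sphere center, r_vol: sphere radius. Only the contiguous pieces of
    % each trajectory that stay inside the sphere contribute to the average.
    clipped = {};
    for k = 1:numel(pos)
        inside = sqrt(sum((pos{k} - x0).^2, 2)) < r_vol;
        edges  = diff([0; inside; 0]);        % mark entries into / exits from the sphere
        starts = find(edges == 1);
        stops  = find(edges == -1) - 1;
        for m = 1:numel(starts)
            clipped{end+1} = vel{k}(starts(m):stops(m), 1); %#ok<AGROW>
        end
    end
    S2 = lvsf2(clipped, lags);                % helper sketched in Chapter 1
end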
3.2.2 Particle density vs volume
For each artificial volume, we calculated how many particles the cameras had
detected per frame on average. If a particle appeared in five consecutive frames
inside an artificial volume, it would be counted as five different particles. As
long as the number of particles detected in the measurement volume is increasing
proportionally with the volume itself, we need not worry about sufficiency of data.
This is exactly what happens in our experiment, as shown by Fig 3.2. Up to a
radius of 2 cm, the slope of the ‘log(Cr³) vs r’ curve (dashed) is the same as that of the ‘log(particle number) vs r’ curve (solid). So we can comfortably claim that the number of particles detected in a volume scales as the cube of the radius of the corresponding volume as long as the radius in consideration is not more than 2 cm. We observe that the number of particles recorded in a sphere of radius 2.8
cm is approximately equal to that of the 3 cm radius sphere. Subsequent (larger)
Figure 3.2: The horizontal axis is the radius of a sphere whose center coincides with
the geometrical center of the tank. The blue line shows the number of particles for a
volume with radius r. The red line is a function Cr³ with C a constant; it is proportional
to the volume corresponding to a radius r. Note that both the axes have been plotted
on logarithmic scales for convenient comparison with power law behavior.
radii also have the same number of particles. The effective maximum observation volume of the experiment must therefore have a radius of about 2 cm. A sufficient number of particles is being detected up to a sphere of radius 2 cm; a measurement volume of radius 2.2 cm or 2.4 cm has all the information that greater volumes have. Now we look at the structure functions within the reliable volume of radius 2 cm and make an extrapolation.
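The particle-density check itself can be sketched in a few lines, assuming an array of all particle detections and a sphere center (hypothetical names):

% radii to test [m], detections: M x 3 positions (one row per detection per
% frame), x0: 1x3 sphere center.
radii  = 0.005:0.002:0.045;
counts = zeros(size(radii));
dist   = sqrt(sum((detections - x0).^2, 2));   % distance of every detection from x0
for j = 1:numel(radii)
    counts(j) = sum(dist < radii(j));          % detections inside each sphere
end
loglog(radii, counts, 'o-', radii, counts(1) * (radii / radii(1)).^3, '--')
xlabel('sphere radius r (m)'), ylabel('number of detections')
legend('data', 'Cr^3 reference')               % slopes should agree up to r of about 2 cm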
3.2.3 Structure function vs volume again
Each camera is set to capture a 2D plane of about 10 cm × 10 cm, so a measurement volume of at most about 1000 cm³ is possible. In reality, it is smaller than
this. We decided to stay within a safe region, which is a shell of inner radius 0.6
cm and outer radius 2 cm. The inner radius is fixed at 0.6 cm due to the fact
that the second order LVSF is noisy for smaller volumes. The noise comes from
the detection of insufficient number of particles and short tracks for the detected
ones.
Having established 2 cm as the upper limit of the artificial measurement vol-
ume, and 0.6 cm as the lower limit, we can check how the second order LVSF
varies for different artificial measurement volumes inside the tank. The volumes
are all spheres, with radii ranging from 0.8 cm to 2 cm in increments of 0.2 cm,
and centered at roughly the geometrical center of the tank. The results in Fig
3.3 show that the second order LVSF actually varies over the volume intervals
used, for all but the smallest timelags. The bigger the observation volume, the higher the
structure function. This is a clear sign of the presence of measurement volume
bias, especially when compared with the dotted curve (includes all the detected
particles in the experiment).
Fig 3.3 shows that the LVSF for the entire data is about 1.5 times that for
the smallest volume at τ = 7τη. This information is enough to convince us that
compensation is needed against volume bias. The other implication of Fig 3.3 is
that we should be able to extrapolate the LVSF for infinitely big detection vol-
ume, considering the nice pattern of curves for different volumes in Fig 3.3. The
Figure 3.3: Second order LVSF vs timescale for different artificial measurement vol-
umes; The radii of volumes range from 0.8 cm to 2 cm; each volume radius is repre-
sented by a color as indicated in the legend; the top dotted red curve labeled ‘Run’ is
the structure function obtained from all trajectories available with no artificial volume
restriction.
extrapolation, in turn, will enable us to estimate the bias error. The next section
discusses this.
3.3 Extrapolating the Structure Function
Fig 3.3 was a plot of the structure function against timescale for different radii. There is another way of presenting the same data: plotting the structure function against radius for different timescales. The range of timescales that interests us most is the inertial subrange, from about one to ten Kolmogorov times. There
is a pattern in structure function, which can be described by a mathematical
function depending on both the radius of sphere and the timescale. Figure 3.4
below has the details:
We needed a mathematical function of radius r such that it would yield
0.8 1 1.2 1.4 1.6 1.8 2 0.6
0.8
1
1.2
1.4
1.6
1.8
2
2 )
Figure 3.4: Second order LVSF vs measurement volume radius r; each blue curve
represents the structure function for a distinct timescale in the inertial subrange. The
curves get higher for larger timescales.
a limiting value of the structure function at a large radius limit. The function
should also have an additional responsibility of being a good fit for a family of
curves (for different timescales), and not just one curve. Also, there is no well
known mathematical form that the bias should obey, so we just looked to find
a function that would fit reasonably well with the data. As it turned out, the
following functional form satisfied our conditions:
f(r) = A[1 − B e^(−Cr^D)],    (3.1)
where r is the radius of the artificial measurement volume, and A, B, C and D
are all constants for a given timescale. The values of the constants A, B, C and
D are all unique for a given timescale in the inertial range. In the beginning, we
tried to fit a function f(r) with D = 1 but such a function would not fit well to
the data. In contrast, a function with D as a variable fit relatively well to the
structure function. Figure 3.5 shows what the data (blue circles) and the fit (red line) look like.
A quick calculation reveals that the function represented by Eq 3.1 will have
the value A when r tends to ∞. We fit the structure function to the function
f(r) using a fitting function in MATLAB, and obtain the optimum values of A
for all timescales in the inertial range (as we are mainly interested in the inertial
subrange behavior of structure functions). On plotting the optimum values of A
against corresponding timescale τ , we should get the extrapolation of the LVSF
based on the pattern in Fig 3.3.
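A minimal sketch of the fit for one timescale, using base-MATLAB fminsearch since the exact fitting routine used in the analysis is not specified here; r and S2r stand for the vector of radii and the measured structure function values at that timescale.

% r: vector of artificial sphere radii, S2r: measured S2 at one timescale.
model = @(p, r) p(1) * (1 - p(2) * exp(-p(3) * r.^p(4)));  % Eq 3.1: A[1 - B exp(-C r^D)]
cost  = @(p) sum((model(p, r) - S2r).^2);                  % least-squares misfit
p0    = [max(S2r), 0.5, 1, 1];                             % crude starting guess for A, B, C, D
pfit  = fminsearch(cost, p0);
A     = pfit(1);     % limiting value of S2 as r tends to infinity: the extrapolated point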
The dotted line in Fig 3.6 represents the extrapolation. The solid curve is the
LVSF for the entire data that we have. We can clearly see that the uncompensated
structure function is lower in value than the extrapolated one. This emphasizes
the need for estimating the error due to measurement volume bias in Lagrangian
Figure 3.5: Each set of blue circles (connected by a red line) represents ‘Second order
Lagrangian velocity structure function (LVSF)’ vs ‘measurement volume radius’ for a distinct timescale in the inertial subrange. Each set of blue circles is the same as a blue
curve in Figure 3.4. Each red line joining the blue circles is the fit of the function in
Equation 3.1 to the blue curves in Figure 3.4.
statistics and making compensation when the error is too high. The extrapolation
is only reliable between the timescales 4τη and 8τη due to statistical reasons and
the difficult nature of the family of curves. Below 4τη, the fit yields unusually large
values for some of the constants due to the flat nature of the Structure function
at small timescales. At timescales larger than 8τη, the extrapolation curve takes
off abruptly, as seen in Fig 3.6.
Figure 3.6: Second order LVSF vs timescale; The red dotted line is the structure
function for all available datapoints; the solid blue curve is our extrapolation of second
order structure function.
3.4 Comparison of errors
As promised earlier, a comparison will now be made between the error esti-
mates using our method and the one developed by Berg et al. [3]. Based on the
concept of Green’s functions, Berg et al. propose a theoretical method of estimating the error in pth order Lagrangian structure functions. For a second order structure function, the error was roughly proportional to the timescale τ, with a 5% error at 3.3τη given as a reference pair of values. From there, we extract the percentage error at any
timescale. The discrepancy seen in Fig 3.6 between the two curves is the error
that our method estimates. Since the fitting worked well for times between 4τη and 8τη inclusive, we only look at the error estimates at 4τη, 5τη, 6τη, 7τη and
8τη. Table 3.1 lists the error values as percentage of the uncompensated structure
function. The error values that the two methods estimate are somewhat close.
Timescale Our error estimate Error from Berg method
4τη 2.9% 6.1%
5τη 8.7% 7.6%
6τη 13.4% 9.1%
7τη 16.5% 10.6%
8τη 16.3% 12.1%
Table 3.1: Comparing the error estimates from two different methods
Most of the time, our method gives a slightly larger error. With the knowledge of
the size of error, we now study structure functions at timescales less than or equal
to 8τη, beginning from the next chapter.
Chapter 4
Conditional structure functions
and their significance
In the Lagrangian study of fluids, a fluid element is followed along its trajec-
tory and its properties are observed with time. Different from this is the Eulerian
study, which entails studying the properties of fluids at fixed points in space. Ob-
viously, in the Eulerian method, we will not be tracking the same particles all
the time. It is interesting that Lagrangian analysis is a relatively new field com-
pared to Eulerian study, owing to the experimental difficulties in particle tracking
until recent technological advances. Many Lagrangian entities like the structure
function and Kolmogorov’s hypotheses that I have referred to have Eulerian coun-
terparts.
As mentioned in Chapter 1, Richardson’s idea of energy cascade in turbulence
envisions the presence of large scale eddies, which break down into smaller ones.
The process ends in the smallest scale eddies, also called the Kolmogorov scale
eddies (1921). Kolmogorov’s hypotheses go further to claim that the small scale
statistics are universal and independent of the large ones in high Reynolds number
turbulence (1941). These concepts have been tested for decades by scientists. The
results are mixed; some small scale properties of the flow show dependence on the
large scale while others are independent. However, such studies have been done
from the Eulerian perspective only.
My research has centered on the Lagrangian aspect of turbulence. In fact,
investigating small scale dependence on the large scale in a Lagrangian perspec-
tive has been a major topic of my research. My results are actually the very
first ones in the Lagrangian framework to have ever been published. In the same
paper (by Blum et al. [4]), Dan B Blum reports his Eulerian measurements of
large scale dependence. Despite having focused on Lagrangian analysis, I will fol-
low the sequence of history by first introducing Eulerian structure functions and
then presenting evidence of a relation between large scales and small scales through
Eulerian structure functions. Lagrangian results will follow them.
4.1 Eulerian Structure functions
The Eulerian structure function is obtained from the velocity difference ur between two fluid elements a distance r apart. The pth order longitudinal Eulerian
velocity structure function is defined as
Dp(r) = ⟨(ur)^p⟩ = ⟨(u(x + r) − u(x))^p⟩    (4.1)
where u(x) is the particle velocity at position x, r is the 3D vector connecting
the two particles, ur is the projection of the 3D velocity difference vector u(x + r) − u(x) onto r, and p is the order of the function. With this definition,
the second order longitudinal Eulerian velocity structure function is
D2(r) = ⟨(ur)²⟩ = ⟨(u(x + r) − u(x))²⟩    (4.2)
Like for the Lagrangian structure functions, Kolmogorov’s hypotheses have pre-
dictions for the Eulerian structure functions. According to K41, in the inertial
subrange,
⟨(ur)^p⟩ = C_p^(E) (εr)^(p/3),    (4.3)

where C_p^(E) is the Eulerian Kolmogorov constant and ε is the mean energy dissipation
rate. With this brief introduction to longitudinal Eulerian structure functions, we
now go back to the topic of scale dependence.
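For completeness, here is a minimal sketch of how a second order longitudinal Eulerian structure function could be accumulated from discrete particle data in a single frame. The names are hypothetical and this is not the analysis code used for the measurements reported here.

function D2 = eulerian_d2(X, U, rbins)
    % X: N x 3 particle positions in one frame, U: N x 3 velocities,
    % rbins: edges of the separation-distance bins.
    D2 = zeros(numel(rbins) - 1, 1);
    counts = zeros(size(D2));
    N = size(X, 1);
    for i = 1:N-1
        for j = i+1:N
            rvec = X(j, :) - X(i, :);
            rmag = norm(rvec);
            ur   = (U(j, :) - U(i, :)) * (rvec' / rmag);   % longitudinal velocity increment
            b    = discretize(rmag, rbins);                % separation bin (NaN if out of range)
            if ~isnan(b)
                D2(b) = D2(b) + ur^2;
                counts(b) = counts(b) + 1;
            end
        end
    end
    D2 = D2 ./ max(counts, 1);                             % average within each bin
end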
4.2 Determining the large and small scale velocities
The instantaneous velocity of a particle is dominated by the large scales. So
it is reasonable to represent the large scale velocity by the instantaneous veloc-
ity. The Lagrangian structure function is a two-time quantity: the calculation of the
function at a particular timescale τ needs the velocities of the particle at two
times, τ apart, along a trajectory. We average the velocities of the particle at
these two times, u(t) and u(t+τ), to get the instantaneous velocity of the particle
(denoted as Σuz). Thus, the average of the pair of velocity values used in
evaluating structure functions is the large scale velocity. The small scale velocity,
on the other hand, is determined by the difference of the same velocity pair. In
the Eulerian framework also, the two velocity values, u(x) and u(x + r), that
are used to calculate the structure function are averaged to get the large scale
velocity (also denoted as Σuz), while their difference is the small scale velocity.
It is worth recalling that, by definition, the structure function is just the ensemble
average of some power of the difference between a pair of velocity values. The
implication is that the dependence of the small scales on the large scales can be
studied by simply conditioning the second order structure function on the
instantaneous velocity at various timescales. The results of such conditioning are discussed in
the following sections.
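As a concrete illustration, the sketch below conditions the second order Lagrangian structure function at a fixed time lag on the pair-averaged vertical velocity. The input format (one NumPy velocity array per trajectory) and the function name are hypothetical; in practice the velocity bins would be chosen in units of √⟨u_z²⟩ as in the figures that follow.

```python
import numpy as np

def conditional_lagrangian_sf(tracks, lag, vel_bins):
    """Second order Lagrangian structure function conditioned on the large scale
    (instantaneous) vertical velocity, at a single time lag given in frames."""
    sums = np.zeros(len(vel_bins) - 1)
    counts = np.zeros(len(vel_bins) - 1)
    for v in tracks:                       # v: (T, 3) velocities along one trajectory
        if len(v) <= lag:
            continue
        uz0, uz1 = v[:-lag, 2], v[lag:, 2]
        large = 0.5 * (uz0 + uz1)          # large scale velocity: average of the pair
        small = uz1 - uz0                  # small scale velocity: difference of the pair
        idx = np.digitize(large, vel_bins) - 1
        ok = (idx >= 0) & (idx < len(sums))
        np.add.at(sums, idx[ok], small[ok] ** 2)
        np.add.at(counts, idx[ok], 1)
    return sums / np.maximum(counts, 1)    # <(Δu_z)^2 | Σu_z> per velocity bin
```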
4.3 Conditioned structure functions
4.3.1 Eulerian structure functions
Figure 4.1 is a plot of the second order longitudinal Eulerian structure functions
conditioned on the vertical component of the large scale velocity. Different curves
represent different separation distances r/η. The vertical axis has the conditioned
structure functions (scaled by their value when Σuz = 0) with the vertical pair
velocity (scaled by √⟨u_z²⟩) on the horizontal axis. The curves show several
remarkable properties. The conditional structure functions are steep parabolas
when plotted against large scale velocity, and they vary by as much as a factor
Figure 4.1: Eulerian second order conditional structure function versus large scale
velocity. Data taken in the center region. Each curve represents the following separation
distances r/η: + = 0 to 40, ∗ = 40 to 70, = 70 to 110, 4 = 110 to 140, = 300 to
370, × = 370 to 440.
of 2.5, showing strong dependence of the small scales on the large ones. Also, all
the curves for different separation distances r/η collapse well, showing that large
scales affect all length scales in the same way.
4.3.2 Lagrangian structure functions
Since the grids inject energy in the vertical direction in our experiment, it is in-
teresting to condition the (z-component) Lagrangian structure functions on Σuz, the
z-component of instantaneous velocity. In Figure 4.2, we plot several such second
order structure functions conditioned on the vertical component of instantaneous
Figure 4.2: Second order Lagrangian velocity structure function (conditioned on the
vertical component of instantaneous velocity) vs τ/τη at the center of the tank. The
colors represent dimensionless vertical velocities, Σuz/√⟨u_z²⟩: + = 3.1 to 1.9, ∗ = 1.9
to 0.62, = 0.62 to -0.62, 4 = -0.62 to -1.9, = -1.9 to -3.1.
velocity. We see that the structure functions show a dependence on Σuz. The
conditional structure functions for different large scale velocities differ by a factor
of about 2, and remain almost parallel over the time range considered.
It is worth mentioning that measurement volume bias affects Lagrangian struc-
ture functions. It was discussed in Chapter 3 that this bias increases with larger
timescales. The bias introduced about 17% error in second order LVSFs at
τ = 8τη. We have not compensated the structure functions for the bias as Berg
et al. suggest. However, we will only focus on conditional structure functions for
timescales τ ≤ 10τη. Eulerian statistics are free of this measurement volume bias
as we do not track a particle over some duration to evaluate the statistics.
A powerful way of looking at Fig 4.2 is by plotting the structure function
against instantaneous velocity Σuz for different timescales (within 10τη of course).
The structure functions are scaled by their value when Σuz = 0. The instantaneous
velocity is scaled by √⟨u_z²⟩. This way of visualizing the results of Fig
4.2 has some advantages, which will be clear in Fig 4.3. In Fig 4.3, we see that
for every timescale, the curve is a parabola. The curvature is a good indication
that the small scales are being affected by the large scales. The structure func-
tion has larger values for bigger instantaneous velocity. The graph gives a hint
that the large scales affect small scales in different ways for different timescales,
as the curves do not collapse. We see that the value of the structure function differs
by as much as 2.5 times for the different timescales we consider [4]. Notably, the
curvature is less at all lengthscales for the Eulerian structure function (Figure
4.1) when compared with the Lagrangian structure function in Figure 4.3: the
Figure 4.3: Lagrangian second order conditional structure function vs large scale
velocity for the center of the tank. The symbols represent the following τ/τη: + = 0.42
, ∗ = 1.3, = 3.5, 4 = 10.
Lagrangian structure functions show stronger dependence on the large scales than
the Eulerian ones.
We also looked at conditional structure functions in a measurement volume
half the original volume of the run. Such a change in observation volume shifts
the parabolic curves down by roughly the deviation between the curves; the de-
pendence is still retained. This highlights that measurement volume bias is not a
significant issue for the conditioned structure functions we have presented.
4.4 Effects of Inhomogeneity
Figure 4.3 was a result for the center of the tank, which is the most homogeneous
region in the experiment. We now turn our attention to the same conditional
structure functions but in a more inhomogeneous region of the tank (near the grid
and below the center). The parabolic curves in Fig 4.4 are a result of conditioning
Figure 4.4: Lagrangian second order conditional structure function vs large scale
velocity for the near grid region of the tank. Symbols represent the following τ/τη: +
= 0.94, ∗ = 2.8, = 8.0.
the Lagrangian structure function on Σuz near the grid. They are remarkably
asymmetrical compared to the corresponding plot for the center of the tank (Fig
4.3). This is due to the fact that the bottom grid is providing kinetic energy to
the fluid particles bound for the detection volume. These upward moving particles
tend to have higher velocities (and higher kinetic energies) than their downward-
moving counterparts from the more quiescent center of the tank. This eventually
appears in the structure function calculation as well, as seen by the leftward shift
of the minimum of the parabolas. The curvature of the curves is higher towards the
right and lower towards the left, when compared with the conditioned structure
function at the center. At the center, the incoming fluid (arriving from above and
below) is equally energized by the two grids, resulting in symmetrical curves.
4.5 Other Causes of Dependence
The set of plots we've presented so far indicates dependence of the small scales on
the large scales. This leads us to an important question: what causes the condi-
tional structure functions to behave as they do? We have seen that inhomogeneity
plays a significant role in the behavior of the structure functions in the previous
section. There are several other factors that need to be considered in fully explain-
ing the dependence. One possible concern is that kinematic correlation might be
behind the behavior of the curves in Figures 4.3-4.4.
4.5.1 Kinematic Correlation
We obtain the large scale (instantaneous) velocity of a particle by averaging
the pair of velocity values, u(t) and u(t + τ), used in calculating the structure
function. Interestingly, we find the difference between the same two values to get
the structure function. This hints at the possibility of kinematic correlation. A
pair of velocity values along a particle trajectory can have a large sum and still
have a large difference. If this is the case, then the structure functions conditioned
on the large scale velocities are bound to show parabolic (or near parabolic) curves,
as in Figures 4.3-4.4 above. Several studies have confirmed that velocity
sums and differences are in fact correlated. However, there is evidence that
kinematic correlation is only partly responsible for the effects we have seen. Thus
it is important to determine how much kinematic correlation contributes to
the conditional structure functions. Blum et al. [4] have estimated that more than
70% of the effect is unexplained by kinematic correlation in the Eulerian structure
functions. The conclusion here is that kinematic correlation may contribute
significantly to the large scale dependence of the small scales, but it cannot explain
it completely.
4.5.2 Reynolds number
Another concern when explaining the large scale dependence is the low Reynolds num-
ber of the flow. It is often argued that if the Reynolds number of the flow is not
high enough, the separation between the large scales and the small scales is inad-
equate. This leads to the small scale statistics retaining some properties of the
large scales. Results published by Blum et al. [4] and by Sreenivasan and Dhruva [8]
together serve to point out that 'insufficient' Reynolds number does not cause
large scale dependence in the Eulerian case. Figure 4.5 compares the conditional
Eulerian structure functions from the two different experiments: one with a very
large Reynolds number (Rλ > 10^4) and the other with Rλ = 300. The two
experiments have nearly the same dependence. Also, all lengthscales collapse to
Figure 4.5: Eulerian second order conditional structure function versus large scale
velocity. The thin plots are from atmospheric boundary layer data [8], r/η: ∗ = 100, 4
= 400, = 1000, × = 1250. The thick line is from Figure 4.1, which has been overlaid
for comparison, r/η: = 70 to 110.
roughly the same functional form. This suggests that the size of Rλ has little
influence on the large scale dependence of the conditioned Eulerian structure functions.
4.5.3 Anisotropy
Anisotropy might also be a suspect for causing the dependence of small scales
on the large scales, especially as our flow is somewhat anisotropic. Such anisotropy
in our flow can be seen from the difference in the velocity statistics in different
directions, presented in Table 4.1. The standard deviation of the z-component
velocities is about 1.5 times that of the x- and y-component velocities.
Statistic              x-direction   y-direction   z-direction
Standard deviation     0.0106        0.0108        0.0160

Table 4.1: Comparing the standard deviation of different components of particle
velocity for the center of the tank. The values are in units of cm/frame.
The magnitude of the velocity is a non-directional quantity, so if anisotropy were
responsible for the dependence, the structure function should show little dependence
on the magnitude of the instantaneous velocity. Instead, our data (Figure 4.6) show
that the structure function exhibits a stronger dependence on the magnitude of the
instantaneous velocity than on its vertical component. The same was observed for
the conditioned Eulerian structure functions by Blum et al. [4], and so we conclude
that anisotropy is not a major cause of the dependence.
4.5.4 Large Scale Intermittency
Our discussion of the various possible causes of the dependence of small scales
on the large scales concluded that inhomogeneity is an important factor, but it does
not fully explain the magnitude of the dependence. Other causes like kinematic
correlation, low Reynolds number and anisotropy are not very significant. This
leaves large scale intermittency as a possible significant contributor.

Large scale intermittency is not easy to quantify [4]. It is the fluctuation
(in time) of the large scales that occurs on timescales longer than the eddy turnover
time. Fernando and De Silva [9] have shown that large scale intermittency can
occur in an experiment like ours, where oscillating grids create the flow, depending
Figure 4.6: Lagrangian second order conditional structure function vs large scale
velocity magnitude for the center of the tank. The colors represent distinct timescales
τ, as shown in the legend.
on the boundary conditions. Blum et al. [4] show that there are clear signatures
of large scale intermittency in our flow. Large scale intermittency is a topic that
needs further exploration so that our understanding of the dependence of small
scales on the large scales is enhanced.
4.6 Conclusion
We see that in a flow like ours, which is not homogeneous everywhere, the
large scales do influence the small scales in both the Lagrangian and the Eu-
lerian perspectives. We have also identified that inhomogeneity and large scale
intermittency are mainly responsible for this behavior of turbulence. However,
these very first Lagrangian results of large scale dependence should be examined
in other flows. Our results here also contribute to understanding what properties
of turbulence are actually universal.
Chapter 5
Energy transport in turbulence
Lagrangian transport of energy in turbulence plays an important role in pro-
cesses like mixing and dispersion. However, it has been very difficult to quantify
turbulence transport in complex flows. Traditional measurement tools have not
been able to capture the full 3D features of the flows, which are required for
evaluating many quantities. One available model for the dynamics of energy
transport in a turbulent flow is in a recently published paper by Berg et al. The
group claims that the rate of change of kinetic energy is wholly determined by the
mean energy dissipation rate. Our calculation shows that this claim is not true.
We know that, in many circumstances, Lagrangian quantities have had Eulerian
'ancestors', the structure functions being very good examples. I identify which
factors are mainly responsible for transporting energy in the Eulerian description
of the flow. These first Eulerian transport measurements and their inferences can
serve as a foundation for studying Lagrangian energy transport.
5.1 Decomposition of quantities
5.1.1 Velocity decomposition
Figure 5.1: The axial component of velocity against time on the centerline of a tur-
bulent jet (figure taken from the experiment of Tong and Warhaft (1995) as published
by Pope [1]).
Figure 5.1 shows the axial component of the velocity measured at a fixed point
on the centerline of a jet as it varies with time. The mean velocity is constant for
this flow, while the fluctuation is random around the mean value. If the mean
velocity were subtracted from the actual velocity signal, the corresponding plot
would still look like Figure 5.1, only shifted down by the mean velocity; notably,
the mean of the new curve would be zero. The same idea can be applied to the
velocity field in a
turbulent flow in general:
U(x, t) = u(x, t) + ⟨U(x, t)⟩   (5.1)

where U(x, t) is the actual velocity field, u(x, t) is the fluctuation velocity field,
and ⟨U(x, t)⟩ is the mean velocity field. Equation 5.1 is called the Reynolds de-
composition. So far, vector notation has been used to express quantities and
equations. However, tensor notation is more convenient in deriving many equa-
tions. We will be using the tensor notation very frequently in this chapter. As a
start, we rewrite Equation 5.1 as:
U_i = u_i + ⟨U_i⟩   (5.2)

where U_i is an actual velocity component, u_i is the fluctuating velocity component,
and ⟨U_i⟩ is the mean velocity component. With Eqn 5.2 in mind, the velocity
field can now be seen as the sum of the mean velocity field and the fluctuating
velocity field.
5.1.2 Decomposing kinetic energy
Like the velocity field, the mean of the kinetic energy (per unit mass) of a
fluid element can also be decomposed. We start with the definition of the kinetic
energy E(x, t):
E(x, t) = ½ U · U   (5.3)
Taking the mean of Equation 5.3, substituting U(x, t) = u(x, t) + ⟨U(x, t)⟩, and
using the fact that ⟨u⟩ = 0, we get the following expression for the mean of the
kinetic energy ⟨E(x, t)⟩:

⟨E⟩ = Ē + k,   (5.4)

where ⟨E⟩ = ½ ⟨U · U⟩ is the mean of the kinetic energy, Ē = ½ ⟨U⟩ · ⟨U⟩ is the
kinetic energy of the mean flow, and k = ½ ⟨u · u⟩ is the turbulent kinetic energy.
In tensor notation,

⟨E⟩ = ½ ⟨U_j U_j⟩,   (5.5)
Ē = ½ ⟨U_j⟩ ⟨U_j⟩,   (5.6)
k = ½ ⟨u_j u_j⟩.   (5.7)
Both the velocity and the mean of the kinetic energy have been decomposed
into the mean part and the turbulent part in tensor form. With this, one can
derive the insightful energy equation.
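As a small numerical check of these decompositions, the sketch below builds a synthetic ensemble of velocity samples (purely illustrative, not the thesis data) and verifies that ⟨E⟩ = Ē + k once the sample mean has been removed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ensemble at one point: a steady mean velocity plus random fluctuations.
U = np.array([0.5, 0.0, 1.0]) + 0.3 * rng.standard_normal((10000, 3))

U_mean = U.mean(axis=0)                                 # <U>
u = U - U_mean                                          # fluctuating part, <u> = 0

E_mean = 0.5 * np.einsum('ij,ij->i', U, U).mean()       # <E> = (1/2)<U.U>
E_bar = 0.5 * np.dot(U_mean, U_mean)                    # kinetic energy of the mean flow
k = 0.5 * np.einsum('ij,ij->i', u, u).mean()            # turbulent kinetic energy

print(E_mean, E_bar + k)   # equal up to floating point round-off
```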
5.2 Energy Equation
5.2.1 Reynolds equation
We start with the Navier-Stokes equation in tensor notation (the vector form
is in Chapter 1):

DU_j/Dt = ν ∂²U_j/(∂x_i ∂x_i) − (1/ρ) ∂p/∂x_j,   (5.8)

where D/Dt = ∂/∂t + U_i ∂/∂x_i is the substantial derivative, together with the
incompressibility condition

∂u_i/∂x_i = 0.   (5.10)
When the Reynolds decomposition is substituted in the Navier-Stokes equation
(Equation 5.8), with some tensor algebra, we get the Reynolds equation:

D̄⟨U_j⟩/D̄t = ν ∂²⟨U_j⟩/(∂x_i ∂x_i) − ∂⟨u_i u_j⟩/∂x_i − (1/ρ) ∂⟨p⟩/∂x_j,   (5.11)

where the mean substantial derivative is

D̄/D̄t = ∂/∂t + ⟨U_i⟩ ∂/∂x_i,   (5.12)

and ⟨p⟩ is the mean pressure field. The velocity covariances ⟨u_i u_j⟩ are called
the Reynolds stresses.
5.2.2 The equation for turbulent kinetic energy
The derivation of the equation for turbulent kinetic energy involves
substituting the Reynolds decomposition, the 'energy' decomposition, and the
incompressibility condition into the Navier-Stokes and Reynolds equations. The
derivation of this energy equation is done in many steps. Here we explain the
different terms in the equation and try to estimate them for our flow. Like the
evolution equation for any quantity in a continuum field theory, the turbulent
kinetic energy equation has a flux (transport) part, a source and a sink:

D̄k/D̄t + ∂T′_j/∂x_j = P − ε.   (5.13)
5.2.3 Estimating the terms of energy equation
Evaluating any of the terms in the energy equation is very difficult with tradi-
tional measurement methods. Since we have full 3D measurements, we can extract
some of the terms.
5.2.3.1 Production

P ≡ −⟨u_i u_j⟩ ∂⟨U_j⟩/∂x_i   (5.14)
The production is usually positive. It can be roughly estimated for our flow.
First, it is important to understand that the production term has two distinct
factors that can be evaluated separately: ⟨u_i u_j⟩ (the Reynolds stress) and
∂⟨U_j⟩/∂x_i (the mean velocity gradient). In our relatively homogeneous flow,
the off-diagonal Reynolds stresses are much smaller than the diagonal terms that
represent the kinetic energy in each velocity component. With this assumption,
we can approximate the production term to be
P ≈ −⟨u_j²⟩ ∂⟨U_j⟩/∂x_j   (5.15)
The advantage of using only the isotropic parts of the Reynolds stress is that the
production term becomes very easy to evaluate. Along each axis x_i, we divide
the detection volume into 5 same-sized subvolumes. Each subvolume is a sphere
of radius 0.5 cm, and the distance between two successive subvolumes is 1 cm. For
all the particle tracks within each subvolume (for every frame), we calculate two
quantities, u_j² and U_j, along each axis. Then we find the ensemble average of the two
quantities for each subvolume, giving ⟨u_j²⟩ and ⟨U_j⟩. We choose the value of ⟨u_j²⟩
of the central subvolume (very close to the value in the other spheres). To find the
mean velocity gradient, we use the slope of a plot of ⟨U_j⟩ against position along
each axis. The product of
the ‘fluctuation velocity component squared’ term and the mean velocity gradient
gives the production term for each axis. Finally, adding the three production
values for the three axes gives us the estimate of the production.
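A sketch of this procedure is given below. The array layout (tracer positions and velocities pooled over frames, measured relative to the center of the detection volume), the exact subvolume spacing, and the assumption that every subvolume contains particles are illustrative choices rather than the actual analysis parameters.

```python
import numpy as np

def estimate_production(positions, velocities, spacing=0.01, radius=0.005, n_sub=5):
    """Rough estimate of P ~ -<u_j^2> d<U_j>/dx_j (summed over j) from spherical
    subvolumes laid out along each coordinate axis, as described in the text."""
    P = 0.0
    offsets = spacing * (np.arange(n_sub) - n_sub // 2)   # subvolume centers along one axis
    for j in range(3):
        mean_Uj = np.empty(n_sub)
        var_uj = np.empty(n_sub)
        for m, off in enumerate(offsets):
            center = np.zeros(3)
            center[j] = off
            inside = np.linalg.norm(positions - center, axis=1) < radius
            mean_Uj[m] = velocities[inside, j].mean()     # <U_j> in this subvolume
            var_uj[m] = velocities[inside, j].var()       # <u_j^2> in this subvolume
        dUj_dxj = np.polyfit(offsets, mean_Uj, 1)[0]      # mean velocity gradient from the slope
        P += -var_uj[n_sub // 2] * dUj_dxj                # use the central subvolume's <u_j^2>
    return P
```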
5.2.3.2 Sink
ε ≡ 2ν ⟨s_ij s_ij⟩,   (5.16)

where s_ij = ½(∂u_i/∂x_j + ∂u_j/∂x_i) is the fluctuating rate of strain tensor.
The dissipation term itself is always non-negative (sum of non-negative numbers),
so with the negative sign in front, it acts as the sink. Dissipation is quite difficult
to measure, but because it is the central quantity in Kolmogorov’s description of
turbulence, there have been several methods developed for measuring it. In fact,
the energy dissipation term has even been estimated using a two dimensional
surrogate, as pointed out by Pope [1].
5.2.3.3 Flux

T′_j ≡ ½ ⟨u_i u_i u_j⟩ + ⟨u_j p′⟩/ρ − 2ν ⟨u_i s_ij⟩,   (5.18)

where p′ = p − ⟨p⟩ is the fluctuating pressure field. Unlike the production and
dissipation, evaluating the transport term has been difficult. One reason is there
are three distinct terms, and not all of them are easy to calculate. The first term
can be written as
½ ⟨u_i u_i u_j⟩,   (5.19)
which is the kinetic energy transported by velocity. This triple correlation term
can be estimated quite easily, by dividing the observation volume into multiple
same-sized subvolumes along each axis, like earlier when we evaluated the pro-
duction term. For each subvolume along the xj axis, we find the average of the
term ½ u_i² u_j of all particles lying inside the subvolume for each frame. A plot of
the triple correlation term versus the position of the subvolume can be generated.
The gradient of this plot gives us an estimate of the velocity transport term (and
its error).
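A sketch of this triple correlation estimate follows; as with the production sketch above, the subvolume layout and array format are assumptions for illustration, and the velocities are assumed to already have the local mean removed so that they represent fluctuations.

```python
import numpy as np

def estimate_velocity_transport(positions, velocities, spacing=0.01, radius=0.005, n_sub=5):
    """Estimate the velocity transport term d/dx_j <(1/2) u_i^2 u_j> by binning the
    triple correlation in subvolumes along each axis and taking the slope."""
    transport = 0.0
    offsets = spacing * (np.arange(n_sub) - n_sub // 2)
    for j in range(3):                                    # flux component along axis j
        triple = np.empty(n_sub)
        for m, off in enumerate(offsets):
            center = np.zeros(3)
            center[j] = off
            inside = np.linalg.norm(positions - center, axis=1) < radius
            u = velocities[inside]
            # triple correlation <(1/2) u_i^2 u_j> inside this subvolume
            triple[m] = np.mean(0.5 * np.sum(u ** 2, axis=1) * u[:, j])
        transport += np.polyfit(offsets, triple, 1)[0]    # slope gives d<...>/dx_j
    return transport
```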
The second term of T′_j, the pressure transport term, cannot be evaluated easily.
In fact, we estimate roughly how much this term contributes to the energy budget
by evaluating or neglecting the other terms in the equation.
The third transport term, which is essentially stress transport, is negligible.
5.2.3.4 Mean substantial derivative
The mean substantial derivative, D̄k/D̄t, of the turbulent kinetic energy has two
terms: the time derivative of the turbulent kinetic energy, ∂k/∂t, and the advection
term ⟨U_j⟩ ∂k/∂x_j. The advection term is small owing to the fact that it is a product
of the mean flow velocity (which is almost zero for our experiment) and another
term. The time derivative of the turbulent kinetic energy is zero on average, since
at the center the turbulent kinetic energy fluctuates about a constant mean value
over time.
5.2.4 Energy budget for the center of the tank
At the center of the tank in our flow, with oscillating grids feeding energy at
the top and bottom of the tank, production of energy is not significant. Instead,
turbulent kinetic energy is obtained through the velocity transport of energy by
particles coming from the top and bottom of the tank (near the grids). Obviously,
the kinetic energy is lost (or rather transformed into heat) through viscous dis-
sipation. Table 5.1 has the values of these three terms for our flow. From Table
Quantity               Value (×10⁻⁵ m² s⁻³)
P                      13.25
ε                      246
Velocity transport     −179 ± 40

Table 5.1: The estimated values of different terms in the turbulent kinetic energy
equation.
5.1, we see that, despite a big error in the transport term, for the center of our
tank, velocity transport provides the main input of turbulent kinetic energy for
particles. However, the velocity transport term is only about three-fourths of the
total amount of energy dissipated. The pressure transport term likely accounts
for the remaining energy transport into the volume.
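As a rough consistency check of this statement, the pressure transport can be treated as the residual of a stationary budget. The sketch below uses the Table 5.1 values and assumes that the advection and viscous transport contributions are negligible, as argued above; the sign convention (positive numbers meaning energy flowing into the volume) is an assumption for this illustration.

```python
# Values from Table 5.1, in units of 1e-5 m^2 s^-3.
P = 13.25                      # production
eps = 246.0                    # dissipation
velocity_transport_in = 179.0  # energy carried into the volume by velocity transport

# Stationary budget at the center: energy flowing in must balance dissipation minus
# production, so the part not supplied by velocity transport is attributed to pressure.
pressure_transport_in = (eps - P) - velocity_transport_in
print(pressure_transport_in)   # ~54, roughly a fifth to a quarter of the dissipation
```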
5.3 Energy decay
As I mentioned in the very first part of this chapter, it has been difficult to
quantify Lagrangian energy transport. However, we can still investigate changes
in the kinetic energy of a particle (on average) as it enters the center region of the
tank. We see clear effects of sample bias in our study of energy change with time
for particles.
5.3.1 Single measurement volume
We look at the experimental data on how, on average, the kinetic energy of a
particle entering the measurement volume varies with time. We study such decay
of energy in cubic and slab-shaped measurement volumes at the center of the tank.
The slabs have two parallel surfaces, above and below the center of the tank, with
no boundary on the sides. The cubic volumes have six faces of the same size, and are
also positioned very close to the center of the tank. We chose only those particles
that entered the slab or the cube through one of the surfaces. Moreover, those
particles were considered only while they were inside the measurement volume.
The kinetic energy values at all times are ensemble averages. The results are
plotted in Figure 5.2.
[Figure 5.2 legend: Slab (independent of sample), All particle average, Cube (independent of sample), Slab, Cube]
Figure 5.2: The mean kinetic energy (KE) of a particle vs time (as a multiple of
Kolmogorov timescale) along the particle trajectory. The two lower curves, red and green
in color, are plots obtained by removing the sample length bias of particle trajectories.
The two higher ones, blue and orange lines, have sample bias in them. The dotted
line represents the average KE for all particles that were detected in one run of the
experiment.
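A sketch of how such an average might be assembled from the trajectory data is shown below. The inputs (one velocity array per trajectory, plus a precomputed boolean flag marking frames inside the slab or cube) are hypothetical, and only entries through a surface, i.e. a frame inside preceded by a frame observed outside, start a new sample, mirroring the selection described above.

```python
import numpy as np

def mean_ke_vs_entry_time(tracks, inside, max_lag=200):
    """Ensemble-averaged kinetic energy (per unit mass) versus time since the
    particle entered the measurement volume, in frames."""
    ke_sum = np.zeros(max_lag)
    counts = np.zeros(max_lag)
    for v, flag in zip(tracks, inside):       # v: (T, 3) velocities, flag: (T,) booleans
        # entry frames: inside now, but observed outside on the previous frame
        entries = np.flatnonzero(flag[1:] & ~flag[:-1]) + 1
        for t0 in entries:
            t = t0
            while t < len(v) and flag[t] and (t - t0) < max_lag:
                ke_sum[t - t0] += 0.5 * np.dot(v[t], v[t])
                counts[t - t0] += 1
                t += 1
    return ke_sum / np.maximum(counts, 1)
```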
We first looked at a slab and a cube, each of height 3.96 cm. Such a measure-
ment volume just encloses a sphere of radius 2.8 cm, which is about the largest
volume that has ‘healthy’ particle density, as seen in Chapter 3. First, we investi-
gate the top two curves. The energy decay curve for particles entering the slab is
slightly higher than the one for the cube. The slab has bigger surface areas than
the cube in the xy-plane, which is perpendicular to grid movement. Also, both
the surfaces of entrance are perpendicular to the vertical axis for the slab. The
dotted line, which marks the ensemble average of the kinetic energy of a particle
for the entire run of the experiment, is less than three-fourths of the initial