
Data Acquisition and Waveforms (1)

www.ni.com | Lesson 8: Data Acquisition and Waveforms | Chapter 1: Transducers, Signals, and Signal Conditioning. Topics: Data Acquisition Overview, Transducers, Signals, Signal Conditioning.
Transcript
  • System Overview

  • Transducer Overview. Topics: What is a Transducer? Types of Transducers.

  • What is a Transducer? A transducer converts a physical phenomenon into a measurable signal.

  • Signal Overview. Topics: Types of Signals; Information in a Signal (State, Rate, Level, Shape, and Frequency).

  • Signal Classification. Your signal is either Analog or Digital.

  • Digital Signals. Two possible levels: High/On (2 - 5 Volts) and Low/Off (0 - 0.8 Volts). Two types of information: State and Rate.

  • Digital Signal Information: State and Rate.

  • Analog Signals. A continuous signal that can be at any value with respect to time. Three types of information: Level, Shape, and Frequency (analysis required).

  • Analog Signal Information: Level, Shape, and Frequency (analysis required).

  • Signal Conditioning Overview. Topics: Purpose of Signal Conditioning, Types of Signal Conditioning.

  • Why Use Signal Conditioning? Signal conditioning takes a signal that is difficult for your DAQ device to measure and makes it easier to measure. Signal conditioning is not always required; it depends on the signal being measured.

  • Amplification. Used on low-level signals (e.g., thermocouples). Maximizes use of the Analog-to-Digital Converter (ADC) range and increases accuracy. Increases the Signal-to-Noise Ratio (SNR).

  • DAQ Hardware Overview. Topics: Types of DAQ Hardware, Components of a DAQ Device, Configuration Considerations.

  • Data Acquisition Hardware. DAQ hardware turns your PC into a measurement and automation system.

  • Terminal Block and Cable. The terminal block and cable route your signal to specific pins on your DAQ device. Terminal block and cable combinations come in 68-pin and 50-pin configurations (a 50-pin connector is pictured).

  • DAQ Device. Most DAQ devices have: Analog Input, Analog Output, Digital I/O, and Counters. Specialty devices exist for specific applications: high-speed digital I/O, high-speed waveform generation, and Dynamic Signal Acquisition (vibration, sonar). DAQ devices connect to the bus of your computer and are compatible with a variety of bus protocols: PCI, PXI/CompactPCI, ISA/AT, PCMCIA, USB, 1394/FireWire.

  • Configuration Considerations. Analog Input: Resolution, Range, Gain, Code Width, Mode (Differential, RSE, or NRSE). Analog Output: Internal vs. External Reference Voltage, Bipolar vs. Unipolar.

  • Resolution. The number of bits the ADC uses to represent a signal. Resolution determines how many discrete voltage levels can be represented.

    Example: 12-bit resolution. Number of levels = 2^resolution = 2^12 = 4,096 levels. Larger resolution = more precise representation of your signal.

  • Resolution Example. 3-bit resolution can represent 8 voltage levels; 16-bit resolution can represent 65,536 voltage levels.
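    The level counts quoted above can be reproduced with a few lines of Python (a minimal sketch added for illustration; it is not part of the original course material):

```python
def adc_levels(resolution_bits: int) -> int:
    """Number of discrete voltage levels an ADC with the given resolution can represent."""
    return 2 ** resolution_bits

print(adc_levels(3))    # 8
print(adc_levels(12))   # 4096
print(adc_levels(16))   # 65536
```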

  • Range. The minimum and maximum voltages the ADC can digitize. DAQ devices often have several available ranges, such as 0 to +10 volts or -10 to +10 volts. Pick a range that your signal fits in. A smaller range = a more precise representation of your signal, because it allows you to use all of your available resolution.

  • Range Example. Proper range: all 8 levels are used to represent your signal. Improper range: only 4 levels are used to represent your signal.

  • Gain. The gain setting amplifies the signal for the best fit in the ADC range. Gain settings are 0.5, 1, 2, 5, 10, 20, 50, or 100 for most devices. You don't choose the gain directly; you choose the input limits of your signal in LabVIEW, and the maximum possible gain is selected. The maximum possible gain depends on the limits of your signal and the chosen range of your ADC. Proper gain = a more precise representation of your signal, because it allows you to use all of your available resolution.

  • Gain Example. Input limits of the signal = 0 to 5 Volts. Range setting for the ADC = 0 to 10 Volts. Gain setting applied by the instrumentation amplifier = 2.
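    The gain selection described above can be sketched in a few lines of Python (an illustrative approximation of the rule the driver applies, not NI's actual implementation; the gain list is the E-Series set quoted in this lesson):

```python
STANDARD_GAINS = [0.5, 1, 2, 5, 10, 20, 50, 100]  # E-Series gain settings quoted above

def select_gain(limit_low, limit_high, range_low, range_high, gains=STANDARD_GAINS):
    """Pick the largest gain that keeps the amplified signal inside the ADC range."""
    usable = [g for g in gains
              if range_low <= limit_low * g and limit_high * g <= range_high]
    return max(usable) if usable else None

print(select_gain(0.0, 5.0, 0.0, 10.0))  # 2, matching the example above
print(select_gain(0.0, 6.0, 0.0, 10.0))  # 1, since a gain of 2 would exceed +10 V
```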

  • Code Width is the smallest change in the signal your system can detect (determined by resolution, range, and gain)

    Code width = range / (gain x 2^resolution). Smaller code width = more precise representation of your signal. Example: 12-bit device, range = 0 to 10 V, gain = 1 gives a code width of 10 / (1 x 2^12), about 2.44 mV.
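    A short Python check of that code-width arithmetic (added for illustration; the formula is the one stated above):

```python
def code_width(voltage_range: float, gain: float, resolution_bits: int) -> float:
    """Smallest detectable voltage change: range / (gain * 2^resolution)."""
    return voltage_range / (gain * 2 ** resolution_bits)

print(code_width(10.0, 1, 12))   # ~0.00244 V, i.e. about 2.44 mV
print(code_width(10.0, 1, 16))   # ~0.000153 V, i.e. about 0.15 mV
```

    Only the 16-bit device gets below a 2 mV code width over a 0 to 10 V range at a gain of 1, which is the 12-bit versus 16-bit question raised later in the notes.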

  • Grounding Issues. To get correct measurements you must properly ground your system. How the signal is grounded affects how we ground the instrumentation amplifier on the DAQ device. Steps to proper grounding of your system: determine how your signal is grounded, then choose a grounding mode for your measurement system.

  • Signal Source Categories. A signal source is either Grounded or Floating.

  • Grounded Signal Source. The signal is referenced to a system ground (earth ground or building ground). Examples: power supplies, signal generators, anything that plugs into an outlet ground.

  • Floating Signal Source. The signal is NOT referenced to a system ground (earth ground or building ground). Examples: batteries, thermocouples, transformers, isolation amplifiers.

  • Measurement System. Three modes of grounding for your measurement system: Differential, Referenced Single-Ended (RSE), and Non-Referenced Single-Ended (NRSE). The mode you choose depends on how your signal is grounded.

  • Measurement System: Differential Mode. Two channels are used for each signal; ACH 0 is paired with ACH 8, ACH 1 is paired with ACH 9, etc. Rejects common-mode voltage and common-mode noise.

  • Measurement System: Referenced Single-Ended (RSE) Mode. Measurements are made with respect to system ground. One channel is used for each signal. Doesn't reject common-mode voltage.

  • Measurement System: Non-Referenced Single-Ended (NRSE) Mode. A variation on RSE. One channel is used for each signal. Measurements are made with respect to AISENSE, not system ground; AISENSE is floating. Doesn't reject common-mode voltage.

  • Choosing Your Measurement System

  • Options for Grounded Signal Sources. Differential mode is recommended; RSE mode is not recommended, because grounding both the source and the measurement system creates a ground loop.

  • Options for Floating Signal Sources. Differential (BEST): + rejects common-mode voltage; - cuts channel count in half; - needs bias resistors. RSE (BETTER): + allows use of the entire channel count; + doesn't need bias resistors; - doesn't reject common-mode voltage. NRSE (GOOD): + allows use of the entire channel count; - needs bias resistors; - doesn't reject common-mode voltage.

  • DAQ Software Overview. Topics: Levels of DAQ Software, NI-DAQ Overview, Measurement & Automation Explorer (MAX) Overview.

  • Levels of Software. From the DAQ device up to the user: NI-DAQ, MAX, LabVIEW.

  • What is NI-DAQ? Driver-level software: a DLL that makes direct calls to your DAQ device. Supports the following National Instruments software: LabVIEW and Measurement Studio. Also supports the following 3rd-party languages: Microsoft C/C++, Visual Basic, Borland C++, and Borland Delphi.

  • What is MAX? MAX stands for Measurement & Automation Explorer. MAX provides access to all your National Instruments DAQ, GPIB, IMAQ, IVI, Motion, VISA, and VXI devices and is used for configuring and testing devices. Its functionality is broken into: Data Neighborhood, Devices and Interfaces, Scales, and Software.

  • Data Neighborhood. Provides access to the DAQ Channel Wizard, shows configured virtual channels, and includes utilities for testing and reconfiguring virtual channels.

  • DAQ Channel Wizard. An interface to create virtual channels for Analog Input, Analog Output, and Digital I/O. Each channel has: a name and description, a transducer type, a range (which determines the gain), a mode (Differential, RSE, or NRSE), and scaling.

  • Devices and Interfaces. Shows currently installed and detected National Instruments hardware, and includes utilities for configuring and testing your DAQ devices: Properties and Test Panels.

  • Properties. Basic resource test: Base I/O Address, Interrupts (IRQ), Direct Memory Access (DMA). Link to Test Panels. Configuration for: device number, range and mode (AI), polarity (AO), accessories, and OPC.

  • Test Panels. A utility for testing Analog Input, Analog Output, Digital I/O, and Counters. A great tool for troubleshooting.

  • Scales. Provides access to the DAQ Custom Scales Wizard, shows configured scales, and includes a utility for viewing and reconfiguring your custom scales.

  • DAQ Custom Scales Wizard. An interface to create custom scales that can be used with virtual channels. Each scale has its own name and description and a choice of scale type (Linear, Polynomial, or Table).

  • Sampling Considerations. An analog signal is continuous.

    A sampled signal is a series of discrete samples acquired at a specified sampling rate.

    The faster we sample, the more our sampled signal will look like the actual signal.

    If we do not sample fast enough, a problem known as aliasing will occur.

  • Aliasing (figure: an adequately sampled signal versus an aliased signal).

  • Nyquist Theorem. You must sample at greater than 2 times the maximum frequency component of your signal to accurately represent the FREQUENCY of your signal.

    NOTE: You must sample 5 to 10 times the maximum frequency component of your signal to accurately represent the SHAPE of your signal.
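    A small Python helper illustrating those two rules of thumb (added for illustration; the thresholds are the ones quoted above, and the example values match the 100 Hz worked example in the notes):

```python
def sampling_check(signal_freq_hz: float, sample_rate_hz: float) -> str:
    """Classify a sampling rate against the Nyquist rule and the 5-10x shape guideline."""
    if sample_rate_hz < 2 * signal_freq_hz:
        return "aliased: frequency will be misrepresented"
    if sample_rate_hz < 5 * signal_freq_hz:
        return "frequency OK, but shape will be distorted"
    return "frequency and shape both adequately represented"

print(sampling_check(100, 100))    # aliased: frequency will be misrepresented
print(sampling_check(100, 200))    # frequency OK, but shape will be distorted
print(sampling_check(100, 1000))   # frequency and shape both adequately represented
```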

  • Nyquist Example (figure panels: Aliased Signal; Adequately Sampled for Frequency Only, same number of cycles; Adequately Sampled for Frequency and Shape).

  • Data Acquisition Palette

  • DAQ Channel Name Data Type

  • Analog Input Palette

  • Single-Point AI VIs. Perform a software-timed, non-buffered acquisition. + Good for battery testing and control systems. - Not good for rapidly changing signals, due to software timing. AI Sample Channel acquires one point on one channel; AI Sample Channels acquires one point on multiple channels.

  • Multiple-Point (Buffered) AI VIs. Perform a hardware-timed, buffered acquisition; highly recommended for most applications. Allows triggering, continuous acquisition, different input limits for different channels, streaming to disk, and error handling. AI Config configures your device, channels, and buffer. AI Start starts your acquisition and configures triggers. AI Read returns data from the buffer. AI Clear clears resources assigned to the acquisition.
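    For readers following along outside LabVIEW, the same configure/start/read/clear pattern can be sketched with the present-day nidaqmx Python package (an analogy added for illustration, not the NI-DAQ VIs this lesson describes; the device name "Dev1" is a placeholder):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Configure the channel, input limits, and buffer; a hardware sample clock times the
# acquisition; read pulls samples from the buffer; resources clear when the task closes.
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0", min_val=0.0, max_val=5.0)
    task.timing.cfg_samp_clk_timing(rate=1000.0,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=1000)
    data = task.read(number_of_samples_per_channel=1000)  # blocks until the buffer fills

print(len(data))  # 1000 samples from one channel
```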

  • AI Config. Interchannel Delay: determines the time (in seconds) between samples in a scan. Input Limits: the max and min values for your signal, used by NI-DAQ to set the gain. Device: the number of the device (from MAX) you are addressing. Channels: chooses which channel(s) you are addressing. Buffer Size: the number of scans the buffer can hold; a scan acquires one sample for every channel you specify (1000 scans x 2 channels = 2000 total samples). Task ID: passes configuration information to other VIs. Error In/Out: receives/passes any errors from/to other VIs.

  • Different Gains for Different Channels. AI Config allows different gains for different channels: the first element of the input limits array corresponds to the first element of the channel array. (Figure: gain = 2 on one channel and gain = 20 on another, with range = 0 to +10 V.)

  • AI Start. Task ID In/Out: receives/passes configuration information to/from other VIs. Number of Scans to Acquire: the total number of scans acquired before the acquisition completes; the default value (-1) sets it equal to the Buffer Size from AI Config, and a value of 0 acquires continuously. Scan Rate: chooses the number of scans per second. Error In/Out: receives/passes any errors from/to other VIs.

  • AI Read & AI Clear. Number of Scans to Read: specifies how many scans to retrieve from the buffer; the default value (-1) sets it equal to the Number of Scans to Acquire from AI Start, and if Number of Scans to Acquire (AI Start) = 0, the default for Number of Scans to Read is 100. Scan Backlog: the number of unread scans in the buffer. Waveform Data: returns t0, dt (the inverse of the scan rate), and the Y array for your data. AI Clear clears resources assigned to the device.
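    Since the waveform data is just t0, dt, and a Y array, the sample timestamps can be rebuilt in a couple of lines of NumPy (an illustrative sketch with made-up sample values, not output from the VIs):

```python
import numpy as np

scan_rate = 1000.0                       # scans per second, as set in AI Start
dt = 1.0 / scan_rate                     # spacing between samples (inverse of scan rate)
t0 = 0.0                                 # acquisition start time (placeholder value)
y = np.array([0.0, 0.5, 1.0, 0.5, 0.0])  # made-up Y array standing in for acquired data

t = t0 + dt * np.arange(len(y))          # timestamp of every sample in the waveform
print(list(zip(t, y)))
```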

  • Error Cluster

  • Buffered Acquisition Flowchart

  • Buffered Acquisition. AI Start begins the acquisition; the acquisition stops when the buffer is full. AI Read will wait until the buffer is full to return data. If the error input is true, then Config, Start, and Read pass the error on but don't execute; Clear passes the error AND executes.

  • Continuous Acquisition Flowchart

  • Continuous Buffered Acquisition. Differences from a finite buffered acquisition: number of scans to acquire = 0; a While Loop around AI Read; Number of Scans to Read does not equal the buffer size; scan backlog tells how well you are keeping up.

  • Analog Output Architecture. Most E-Series DAQ devices have a Digital-to-Analog Converter (DAC) for each analog output channel (channel 0, channel 1). The DACs are updated at the same time, similar to simultaneous sampling for analog input.

  • Analog Output Palette

  • Single-Point AO VIs. Perform a software-timed, non-buffered generation. + Good for generating DC voltages or for control systems. - Not good for waveform generation, because software timing is slow. AO Update Channel generates one point on one channel; AO Update Channels generates one point on multiple channels.
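    A rough present-day equivalent of a single-point update, using the nidaqmx Python package (an analogy added for illustration, not the AO Update Channel VI itself; "Dev1/ao0" is a placeholder channel):

```python
import nidaqmx

# Software-timed, non-buffered generation: each write() immediately updates the channel.
with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0", min_val=0.0, max_val=10.0)
    task.write(5.0)   # drive 5 V on the output
    task.write(0.0)   # set the channel back to 0 V before finishing (see AO Write One Update)
```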

  • AO Update Channels. Device: the number of the device (from MAX) you are addressing; ignored if you use a virtual channel. Channels: chooses which channel(s) you are addressing; can be either a number or a virtual channel name (uses the DAQ Channel Name control). Values: a 1-D array of data; the first element of the array corresponds to the first channel in your channels input.

  • Multiple-Point (Buffered) AO VIs. Perform a hardware-timed, buffered generation; highly recommended for most applications. Allows continuous generation, triggering, and error handling. AO Config configures your device, channels, and buffer. AO Write writes data to the buffer. AO Start starts your generation. AO Wait waits until the generation is complete. AO Clear clears resources assigned to the generation.

  • Buffered Generation Flowchart

  • Buffered Generation. AO Write fills the buffer with waveform data, then AO Start begins the generation. Without AO Wait the generation would start (AO Start) and then end immediately after (AO Clear). If the error input is true, then Config, Write, Start, and Wait pass the error on but don't execute; Clear passes the error AND executes.

  • AO Write One Update. Your analog output channel will continue to output the last value written to it until either the device is reset (power off, Reset VI) or a new value is written. Use AO Write One Update at the end of your generation to set the channel back to 0.

  • Continuous Generation Flowchart

  • Continuous Generation. Differences from a finite buffered generation: number of buffer iterations = 0; no AO Wait (AO Wait would hang because the generation never completes); a While Loop with AO Write; the second AO Write is used for error checking ONLY.

    *Chapter 1 of this course will introduce us to the basics of Data Acquisition. We will learn what Data Acquisition is, the components that make up a Data Acquisition system, and we will focus on the first three components of a Data Acquisition system: Transducers, Signals, and Signal Conditioning. We will discuss each component in detail giving real world examples of each to help you correlate the topics to your specific application.*The purpose of a Data Acquisition system is to measure a physical phenomenon such as light, temperature, pressure, sound, etc. The building blocks of a Data Acquisition system are as follows: Transducer Signal Signal Conditioning eXtensions for Instrumentation (SCXI) Data Acquisition (DAQ) device Driver level and application level softwareThese five building blocks allow you to bring the physical phenomena you want to measure into your computer for analysis and presentation. In the following pages, we will discuss each one of these blocks individually to give you knowledge of each building block, and how they fit together to make up your Data Acquisition system. *In our discussion of transducers, you will learn what a transducer does, and what types of transducers to use for measuring the following physical phenomena: Temperature Light Sound Force Pressure Position Fluid flow pH levels

    *The purpose of a transducer is to convert a physical phenomena (light, temperature, pressure, sound, etc.) into a measurable electrical signal, such as voltage or current.*With the help of a transducer we have converted a physical phenomena (light, temperature, pressure, sound, etc.) into a signal. Not all signals are measured in the same manner, so we will need to learn how to categorize our signal as one of two types: Digital AnalogOnce we have categorized our signal we need to figure out what type of information we want out of that signal. The possible types of information we can obtain from a signal are: State Rate Level Shape FrequencyThe next section will discuss all five types of information that can be obtained from a signal and give real world examples.

    Note: Our discussion of signals assumes that we are acquiring the signal. However, most of the points apply to generating a signal as well; the only exception is that you don't need to do analysis to generate a signal with a specific frequency.

    *A signal can fall into one of two categories: Digital AnalogNext we will see what makes a signal either digital or analog. We will also see how the distinction of either digital or analog affects the way we will measure our signal.*A digital signal has only two possible states: ON or OFF. ON is also called high logic and OFF is also called low logic. Digital signals are often referred to as a TTL (Transistor-to-Transistor Logic) signal. The specifications for a TTL signal state that a voltage level between 0 - 0.8 Volts it is considered low logic, and a voltage level between 2 - 5 Volts is considered high logic. Most digital devices in industry accept a TTL compatible signal.Since a digital signal only has two states, we can only measure two quantities of a digital signal: state or rate. The following pages will discuss measuring state and rate as well as give some real world examples of both. *As we learned earlier, we can measure two quantities of a digital signal: state or rate. We will discuss these options one by one.StateA digital signal only has two possible states: ON or OFF. Thus one of the quantities of a digital signal we can measure is whether the state is ON or OFF.RateA digital signal also changes state with respect to time. Therefore, the other quantity of a digital signal we can measure is the rate, or in other words how the digital signal changes states with respect to time.*Unlike a digital signal, an analog signal can be at any voltage level with respect to time. Since an analog signal can be at any state at any time, the physical quantities we want to measure differ from those of a digital signal. We can measure the level, shape, or frequency of an analog signal. *As we just learned, we can measure three quantities of an analog signal: level, shape, and frequency. We will go through these options one by one.LevelMeasuring the level of an analog signal is similar to measuring the state of a digital signal. The only difference is that an analog signal can be at any voltage state, whereas a digital signal can only be at one of two states. ShapeBecause analog signals can be at any state with respect to time, the shape of the signal is often important. For instance, a sine wave has a different shape than a sawtooth wave. Measuring the shape of a signal opens the door to further analysis on the signal itself such as peak values, slope, integration, etc.FrequencyMeasuring the frequency of an analog signal is similar to measuring the rate of a digital signal. However, you cannot directly measure the frequency of an analog signal. Software analysis of the signal is required to extract the frequency information. The analysis is usually done by an algorithm called a Fourier Transform. *We have now taken our physical phenomena, converted it into a signal with our transducer, and decided the type information in our signal we want to measure. However, it is not always possible to connect our signal directly to our Data Acquisition device. We might need to alter the signal to make it suitable for our Data Acquisition device to measure. We can alter our signal with signal conditioning hardware. National Instruments main signal conditioning product line is referred to by the acronym SCXI which stands for Signal Conditioning eXtensions for Instrumentation. 
For more information on National Instruments SCXI products as well as other signal conditioning hardware please visit http://www.ni.com/sigcon In the following section we will discuss the purpose of signal conditioning, and the following common types of signal conditioning: Amplification Excitation Linearization Isolation Filtering*Signal Conditioning Extension for InstrumentationAs we learned in our discussion of transducers, most transducers need some sort of external hardware in order to perform their job. For instance, RTDs need excitation current, and strain gauges need a configuration of resistors called a Wheatstone bridge. In addition to needing external hardware, not all transducers produce a perfect voltage for our Data Acquisition device to measure. The signal from the transducer could be noisy, or if could be too small or too large for the range of our DAQ device. For instance, thermocouples, strain gauges, and microphones all produce a voltage in the millivolt range making it hard to detect changes in the signal.Most transducers need some form of signal conditioning whether it is to provide an excitation current or to turn the signal from the transducer into one that can be easily measured by a DAQ device. We will now discuss some common types of signal conditioning and their uses.

    *Amplification is a way of increasing a signal from a transducer that is too small for your DAQ device to accurately measure. A common example is a thermocouple. Thermocouples output a voltage in the millivolt range. If you were to send the signal from your thermocouple straight to your DAQ device, it is feasible that a change of a degree or two in temperature would not be detected by your system. However, if we amplify the signal we will be measuring a signal that is better suited to the range of our DAQ device. Your signal can either be amplified on the DAQ device or externally. The problem with amplifying the signal on the DAQ device is that we also amplify the noise the signal has picked up on its way to the DAQ device. In order to minimize the amount of noise that is amplified it is best to place the amplifier as close to the signal source as possible. Thus it is usually best to use some form of external amplification. As we will see next, we can show the benefit of external amplification with an index called the Signal to Noise Ratio. *We will start by discussing the basic hardware that is used in a Data Acquisition System. We will see that a Data Acquisition uses three types of hardware: a terminal block, a cable, and a DAQ device. Then we will focus specifically on the components of a DAQ device and what each component is used for. We will then learn some considerations that are important when we configure our DAQ device.*Now that we have converted a physical phenomena into a measurable signal (with or without signal conditioning), we need to acquire that signal. To do this we will need a terminal block, a cable, a Data Acquisition device, and a computer. By using this combination of hardware, we can transform a standard computer into a measurement and automation system. Next we will discuss each piece of our data acquisition system in more detail.*Terminal BlockThe purpose of a terminal block is to provide a place to connect your signals. A terminal block consists of screw terminals for connecting your signals and a connector for attaching a cable to connect the terminal block to your DAQ device. Terminal blocks can either have 100, 68, or 50 screw terminals. The choice between the three will depend mostly on the board, but it can also depend on how many signals you are measuring. For instance, terminal blocks with 68 screw terminals offer more ground terminals to connect your signal to than one with 50 screw terminals. Having more ground pins prevents the need to overlap wires to reach a ground terminal which can cause interference between the signals. The pinout for a 50-pin terminal block is shown above. Terminal blocks can also be either shielded or non-shielded. Shielded terminal blocks offer better protection against noise. Some terminal blocks have extra features such as cold-junction compensation that is necessary to properly measure a thermocouple. CableThe purpose of a cable is to transport your signal from the terminal block to your DAQ device. Cables come in a variety of 100, 68, or 50 pin configurations. Choosing a configuration will depend on the terminal block and the DAQ device you are using. Cables are either shielded or non-shilded (ribbon).To learn more about specific types of terminal blocks and cables check out the Data Acquisition section of the National Instruments catalog, or go to www.ni.com/catalog.*Most DAQ devices have four standard elements: Analog Input, Analog Ouput, Digital I/O, and Counters. 
The most common National Instruments DAQ devices are called the E-Series. A typical E-Series device consists of 16 analog input channels, 2 analog output channels, 8 digital lines, and 2 counters. We also offer specialty devices for applications where an E-Series board is not applicable. For instance, we have high speed digital devices that offer timed digital I/O, high speed analog output devices for advanced waveform generation, and Dynamic Signal Acquisition (DSA) devices for doing analysis of rapidly changing signals such as vibration, or sonar.All DAQ devices use the PC as a platform. The signal you have measure with your DAQ device can be transferred to the computer through a variety of different bus structures. For instance, you could have a DAQ device that plugs into the PCI bus of your PC, you could have a DAQ device connected to the PCMCIA socket of your laptop, you could have a DAQ device connected to the USB port of your computer, or you could use the PXI/CompactPCI form factor to have a portable, versatile, and rugged measurement system. To learn more about specific types of Data Acquisition devices check out the Data Acquisition section of the National Instruments catalog, or go to www.ni.com/catalog.*Now that we have learned the different components of a DAQ device, we will focus on aspects of our analog input and analog output circuitry that will affect how we configure our DAQ device. Specific to analog input we will discuss the resolution and range of our Analog-to-Digital Converter; the gain applied by the instrumentation amplifier; combining the resolution, range, and gain to calculate a property called the code width, and the mode of our DAQ device. Specific to analog output we will discuss the choice between an internal versus an external voltage reference, and how this affects the range of the Digital-to-Analog Converter (DAC), as well as the choice between generating a bipolar versus unipolar signal.*As we learned earlier an Analog-to-Digital Converter (ADC) takes an analog signal and turns it into a binary number. Therefore, each binary number from the ADC represents a certain voltage level. The ADC returns the highest possible level without going over the actual voltage level of the analog signal. Resolution refers to the number of binary levels the ADC can use to represent a signal. To figure out the number of binary levels available based on the resolution you simply take 2Resolution. Therefore, the higher the resolution, the more levels you will have to represent your signal. For instance, an ADC with 3-bit resolution can measure 23 or 8 voltage levels, while an ADC with 12-bit resolution can measure 212 or 4096 voltage levels. Even though ADCs are not made with only 3-bit resolution let us further examine our example of a 3-bit ADC. The lowest voltage level will correspond to 000, the next highest to 001, and so on all the way up to 111. As we will see next this is usually not enough resolution to properly represent a signal.*Let us examine how a sine wave would look if it is passed through ADCs with different resolutions. We will compare a 3-bit ADC and a 16-bit ADC. As we learned earlier a 3-bit ADC can represent 8 discrete voltage levels. A 16-bit ADC can represent 65,536 discrete voltage levels. As you can see the representation of our sine wave with 3-bit resolution looks more like a step function than a sine wave. However, the 16-bit ADC gives us a clean looking sine wave. One way to think of resolution is by considering your television screen. 
The higher the resolution of the screen, the more pixels you have to show the picture, so you will get a better picture. Another way to think resolution is by considering the amount of colors your computer monitor uses to display an image. If you are only using 16 colors the picture is choppy and doesnt look very good, but if you use 16-bit color the picture is smooth and looks great. Keep in mind that resolution is a fixed quantity of an ADC, and it depends on the DAQ device that you use. Your standard National Instruments DAQ device has either 12-bit or 16-bit resolution.*We just learned that the resolution of our ADC determines the number of discrete voltage levels we can represent, but how does the ADC know what voltage level to start at and finish at? Well, ADCs also have a parameter called the range. The range refers to the minimum and maximum analog voltage levels the ADC can digitize. Unlike the resolution of the ADC, the range of the ADC is selectable. Most DAQ devices offer a range from 0 - +10 or -10 to +10. The range is chosen when you configure your device in NI-DAQ. We will learn how to configure our DAQ device in software later in this chapter. Keep in mind that the resolution of the ADC will be spread over whatever range you choose. The larger the range, the more spread out your resolution will be, and you will get a worse representation of your signal. Thus it is important to pick your range to properly fit your input signal. As an example let us reconsider the colors we use to represent an image on our computer monitor. As we said earlier a picture looks better when more colors are used to represent it. Now let us examine the effect that changing the range would have on our picture. Let us compare a picture with 16 color resolution in black and white to a picture with 16 color resolution in color. Our black and white picture will be clearer because our resolution is only spread across two colors instead of all colors. Next we will see this affect with our analog signal. *Choosing the proper range for a signal is very important to help maximize the resolution of our ADC. To illustrate this, let us revisit our sine wave and our 3-bit ADC. Due to poor resolution we are still not going to be able to represent our sine wave very well. However, an improper choice of range can make our representation of the sine wave even worse. Our sine wave has a minimum value of 0 Volts and a maximum value of +10 Volts. If we choose our range as 0 - +10 Volts we will have 8 different voltage levels we can represent. If we were to improperly choose a range of -10 to +10 Volts we would now only have 4 voltage levels to represent our signal, because the other 4 levels would be used by the 0 to -10 Volt range. Our smallest detectable voltage would change from 1.25 to 2.50 and we would get a worse representation of our sine wave. As you can see improperly choosing the range will negatively impact the representation of your signal. However, we do not always have a choice as to what range to pick. For instance, if our sine wave actually went from -2 to +8 Volts, we could not choose 0 to +10 Volts as our range, because the signal does not fit within that range. We would be forced to choose a range of -10 to + 10, even though it spreads out our resolution.*As we just learned, properly choosing the range of your ADC is one way to make sure you are maximizing the resolution of your ADC. Another way to help your signal maximize the resolution of the ADC is by applying a gain. 
Gain refers to any amplification or attenuation of a signal. The gain is not applied by your ADC. Instead the gain is applied by the instrumentation amplifier that proceeds the ADC on your DAQ device. The gain setting is a scaling factor. For example, possible gain settings for an E-Series device are 0.5, 1, 2, 5, 10, 20, 50, or 100. Each voltage level on your incoming signal is multiplied by the gain setting to achieve the amplified or attenuated signal. Unlike resolution that is a fixed setting of the ADC, and range that is chosen when the DAQ device is configured, the gain is specified indirectly. Nowhere in NI-DAQ or in LabVIEW will you find a place to set the gain. The gain is chosen indirectly through a setting called input limits. Input limits refers to the minimum and maximum values of your actual analog input signal. The input limits are specified in LabVIEW. Based on the input limits you set, the largest possible gain is applied to your signal that will keep the signal within the chosen range of the ADC. So instead of needing to calculate the best gain based on your signal and the chosen range, all you need to know is the minimum and maximum values of your signal. If you dont set the input limits of your signal a gain of 1 (no change) will be applied.*Applying a gain to an analog input signal is very similar to amplifying a your voice with a microphone. If you tried speaking in a stadium for 100, 000 people without a microphone, very few of the 100,000 people will be able to hear your voice. However, if you amplify your voice with a microphone you can maximize the number of people that can hear you. In the same way a small signal will not be able to use the entire resolution of the ADC, unless a gain is applied to amplify the signal. Let us take a look at how the gain setting affects an analog input signal. Assume we have a sine wave with a range of 0 to +5 Volts and an ADC range of 0 to 10 Volts. As you can see above if we applied a gain of 1 (no change) to our signal we would only be taking up half of the range, and thus using only half of our resolution. However, if we apply a gain of 2 to our signal we now have a sine wave with a range of 0 to +10 Volts. Now our signal fits exactly in our range and we will be maximizing the use of our resolution. Now let us consider a sine wave with a range of 0 to +6 Volts with the same ADC range of 0 to +10 Volts. We can no longer apply a gain of 2, because our sine wave would have a range of 0 to +12 Volts which exceeds our ADC range. The only gain we can apply is a gain of 1. It is also important to note that if we put a 0 to +5 Volt signal into our device, our graph in LabVIEW will show a 0 to +5 Volt signal regardless of the gain that is applied. The gain setting is only used to maximize the use of the ADC resolution. It will not affect your measurement.*Now that we have learned about resolution, range, and gain we can use them to help calculate a property called the code width. Code width is the smallest change in your signal that your system can detect. The formula for the code width is shown above. As you can see the code width is a property of the resolution, range, and gain. The smaller our code width is the better we can represent our signal. 
The formula confirms what we have already learned in our discussion of resolution, range, and gain: Larger resolution = smaller code width = better representation of the signal Larger gain = smaller code width = better representation of the signal Larger range = larger code width = worse representation of the signalAn example is shown above. Being able to calculate the code width is important in selecting a DAQ device. If you have a signal with a range from 0 to +10 Volts and you need to measure that signal with a precision of 2mV do you need to purchase a DAQ device with a 12-bit ADC or a 16-bit ADC? The next exercise will address just such a question.*Previously, we have learned about transducers, the signals they produce, conditioning those signals, the components of a DAQ device, and considerations to help us optimize the representation of our signal. We are now ready to discuss connecting the signal to our DAQ device. In order to get correct measurements it is very important to properly ground your system. The two components that we are concerned with are the signal source and our measurement system. The term measurement system is used because our system could include signal conditioning hardware as well as a DAQ device. However, for the remainder of our discussion on grounding we will assume that our measurement system only consists of a DAQ device. First we must determine how our signal source is grounded. Then based on how the signal source is grounded we can choose a grounding mode for our measurement system. Throughout our discussion of grounding, Vs will refer to the voltage level of our signal source, and Vm will refer to the voltage measured by our DAQ device. *Our signal source can be placed in one of two categories: Grounded FloatingIt is very important to properly categorize your signal source, because how your signal source is grounded will affect how you ground your measurement system. Next we will discuss each grounding category, and give examples of signal sources that fall into each category.*A grounded signal source is one in which the voltage signals are referenced to a system ground, such as earth or building ground. Note that the negative terminal of the signal source shown above is referenced to ground. The most common examples of grounded signal sources are devices, such as power supplies and signal generators, that plug into the building ground through a wall outlet.

    Note: The grounds of two independently grounded signal sources generally will not be at the same potential. The difference in ground potential between two instruments connected to the same building ground system is typically 10mV to 200mV. The difference can be higher if power distribution circuits are not properly connected.*A floating signal source is one in which the voltage signal is NOT referenced to a system ground, such as earth or building ground. Note that neither the positive or the negative terminal are referenced to ground. Common examples of floating signal sources are batteries, thermocouples, transformers, and isolation amplifiers.*Now that we have learned how to categorize our signal as grounded or floating, we must learn about the three modes of grounding for our measurement system: Differential, Referenced-Single Ended (RSE), and Non-Referenced Single-Ended (NRSE). Next we will discuss how the three different modes ground our instrumentation amplifier.*In a differential measurement system neither input to the instrumentation amplifier is referenced to a system ground. As you can see in the picture above, the AIGND pin, and our amplifier itself are referenced to system ground, but neither of our input terminals references ground in any way. Also note that when we are in differential mode we are using two analog input channels for one signal, thereby cutting our channel count in half. So, a 16 channel DAQ device becomes an 8 channel DAQ device when it is in differential mode. The pairing of the analog input channels holds with the following rules: Positive Terminal - ACH(n) Negative Terminal - ACH(n+8)If I wanted to measure a signal on analog input channel 5, I would connect the positive terminal of my signal to ACH5 and the negative terminal of my signal to ACH 13. The pairing of channels is shown in the picture above.So if putting my DAQ device in differential mode cuts my channel count in half, why would I ever want to use differential mode?Placing your measurement system in differential mode will give you better measurements, because it allows the amplifier to reject common-mode voltage and any common-mode noise that is present in your signal. Common-mode voltage is any voltage present at the instrumentation amplifier inputs with respect to the amplifier ground.*A Referenced Single-Ended measurement system references its measurements to system ground. As you can see above the negative terminal of your signal source is connected to AIGND, which in turn is connected to the system ground. Since we are using AIGND for the negative terminal of our signal, we only need to use one analog input channel per signal. So, a 16 channel DAQ device in RSE remains a 16 channel DAQ device. If I wanted to measure a signal on analog input channel 10, I would connect the positive terminal of my signal to ACH10 and the negative terminal of my signal to AIGND. As you can see on the picture of the pinout, multiple AIGND pins are provided to prevent the overlapping of input wires that can cause interference between signals. While RSE mode does maintain the channel count of your DAQ device, it does not reject common-mode voltages. Too much common-mode voltage can cause measurement errors and may damage your device.*National Instruments DAQ Devices offer a variant on RSE mode called Non-Referenced Single-Ended (NRSE) mode. 
In NRSE mode, all measurements are still made with respect to a common reference as in RSE mode, but unlike RSE mode the voltage of this reference can vary with respect to system ground. As you can see above, the negative terminal of your signal is connected to the AISENSE pin, and AISENSE is not referenced to ground at all. Therefore the voltage of AISENSE is floating. As you can see in the pinout above, the board has only one AISENSE pin for connecting your signal source, because we need to make sure each signal uses the same reference. Similar to RSE mode, NRSE mode maintains the channel count of the DAQ device, and does not reject common-mode voltages. So when would I want to use RSE versus NRSE? That question will be answered in the following pages as we discuss how to choose a grounding mode for our measurement system based on how our signal source is grounded. *We have separately discussed how our signal source is grounded, as well as how we can ground our measurement system. Now it is time to put both pieces together. Based on how our signal source is grounded we will learn how to choose the proper measurement system grounding mode.*Assume you have a grounded signal. What grounding mode should you choose for your measurement system? We will go through each measurement mode, discuss the benefits and drawbacks of each mode, and draw conclusions as to which mode is the best for a grounded signal source.DifferentialDifferential mode will cut the channel count of your DAQ device in half, however differential mode offers better measurements because it allows the instrumentation amplifier to reject common-mode voltage and any common-mode noise that is present in the signal.Reference Single-Ended (RSE)RSE mode is NOT RECOMMENDED for use with a grounded signal source. As was mentioned earlier during our discussion of grounded signal sources, the grounds of two independently grounded signal sources generally will not be at the same potential. Both our signal source and our measurement system are grounded. The difference in potential between the signal source ground and the measurement system ground is called a ground loop. Any time you can draw a line on your circuit diagram directly from one ground in your system to another ground, you have a ground loop. A ground loop may result in erroneous measurements. A ground loop introduces both AC and DC noise to the measurement in the form of a power-line frequency component (60Hz AC), and offset errors (DC).*Now, assume you have a floating signal. What grounding mode should you choose for your measurement system? We will go through each measurement mode, discuss the benefits and drawbacks of each mode, and draw conclusions as to which mode is the best for a floating signal source.DifferentialDifferential mode will cut the channel count of your DAQ device in half, however differential mode offers better measurements because it allows the instrumentation amplifier to reject common-mode voltage and any common-mode noise that is present in the signal. You will also need to use bias resistors to provide a path to ground for any bias current in the instrumentation amplifier.Reference Single-Ended (RSE)RSE mode will maintain the channel count of your DAQ device, and bias resistors are not needed, because a path to ground is provided by the measurement system. However, RSE mode does not reject common-mode voltages.Non-Reference Single Ended (NRSE)NRSE mode will maintain the channel count of your DAQ device. 
However, bias resistors are necessary to provide a path to ground for any bias current in the instrumentation amplifier. Also, NRSE mode will not reject common-mode voltages.

    *The final component of a complete Data Acquisition System is the software. In this section we will discuss the different levels of DAQ software that are used to program your DAQ device. The three levels are NI-DAQ, Measurement & Automation Explorer (MAX), and LabVIEW. The remainder of this chapter will contain an overview of NI-DAQ and MAX. The remaining four chapters will focus on using LabVIEW.*We have come all the way from the point of wanting to measure some physical phenomena to turning that phenomena into a binary number with the Analog-to-Digital Converter on our DAQ device. We have now left the hardware realm and are entering the software realm. Before we start to discuss software it is important to know what software we have, and how that software interacts. Our lowest level of software is NI-DAQ. NI-DAQ is the software closest to your DAQ device, and LabVIEW is the software that is closest to the user. MAX (Measurement & Automation eXplorer) lies between LabVIEW and NI-DAQ. The rest of this chapter will focus on NI-DAQ and MAX, and Chapters 3 - 6 will focus on using LabVIEW.*As we saw on the previous page, NI-DAQ is a driver level software that communicates with your National Instruments DAQ device. The main component of NI-DAQ is a DLL called nidaq32.dll. The nidaq32.dll contains function calls for programming your National Instruments DAQ device. It is important to note that you cannot use NI-DAQ with 3rd party Data Acquisition devices. The vendor of the device will have to provide you with a driver specific to their device. The 3rd party driver is usually a DLL that can be called from LabVIEW. NI-DAQ is compatible with the following software programs: National Instruments LabVIEW National Instruments Measurement Studio Microsoft Visual C/C++ Visual Basic Borland C++ Borland DelphiNI-DAQ comes with example programs for each one of the software languages listed above. *The next level of software we are concerned with is called Measurement & Automation Explorer (MAX). MAX is a software interface that gives you access to all of your National Instruments DAQ, GPIB, IMAQ, IVI, Motion, VISA, and VXI devices. The shortcut to MAX will be placed on your desktop after installation. A picture of the icon is shown above. MAX is mainly used to configure and test your National Instruments hardware, but it does offer other functionality such as checking to see if you have the latest version of NI-DAQ installed. The functionality of MAX is broken into four categories: Data Neighborhood Devices and Interfaces Scales SoftwareWe will now step through each one of these categories and learn about the functionality each one offers. *MAX has four folders that provide a range of functions from configuring your device to testing your device to updating your software. The first of these folders is Data Neighborhood. Data Neighborhood is the home of your virtual channels. We will learn more about what a virtual channel is in a moment. The Data Neighborhood folder will show you all of your currently configured virtual channels, and provides utilities for testing and reconfiguring those virtual channels. Data Neighborhood also provides access to the DAQ Channel Wizard which allows you to create new virtual channels. Lets learn more about the DAQ Channel Wizard and virtual channels.*The DAQ Channel Wizard is a software interface that is used to create new virtual channels. A virtual channel is a shortcut to a configured channels in your system. 
In other words, you can set up the configuration information for your channel at one time, and give the channel a name that you can later use to access that channel and its configuration information in LabVIEW. For instance, you can document your channel with a description, decide what type of transducer will be used with your channel, set the range (determines gain), choose the grounding mode (Differential, RSE or NRSE), and assign custom scaling for your virtual channel. As was stated earlier you can give each channel a descriptive name instead of having to refer to it with a number. For instance, channel 0 on the DAQ Signal Accessory is hard-wired to a temperature sensor, so you could create a virtual channel for channel 0 and call it Temperature Sensor which tells you much more about what the channel does than just a 0 can. Virtual channels can be created for Analog Input, Analog Output, and Digital I/O.

    *The next folder in MAX is called Devices and Interfaces. As the name implies Devices and Interfaces will show you any currently installed and detected National Instruments hardware. Devices and Interfaces also includes utilities for configuring and testing your devices. The two utilities that are specific to DAQ devices are Properties and Test Panels. We will first discuss Properties and then Test Panels. *Properties is a utility for configuring your DAQ device. When you launch the Properties utility the window shown above should appear. As you can see there are 5 tabs on the top. The 5 tabs are used for configuring your DAQ device. We will go through each tab individually.SystemThe system tab allows you to change your device number, and it also provides two buttons for testing your DAQ device. The first button is the Test Resources button. After you have installed your DAQ device, the first thing you should do is come to the Properties through Devices and Interfaces and press the Test Resources button. The Test Resources button performs a basic test of the system resources assigned to the board. The system resources tested are the Base I/O Address, the Interrupt Request (IRQ), and the Direct Memory Access (DMA). We will briefly describe the purpose of each system resource.Base I/O Address - The DAQ device communicates with the computer primarily through its registers. The driver software writes to configuration registers on the device to configure the device. The software reads data registers on the device to obtain the devices status or a signal measurement. The base I/O address setting determines where in the computers I/O space the devices registers reside.*After your device has passed the basic resource test, and you have configured the System, AI, AO, Accessory, and OPC tab, you should return to the System tab and press the Test Panels button. You will see the window shown above. The Test Panel is a utility for testing the analog input, analog output, digital I/O, and counter functionality of your DAQ device. The Test Panel is a great utility for troubleshooting, because it allows you to test the functionality of your device directly from NI-DAQ. If your device doesnt work in the Test Panel it isnt going to work in LabVIEW. If you are ever having unexplainable trouble with a LabVIEW program that does Data Acquisition it is good practice to double check the Resource test and the Test Panel to make sure the device is working properly. *The third folder in MAX is the Scales folder. The Scales folder is the home of your custom scales. We will learn more about what a custom scale is in a moment. The Scales folder will show you all of your currently configured custom scales, and provides utilities for testing and reconfiguring those custom scales. Scales also provides access to the DAQ Custom Scales Wizard which allows you to create new custom scales. Lets learn more about the DAQ Custom Scales Wizard and custom scales. *The DAQ Custom Scales Wizard is a utility to create custom scales that can be used to provide scaling information for existing virtual channels. Each custom scale can have its own name and description to help identify it. A custom scale can be one of three types: Linear, Polynomial, or Table.LinearA scale that uses the formula y =mx +b.PolynomialA scale that uses the formula y = a0 + a1*x + a2*x^2 + + an*x^n.TableA scale where you enter the raw value and the corresponding scaled value in a table format.

    *When we are acquiring an analog signal we are taking signal that is continuous with respect to time (infinite amount of points), and converting it into a series of discrete samples (finite amount of points). The samples are taken at a rate referred to as the sampling rate. We will learn how to set the sampling rate in software later. The faster we sample, the more points we will acquire, and therefore the better our representation of the signal will be. If we dont sample fast enough we will experience a problem known as aliasing. We will learn about aliasing next.*Aliasing refers to a misrepresentation of our signal frequency due to undersampling of the signal. Examine the diagram above. The signal we are trying to measure is a sine wave. If we adequately sample our sine wave we will get the correct frequency. However if we undersample the signal we will get the incorrect frequency. As you can see the adequately sampled signal goes through three cycles in the same time that our aliased signal goes through one cycle. The undersampled signal is often simply referred to as an alias. Next we will discuss a rule called the Nyquist Theorem that will help us to avoid aliasing our signal.*The Nyquist Theorem is a rule we can follow to prevent aliasing our signal. The Nyquist Theorem states that you must sample at greater than 2 times the maximum frequency component of your signal to accurately represent the frequency of the signal. Notice that the Nyquist Theorem only deals with accurately representing the frequency of the signal. It doesnt mention anything about properly representing the shape of our signal. In order to properly represent the shape of your signal you must sample between 5 - 10 times greater than the maximum frequency component of your signal. Next we will illustrate the Nyquist Theorem with various examples.*Assume you are measuring a 100Hz sine wave. First we will try sampling our signal at exactly 100Hz. Keep in mind that the signals shown above are theoretical approximations. It is often very difficult for both the signal and ths sampling rate to be at exactly the same frequency. According to the Nyquist Theorem this is not fast enough to correctly represent the frequency of our signal. If our signal frequency is exactly 100Hz and we are measuring at exactly 100Hz we will get a straight line. So we are obviously not correctly representing either the shape or the frequency of our signal. Therefore the Nyquist Theorem and our guideline for shape both hold true. Keep in mind that the signals shown above are theoretical approximations. It is often very difficult for both the signal and the sampling rate to be at exactly the same frequency. Now let us sample our signal at 200Hz. Note that this is exactly twice the frequency of our signal, so according to the Nyquist Theorem this is just fast enough to correctly represent the frequency of our signal. However, it is not fast enough to correctly represent the shape of our signal. If our signal is exactly 100Hz and if we sample at exactly 200Hz we will get the triangle wave shown above. Notice that the 100Hz sine wave and the triangle wave have different shapes, but the same frequency. So the Nyquist Theorem and our guideline for shape still hold true. Finally, we will sample our signal at 1kHz. Since we are sampling at 10 times our frequency, we should be able to accurately represent both the frequency and shape of our signal. 
As you would expect, our sampled signal does look like a sine wave, and it has the same frequency as our measured signal. So the Nyquist Theorem and our guideline for representing the shape of our signal both hold true. *Now that we have learned about transducers, signals, signal conditioning, DAQ hardware, NI-DAQ, and MAX, we can talk about programming with our application software. In the case of this course we will be using LabVIEW as our application software. As was mentioned earlier, NI-DAQ supports software packages other than LabVIEW. If you are interested in using a programming language other than LabVIEW to program your DAQ system, please refer to the NI-DAQ Function Reference Manual and NI-DAQ User Manual for your respective version of NI-DAQ. The DAQ functionality in LabVIEW is located in the Data Acquisition Palette. The Data Acquisition Palette has six different subpalettes and the DAQ Channel Name Constant. The six subpalettes are Analog Input, Analog Output, Digital I/O, Counter, Calibration, and Signal Conditioning. The remainder of this course will focus on the Analog Input, Analog Output, Digital I/O, and Counter palettes. For more information on the Calibration and Signal Conditioning palettes please refer to the LabVIEW Online Help. The DAQ Channel Name constant is new as of LabVIEW 6i and will be discussed later in this section.*Before we learn how to use the analog input VIs, we need to learn about a few of the data types that are used with most of the Data Acquisition VIs. The first of these data types is the DAQ Channel Name. The purpose of the DAQ Channel Name data type is to set the channels that you want to address with your VI. The DAQ Channel Name data type can be used with any of the Data Acquisition VIs that use a channel input. The DAQ Channel Name data type replaces the string data type that was used in LabVIEW 5.1 and earlier. Like the channel string, the DAQ Channel Name data type allows you to either specify your channel as a number or use a virtual channel. Unlike the channel string, the DAQ Channel Name will automatically detect all of your currently configured virtual channels, and display them in a menu ring for you to choose from (shown above). So, you dont have to try and remember the exact name of your virtual channels, because the DAQ Channel Name lists them for you. Remember that you can only create virtual channels for Analog Input, Analog Output, and Digital I/O, so the DAQ Channel Name data type is not as useful when used with Counter VIs. You can either use a DAQ Channel Name control to set your channels on the front panel, or you can use a DAQ Channel Name Constant to hard-wire in your channel choice on the block diagram. The control, terminal on the block diagram, and constant are shown above.*In this chapter we will be learning how to program analog input operations in LabVIEW. The Analog Input palette is shown above. The Analog Input VIs are divided into four categories: Easy, Utility, Intermediate, and Advanced. We will discuss each category separately.Easy VIsThe Easy VIs are designed to fit a specific application. They are very easy to program, but they offer little flexibility in the event the VI doesnt meet your application needs. If you find that an Easy VI doesnt offer the functionality you need for your application you will want to use either a Utility, Intermediate, or Advanced VI. Utility VIsUtility VIs are convenient groups of Intermediate VIs. The Easy VIs are built out of the Utility VIs. 
In this chapter we will learn how to program analog input operations in LabVIEW. The Analog Input palette is shown above. The Analog Input VIs are divided into four categories: Easy, Utility, Intermediate, and Advanced. We will discuss each category separately.

Easy VIs
The Easy VIs are designed to fit a specific application. They are very easy to program, but they offer little flexibility if the VI doesn't meet your application needs. If you find that an Easy VI doesn't offer the functionality you need, you will want to use a Utility, Intermediate, or Advanced VI.

Utility VIs
Utility VIs are convenient groupings of Intermediate VIs; the Easy VIs are built out of the Utility VIs. Utility VIs fall between Easy VIs and Intermediate VIs in terms of flexibility and functionality. If an Easy VI comes close to meeting your application, it might be possible to obtain the needed functionality with a Utility VI. If the Utility VI still doesn't meet your needs, try an Intermediate VI.

In the spirit of the familiar adage "one step at a time," we will start by learning how to acquire an analog signal one point at a time. The Analog Input palette has two Easy VIs designed to acquire one point at a time: AI Sample Channel and AI Sample Channels. As the names suggest, AI Sample Channel acquires one point on one channel, and AI Sample Channels acquires one point on multiple channels. Both perform a software-timed, non-buffered acquisition. We will discuss each term separately.

Software-timed
In a software-timed acquisition, the rate at which samples are acquired is determined by the software instead of by the DAQ device. The AI Sample Channel(s) VIs acquire only one point each time they are called, so to acquire multiple points you must put them in a software loop. The loop rate controls the acquisition rate, so the acquisition is software-timed.

Non-Buffered
In a non-buffered acquisition, the acquired points are passed directly from the device to LabVIEW without an intermediate buffer in PC memory. The AI Sample Channel(s) VIs acquire only one point each time they are called, so that point can be brought straight into LabVIEW as it is acquired. We will learn later that if multiple points are acquired at once, a buffer is needed to store the points before they are brought into LabVIEW.

We have already learned how to acquire one point at a time. Now we will take the next step and learn how to acquire multiple points at a time. To do so we will use the intermediate VIs. The four intermediate VIs we will focus on are AI Config, AI Start, AI Read, and AI Clear. A summary of what each VI does is shown above; we will learn about each VI specifically later in the chapter. The combination of these four VIs allows you to perform a hardware-timed, buffered acquisition. We will go through each of these terms separately.

Hardware-timed
In a hardware-timed acquisition, the rate of acquisition is controlled by a hardware signal such as a scan clock or a channel clock. A hardware clock can run much faster than a software loop, so you can sample a higher range of frequencies without aliasing your signal. A hardware clock is also more accurate than a software loop: a software loop rate can be thrown off by a variety of events, such as another program opening on your computer, while a hardware clock is not susceptible to such distractions.
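To make the software-timed, one-point-at-a-time idea concrete, here is a minimal Python sketch. It is not NI-DAQ code; read_one_sample() is a hypothetical stand-in for a single-point read such as AI Sample Channel, and the target rate is an assumption.

    # Sketch of a software-timed, non-buffered acquisition loop.
    import random
    import time

    def read_one_sample() -> float:
        # Stand-in for the hardware read; returns fake data here.
        return random.uniform(0.0, 5.0)

    TARGET_RATE_HZ = 100          # rate we *ask* for in software
    samples = []
    start = time.perf_counter()
    for i in range(500):
        samples.append(read_one_sample())
        # Sleep until the next scheduled sample time; the OS scheduler,
        # not a hardware clock, decides how accurate this actually is.
        next_t = start + (i + 1) / TARGET_RATE_HZ
        time.sleep(max(0.0, next_t - time.perf_counter()))

    elapsed = time.perf_counter() - start
    print(f"achieved rate ~ {len(samples) / elapsed:.1f} S/s (requested {TARGET_RATE_HZ})")

Because the loop timing rides on the operating system, rates much beyond a few hundred samples per second, or any tight jitter requirement, are better served by the hardware-timed, buffered approach described above.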
The first intermediate VI we will examine is AI Config. The parameters of AI Config are listed below.

Interchannel Delay
Sets the period of the channel clock. Remember that the channel clock controls the time between samples within a scan.

Input Limits
Allows you to set the high and low limits for your signal. The high and low limits are used to determine the gain setting of the instrumentation amplifier (see Chapter 2).

Device
The number of the device you want to address. The device number for each device can be found in Measurement & Automation Explorer (MAX). The device number is not necessary if you are using a virtual channel, because the device number is already part of the virtual channel information.

Channels
Chooses the channel or channels that you want to acquire samples from. The channels input can be either a number or a virtual channel. If you use a virtual channel, the device and input limits inputs are unnecessary, because that information is already part of the virtual channel. Notice that the channels input expects a 1D array of DAQ Channel Name controls or constants.

One of the benefits of using AI Config is the ability to set different gains for different channels. The channels input to AI Config is a 1D array of DAQ Channel Name controls that specifies the channels you will use in your acquisition. The input limits control is a 1D array of clusters that specifies the gain setting for those channels. The first element of the channels array corresponds to the first element of the input limits array, so you can set different gains for different channels. For instance, the example above shows a gain setting of 20 for channel 0 and a gain setting of 2 for channel 1. The example assumes that the range is set to 0 to +10 V.

The second intermediate VI we will examine is AI Start. The parameters for AI Start are listed below.

Task ID In/Out
Receives configuration information from, and passes it to, the other VIs.

Number of Scans to Acquire
Sets the total number of scans to acquire before the acquisition completes. The default value is -1, which sets the number of scans to acquire equal to the buffer size that was specified in AI Config. Set number of scans to acquire to 0 for a continuous acquisition.

Scan Rate
Sets the number of scans per second for the acquisition.

Error In/Out
Receives any error information from, and passes it to, the other VIs.

The final two intermediate VIs we will examine are AI Read and AI Clear. The parameters for both are listed below.

Task ID In/Out
Receives configuration information from, and passes it to, the other VIs.

Number of Scans to Read
Specifies how many scans to retrieve from the buffer. The default value is -1, which sets the number of scans to read equal to the number of scans to acquire set in AI Start. If number of scans to acquire in AI Start is set to 0 (continuous acquisition), the default value for number of scans to read is 100.

Scan Backlog
Tells you how many scans are still sitting in the buffer waiting to be read.

Waveform Data
Returns the data as a 1D array of waveforms. Since we have specified a sampling rate, waveform data returns dt, t0, and the Y array of data.

Error In/Out
Receives any error information from, and passes it to, the other VIs.

The error in/out cluster used by the intermediate VIs is made up of a boolean, a numeric, and a string. The boolean is TRUE if an error occurred and FALSE if no error occurred. The numeric gives the code of the error: a negative code is an error, which sets the boolean to TRUE; a positive code is a warning and does not set the boolean to TRUE. A warning lets you know that the VI can still run, but a parameter is incorrect and might affect your results. The string tells you the source of the error. The string description is usually not detailed enough to determine the cause of the error, but you can right-click the edge of the error cluster and choose Explain Error for a more detailed description. Another option is to use the error handling VIs, which are explained next.
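Before moving on to the error-handling flow, here is a back-of-the-envelope Python illustration of how input limits map to a gain setting. The gain list and the 0 to 0.5 V and 0 to 5 V limits below are assumptions chosen to reproduce the gain-20 and gain-2 example above; a real device publishes its own gain options.

    # Pick the largest gain that keeps the amplified signal inside the ADC range.
    AVAILABLE_GAINS = [0.5, 1, 2, 5, 10, 20, 50, 100]   # typical values; assumed here
    ADC_RANGE = (0.0, 10.0)                              # 0 to +10 V range, as in the example

    def pick_gain(low_limit: float, high_limit: float) -> float:
        usable = [g for g in AVAILABLE_GAINS
                  if ADC_RANGE[0] <= low_limit * g and high_limit * g <= ADC_RANGE[1]]
        return max(usable)                               # largest gain = finest resolution

    # Channel 0 expects 0 to 0.5 V, channel 1 expects 0 to 5 V (assumed limits).
    print(pick_gain(0.0, 0.5))   # -> 20
    print(pick_gain(0.0, 5.0))   # -> 2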
Now that we have learned about the specific intermediate VIs we can use to perform a hardware-timed, buffered acquisition, we can see how they fit together. Above is a flowchart for a simple buffered acquisition. A buffered acquisition acquires a set number of points at a specified rate. We start by calling AI Config to configure our device. Then we use AI Start to start the acquisition. AI Read waits until all of the scans are available before returning the data and passing us on to AI Clear, where we free the resources assigned to the device. After we have cleared the device, we can check for any errors.

Now that we understand the flow of a buffered acquisition, we can look at how one is actually programmed in LabVIEW. As you can see, we have followed the flowchart. AI Config sets up our device, channels, buffer size, and input limits. AI Start sets the scan rate and then starts the acquisition. We then sit inside AI Read until the buffer is full. When the buffer is full, the acquisition stops and AI Read returns all of the data from the buffer. Notice how we can pass the waveform data type straight into a waveform graph. Neither the number of scans to acquire input for AI Start nor the number of scans to read input for AI Read is necessary, because both default to the buffer size chosen in AI Config. The scan backlog should be 0, so we don't need to monitor it. We then clear the device and display any errors. We pass both the task ID and the error cluster from one VI to the next. Passing the error cluster is actually very important: if the error cluster is passed into AI Config, AI Start, or AI Read with a status of TRUE, the VI passes the error on without executing; if the error cluster is passed into AI Clear with a status of TRUE, AI Clear passes the error on but still executes. Let us go through an example. Assume an error occurred in AI Start. AI Start ceases its execution and passes the error on to AI Read. AI Read sees that the error boolean is TRUE, does not execute, and simply passes the error on. AI Clear sees that the error boolean is TRUE but still executes. After AI Clear finishes executing, it passes the error on to the error handler for display. The reason for this is simple: if an error occurs, we still want to clear the resources assigned to the device, but we would not want to execute AI Config, AI Start, or AI Read.

Now that we have mastered a buffered acquisition, we will move on to a continuous buffered acquisition. The main difference between the two is the number of points that are acquired. With a buffered acquisition we acquire a set number of points; with a continuous buffered acquisition we can acquire data continuously. The flowchart for a continuous buffered acquisition is shown above. The flowchart is the same as the buffered flowchart for the first three steps: we configure our device with AI Config, start the acquisition with AI Start, and then prepare to read the data with AI Read. At this point the continuous buffered acquisition flowchart strays from the buffered acquisition flowchart. Since we are acquiring data continuously, we also need to be reading data continuously, so AI Read sits in a loop. The loop finishes when either an error occurs or the user stops it from the front panel. If we are not done, we continue to read data. If we are done, we go to AI Clear to release our resources and display any errors with either the Simple or General Error Handler.
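The skip-on-error chaining described above can be sketched in a few lines of Python. The function names only mirror the VI names for readability; none of this is NI-DAQ code, and the error code value is made up.

    # Config/start/read refuse to run if an error is already present;
    # clear always runs so the device's resources are released.
    from dataclasses import dataclass

    @dataclass
    class ErrorCluster:
        status: bool = False   # TRUE if an error occurred
        code: int = 0          # negative = error, positive = warning
        source: str = ""       # where the error came from

    def run_step(name: str, err: ErrorCluster, fail: bool = False) -> ErrorCluster:
        if err.status:                 # skip execution, just pass the error on
            return err
        print(f"{name} executing")
        if fail:                       # simulate a failure inside this step
            return ErrorCluster(True, -1, name)   # negative code = error (value illustrative)
        return err

    def clear_step(err: ErrorCluster) -> ErrorCluster:
        print("AI Clear executing (always runs, even on error)")
        return err

    err = ErrorCluster()
    err = run_step("AI Config", err)
    err = run_step("AI Start", err, fail=True)   # pretend AI Start fails
    err = run_step("AI Read", err)               # skipped: error already set
    err = clear_step(err)                        # still executes
    print(f"handler sees: status={err.status}, code={err.code}, source={err.source}")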
Now that we understand the flowchart of a continuous buffered acquisition, we will examine one in LabVIEW. The VI is very similar to a buffered acquisition, with the following changes:
- The number of scans to acquire input for AI Start is set to 0.
- AI Read has a while loop around it.
- The number of scans to read input no longer equals the buffer size.
- The scan backlog is monitored.
So again we start by configuring our device, channels, buffer size, and input limits with AI Config. Then we set the scan rate and start a continuous buffered acquisition (number of scans to acquire = 0) with AI Start. We then enter the while loop with AI Read. The number of scans to read can no longer equal the buffer size; in fact, it is good practice to set the number of scans to read to 1/4 to 1/2 of the buffer size for a continuous buffered acquisition. Since we are continually sending data into the buffer, it is important to monitor the number of unread scans (the scan backlog) to see whether we are emptying the buffer fast enough. If the scan backlog is increasing steadily, you will most likely overflow your buffer and receive an error. The while loop around AI Read can be stopped either by the user with a button on the front panel or by an error in AI Read, such as a buffer overflow. After the while loop stops, we clear all the resources and display our errors.

Before we learn how to write an analog output program in LabVIEW, we need to examine the analog output architecture of our DAQ device. Most E-Series devices have a Digital-to-Analog Converter (DAC) for each analog output channel. All of the DACs are updated at the same time, so the output of the analog output channels is synchronized, similar to the way analog input channels are synchronized when we perform simultaneous sampling (see Chapter 3).

In this chapter we will learn how to program analog output operations in LabVIEW. The Analog Output palette is shown above. The Analog Output VIs are divided into four categories: Easy, Utility, Intermediate, and Advanced. We will discuss each category separately.

Easy VIs
The Easy VIs are designed to fit a specific application. They are very easy to program, but they offer little flexibility if the VI doesn't meet your application needs. If you find that an Easy VI doesn't offer the functionality you need, you will want to use a Utility, Intermediate, or Advanced VI.

Utility VIs
Utility VIs are convenient groupings of Intermediate VIs; the Easy VIs are built out of the Utility VIs. Utility VIs fall between Easy VIs and Intermediate VIs in terms of flexibility and functionality. If an Easy VI comes close to meeting your application, it might be possible to obtain the needed functionality with a Utility VI. If the Utility VI still doesn't meet your needs, try an Intermediate VI.
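Returning to the scan-backlog guideline above, a tiny producer/consumer simulation in Python shows why a loop that drains the buffer too slowly eventually overflows it. All numbers are made up for illustration.

    # Toy simulation of a continuous acquisition buffer: the hardware writes
    # scans in at a fixed rate and each loop iteration removes one read's worth.
    BUFFER_SIZE = 4_000
    SCAN_RATE = 1_000                    # scans per second produced by the hardware
    SCANS_TO_READ = BUFFER_SIZE // 4     # 1,000 scans per read (the 1/4-buffer rule of thumb)
    LOOP_PERIOD_S = 1.5                  # pretend the loop is too slow to keep up

    backlog = 0
    for iteration in range(10):
        backlog += int(SCAN_RATE * LOOP_PERIOD_S)   # 1,500 new scans land in the buffer
        backlog -= min(backlog, SCANS_TO_READ)      # we only drain 1,000 of them
        print(f"iteration {iteration}: backlog = {backlog} scans")
        if backlog > BUFFER_SIZE:
            print("backlog exceeds the buffer -> overflow error on the next read")
            break

A steadily climbing backlog like this is the cue to read more scans per iteration, speed up the loop, or enlarge the buffer.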

As we did with analog input, we will start by learning how to generate an analog signal one point at a time. The Analog Output palette has two Easy VIs designed to generate one point at a time: AO Update Channel and AO Update Channels. As the names suggest, AO Update Channel generates one point on one channel, and AO Update Channels generates one point on multiple channels. Both perform a software-timed, non-buffered generation. We will discuss each term separately.

Software-timed
In a software-timed generation, the rate at which samples are generated is determined by the software instead of by the DAQ device. The AO Update Channel(s) VIs generate only one point each time they are called, so to generate multiple points you must put them in a software loop. The loop rate controls the generation rate, so the generation is software-timed.

Non-Buffered
In a non-buffered generation, the samples are sent directly from LabVIEW to the device without an intermediate buffer in PC memory. The AO Update Channel(s) VIs generate only one point each time they are called, so that point can be sent straight out to the device as it is generated. If we want to generate multiple points at a time, a buffer is necessary to store the points before they are sent to the device.
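For a rough feel of what software timing can and cannot do on the output side, the arithmetic below uses two assumed numbers: an optimistic 1 kHz software loop rate and a 10-points-per-cycle shape guideline.

    # Rough ceiling on the sine frequency you could generate point-by-point
    # from a software loop. Both inputs are assumptions for illustration.
    LOOP_RATE_HZ = 1_000        # optimistic rate for a software loop on a desktop OS
    POINTS_PER_CYCLE = 10       # points needed to give the output a recognizable shape

    max_sine_hz = LOOP_RATE_HZ / POINTS_PER_CYCLE
    print(f"~{max_sine_hz:.0f} Hz is about the fastest clean sine from a software loop")
    # -> ~100 Hz; anything faster calls for the hardware-timed, buffered
    #    generation covered next.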

The only difference between the AO Update Channel VI and the AO Update Channels VI is that AO Update Channels can generate one point on multiple channels instead of just one channel. In all other respects the two VIs are identical, so we will look at the more complicated of the two: AO Update Channels. The parameters for AO Update Channels are listed below.

Device
The number of the device you want to address. The device number for each device can be found in Measurement & Automation Explorer (MAX). The device number is not necessary if you are using a virtual channel, because the device number is already part of the virtual channel information.

Channels
Chooses the channel or channels that you want to generate the sample on. The channels input can be either a number or a virtual channel. If you use a virtual channel, the device input is unnecessary, because it is already part of the virtual channel information. Prior to LabVIEW 6i the channels input was a string data type; in LabVIEW 6i and later it uses the DAQ Channel Name data type, which was discussed in Chapter 3.

We have already learned how to generate one point at a time. Now we will take the next step and learn how to generate multiple points at a time. To do so we will use the intermediate VIs. The five intermediate VIs we will focus on are AO Config, AO Write, AO Start, AO Wait, and AO Clear. A summary of what each VI does is shown above; we will learn about each VI specifically later in the chapter. The combination of these five VIs allows you to perform a hardware-timed, buffered generation. We will go through each of these terms separately.

Hardware-timed
In a hardware-timed generation, the rate of generation is controlled by a hardware signal called the update clock. A hardware clock can run much faster than a software loop, so you can generate a wider range of signal frequencies and shapes. A hardware clock is also more accurate than a software loop: a software loop rate can be thrown off by a variety of events, such as another program opening on your computer, while a hardware clock is not susceptible to such distractions.

The flowchart for a buffered generation is shown above. We will walk through it step by step. We begin by configuring our device with AO Config. Next we use AO Write to send the data to the PC buffer we allocated with AO Config. Once the data has been written into the PC buffer, we can start the generation with AO Start. After we have started the generation, we sit inside AO Wait until the generation is complete. After the output is complete, we clear the resources for the device and display any errors that occurred. Notice that in a buffered generation the order of operations is configure, write, start, wait, and then clear, whereas the order of operations for a buffered acquisition was configure, start, read, and then clear.
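To keep the two call orders straight, here is a schematic Python sketch; the names simply echo the VI names and the functions are print-only stubs, not NI-DAQ calls.

    # Schematic comparison of the two intermediate-VI call orders.
    def step(name: str) -> None:
        print(name)

    def buffered_acquisition() -> None:
        # Data flows device -> buffer -> LabVIEW, so reading happens after start.
        for name in ("AI Config", "AI Start", "AI Read", "AI Clear"):
            step(name)

    def buffered_generation() -> None:
        # Data flows LabVIEW -> buffer -> device, so the buffer must be
        # filled (AO Write) before the hardware is started.
        for name in ("AO Config", "AO Write", "AO Start", "AO Wait", "AO Clear"):
            step(name)

    buffered_acquisition()
    print("---")
    buffered_generation()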
Now that we understand the flowchart for a buffered generation, we will see how to program it in LabVIEW. AO Config is used to set up our device, channels, and buffer size. We fill the PC buffer with the waveform data that is wired to AO Write. AO Start sets the number of buffer iterations and the update rate, and then begins the generation. If we did not use AO Wait, we would start the generation with AO Start and immediately stop it with AO Clear. In fact, AO Wait is nothing more than AO Write in a loop. Inside AO Wait, we check the value of an output from AO Write called generation complete; when generation complete is TRUE, we exit AO Wait. The timing for calling AO Write within AO Wait is determined by dividing check every N updates by the update rate. We pass both the task ID and the error cluster from one VI to the next. Passing the error cluster is actually very important: if the error cluster is passed into AO Config, AO Write, AO Start, or AO Wait with a status of TRUE, the VI passes the error on without executing; if the error cluster is passed into AO Clear with a status of TRUE, AO Clear passes the error on but still executes. Let us go through an example. Assume an error occurred in AO Start. AO Start ceases its execution and passes the error on to AO Wait. AO Wait sees that the error boolean is TRUE, does not execute, and simply passes the error on. AO Clear sees that the error boolean is TRUE but still executes. After AO Clear finishes executing, it passes the error on to the error handler for display. The reason is the same as for analog input: if an error occurs, we still want to clear the resources assigned to the device, but we would not want to execute AO Config, AO Write, AO Start, or AO Wait.

A characteristic of analog output is that once you write a value to an analog output channel, the channel continues to output that voltage until one of the following happens: the device is reset by the Reset VI (located in the Calibration and Configuration palette) or the power to the device is turned off, or a new value is written to the analog output channel. Assume we are writing a sine wave out to one of our analog output channels and the last value in the buffer is 7. We will generate the entire sine wave, and after the generation is complete the analog output channel will still be generating a value of 7. Rather than resetting the device every time, it is easiest to simply write a value of 0 to the channel after the generation is complete. You can use the AO Write One Update VI located in the Utility palette to perform just such an operation. We will illustrate the use of this VI in the following exercise.

Now that we understand how to perform a buffered generation, we will examine the flowchart for a continuous buffered generation. The main difference between the two is the number of points that are generated. With a buffered generation we generate the data in the buffer a finite number of times; with a continuous buffered generation we can generate data indefinitely. The flowchart is the same as the buffered flowchart for the first three steps: we configure our device with AO Config, write data to the buffer with AO Write, and start the generation with AO Start. At this point the continuous buffered generation flowchart strays from the buffered generation flowchart. We are now continuously generating data, and we need a way to make sure our program does not move on to AO Clear, because that would stop the generation. However, we cannot use AO Wait, because it waits until the generation is complete, and a continuous generation never completes. So we must use a loop that is stopped either by an error or by user input. If the loop is not done, we continue to generate data. If the loop is done, we go to AO Clear to release our resources and display any errors with either the Simple or General Error Handler.
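Picking up the hold-the-last-value behavior described above: conceptually the fix is one extra single-point write after the finite generation ends. In the Python sketch below, write_one_update() is a hypothetical stand-in for a single-point output call such as AO Write One Update; it is not a real NI-DAQ function.

    # After a finite buffered generation, the DAC keeps holding the last
    # buffer value. Writing a single 0 V update afterwards parks the output.
    import numpy as np

    def write_one_update(channel: str, volts: float) -> None:
        # Stand-in for a single-point analog output write.
        print(f"{channel} now holds {volts:+.2f} V")

    buffer = 7.0 * np.sin(np.linspace(0, 2 * np.pi, 100, endpoint=False))
    # ... the hardware-timed generation of `buffer` would happen here ...
    last = buffer[-1]                      # whatever value the buffer ends on
    print(f"after generation the channel keeps outputting {last:+.2f} V")
    write_one_update("ao0", 0.0)           # park the output at 0 V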

Now that we understand the flowchart of a continuous buffered generation, we will examine one in LabVIEW. The VI is very similar to a buffered generation, with the following changes:
- The number of buffer iterations for AO Start is set to 0.
- We are not using AO Wait.
- We are using a second AO Write inside a while loop.
We start by configuring our device, channels, and buffer size with AO Config. Then we write our data to the buffer with AO Write. Next, we set the update rate and start a continuous generation (number of buffer iterations = 0) with AO Start. We have started generating data, but we do not want to call AO Clear until we are done with the generation. In a buffered generation we called AO Wait; however, we cannot use AO Wait here, because it is designed to wait until the generation is complete, and a continuous generation will never complete. So we must use a loop that runs until we stop the generation. We want the loop to stop when either the user presses a stop button on the front panel or an error occurs in the generation. The second AO Write is used to check for an error in the generation. It is important to understand that the AO Write in the loop is used ONLY for error-checking purposes. Notice that we are not writing any new data into the buffer with the second AO Write; the data we sent to the buffer would continue to be generated even if we did not have the AO Write in the loop.
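The shape of that monitoring loop can be sketched in ordinary Python. Here check_generation_status() and user_pressed_stop() are hypothetical stand-ins for the error output of the in-loop AO Write and the front-panel stop button; nothing in the sketch touches real hardware.

    # Sketch of the continuous-generation monitoring loop: nothing new is
    # written to the buffer; the loop only watches for an error or a user stop,
    # then falls through to the clear/cleanup step.
    import time

    def check_generation_status() -> bool:
        """Return True if the (simulated) generation reported an error."""
        return False            # pretend everything is healthy

    def user_pressed_stop(elapsed_s: float) -> bool:
        """Stand-in for the front-panel stop button."""
        return elapsed_s > 2.0  # auto-stop after 2 seconds for this demo

    start = time.perf_counter()
    while True:
        error = check_generation_status()          # role of the second AO Write
        if error or user_pressed_stop(time.perf_counter() - start):
            break
        time.sleep(0.1)                            # no need to spin at full speed

    print("stopping: clearing the task releases the device and halts the output")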

