Generating a PSD

June 25, 2019

Vibration Research software uses Welch’s method for PSD estimation. To summarize the steps:

♦  The process starts with Gaussian, time domain input data (a time history file).

♦  That data is partitioned into frames of equal length in time. Each frame is transformed into the frequency domain using the Fast Fourier Transform (FFT).

♦  The complex frequency-domain data is converted to power by taking the squared-magnitude of each frequency point. These squared-magnitudes (power values) for each frame are averaged together.

♦  The averaged power values are divided by the sample rate to normalize the result to a single Hz.

There are a few additional points, specifically windowing and overlapping, that influence the overall result.

Let’s look at each step in more detail.
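The four steps above can be sketched end-to-end in a few lines of numpy. This is a minimal illustration of the procedure, not the Vibration Research implementation; the function and parameter names are our own, and it uses 0% overlap and a power-normalized Hanning window.

```python
import numpy as np

def psd_welch(x, fs, n_lines, window=np.hanning):
    """Sketch of Welch's method: frame, window, FFT, square, average, normalize."""
    n = 2 * n_lines                          # 2 samples per analysis line
    w = window(n)
    w *= np.sqrt(n / np.sum(w ** 2))         # normalize window to preserve power
    # (1) partition the time history into equal-length frames (0% overlap here)
    frames = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
    # (2)+(3) window each frame, take the FFT, square the magnitudes, average
    pxx = np.mean([np.abs(np.fft.rfft(f * w)) ** 2 for f in frames], axis=0)
    # (4) normalize to a 1 Hz bandwidth (one-sided: double the interior bins)
    pxx /= fs * n
    pxx[1:-1] *= 2.0
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, pxx
```

A sanity check on this sketch: for white noise, the area under the resulting PSD approximates the mean-square (G²) of the input signal.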

Gaussian, Random Data

The graph below shows 5 seconds of Gaussian, random vibration data – a time history file displaying Acceleration vs. Time. It is difficult to extract any meaningful information from this graph, other than that the peak acceleration appears to be around -30 G.

Figure 1. 5-second time history graph

Moving to the Frequency Domain

To learn something useful about this vibration data, it must be viewed in the frequency domain. Two calculations are used, the FFT and the PSD.

The Fast Fourier Transform (FFT) transforms the data into an Acceleration vs. Frequency graph. An FFT graph is often used to monitor the frequency spectrum and focus on changes in that spectrum while viewing live data or playing through a time history file. No windowing, averaging, or normalizing functions are used to create an FFT graph. However, to view the energy distribution across the frequency spectrum, the PSD must be calculated.

Power Spectral Density (PSD)

(1) Calculation begins by dividing a time history file into frames of equal time length.
Lines of Resolution and the Sample Rate determine the width of each frame; there are 2 samples per analysis line. In the image below, the recorded time history was sampled at 8192 Hz with 4096 Lines of Resolution, resulting in a frame width of 1 second. If the number of lines were changed to 1024, the frame width would be 0.25 seconds. This is an important step to remember when selecting a sample rate: using a sample rate that is a power of 2 (2ⁿ) usually results in a PSD where the lines are spaced at a convenient interval.

Figure 2. The time history graph divided into frames
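The frame-width arithmetic above can be verified directly (values taken from the example in the text):

```python
sample_rate = 8192                               # Hz
lines = 4096                                     # Lines of Resolution
samples_per_frame = 2 * lines                    # 2 samples per analysis line
frame_width = samples_per_frame / sample_rate    # seconds per frame -> 1.0
line_spacing = sample_rate / samples_per_frame   # Hz between analysis lines
```

Dropping to 1024 lines gives 2048 samples per frame, or 0.25 seconds at the same sample rate.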

(2) Then an FFT is calculated for each frame, after first applying a windowing function.
An FFT assumes that the data is an infinite series, meaning the starting and ending points of each frame are interpreted as though they were next to each other. With random data, this is most likely not the case, so a windowing function needs to be applied. Without a window function, the starting and ending points may be different, resulting in a transient spike between the two points. This transient spike between two samples would show up in the FFT as high-frequency energy. Think of a terminal peak shock pulse; the sharp transition from the peak amplitude to zero acceleration generates a much larger amount of high-frequency energy than a smooth pulse like the half-sine. For the FFT, that sharp transition between starting and ending points is a discontinuity. That discontinuity is reflected in an FFT calculation and is referred to as spectral leakage.

Applying a windowing function removes the emphasis on the discontinuities and reduces the spectral leakage. In a perfect world, the real data would have identical starting and ending points in every frame of data. Because this is not the case, we must minimize any effects by using a windowing function.

Figure 3. Frames of data after windowing

Each window function has specific characteristics that may make it more suitable for certain applications. A window function is evaluated by two key components: the side lobe and the main lobe. In the image above, the Hanning window is used. It has a very high, wide main lobe and low side lobes that practically reach zero, which means there is little to no discontinuity between the starting and ending points, resulting in a very accurate frequency measurement. The grey waveform is the original data; the orange waveform is the windowed data.

The Hanning or Blackman window functions are the most commonly used in vibration testing and analysis because they have good frequency resolution and minimal discontinuity and, therefore, minimal leakage. Other windows may be appropriate for other applications. For a description of all commonly used, and many rarely used, windowing functions and their characteristics, see the VRU course Window Functions for Signal Processing.

The windowed data for each frame is used to calculate an FFT, a mathematical function which transforms the signal from the time domain into the frequency domain. This linear transformation provides the ability to observe the frequency content of the time history waveform.

Figure 4. Calculate the FFT for each frame

There are many use cases for the FFT in signal analysis and processing. Most importantly, it shows what frequencies are present in a waveform and in what proportions. This can be used to determine what frequencies are being excited during a section of time, the peak acceleration of each frequency inside of a windowed frame of data, the distribution of peaks, harmonic content, etc.
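For example, a dominant frequency can be picked out of a single frame directly from the FFT. A minimal numpy sketch, using a synthetic 1 Gpk, 100 Hz sine; the 2/N scaling shown here makes a 1 Gpk sine read as 1 Gpk in the spectrum:

```python
import numpy as np

fs = 8192                                        # Hz
t = np.arange(fs) / fs                           # 1 second of samples
x = np.sin(2 * np.pi * 100 * t)                  # 1 Gpk sine at 100 Hz
spectrum = np.abs(np.fft.rfft(x)) * 2 / len(x)   # peak-amplitude scaling
freqs = np.fft.rfftfreq(len(x), 1 / fs)
peak_freq = freqs[np.argmax(spectrum)]           # dominant frequency in the frame
```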

There are certain weighting factors that can be applied to a FFT. If the goal of this process was to simply generate an FFT, a weighting factor to ensure 1Gpk of the time domain data is equal to 1Gpk in the FFT might be applied. When the goal is to generate a PSD, the window function is normalized to preserve the input power.
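The two weighting conventions can be sketched as correction factors computed from the window itself (assumed formulas, shown for a Hanning window; the variable names are illustrative):

```python
import numpy as np

n = 2048
w = np.hanning(n)
# Amplitude ("peak") weighting: makes a 1 Gpk sine in the time data read
# 1 Gpk in the FFT.  The Hanning window's mean is ~0.5, so this is ~2.
amplitude_correction = n / np.sum(w)
# Power weighting, used when generating a PSD: scales the window so the
# windowed frame preserves the total power of the input (on average).
power_correction = np.sqrt(n / np.sum(w ** 2))
```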

(3) The individual FFTs for each frame are squared and then averaged together.

“Power” is a common term in electrical engineering, where it is used to refer to the magnitude-squared of a value.

The PSD shows the average energy at a single frequency over a period of time. It will initially have a lot of variance or “hashiness”; as more frames of similar data are included in the average, the overall variance will decrease, accuracy will increase, and the PSD will look much smoother. The total amount of time included in a PSD is related to the averaging parameter, Degrees of Freedom (DOF). The higher the DOF, the more frames of data have been averaged together. More DOF also requires a longer period to acquire and calculate.

Simply put, more FFTs result in a better PSD. The trade-off is the time required to collect and calculate large numbers of FFTs.
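The effect of averaging can be demonstrated on white noise. An illustrative numpy sketch (the frame counts are arbitrary; each frame contributes 2 DOF):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2048                                  # samples per frame

def averaged_power(n_frames):
    """Average the squared-magnitude spectra of n_frames of white noise."""
    frames = rng.standard_normal((n_frames, n))
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return power.mean(axis=0)

rough = averaged_power(4)      # few frames: 8 DOF, very "hashy"
smooth = averaged_power(100)   # many frames: 200 DOF, much flatter
```

The relative scatter across bins shrinks roughly as √(2/DOF), which is why the trace smooths out as more frames enter the average.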

(4) Normalize the calculation to a single Hz.

The final step in the process is to take the squared, averaged FFT (Amplitude²) and divide by the Sample Rate. This normalizes everything to a single Hz and creates a Power Spectral Density. For acceleration, the resulting unit is G²/Hz.

Figure 6. The PSD

Using the PSD, the response of the product under test is clear. As more frames of data are added to the PSD, the variance will continue to decrease, and the PSD will become smoother.

For a more in-depth look at variance and methods of PSD smoothing, watch the webinar on Instant Degrees of Freedom, a patented feature from Vibration Research that quickly and effectively reduces the variance and creates a very smooth PSD. This method is the only mathematically justified method for displaying a smooth PSD trace in a short period of time.


There is one additional technique that is often used, though not required, during PSD creation.

Overlapping is used to include more of the original data in the PSD and to generate more DOF in a given period of time. With 0% overlap, each frame of data is completely separate. Because the window function tapers each frame toward zero at its edges, some data near the frame boundaries is effectively not accounted for. Also, each frame of data included in the PSD contributes 2 DOF to the total average.

This means, if I create a PSD with 0% overlap, 120 DOF, 8192 Hz Sample Rate, 4096 Lines of Resolution, and a Hanning window function (resulting in 1-second frames), I need to average 60 seconds' worth of data to achieve 120 degrees of freedom.

A 50% overlap means there is only 0.5 seconds between the starts of consecutive frames; each frame is still 1 second in length. Once frames are overlapped, they no longer contribute 2 DOF per FFT: a 50% overlap results in around 1.85 DOF per FFT, and a 75% overlap around 1.2 DOF per FFT. So for a 50% overlap, 120 DOF, 8192 Hz, 4096 Lines of Resolution, Hanning window PSD, I can achieve my desired PSD in 64.8 frames, but it only takes 32.4 seconds to acquire.

Figure 7. Overlapping, windowed frames

In the original example, with 0% overlap, there were 5 frames of data, resulting in 5 FFTs. With a 50% overlap, as shown in Figure 7, the same section of data results in 9 FFTs.
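The bookkeeping in the two overlap examples above works out as follows (numbers taken from the text; the ~1.85 DOF-per-FFT figure for a 50% overlapped Hanning window is the value quoted above):

```python
dof_target = 120
frame_width = 1.0                    # s (8192 Hz sample rate, 4096 lines)

# 0% overlap: 2 DOF per frame, frame starts spaced frame_width apart.
frames_no_overlap = dof_target / 2.0
seconds_no_overlap = frames_no_overlap * frame_width     # 60 s of data

# 50% overlap with a Hanning window: ~1.85 DOF per frame, frame starts
# spaced frame_width / 2 apart.
frames_50 = dof_target / 1.85
seconds_50 = frames_50 * frame_width / 2                 # ~32.4 s of data
```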


The last parameter to consider in the PSD calculation is Lines of Resolution. This parameter, together with the sample rate, determines how far apart the analysis points are spaced on the PSD. A higher number of lines results in a finer, more accurate PSD but requires a larger number of samples per frame to calculate.

There are many test standards that require a certain number of lines to fall inside a resonance to properly display the peak. If too few lines are used in a PSD, the result is similar to under-sampling a waveform: the distance between analysis points is too great, and the content between them is not properly accounted for. A minimum of 3 lines is needed to properly resolve a resonance.
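A quick way to check this requirement before running a test (the resonance bandwidth here is hypothetical; the 3-line minimum is from the text):

```python
sample_rate = 8192                         # Hz
lines = 4096                               # Lines of Resolution
line_spacing = sample_rate / (2 * lines)   # 1 Hz between analysis points

resonance_bandwidth = 5.0                  # Hz, hypothetical narrow resonance
lines_in_resonance = resonance_bandwidth / line_spacing
resolved = lines_in_resonance >= 3         # meets the 3-line minimum
```

If `resolved` is false, either increase the Lines of Resolution or lower the sample rate to shrink the line spacing.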