Direct digital phase noise measurement


This apparatus measures phase noise down to wideband noise floor levels below -170 dBc/Hz. Historically, such measurements were either difficult or expensive to make. Based on the work [1] of Grove et al., the method described here is cheap, sensitive and accurate, and requires no calibration. It is a differential measurement between a device under test (DUT) and a reference oscillator, which are connected to the SMA ports at the extreme left. Ideally, the reference oscillator should be an order of magnitude (or more) quieter, so that DUT noise predominates.

The largest board is a Xilinx SP605 FPGA evaluation board, attached to which (by the FMC connector) is a Linear Technology DC1525A quad ADC evaluation board, minus its top-right corner. The two boards to which the DUT and reference are connected are custom-made power splitters, feeding four ADC inputs through coaxial cables. The part-populated yellow board, spare from my GPS project, was re-purposed as a clean power supply for a Crystek CPRO33-77.760 crystal oscillator, which can just be seen mounted vertically on the ADC clock input SMA.

ADC samples are received by the FPGA on eight LVDS pairs, each of which can run at up to 1 Gbps. In the FPGA, the four ADC channels are digitally down-converted to IQ complex baseband, low-pass filtered, decimated, and phase-demodulated using a CORDIC arc-tangent block. The resultant phase data is streamed to a Windows laptop via UDP over Ethernet. Phase data is converted to power spectral density by Fast Fourier Transform (FFT) and displayed in real time. The quality of the measurement improves the longer it runs.

The inputs are down-converted by mixing with a quadrature NCO. NCO rate is set close to input frequency, but cannot remain exactly equal, so the baseband signal is close to but rarely at zero frequency. The CORDIC output is phase noise superimposed on a gentle linear ramp, the gradient of which depends on the small unavoidable difference between input frequency and NCO rate.
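The down-conversion and demodulation chain can be sketched numerically. This is only an illustration, not the FPGA implementation: the input frequency (10 MHz plus a 1 kHz offset) is hypothetical, and a crude pair of moving averages stands in for the FPGA's proper decimating low-pass filters:

```python
import numpy as np

fs = 77.76e6           # ADC sample rate used in this project
f_in = 10.001e6        # hypothetical input: 10 MHz plus a 1 kHz offset
f_nco = 10.0e6         # NCO close to, but not exactly at, the input frequency
n = np.arange(20000)

adc = np.cos(2 * np.pi * f_in / fs * n)        # real ADC samples
nco = np.exp(-2j * np.pi * f_nco / fs * n)     # quadrature NCO
bb = adc * nco                                 # mix to complex baseband

# crude low-pass (two moving averages) to remove the f_in + f_nco image;
# the FPGA uses proper low-pass decimating filters instead
k = np.ones(16) / 16
bb = np.convolve(np.convolve(bb, k, mode='valid'), k, mode='valid')

phase = np.unwrap(np.angle(bb))                # the CORDIC's arc-tangent
slope = np.polyfit(np.arange(len(phase)), phase, 1)[0]
print(slope * fs / (2 * np.pi))                # ramp gradient ≈ f_in - f_nco = 1 kHz
```

The fitted gradient of the phase ramp recovers the small, unavoidable difference between input frequency and NCO rate.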

Cancel & correlate

It's possible to measure the phase noise of a DUT, relative to the sampling clock, using a single ADC channel. In fact, I did so as a first step. So why the two inputs and four ADC channels? Because with a single 14-bit ADC, even assuming zero sampling-clock jitter, the simulation-predicted noise floor is worse than -150 dBc/Hz, and typically more like -140 dBc/Hz. Grove et al. did two things to get the noise floor down below -170 dBc/Hz.

The first neat trick is to make a differential measurement between DUT and reference, by subtracting their phases. The phase noise of the sampling clock is cancelled out. The quality of the measurement depends on the reference, not the ADC clock. Although the phases are subtracted, since the noises are uncorrelated, their powers add. So the result is the sum of DUT and reference noise. This is why the reference should be an order of magnitude quieter than the DUT. But the ADC clock requirement is less exacting.
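A toy numerical check of the cancellation, with hypothetical Gaussian phase noises in arbitrary units:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
clk = rng.normal(0, 1.00, N)    # sampling-clock phase noise, common to both channels
dut = rng.normal(0, 0.10, N)    # DUT phase noise
ref = rng.normal(0, 0.01, N)    # reference: an order of magnitude quieter

phi_dut = dut + clk             # each input is measured against the ADC clock
phi_ref = ref + clk
diff = phi_dut - phi_ref        # the common clock noise cancels exactly

# the uncorrelated DUT and reference noises add in power
print(np.var(diff), 0.10**2 + 0.01**2)   # both ≈ 0.0101
```

The clock noise, though ten times larger than anything else, vanishes from the difference; what remains is dominated by the DUT because the reference is so much quieter.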

Cancelling clock noise still leaves ADC quantisation noise and thermal noise, which raise the noise floor above -150 dBc/Hz, as already mentioned. The second neat trick is to duplicate the measurement. This is the reason for the power splitters. Everything in the FPGA (and downstream in software) is replicated. We simultaneously make two completely independent measurements. Both contain the same (correlated) DUT+Reference noise; but different (uncorrelated) ADC noises. The latter are then greatly attenuated using the cross spectrum experimental method described in [2] by E. Rubiola and F. Vernotte.

DUT and reference can actually be at different frequencies! I haven't tried this yet. The reference phase data just needs scaling by the DUT/REF frequency ratio to cancel clock jitter. Although the sampling points of all four ADC channels are displaced by the same amount of time Δt due to sampling clock jitter, this affects the measured phases in proportion to frequency: ΦCLK = ωΔt. It can be beneficial to use a higher reference frequency, because its phase noise contribution, ΦREF, will then be scaled down. All we want from it is stability. The ADC noises remain uncorrelated, scaled or not.
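A sketch of the ratio scaling, with hypothetical frequencies (a 10 MHz reference against a 5 MHz DUT) and toy noise levels:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
f_dut, f_ref = 5e6, 10e6            # hypothetical: reference at twice the DUT frequency
dt = rng.normal(0, 1e-12, N)        # sampling-clock jitter, seconds

dut = rng.normal(0, 1e-6, N)        # DUT phase noise, radians
ref = rng.normal(0, 1e-6, N)        # reference phase noise, radians

phi_dut = dut + 2 * np.pi * f_dut * dt   # Φclk = ωΔt scales with input frequency
phi_ref = ref + 2 * np.pi * f_ref * dt

# scale the reference phase by the DUT/REF frequency ratio before subtracting
diff = phi_dut - (f_dut / f_ref) * phi_ref

# the clock jitter cancels, and the reference's own noise is scaled down too
print(np.var(diff), np.var(dut) + (f_dut / f_ref)**2 * np.var(ref))
```

The clock term, which would otherwise dominate, cancels despite the unequal frequencies, and the reference's contribution arrives attenuated by the square of the ratio.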

Phase wraps

The 32-bit fixed-point binary output of the CORDIC is encoded in semi-circles, with 31 bits after the binary point. The most significant bit represents ±1 semi-circle, i.e. ±π radians. The angle can be interpreted as signed or unsigned, without ambiguity. Due to unavoidable frequency differences, the measured phases will always be slowly but steadily ramping, either up or down. Phase 'wraps' occur every 2π radians. Real measurements have phase noise superimposed on the ramp and can "chatter" back and forth several times as they pass through the wrapping point:
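A quick illustration of the semi-circle encoding in plain Python, with hypothetical phase values. A pleasant property of this format is that two's-complement subtraction wraps modulo 2π automatically:

```python
import math

# CORDIC output: 32-bit two's complement, binary point after the sign bit,
# in units of semi-circles: the MSB is ±1 semi-circle, i.e. ±π radians
LSB = math.pi / 2**31          # one code = π/2³¹ radians

def encode(rad):
    # quantise to the 32-bit code, wrapping naturally modulo 2π
    code = int(round(rad / LSB)) & 0xFFFFFFFF
    return code - 2**32 if code >= 2**31 else code

a, b = encode(3.1), encode(-3.1)
# subtracting codes in 32-bit arithmetic gives the difference modulo 2π
diff = ((a - b + 2**31) & 0xFFFFFFFF) - 2**31
print(diff * LSB)   # ≈ -0.083 rad (6.2 − 2π), not 6.2
```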

These discontinuities must be removed before the FFT. One way is to 'unwrap' them: integer precision is extended before the binary point, and 1.0 circles (2.0 semi-circles) are added or subtracted every time the raw data wraps. Another way, the method I am currently using, is to convert the phase to frequency by differentiating it. The FFT then calculates the power spectral density of frequency fluctuation, which is converted to power spectral density of phase by dividing by ω² afterwards. Differentiation converts sin(ωt) to ω·cos(ωt), so there is a scaling by ω. The 90 degree phase shift doesn't affect power.
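The two routes give the same answer at frequencies well below Nyquist. A NumPy periodogram sketch with white phase noise (a circular difference is used here purely to keep the array lengths equal; the fs and N values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1000.0
N = 1 << 14
phase = rng.normal(0, 1.0, N)         # white phase noise, radians

def psd(x):
    # one-sided periodogram, units²/Hz
    X = np.fft.rfft(x)
    p = np.abs(X)**2 / (fs * len(x))
    p[1:-1] *= 2
    return p

f = np.fft.rfftfreq(N, 1 / fs)

s_direct = psd(phase)                               # PSD of phase, directly

# differentiate (circularly, to keep the length): frequency fluctuation...
dphi = (phase - np.roll(phase, 1)) * fs             # rad/s
# ...then divide the frequency PSD by ω² to recover the phase PSD
s_via_freq = psd(dphi)[1:] / (2 * np.pi * f[1:])**2

# well below the Nyquist frequency the two routes agree
print(np.max(np.abs(s_direct[1:N // 100] / s_via_freq[:N // 100 - 1] - 1)))
```

The residual discrepancy is the sinc-shaped response of the first-difference operator, negligible in the lowest hundredth of the band.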

Frequency domain

The final output, ℒ(f), is the "one-sided" power spectral density of phase, relative to the carrier, in dBc/Hz. This is plotted on a log/log scale, typically over the frequency range 0.1 Hz to 100 kHz. Samples are transformed into the frequency domain by Fast Fourier Transform. Making two independent measurements doubles the amount of processing required. In order to execute this in real-time, with the same number of data points in each decade, the processing is broken up into a succession of identical stages, decimating by 10 at each step:

The above diagram includes cross-correlation. Complex FFT output bins from one measurement are multiplied by the complex conjugate of the corresponding bin from the other. The complex product is averaged for minutes, hours or days. Uncorrelated noises are attenuated by 5log10(N), where N is the number of products averaged. After a while, the average real part tends to contain only correlated noise. The average imaginary part is an indicator of the system noise floor. 5 dB steps can be seen in the imaginary part between decades because of decimation: each decade averages 10 times more data than the next lower decade.
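The 5log10(N) behaviour is easy to reproduce in a toy model. Here two channels carry nothing but independent noise, so the averaged cross-spectrum is pure instrument floor (the FFT length and noise levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
nfft = 256

def floor_db(n_avg):
    # average the cross-spectrum of two channels carrying ONLY
    # uncorrelated noise: what remains is the instrument floor
    acc = np.zeros(nfft // 2 + 1, dtype=complex)
    for _ in range(n_avg):
        a = np.fft.rfft(rng.normal(0, 1.0, nfft))
        b = np.fft.rfft(rng.normal(0, 1.0, nfft))
        acc += a * np.conj(b)
    return 10 * np.log10(np.mean(np.abs(acc / n_avg)))

# each tenfold increase in the number of averages lowers
# the residual by about 5 dB, i.e. 5·log10(N)
for n in (10, 100, 1000):
    print(n, floor_db(n))
```

The residual falls only as the square root of the averaging count, which is why the plots below improve over hours and days rather than seconds.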

ADC sampling rate is 77.76 Msps, decimated to 607.5 ksps in the FPGA. FFT length is typically 1000 points, and bins 10 to 99 are plotted from each decade. Length can be increased to get finer detail. Bins LEN/100 to LEN/10 are always plotted in the middle decades; but more are required (up to LEN * 100/607.5) in the first stage to reach 100 kHz. Fewer are needed in the last stage because the graph starts from 0.1 Hz. Low pass filters are 6th order Butterworth IIR with a normalised cut-off frequency of 20/1000. Scaling is applied to adjust for different FFT bin sizes and LPF growth at each decade.
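One decimate-by-10 stage can be sketched with SciPy. Note that `scipy.signal.butter` takes its cut-off relative to Nyquist, hence the factor of 2 applied to the 20/1000 figure above; the test tone frequency is arbitrary, chosen to sit well inside the passband:

```python
import numpy as np
from scipy import signal

# 6th-order Butterworth low-pass with a cut-off of 20/1000 of the
# sample rate, applied before each decimate-by-10 stage
b, a = signal.butter(6, 2 * 20 / 1000)

def decimate10(x):
    # filter, then keep every tenth sample
    return signal.lfilter(b, a, x)[::10]

fs = 607500.0                       # rate after the FPGA's decimation
n = np.arange(100_000)
tone = np.sin(2 * np.pi * (fs / 1000) * n / fs)   # well inside the passband
y = decimate10(tone)
print(len(y), np.max(np.abs(y[5000:])))           # 10000 samples, amplitude ≈ 1
```

A passband tone survives the stage essentially unchanged, while anything above the new Nyquist rate is suppressed before it can alias.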

Scaling is also required to correct for the equivalent noise bandwidth of a raised cosine window function, which is applied before each FFT.
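For the raised cosine (Hann) window, the equivalent noise bandwidth works out to exactly 1.5 FFT bins, which can be checked directly:

```python
import numpy as np

N = 1024
# raised cosine (Hann) window
w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)

# equivalent noise bandwidth in bins: N·Σw² / (Σw)²
enbw = N * np.sum(w**2) / np.sum(w)**2
print(enbw)   # 1.5 bins for the Hann window

# so each windowed PSD estimate is divided by 1.5
# (equivalently, about 1.76 dB is subtracted)
```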


Does this sound too good to be true? Are you wondering what the snag is? The biggest problem is instrument-generated spurs, due to ADC quantisation. These appear when harmonics of the input signal fall close to harmonics of the sampling clock. I discovered them the hard way. The DC1525A ADC board works up to 125 Msps and I was making measurements of 5 MHz sources using a 124.998 MHz sampling clock! None of the papers I read beforehand led me to expect this, and I was puzzled for a while, until I reproduced the spurs using a very simple simulation. Moving to 77.76 Msps improved matters.
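A toy version of such a simulation. The frequencies here are hypothetical, chosen so that the quantisation error repeats after only 25 samples; the point is the mechanism, not the exact spur levels:

```python
import numpy as np

N = 16000
n = np.arange(N)
q = 2.0**-13                       # 14-bit quantiser step over a ±1 range

def spectrum_db(f0):
    x = np.round(np.sin(2 * np.pi * f0 * n) / q) * q   # ideal quantised sine
    X = np.abs(np.fft.rfft(x * np.hanning(N)))
    return 20 * np.log10(X / X.max() + 1e-30)

def worst_spur(s):
    # largest component away from the carrier and its window skirt
    i0 = int(np.argmax(s))
    mask = np.abs(np.arange(len(s)) - i0) > 300
    return float(np.max(s[mask]))

# input at a simple rational fraction of the sample rate: the quantisation
# error is periodic and piles up into a handful of discrete spurs
coherent = worst_spur(spectrum_db(4 / 25))
# an incommensurate input spreads the same error power out as a noise floor
spread = worst_spur(spectrum_db(0.16123456789))
print(coherent, spread)            # the coherent case shows far larger spurs
```

The same total error power is present in both cases; what changes is whether it concentrates into spurs or spreads into a benign floor, which is why moving the sampling clock away from a near-rational relationship with the input helped.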

The same signal can be fed to both the DUT and reference inputs with a third power splitter. I used this configuration to estimate system noise floor, before I knew about the imaginary part of the averaged cross-product. When I started making differential measurements, I discovered another problem: low-frequency spurs, due to crosstalk, at the DUT - Reference difference frequency. Fortunately, although they are quite noticeable in the imaginary part of the cross-product, these spurs barely push through into the real part.


Each of the plots below was averaged over several hours. DUT and reference inputs were 5 MHz. The blue trace is ℒ(f), the one-sided power spectral density of phase, as estimated by the real part of the averaged cross-product. The green trace is the system noise floor in the imaginary part. The Wenzel ULN is an ultra-low-noise reference oscillator:

The Wenzel 500-03220 is about 1 Hz off-frequency, judging by the low-frequency crosstalk spurs around 1 and 2 Hz, which are evident in three plots, most strongly in the imaginary part. All plots have spurs at the 50 Hz power line frequency. Harmonics of 50 Hz and other "real" spurs are visible on the synthesizer plots. The -160 dBc spur around 21 kHz in (a) is 77759000*58 - 5000001*902. The two Dana 7020 "Digiphase" synthesizers are several years apart in age, but very similar in performance, except that #2 has a problem at 410 Hz. Their closed-loop responses match figure 26-8 in Garry Gillette's essay on page 290 of "Analog Circuit Design", edited by Jim Williams. Digiphase close-in performance (-90 dBc/Hz at 1 Hz) beats the Marconi 2019A by 40 dB. The 2019A has a lower wideband noise floor and much narrower PLL loop bandwidth. Wenzel may be as responsible as Dana for the measurements at 0.1 Hz. The Dana synthesizers were set to 5.000000, so fractional compensation was not operative.


The win32 source code (linked below) comprises four files:

LivePlot.cpp: GUI and worker thread using the Windows API
Capture.cpp: FPGA control via SPI over platform USB JTAG
Packet.c: Ethernet UDP packet capture using winpcap
LogFFT.cpp: multi-decade DSP, FFT and cross-correlation

The UI is minimal. Start and stop commands are added to the system menu. Data is crudely plotted in real-time, then dumped to a file, from which the below-linked Python script produces proper graphs, like the above.



This project began with an attempt to make very precise average frequency measurements using a digital PLL. Sampling at 1 Msps, using the 14-bit ADC on the Spartan 3E Starter Kit, it was possible to measure average frequency of a 100 kHz source to µHz precision over a 1 second averaging period:

In order to measure exactly 100000.000000 Hz, the ADC was clocked using a standard frequency output from the same generator that produced the 100 kHz. Despite this, a small frequency offset was observed as the equipment warmed up. This was due to a time-varying delay (phase shift) in the signal path, equivalent to a frequency shift Δω = dθ/dt, which settled down once thermal equilibrium was reached.

The most significant bits of the detected product are zero when the loop is locked, but less significant bits are noisy, and were routed out via a DAC to view the residual noise on an oscilloscope. It was the PLL method of phase noise measurement, implemented digitally. I googled "digital phase noise measurement" and found the Grove et al. paper.

Source code



1. Grove, J. et al., "Direct-digital phase-noise measurement," Proc. of 2004 IEEE International Frequency Control Symposium, Montreal, Canada, pp. 287-291, August 2004.
2. E. Rubiola and F. Vernotte, "The cross-spectrum experimental method," Mar. 2010, arXiv:1003.0113v1 [physics.ins-det].