Why do I need a standard if I have a GPC/SEC system with a light scattering detector?

To many people, obtaining molecular weight information from GPC/SEC analysis involves a lengthy calibration process.  A series of standards of known molecular weight is analyzed, and a calibration curve based on the standards’ retention volumes is generated.  This can be a tedious procedure, and the result is only relative molecular weight data.

The inclusion of a light scattering detector in a GPC/SEC system eliminates the need to run a series of standards to generate a calibration curve, as the sample’s molecular weight is measured directly by the light scattering detector.  This ability to measure a sample’s absolute molecular weight directly is one of the main advantages of using a system with a light scattering detector, such as the OMNISEC or TDA 305 instruments.  However, while a lengthy process to generate a calibration curve is not needed when using a light scattering detector, a quick calibration step involving the analysis of a single, narrow standard is still required.  There are two reasons for this:

  1. To determine the detector response factors
  2. To calculate detector offsets and corresponding band broadening corrections

The detector response factors are critical as they allow molecular characterization data to be calculated for an unknown sample.  The offset and band broadening calculations are necessary to align and coordinate the responses of multiple detectors positioned in series.  These two aspects will be explored further in their own sections below.

Detector response factors: In the same way that a reference weight is used to calibrate a balance, the detectors in a GPC/SEC system require a reference to properly convert the observed signal output to desired molecular characteristics, such as molecular weight and intrinsic viscosity.  Therefore, a response factor for each detector is determined based on the analysis of a reference sample, typically called a calibration standard.  One of the requirements of the calibration standard is that its concentration, dn/dc value (or dA/dc value if using a UV detector) in the mobile phase used, molecular weight, and intrinsic viscosity are known.  By knowing these values, and considering the equations listed below that describe what sample characteristics affect each detector response, the detector response factors (K) can be determined.

Additionally, the detector response factors compensate for slight variations in manufactured detector components, such as a light source, that might cause different systems to respond differently to the same sample.  Therefore, while there is a general range for each detector constant, different GPC/SEC systems will have their own unique set of detector response factors.

[Figure 1: detector response equations]

To take a closer look at how the detector response factors are calculated, let’s consider the RI response for the calibration standard.  In the RI detector equation, the output signal is observed, the standard’s dn/dc value is known, the standard’s concentration is known, and the injection volume is set by the user.  Therefore KRI, the detector response factor for the RI detector, can be calculated.  A similar process occurs for each detector present to establish a set of detector response factors, which are then automatically saved to the method.
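To make this step concrete, here is a minimal Python sketch.  It assumes a simplified form of the RI equation, peak area = KRI × (dn/dc) × injected mass; the function name, equation form, and all numbers are illustrative, not the actual OMNISEC implementation.

```python
# Illustrative sketch: solving for the RI detector response factor (K_RI)
# from a single injection of the calibration standard.
# Assumed simplified relation: peak_area = K_RI * (dn/dc) * injected_mass

def ri_response_factor(peak_area, dn_dc, concentration, injection_volume):
    """Return K_RI given the observed RI peak area for the standard.

    peak_area        -- integrated RI detector signal (arbitrary units)
    dn_dc            -- refractive index increment of the standard (mL/g)
    concentration    -- standard concentration (mg/mL)
    injection_volume -- injection volume (mL)
    """
    injected_mass = concentration * injection_volume  # mg of standard on column
    return peak_area / (dn_dc * injected_mass)

# Hypothetical example: a 1.0 mg/mL narrow polystyrene standard, 100 uL injection
k_ri = ri_response_factor(peak_area=5.2e4, dn_dc=0.185,
                          concentration=1.0, injection_volume=0.1)
```

Because every quantity on the right-hand side except KRI is known for the standard, the response factor is fully determined by one injection.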

With a calibrated method available, when an unknown sample is analyzed, the underlined molecular parameters in the equations above can be calculated from the observed detector response, the known sample concentration (or known dn/dc value), and the chosen injection volume.  This is how the software computes an unknown sample’s molecular weight, IV, and other molecular data.
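The inverse step can be sketched the same way.  Assuming a simplified light scattering relation, LS area = KLS × (dn/dc)² × Mw × injected mass, the unknown’s weight-average molecular weight follows once KLS is known from calibration.  All values below are hypothetical.

```python
# Illustrative sketch: with K_LS fixed by the calibration step, the unknown
# sample's weight-average molecular weight (Mw) follows from the observed
# light scattering signal.  Simplified relation assumed:
#   ls_area = K_LS * (dn/dc)**2 * Mw * injected_mass

def molecular_weight(ls_area, k_ls, dn_dc, concentration, injection_volume):
    """Return Mw (g/mol) of an unknown sample from its LS peak area."""
    injected_mass = concentration * injection_volume  # mg on column
    return ls_area / (k_ls * dn_dc**2 * injected_mass)

# Hypothetical unknown: known concentration and dn/dc, K_LS from calibration;
# these numbers give an Mw on the order of 1e5 g/mol
mw = molecular_weight(ls_area=3.4e3, k_ls=10.0, dn_dc=0.185,
                      concentration=1.0, injection_volume=0.1)
```

The same pattern applies to the viscometer signal and intrinsic viscosity: one known quantity swaps roles with the response factor between the calibration run and the unknown run.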

Detector offsets and band broadening corrections: These two are grouped together because they result from the arrangement of the detectors in series, where the sample passes from one detector to the next in a single flow path, as shown in the scheme below.  The goal of determining the detector offsets and applying band broadening corrections is to align and coordinate the fractions of sample that elute from each detector at different retention volumes.  To accomplish this, there is one more requirement of the calibration standard: it must be a narrow standard, meaning the dispersity (Mw/Mn) must be 1.10 or lower.

[Figure 2: GPC/SEC schematic]

Schematic of an OMNISEC GPC/SEC system showing the detectors arranged in series

The detector offset values represent the volume the sample must travel between the detectors.  At any given time, different fractions of the sample will be present in each detector.  By determining the detector offsets, the method can adjust the data to correctly relate each detector signal to the same sample fraction.
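To make the offset idea concrete, here is an illustrative Python sketch with hypothetical peak positions.  It estimates each detector’s offset from the narrow standard’s peak apex and shifts the trace’s retention-volume axis accordingly; the software performs the equivalent of this automatically.

```python
# Illustrative sketch of inter-detector offset alignment.  Apex positions
# are hypothetical; the RI detector is first in the flow path here, so its
# peak appears at the lowest retention volume.

def apply_offset(retention_volumes, offset):
    """Shift a trace's retention-volume axis back by its offset (mL)."""
    return [rv - offset for rv in retention_volumes]

# Apex of the narrow standard's peak in each detector (mL)
ri_apex, ls_apex, visc_apex = 14.05, 14.20, 14.35

# Offset of each downstream detector relative to the RI detector
offsets = {"LS": ls_apex - ri_apex, "viscometer": visc_apex - ri_apex}

# After shifting, the LS peak apex lines up with the RI peak apex
aligned_ls_apex = apply_offset([ls_apex], offsets["LS"])[0]
```

With each trace shifted by its offset, every detector signal at a given retention volume refers to the same fraction of the sample.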

This is noticeable when viewing the calibration standard before and after a method has been applied.  Prior to calibrating the method, the detector signals peak according to their order within the OMNISEC instrument: first the RI detector, then the light scattering detectors, followed by the viscometer detector last.  Once the method has been applied to a data set, each trace is shifted along the retention volume axis to account for its offset volume.  As a result, the peaks of all detectors for the narrow standard become aligned, and the start of each baseline shifts slightly.  These changes are evident in the figures below.

[Figure 3: peak alignment]

Refractive index (red), light scattering (green & black), and viscometer (blue) detector signals before (left) and after (right) a method is applied

[Figure 4: baseline shift]

Refractive index (red), light scattering (green & black), and viscometer (blue) detector baselines highlighting their shift in retention volume once a method is applied

Additionally, as the sample moves through the chromatography system and is separated into tight bands representing the different molecular sizes of the distribution, those bands inevitably broaden slightly due to diffusion.  This broadening is most significant within the detector cells and long pieces of inter-detector tubing, and is a phenomenon known as band broadening.  Band broadening affects the shape of the peak and the fraction of the sample present in the detector cells at a given moment.  To account for this, the OMNISEC software can correct for band broadening to ensure that the calculated results are as accurate as possible.
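One common simplified way to picture this, sketched below in Python (not the OMNISEC algorithm), treats band broadening as convolution of the true elution profile with a Gaussian spreading function; correcting for it then amounts to undoing that convolution.

```python
import math

# Illustrative model of band broadening as Gaussian convolution.
# An idealized sharp band spreads into its neighbors, but the total
# amount of sample (peak area) is conserved.

def gaussian_kernel(sigma, half_width, step):
    """Normalized discrete Gaussian spreading function."""
    xs = [i * step for i in range(-half_width, half_width + 1)]
    k = [math.exp(-x**2 / (2 * sigma**2)) for x in xs]
    total = sum(k)
    return [v / total for v in k]

def broaden(profile, kernel):
    """Convolve a concentration profile with the spreading kernel."""
    h = len(kernel) // 2
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = i + j - h
            if 0 <= idx < len(profile):
                acc += profile[idx] * kv
        out.append(acc)
    return out

narrow = [0.0, 0.0, 1.0, 0.0, 0.0]            # idealized sharp band
kernel = gaussian_kernel(sigma=0.05, half_width=2, step=0.05)
broadened = broaden(narrow, kernel)            # shorter, wider peak; same area
```

Running the model forward on the narrow standard is what lets the software characterize the spreading and then correct unknown-sample data for it.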

While that might sound like a lot, the actual process of performing this calibration is relatively quick, especially considering it only requires a single injection of a standard.  Most of the work is done automatically by the software; the only actions required of the user are to run the narrow standard and then ensure the appropriate concentration, molecular weight, IV, and dn/dc information is entered into the method.  A video demonstration of this calibration process can be viewed below.

It is recommended that, once a narrow standard has been used to calibrate the detector responses and correct for the inter-detector offsets and band broadening, a second molecular weight standard be analyzed to verify the calibration.  This verification standard usually has a broader molecular weight distribution and a higher dispersity.  If the system correctly measures the molecular weight of this second standard, then the calibration has been successful and the user can be confident in the method and move on to an unknown sample.
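As a hypothetical example of such a verification check, a simple pass/fail comparison of the measured Mw against the standard’s certified value might look like the following; the 5% tolerance is an assumption for illustration, not a published specification.

```python
# Hypothetical acceptance check for the verification standard: pass if the
# measured Mw is within a fractional tolerance of the certified value.

def calibration_verified(measured_mw, nominal_mw, tolerance=0.05):
    """Return True if measured Mw is within `tolerance` of nominal Mw."""
    return abs(measured_mw - nominal_mw) / nominal_mw <= tolerance

# Example: a broad standard with a certified Mw of 100,000 g/mol
ok = calibration_verified(measured_mw=101_500, nominal_mw=100_000)  # passes
```

If the check fails, the sensible response is to re-examine the calibration run (standard values entered, baselines, peak limits) rather than proceed to unknown samples.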
