Dynamic Light Scattering (DLS) is a powerful and sensitive technique for characterizing particles or macromolecules in dispersion, owing to its ability to resolve particle or molecular sizes ranging from sub-nanometer to several microns. This sensitivity also makes DLS a useful technique for characterizing aggregated material, which may occur in far smaller quantities but is of significant importance in many applications.
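The link between the measured diffusion and the reported size is the Stokes-Einstein relation. The short sketch below illustrates it numerically; the temperature and viscosity values are illustrative assumptions, not parameters taken from the measurements in this note.

```python
# Hedged sketch: DLS reports hydrodynamic size via the Stokes-Einstein
# relation, d_H = k_B * T / (3 * pi * eta * D). Temperature and
# viscosity below are assumed values for water at 25 C, not
# measurement settings from this note.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(D, T=298.15, eta=8.9e-4):
    """Hydrodynamic diameter (m) from diffusion coefficient D (m^2/s),
    temperature T (K) and dispersant viscosity eta (Pa.s)."""
    return K_B * T / (3.0 * math.pi * eta * D)

def diffusion_coefficient(d_h, T=298.15, eta=8.9e-4):
    """Inverse relation: diffusion coefficient (m^2/s) from d_H (m)."""
    return K_B * T / (3.0 * math.pi * eta * d_h)

# A ~3.8 nm lysozyme monomer diffuses over a thousand times faster
# than a ~5 um aggregate, which is why the two produce well separated
# decays in the measured correlation function.
D_monomer = diffusion_coefficient(3.8e-9)
D_aggregate = diffusion_coefficient(5e-6)
```

Because diffusion coefficient scales inversely with size, even a small number of large aggregates contributes a distinctly slower decay that the analysis can separate.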
Typically, however, the presence of dust (including filter spoil, column shedding, tracer aggregates, or material from dirty labware) can have a detrimental effect on the measurement of smaller particles, and algorithms exist to suppress these effects.
We present a new approach to handling DLS data which prevents the skewing of data for small particles whilst retaining insight into the presence of aggregates that may otherwise be lost, whereby the relative abundance and size of aggregates can be deduced.
Figure 1: The appearance of an aggregate at t > 8 s in the live data, which can degrade the accuracy of the time-averaged measurement of the primary peak at 7.6 nm.
Materials and methods
The results shown herein were generated by measuring a sample of hen's egg lysozyme (Sigma-Aldrich) in a pH 4.0 acetate buffer, with measurements performed on a Zetasizer Ultra. Also shown are results for a mixed dispersion of NIST-traceable polystyrene latex particles dispersed in 10 mM NaCl. All dispersions were prepared using DI water filtered to 200 nm.
Detecting aggregates that aren’t always present
The detection volume in a DLS measurement is much smaller than the total volume of the sample presented to the instrument. Whilst DLS measures a more statistically significant number of particles than methods such as scanning electron microscopy (SEM) and nanoparticle tracking analysis (NTA), it is possible for distinct particles to diffuse in and out of the detection volume during a measurement.
In a previous application note, we discussed how in an Adaptive Correlation DLS measurement, we group data from a series of sub-measurements into steady state and transient data sets, describing particles that are consistently present in the detection volume of the sample and non-representative particles that diffuse in and out of the detection volume respectively.
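The Adaptive Correlation classification scheme itself is not detailed here, but the grouping of sub-measurements can be sketched with a simple, generic outlier test. The cutoff, the use of mean count rate, and the MAD-based robust z-score below are all illustrative assumptions, not the instrument's actual algorithm.

```python
# Illustrative sketch only: Adaptive Correlation's real classification
# is not specified in this note. Here, sub runs whose mean count rate
# is a robust statistical outlier (MAD-based z-score > cutoff) are
# flagged as transient; the rest form the steady-state set.
import statistics

def classify_subruns(count_rates, cutoff=3.0):
    """Split sub-run mean count rates (kcps) into steady-state and
    transient groups using a median absolute deviation outlier test."""
    med = statistics.median(count_rates)
    mad = statistics.median(abs(r - med) for r in count_rates) or 1e-12
    steady, transient = [], []
    for r in count_rates:
        # 1.4826 scales the MAD to the standard deviation for
        # normally distributed data
        z = abs(r - med) / (1.4826 * mad)
        (transient if z > cutoff else steady).append(r)
    return steady, transient

# Ten hypothetical sub runs: one dust/aggregate event roughly
# triples the detected count rate.
rates = [102, 99, 101, 100, 98, 103, 100, 310, 101, 99]
steady, transient = classify_subruns(rates)
```

The key design point this mirrors is that the split is statistical rather than an absolute threshold, so the definition of "transient" adapts to each sample.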
Whilst the separation of transient events improves the precision of the primary particle size, analysis of the transient data also allows better characterization of the transient particles themselves.
Figure 2 shows the difference between the transient and steady state correlation functions, as well as the result if no classification had been applied and all the sub-measurements were averaged together. Within this measurement, the transient data represents only a small portion of the data gathered for the sample, and when all the data is averaged, the significant second decay in the transient correlation function is suppressed.
Figure 2: Autocorrelation functions for a sample of lysozyme, showing results for the steady state data, transient data, and unclassified data, i.e. all of the data.
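This suppression can be reproduced with a toy model: averaging many single-decay correlation functions with one rare two-decay function dilutes the slow component by the number of sub runs. The decay times and the 9:1 split below are arbitrary illustrative values, not fitted instrument data.

```python
# Toy model (not instrument data): averaging correlation functions
# from many sub runs dilutes the slow decay contributed by a single
# transient aggregate event. Decay times are assumed values.
import math

TAU_FAST = 1e-5   # s, small-particle decay time (assumed)
TAU_SLOW = 1e-2   # s, aggregate decay time (assumed)

def g1_steady(t):
    return math.exp(-t / TAU_FAST)

def g1_transient(t):
    # Assume equal intensity split between monomer and aggregate
    return 0.5 * math.exp(-t / TAU_FAST) + 0.5 * math.exp(-t / TAU_SLOW)

def g1_averaged(t, n_steady=9, n_transient=1):
    total = n_steady * g1_steady(t) + n_transient * g1_transient(t)
    return total / (n_steady + n_transient)

# At a delay where the fast decay has fully relaxed, the transient
# curve still shows a clear slow component, but in the 10-run
# average it is ten times smaller and easily lost.
t = 1e-3  # s
residual_transient = g1_transient(t)   # ~0.45
residual_average = g1_averaged(t)      # ~0.045
```

Separating the one transient sub run before averaging preserves its slow decay at full amplitude, which is the behavior shown in Figure 2.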
The consequence of this suppression, and the advantage of transient classification, can be further realized with the example data in Figure 3. Here the unclassified and steady state measurements both show a lysozyme monomer peak at 3.8 nm, as well as an aggregate peak at around 100 nm, but the transient measurement also shows that another larger size component is present, with a peak at 5 µm.
This data also demonstrates that Adaptive Correlation is not a data filtering algorithm, as aggregates of a significantly larger size than the primary particle component are reported in the steady state result as they are detected consistently throughout the measurement.
Figure 3: Steady state, transient and unclassified particle size distributions for a sample of aggregated lysozyme.
Characterizing these rare particles
As with any DLS measurement, sub-optimal concentration and sample scattering will limit the reliability of particle size data, but Figure 4 demonstrates that the transient data can be used to infer reliable properties for any rare large particles, as an accurate size is reported for a latex sample doped with particles of a known size.
Figure 4: Intensity-weighted particle size distribution for a sample of 60 nm latex dispersed in 10 mM NaCl, doped with a 1.6 µm latex at a range of different ratios.
How rare is rare?
Only sub runs that display a statistically significant difference from the main characteristics of the sample will be identified as transient, and as such the amount of data classified as transient will be sample dependent.
The significance of transient particles can be assessed because their frequency of detection is captured by the proportion of data retained in the steady state result.
Table 1: Numerical results for a series of measurements for a sample of 1mg/ml lysozyme
The data in Table 1 shows a series of size measurements for a sample of 1 mg/ml lysozyme which had been thermally stressed. All measurements report similar values for the Z-average size, and the presence of only one peak in the steady state data indicates that the sample is monomeric. The run retention, the percentage of sub runs included in the analysis for the steady state result, however, decreases over time, showing that the detection of transient scatterers is becoming more significant. This demonstrates that the dispersion is not completely stable, whilst still allowing us to report with some confidence on the monomeric hydrodynamic size of the protein.
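The run retention metric itself is a simple fraction, sketched below; the sub-run counts are hypothetical values for illustration, not the numbers from Table 1.

```python
# Sketch: run retention is the fraction of sub runs kept in the
# steady-state average. A falling retention across repeat
# measurements signals increasingly frequent transient scatterers,
# even while the steady-state Z-average stays constant.
def run_retention(n_retained, n_total):
    """Percentage of sub runs classified as steady state."""
    return 100.0 * n_retained / n_total

# Hypothetical repeat measurements: (retained sub runs, total sub runs)
series = [(48, 50), (45, 50), (40, 50)]
retentions = [run_retention(k, n) for k, n in series]
```

Tracking this trend over time gives an early-warning indicator of instability before aggregates become abundant enough to appear in the steady state distribution.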
By using a statistical approach to classify data from a plurality of sub-measurements, we can reliably distinguish particles that are consistently present in the detection volume of a measurement, and are therefore representative of the sample, from those that are not. This classification means that the steady state data can be reported without the influence of transient scatterers, which can otherwise skew size analysis results, and also allows the characterization of transient scatterers with a resolution that would not be achieved without data classification.
The size results from this transient data have been demonstrated to be reliable by measuring samples doped with particles of a known size, and the proportion of data classified as transient can give further insight into the stability of a sample and the presence of rare aggregates, before they exist in such abundance that they become part of the steady state result.