NTA: Principles and Methodology

The need to characterize different properties of nanomaterials continues to grow rapidly. Since the commercialization of the technique in 2004, Nanoparticle Tracking Analysis (NTA) has become increasingly prevalent in a wide variety of research fields and industrial applications. In this introductory chapter of the NTA application and usage review, we discuss the principles and methodology of the technique, including sizing, concentration measurement and fluorescence detection.

Due to the importance and relevance of nanoparticulates and their characterization, a variety of techniques have been developed that allow users to analyze particle size and size distribution. The most common of these include Dynamic Light Scattering (DLS), Electron Microscopy (EM), Atomic Force Microscopy (AFM) and Analytical Ultracentrifugation (AUC). One of the most recent additions to the arsenal of apparatus available to those interested in particle characterization is Nanoparticle Tracking Analysis (NTA), a technique which not only allows the sizing and concentration measurement of nanoscale materials but also benefits from the ability to directly visualize the materials within a sample. It is recognized, however, that each of these methodologies comes with its own set of benefits and limitations (Carr and Wright, 2013).

EM and AFM both offer users images of the particles within a sample, with high resolution information about both the size and morphology of the particles present, but both techniques require time-consuming sample preparation, which can damage the sample, and demand considerable analysis time from the user (Syvitski, 1991).

Ultracentrifugation, though not an imaging technique, similarly provides high resolution information on the size distribution of particles in a sample, but it requires prior knowledge of the composition of the material, is time consuming, and the initial investment in apparatus can be costly (Mächtle, 2006).

The most commonly utilized methodologies for nanoscale characterization are the ensemble techniques based on light scattering, which interrogate a large number of particles in suspension simultaneously. While these techniques are ideally suited to the analysis of monodisperse systems, they are recognized as having a limited capability to resolve particle size distributions in polydisperse and/or heterogeneous systems. Also, being ensemble methodologies, they cannot provide accurate number concentrations of particulates within samples. The most widely used light scattering technique is DLS (alternatively known as Photon Correlation Spectroscopy (PCS) or Quasi-Elastic Light Scattering (QELS)). This technique uses a digital correlator to analyze the timescales of fluctuations in the intensity of light scattered by nanoparticles moving under Brownian motion in suspension. With over 40 years' standing as an established technique, it has been extensively reviewed (Pecora, 1985).

Through analysis of the resultant exponential autocorrelation function, an average particle size can be calculated, along with a polydispersity index. For the multi-exponential autocorrelation functions arising from polydisperse samples, deconvolution can furnish only limited information about the particle size distribution profile (Harding et al., 1992). Furthermore, as the amount of light a particle scatters varies strongly with its size (as the sixth power of the radius in the Rayleigh regime), the results can be significantly biased towards the larger, higher scattering particles within the sample. The resulting intensity-weighted average particle size and particle size distribution can therefore be seriously misleading when analyzing polydisperse samples unless users have a very good understanding of the technique and the data produced.
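
For reference, in the ideal monodisperse case the normalized field autocorrelation function measured by DLS decays as a single exponential,

g_1(\tau) = \exp(-D_t q^2 \tau), \qquad q = \frac{4\pi n_0}{\lambda}\sin(\theta/2)

where D_t is the translational diffusion coefficient, n_0 the refractive index of the medium, λ the vacuum wavelength of the laser and θ the scattering angle. In the Rayleigh regime each particle contributes to the detected intensity roughly in proportion to the sixth power of its diameter, which is the origin of the intensity weighting described above.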

Other optical techniques that can nominally address sub-micron particles (e.g. static light scattering techniques based on diffractive Fraunhofer scattering or Multi-Angle Light Scattering (MALS), or the widespread techniques of flow cytometry and Coulter counting) suffer, in practice, from a lower particle size analysis limit of approximately 0.3-0.5 µm diameter.

The addition of NTA to the range of techniques available to researchers offers the ability to directly visualize, size and measure the concentration of nanoparticles in liquid suspension. The ability of this technique to simultaneously analyze a population of nanoparticles on an individual basis means it is ideally suited to the real-time analysis of polydisperse systems ranging from 10-20 nm up to 1-2 µm in size (depending on particle type). Additional measurement modes allow users to acquire information on nanoparticle concentration, zeta potential and the relative intensity of scattered light, and to visualize and analyze fluorescently labelled particles (NanoSight, 2012; Carr et al., 2009).

Principles of operation

The properties of both light scattering and Brownian motion are utilized in NTA to obtain particle size distributions of samples in liquid suspension. A laser beam (of arbitrary wavelength, but typically from commercially available laser diodes operating at 642 nm, 532 nm, 488 nm or 405 nm) is passed through a prism-edged glass flat (or equivalent optical element) within the sample chamber. The angle of incidence and the refractive index of the glass flat are designed such that, when the laser beam reaches the interface between the glass and the liquid sample layer above it, it refracts to a shallow angle, resulting in a compressed beam with a reduced profile and a high power density. The particles in the path of this beam scatter light in such a manner that they can be easily visualized at 90° via a long working distance, x20 magnification microscope objective fitted to an otherwise conventional optical microscope or equivalent optical train. The system then uses a CCD, EMCCD (Electron-Multiplying Charge-Coupled Device) or high-sensitivity CMOS camera, operating typically at 30 frames per second (fps), to capture a video file of particles moving under Brownian motion within a field of view of approximately 100 μm x 80 μm x 10 μm (Figure 1).

Figure 1 Schematic of the optical configuration used in NTA.
MRK2194_fig01

Sizing of Nanoparticles using NTA

Particles within the field of view are seen moving under Brownian motion, either directly by eye using the microscope oculars or via the image displayed on the computer screen recorded by the camera. The proprietary NTA software takes a video file (typically of 30-60 seconds duration) of the particles viewed and simultaneously identifies and tracks the center of each particle on a frame-by-frame basis. The image analysis software then determines the average distance moved by each particle in the x and y planes. This value allows the particle diffusion coefficient (Dt) to be determined from which, if the sample temperature T and solvent viscosity η are known, the sphere-equivalent hydrodynamic diameter, d, of the particle can be determined using the Stokes-Einstein equation (Eq 1).

Equation 1
D_t = \frac{k_B T}{3\pi\eta d}

where k_B is Boltzmann's constant. See Figure 1 for a schematic of the optical configuration underlying NTA analysis.

Obviously, Brownian motion occurs in three dimensions but NTA observes motion only in two dimensions. It is possible, however, to determine Dt from measuring the mean squared displacement of a particle in one, two or three dimensions by using the following variations of the Stokes-Einstein equation (Eq 2a-c respectively);

Equation 2a
\overline{x^2} = \frac{2 k_B T t}{3\pi\eta d}
Equation 2b
\overline{(x,y)^2} = \frac{4 k_B T t}{3\pi\eta d}
Equation 2c
\overline{(x,y,z)^2} = \frac{6 k_B T t}{3\pi\eta d}

Thus, in the case where measurement of movement in two dimensions is made the following equation can be used;

Equation 3
d = \frac{4 k_B T t}{3\pi\eta\,\overline{(x,y)^2}}
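
As an illustrative worked example (the numerical values are assumed for illustration only): for a particle tracked at 30 fps in water at 25 °C (T = 298 K, η ≈ 0.89 mPa s) with a mean squared displacement of 0.65 µm² (6.5 x 10⁻¹³ m²) per frame, Eq 3 gives

d = \frac{4 \times 1.38\times10^{-23}\,\mathrm{J\,K^{-1}} \times 298\,\mathrm{K} \times (1/30)\,\mathrm{s}}{3\pi \times 8.9\times10^{-4}\,\mathrm{Pa\,s} \times 6.5\times10^{-13}\,\mathrm{m^2}} \approx 1.0\times10^{-7}\,\mathrm{m}

i.e. a sphere-equivalent hydrodynamic diameter of approximately 100 nm.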

Size range detectable

The lower limit of detection for instruments using NTA is determined by several factors, the most significant of which are the amount of light scattered by the particles and the capability of the optics used to detect this light.

The amount of light scattered by a particle in any given direction is a function of many variables including incident illumination power, wavelength, angle and polarization; particle size, refractive index (real and imaginary) and shape, as well as the refractive index of the suspending solvent. Similarly, the amount of light falling on a detector and strength of the resultant signal is dependent on a number of factors including the efficiency of the collection optics (e.g. Numerical Aperture) and the spectral response and sensitivity of the camera.

The theory of light scattering is well established (Bohren and Huffman, 1983; Kerker, 1969) and the formula for Rayleigh scattering by a small particle of radius a and refractive index n1 in a liquid of refractive index n2 is given by (Equation 4);

Equation 4
I_{scat} = I_{in}\,\frac{16\pi^4 a^6}{r^2\lambda^4}\left(\frac{n^2-1}{n^2+2}\right)^2\sin^2\psi

where λ is the wavelength of the incident light beam, n is the relative refractive index (n1/n2), Iin is the incident power per unit area, Iscat is the scattered power per unit area at a distance r from the scattering region, and ψ is the angle between the input polarization and the scattering direction.

The total scattering (Pscat) into an aperture of collection angle θ (numerical aperture NA = sin θ) is then:

Equation 5
MRK2194_eq05a

Where

MRK2194_eq05b

For a detection system of fixed wavelength, incident laser power, NA and detection angle, the variables that determine the limit of detection in NTA reduce to the particle size and the refractive index difference between the material and the solvent in which the particles are suspended.

Thus, for materials of very high refractive index, such as colloidal gold or silver in water, it is possible to accurately determine size down to around 10-15 nm (depending on the camera type used). For particles of only moderate refractive index, such as metal oxides, some condensed polymers or refractile particles of biological origin, the lower size limit may only be around 25-35 nm. This minimum size limit nevertheless allows the analysis of most types of virus. For very weakly scattering materials (e.g. polymers, exosomes, liposomes), the smallest particle visible might only be 40 nm in diameter. The upper size limit of the system is approached when the rate of Brownian motion becomes so low that it approaches the scale of the small centering errors inherent in the tracking software, which then lead to sizing inaccuracies. This limit is typically found at around 1-2 μm for particles in aqueous systems.
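
To illustrate how strongly particle size and refractive index contrast (Eq 4) influence detectability, the short Python sketch below compares relative Rayleigh scattering strengths for hypothetical particles in water; the refractive index values used (1.59 for polystyrene, 1.40 for a weak scatterer, 1.33 for water) are illustrative assumptions rather than instrument specifications.

    # Illustrative sketch only: relative Rayleigh scattering strength (Eq 4)
    # for hypothetical particles suspended in water (n_medium ~ 1.33).
    def rayleigh_factor(diameter_nm, n_particle, n_medium=1.33):
        a = diameter_nm / 2.0                     # radius (relative units are sufficient here)
        m = n_particle / n_medium                 # relative refractive index, n1/n2
        lorentz = (m**2 - 1.0) / (m**2 + 2.0)     # refractive index contrast term
        return a**6 * lorentz**2                  # proportional to scattered power

    ps_100   = rayleigh_factor(100, 1.59)         # 100 nm polystyrene (n ~ 1.59, assumed)
    weak_100 = rayleigh_factor(100, 1.40)         # 100 nm weakly scattering particle (n ~ 1.40, assumed)
    weak_40  = rayleigh_factor(40, 1.40)          # 40 nm weakly scattering particle

    print(round(ps_100 / weak_100))               # ~13x from refractive index contrast alone
    print(round(ps_100 / weak_40))                # ~3000x when combined with the a^6 size dependence

The two printed ratios show that a modest refractive index difference alone changes the signal by an order of magnitude, while the sixth-power size dependence quickly dominates once size differences are involved, consistent with the detection limits quoted above.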

NTA workflow:

  • NTA captures a video of particles moving under Brownian motion

  • NTA automatically locates and follows the center of each particle and measures the average distance it moves per frame

  • This is done simultaneously for all particles until hundreds or thousands of particles have been tracked

  • NTA converts the distances moved into a particle size and plots accumulated results in real time as a particle size distribution profile.

  • NTA analyzes the raw data, fits model distributions or displays different particle parameters (size vs relative intensity vs number) against each other. Concentration is also determined (a simplified sketch of the size conversion step is shown below).
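
The proprietary NTA software implements this pipeline internally. Purely as a simplified sketch of the size conversion step (the function below, its frame rate and solvent values are illustrative assumptions; real NTA analysis additionally handles track linking, drift correction and the finite track length effects discussed later), the conversion from tracked positions to hydrodynamic diameters might look as follows:

    import numpy as np

    K_B = 1.380649e-23  # Boltzmann's constant (J/K)

    def sizes_from_tracks(tracks, fps=30.0, T=298.15, eta=8.9e-4):
        """Illustrative sketch: convert 2-D particle tracks (one array of (x, y)
        positions in metres per particle) into sphere-equivalent hydrodynamic
        diameters via Eq 2b and Eq 1. Not the proprietary NTA algorithm."""
        dt = 1.0 / fps
        diameters = []
        for track in tracks:
            steps = np.diff(np.asarray(track), axis=0)      # per-frame displacements
            msd = np.mean(np.sum(steps**2, axis=1))         # mean squared 2-D step
            D_t = msd / (4.0 * dt)                          # Eq 2b: msd = 4 * Dt * t
            diameters.append(K_B * T / (3.0 * np.pi * eta * D_t))  # Eq 1 rearranged for d
        return np.array(diameters)                          # histogram these for the distribution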

Concentration ranges measurable

NTA is not an ensemble technique interrogating a very large number of particles; rather, each particle is sized individually, irrespective of the others. This means that, in order to achieve statistically viable results, it is important that a sufficient number of particles are analyzed within the sample time chosen. The optimal concentration to provide this number of particles within a 30-60 second analysis time typically lies between 10⁷ and 10¹⁰ particles per mL.

Particles visualized by NTA move within a fixed field of view (approximately 100 μm by 80 μm) illuminated by a beam approximately 10 μm in depth. These figures allow the scattering volume of the sample to be estimated; by counting the particles within this volume and extrapolating to a larger volume, a concentration estimate in terms of particles per mL can be obtained for any given size class or as an overall total.
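
As a rough numerical illustration of this extrapolation (using the nominal field-of-view dimensions quoted above; the count of 40 particles per frame is an assumed example, and real instruments apply a calibrated effective scattering volume):

    # Rough illustration only: nominal field of view ~100 um x 80 um x 10 um.
    fov_volume_um3 = 100 * 80 * 10            # ~8e4 cubic micrometres
    fov_volume_ml  = fov_volume_um3 * 1e-12   # 1 mL = 1 cm^3 = 1e12 cubic micrometres

    particles_per_frame = 40                  # assumed average count per video frame
    concentration = particles_per_frame / fov_volume_ml
    print(f"{concentration:.1e} particles/mL")   # ~5.0e8 particles/mL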

The effective scattering volume in which particles are detected and concentration measured varies as a function of several factors. These include both the particle size and difference in refractive index between the particles and the medium as well as the power, wavelength and dimensions of the illuminating laser (Eqs 4 and 5). Similarly, adjusting the camera sensitivity will affect the number of particles detected and therefore tracked and this will clearly impact on the concentration reading achieved. As a result of this, for more accurate determination of concentration of particles which scatter significantly differently from those for which the system is ideally optimized, calibration on particulate systems of known concentration is necessary.

Many samples of environmental or biological origin contain a wide range of particle sizes and types which, unless fractionated or partially purified, will frequently exhibit log-normal particle size distribution profiles. Similarly, many industrial nanoscale products are produced by grinding or milling of coarse starting material and, even when a design size is reached, may contain considerable numbers of fines whose presence is often unsuspected because they are undetectable by conventional means. It will be appreciated, of course, that these lower numbers of larger particles (rare-event aggregates, contaminants, etc.) may not be detected, and their concentration measured, with the same accuracy as the smaller, more numerous particles.

It should be noted, however, that few, if any, other techniques are capable of determining nanoparticle concentration with such ease, and fewer still offer the visual confirmation of the sample afforded by the NTA technique.

Samples containing fewer than 10⁷ particles per mL result in only a very limited number of particles being present within the field of view at any one time. Accordingly, extended analysis times (e.g. above 5 minutes) will be required in order to obtain statistically reproducible results. The NTA software will alert users to samples that contain fewer particles than required for optimal analysis and an automated facility is available which proposes suitable analysis times based on an initial estimate of sample concentration.

Samples containing a concentration of particles greater than 10¹⁰ particles per mL have a higher likelihood of particle trajectories crossing over one another before an adequate estimate of particle size can be made through tracking any given particle. This will degrade the quality of information obtained about the particles.

Under normal conditions when analyzing optimal concentrations of nanoparticles exhibiting similar optical characteristics such as monodisperse polystyrene, concentration measuring accuracies can reach 5-10% if the sample is diluted to a suitable concentration range.

Given the above, however, it is clear that accurate estimates of the number and concentration of any particular class of particles in a polydisperse and/or heterogeneous mixture of different particle types will be subject to the different amounts of light they may scatter, which will, in turn, determine their effective scattering volume and thus any consequent estimate of concentration.

The typical error to be expected when determining the absolute concentration of even a monodisperse sample can be as high as 20%. However, this figure is influenced by several parameters and will increase when;

  • the polydispersity within the sample increases.

  • concentration measurement is being made of small or weakly scattering particles on the limit of detection of the instrument.

  • assessing particles on the larger end of the limit for sizing (800 nm+)

  • concentration measurement is being made of highly asymmetrical particles.

Care should be taken if at all possible to reduce the effects that these parameters can have on the outcome of the analysis.

Absolute Accuracy and Resolution

Both NTA and DLS operate on similar basic principles, analyzing the light scattered by particles moving under Brownian motion. Accordingly, if accurate information about the temperature and the viscosity of the solvent in which the nanoparticles are suspended is provided, then the absolute accuracy achieved by both techniques is effectively the same, around 2% under ideal conditions. As both techniques operate in the time domain, they benefit from being uncommon examples of absolute methods of measurement in which (re-)calibration is unnecessary.

A simple comparison of the accuracy and reproducibility of NTA against a DLS instrument is shown in Table 1. Calibration polystyrene standards of 50, 100, 200 and 400 nm diameter were analyzed (averages of 5 repeat measurements are shown). As can be seen, NTA compares very favorably with DLS (Table 1).

Table 1. Comparison of NTA (NanoSight) and DLS (Zetasizer) results for polystyrene calibration standards (averages of 5 repeat measurements).

Nominal size (nm) | NTA average size (nm) | NTA standard deviation (nm) | DLS average size (nm) | DLS standard deviation (nm)
50                | 50.6                  | 0.8                         | 51.7                  | 2.2
100               | 100.2                 | 1.2                         | 102.4                 | 3.6
200               | 200.6                 | 1.7                         | 209.2                 | 5.2
400               | 398.3                 | 2.4                         | 411.5                 | 7.3

It should be recognized, however, that the Stokes-Einstein relationship between the measured Dt and the reported diameter assumes that the particles are non-interacting, diffusing freely in the infinite-dilution limit and spherical, and that the quantity measured is the hydrodynamic diameter of the particle (the physical extent of the particle plus the hydrodynamic shell of structured solvent in close proximity to the particle surface). This hydrodynamic shell normally extends, depending on solvent characteristics, some 1-2 nm from the surface for aqueous systems. For larger particles this contribution is relatively negligible, but for very small particles below 20 nm it becomes an increasingly significant percentage.

Unlike DLS, NTA measures the Dt of individual particles and, as such, does not suffer from the intensity-weighting problems besetting DLS, nor is it an ensemble technique summing the motion of a large number of particles simultaneously. Accordingly, NTA is a technique of inherently higher resolution than DLS which, in practice, can rarely resolve size populations closer than 3:1 or 4:1 in diameter.

Accurate estimation of Dt by NTA relies, however, on being able to track any given particle's Brownian motion trajectory for a sufficient number of steps to generate an accurate average step length from which to determine size. Because of the very small depth of the scattering volume, smaller particles in particular can often be present for only a very limited period of time (<10 frames ≡ 0.3 seconds at 30 fps). The effect of this limitation manifests itself as an artifactual broadening of the measured distribution, though the mean of the estimated size remains accurate. The reduction in accuracy associated with these limited-duration trajectories can, however, be mathematically modelled and compensated for (Saveyn et al., 2010). Thus, for monodisperse samples, a 'finite track length adjustment' (FTLA) algorithm within NTA can be applied automatically to compensate for such effects and recover the true distribution width of narrow distributions of monodisperse, calibration-quality nanoparticle suspensions (Figure 2).

Figure 2. FTLA comparison for short track lengths.
MRK2194_fig02
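
As a toy illustration of why short tracks broaden the measured distribution (this is a simple Brownian-dynamics sketch of the effect, not the FTLA algorithm of Saveyn et al. (2010); all values are assumed for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    K_B, T, eta, fps = 1.380649e-23, 298.15, 8.9e-4, 30.0   # assumed conditions
    d_true = 100e-9                                          # monodisperse 100 nm sample
    D = K_B * T / (3 * np.pi * eta * d_true)                 # true diffusion coefficient
    sigma = np.sqrt(2 * D / fps)                             # per-axis step std dev per frame

    def apparent_sizes(n_steps, n_particles=2000):
        """Estimate diameters from simulated tracks of a fixed number of steps."""
        steps = rng.normal(0.0, sigma, size=(n_particles, n_steps, 2))
        msd = np.mean(np.sum(steps**2, axis=2), axis=1)      # per-particle mean squared step
        D_est = msd * fps / 4.0
        return K_B * T / (3 * np.pi * eta * D_est)

    for n in (10, 100):
        d = apparent_sizes(n) * 1e9
        print(f"{n:>3} steps: mean {d.mean():.0f} nm, relative spread {d.std() / d.mean():.0%}")
    # Short tracks yield a much broader apparent distribution from the same monodisperse input.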

Sample Preparation

In most circumstances sample preparation for use in NTA consists of little more than simple dilution so that particle concentration falls within the optimum ranges previously mentioned. A sample in which significant sedimentation is apparent implies the presence of particles which, by definition, are too large to be freely diffusing nanoparticles. In this case, an aliquot of the supernatant will contain any nanoparticles present and re-suspension of sediment should be avoided. The presence of particles sufficiently large to sediment in a short time period can interfere with NTA. If necessary, removal of particles >1-2 µm can be achieved by centrifugation, filtration or simple settling.

As with any measurement, measuring a blank or control is strongly advised, so when diluting samples it is best to conduct a precautionary analysis of the diluting material in order to ensure the absence of contaminating nanoparticles.

It should be noted that efficient removal of such contaminants can sometimes be a non-trivial task but is clearly necessary to ensure accurate NTA analysis of the sample itself. It should further be recognized that the quoted pore size of common tortuous-pore membrane filters is a mean pore size only, and that larger particles can initially pass through the filter and be present in the early filtrate. Polycarbonate, track-etched membranes (e.g. Nucleopore®) exhibit very narrow distributions of pore sizes, though they suffer from far lower flow rates.

Samples which, in concentrated form, appear colored rarely cause problems, because NTA requires significant sample dilution and any residual optical absorption by the nanoparticles at this dilution does not cause detectable heating effects. Furthermore, it should be recognized that NTA works in the time domain and the intensity of light scattered by the particles is not the property on which measurement of their size is based. However, significantly optically absorbing solvents must be avoided because they may suffer from thermal convection.

Many samples require dilution by several orders of magnitude in order to reach the optimum analysis concentration of 10⁷ to 10⁹ particles per mL. It is recommended that this dilution is effected through a series of serial dilutions, with no single step exceeding a 200x dilution, to reduce the risk of error from dilution technique.
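
For example (an illustrative helper only, not a feature of the NTA software), an overall 4000x dilution could be split into serial steps that each respect the 200x guideline:

    import math

    def dilution_steps(total_factor, max_step=200):
        """Split an overall dilution factor into equal serial steps,
        each no larger than max_step (illustrative sketch only)."""
        n = max(1, math.ceil(math.log(total_factor, max_step)))
        return [round(total_factor ** (1.0 / n), 1)] * n

    print(dilution_steps(4000))   # [63.2, 63.2]: two serial ~63x dilutions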

Although it is required for many samples, dilution of materials is not without its problems. The first of these is the possible elimination of low numbers of particles in a population through over-dilution. Further to this is the possible change in sample stability on excessive dilution. Finally, the possible aggregation of unstable samples during dilution should be taken into consideration.

Repeat Measurements

Whenever a sample is characterized using NTA, a bulk sample is taken, often diluted, and sub-samples extracted from the bulk are analyzed. The measurement of sub-samples from a bulk can sometimes lead to sample bias, with the result that the analysis focuses only on a certain population of materials. This is why it is recommended that conclusions about the size, distribution and concentration of particles within a sample are never drawn from a single analysis; rather, repeat measurements of different sub-samples should be taken. Owing to the large number of particles within a sample in the NTA dilution regime (around 10⁷ particles per mL), it would be impractical to measure all of them using this technique. However, correct sampling is important to ensure that the particles measured in the analyses are representative of the population as a whole. By analyzing only one field of view, the user could be biased towards a certain population within the sample which might be unrepresentative of the whole sample.

Variation between repeat samples of the same preparation increases along with the degree of polydispersity and is indicative of an inadequate sampling regime or insufficient analysis time.

The reason for recommending repeat measurements rather than longer recordings is that repeat measurements ensure an entirely new field of view, and therefore an entirely new sample population, for each 60 second analysis. If a longer video is recorded, the majority of particles will stay within the field of view and only a few new ones will enter from the sides. The longer the recording, the lower the rate at which new particles enter the field of view.

The number of repeat measurements that should be taken is related to both the sample size and the polydispersity of the sample. The larger the polydispersity the more repeat measurements should be taken in order to ensure the effects of sample bias are reduced.

There is however another way to reduce the effects of sample bias, through the use of precise, slow, constant flow of sample through the field of view. By connecting the sample chamber of the instrument to a digital stepper-motor syringe pump the user is able to apply a constant pressure to the sample causing it to flow in a uniform manner across the field of view. This means new particles are constantly being added to the field of view with a direct correlation between the length of the recording and the number of unique particles analyzed.

The syringe pump also improves the repeatability of concentration measurements, by continually introducing fresh sample volumes during analysis. This, in combination with batch analysis procedures, ensures the most precise and reproducible concentration measurements, especially for concentration measurement of larger contaminating particles and aggregates which are normally present in lower numbers.

The increased sampling population also allows for more accurate analysis of extremely dilute systems which would otherwise require extremely long capture durations to detect and track sufficient particles for statistically robust measurement.

A final advantage of the use of repeat measurements is the ability to use these readings to define the standard error of certain parameters (e.g. mean size, standard deviation of the size distribution and particle concentration). The standard error is calculated by taking the standard deviation between the repeats and dividing it by the square root of the number of repeat measurements (Eq 6).

Equation 6
\mathrm{SE} = \frac{\sigma}{\sqrt{n}}
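
For example (the repeat readings below are invented purely for illustration), the standard error of a set of repeat concentration measurements can be computed as:

    import numpy as np

    # Five illustrative repeat concentration readings (particles per mL)
    repeats = np.array([4.8e8, 5.1e8, 5.0e8, 4.7e8, 5.2e8])

    std_error = np.std(repeats, ddof=1) / np.sqrt(len(repeats))   # Eq 6
    print(f"mean = {repeats.mean():.2e} particles/mL, standard error = {std_error:.1e}")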

Size vs. Intensity vs. Concentration 2D and 3D Plots

NTA uses particle Brownian motion rather than fluctuations in the intensity of scattered light to measure the size of particles. However, one of the unique and beneficial features of NTA is the ability to simultaneously measure, for each particle, the relative amount of light it scatters (Iscat), and to plot the two measurands as a function of each other. This allows particles which may be of a similar size but different composition and refractive index to be successfully discriminated.

When plotted on a graph of particle size against concentration (the regular output from an NTA reading) a mixture of 92 nm polystyrene and 90 nm gold nanoparticles would be seen as a single peak (Figure 3a). However, when the same plot is displayed as particle size against the relative intensity of light scattered it is possible to resolve the two similarly sized populations (Figure 3b).

Figure 3a: Particle size distribution profile of a mixture of 92 nm polystyrene and 90 nm gold nanoparticles showing a single peak when measured by size alone;
MRK2194_fig03a
Figure 3b: A 2D plot of the same mixture in which the size of each particle is plotted as a function of the amount of light it scatters, allowing the much higher refractive index gold to be discriminated from the similarly sized polystyrene nanoparticles.
MRK2194_fig03b

A more complex demonstration of the extra resolving power afforded by this additional parameter can be seen in Figure 4. This plot shows a mixture of 30 nm Au, 60 nm Au and 100 nm polystyrene, all of which can be resolved in the 3D plot of Size v. Intensity v. Number.

Figure 4. 3D representation of particle size against concentration against scatter
MRK2194_fig04

Fluorescence

While a laser of any wavelength can be used to visualize, size and measure the concentration of nanoparticles in light scattering mode, as previously described, shorter wavelength sources such as blue 405 nm and 488 nm diodes and green 532 nm lasers can be used to allow fluorescent or fluorescently labelled particles to be selectively identified and tracked through the use of appropriate optical filters.

It is therefore possible to selectively identify, size and measure the concentration of only certain sub-populations of particles in a mixture through fluorescent labelling. Such labelling may be effected through, for example, the use of antibody-mediated fluorophores, allowing phenotyping of particular species of nanoparticles (Dragovic et al., 2011). Alternatively, fluorescent stains and reagents specific for certain sample constituents, such as lipid, protein or nucleic acid, can help differentiate nanoparticles from each other.

Of course, the degree to which fluorescence can be used successfully in this regard depends on the fluorescent signal being sufficiently intense for the particle to be seen and its dynamic behavior tracked accordingly. The strength of the fluorescent signal will depend on the choice of excitation laser wavelength, which must be sufficiently well matched to the fluorophore's excitation profile. The degree of absorption (extinction coefficient) of the fluorophore, and the efficiency with which it subsequently fluoresces (its quantum yield), are also both important parameters determining the usefulness of a fluorophore in any given situation.

Obviously, the optical filters used must be compatible with the excitation and emission profiles of the fluorophores used and the wavelength of the exciting source. Stable fluorophores are also required which do not decay within the time period required to allow their tracking by NTA (e.g. >0.5 seconds). There are methodologies for extending the life of fluorophores used in these experiments. The first of these is to synchronize the camera shutter with a pulsed laser. Under normal conditions the laser is permanently on, whilst the camera shutter is only open for a fraction of each frame period. This means that there is a significant proportion of each frame during which the laser is exciting the fluorophore, resulting in bleaching, but no signal is recorded by the camera. To circumvent this, on certain models of NTA instrument, the camera can be used to trigger the laser module. The laser is then only on when the camera shutter is open to collect data and, as the fluorophore is only excited when the laser is active, the bleach rate of the material is significantly reduced.

Furthermore, for more rapidly bleaching fluorophores, it is possible to slowly flow the sample through the laser beam such that the population within the field of view is continuously refreshed. Rather than having material staying within the beam and being bleached by the laser throughout the entire analysis, this technique permits the continuous introduction of unbleached material thus maintaining analysis of larger numbers of particles over extended periods.

Finally, it is necessary for multiple fluorophores to be bound to the target nanoparticle because individual fluorescent molecules rarely generate a sufficient signal for detection by the CCD, or even the higher sensitivity EMCCD or sCMOS cameras, used by NTA. An interesting exception to this is the use of a class of fluorophores called 'quantum dots', which are 4-12 nm semiconductor nano-crystals (usually CdSe) whose optical characteristics (e.g. emission profiles) are related to their size. Quantum dots are extremely bright emitters and, usefully, sufficiently stable to be both detected and tracked by NTA on an individual basis. When functionalized with antibodies, these structures have been successfully used to phenotype biological exosomes by NTA (Dragovic et al., 2011).

The following example (Fig 5 a & b) shows the analysis of a mixture of polymeric microvesicles, some of which contained the fluorophore Rhodamine B, using a 405 nm laser-illuminated NTA system. From the 2D plot shown (Fig 5a), it can be seen that when analyzed under typical light scattering mode (shown by blue data points (Fig 5a) and curve (Fig 5b)) there is a far higher number of smaller non-labelled structures in the mixture than fluorescently labelled particles (green data points and curve in Fig 5a and Fig 5b respectively). The labelled particles, when measured under fluorescence mode (i.e. suitable optical filters are introduced into the light path), appear lower in number. Note that the larger particles are tracked and sized more successfully when seen under fluorescence labelling mode because when tracked under light scatter mode they frequently scatter so much light they saturate the detection systems and are not analyzed.

Figure 5a: A 2D scattergram of size vs. light scattered (blue data points, all particles) or fluorescence emitted (green data points, Rhodamine B labelled sub-population) for a mixture of polymeric microvesicles, showing a far higher number of smaller non-labelled structures in the mixture than fluorescently labelled particles.
MRK2194_fig05a
Figure 5b: A size vs. number plot of the same mixture, confirming these data. Note that the larger particles are tracked and sized more successfully under fluorescence labelling mode because under light scatter they frequently scatter so much light that they can saturate the detection system.
MRK2194_fig05b

References

  • Carr B and Wright M (2013) Nanoparticle Tracking Analysis - A Review of Applications and Usage 2010-2012, Chapter 1: Principles and Methodology, NanoSight In-House Publishing

  • Carr B, Hole P, Malloy A, Nelson P, Wright M and Smith J (2009) “Applications of nanoparticle tracking analysis in nanoparticle research - a mini-review”, European Journal of Parenteral & Pharmaceutical Sciences 2009; 14(2): 45-50

  • Bohren CF and Huffman DR (1983) Absorption and Scattering of Light by Small Particles, John Wiley and Sons, Inc.

  • Dragovic RA, Gardiner C, Brooks AS, Tannetta DS, Ferguson DJP, Hole P, Carr B, Redman CWG, Harris AL, Dobson PJ, Harrison P and Sargent IL (2011) Sizing and phenotyping of cellular vesicles using Nanoparticle Tracking Analysis, Nanomedicine: Nanotechnology, Biology and Medicine, doi:10.1016/j.nano.2011.04.003

  • Harding SE, Sattelle DB and Bloomfield VA (eds.) (1992) Laser Light Scattering in Biochemistry, Cambridge, UK: Royal Society of Chemistry

  • Kerker M (1969), The Scattering of light and other electromagnetic radiation, Academic Press.

  • Mächtle W (2006). Centrifugation in Particle Size Analysis. Encyclopedia of Analytical Chemistry. DOI: 10.1002/9780470027318.a1502

  • NanoSight Limited (2013) Product specification and usage, www.nanosight.co.uk

  • Pecora R (Ed.) (1985) Dynamic Light Scattering, Applications of Photon Correlation Spectroscopy, Plenum Press, New York.

  • Saveyn H, De Baets B, Thas O, Hole P, Smith J and Van der Meeren P (2010) Accurate particle size distribution determination by nanoparticle tracking analysis based on 2-D Brownian dynamics simulation, Journal of Colloid and Interface Science 352, p593-600

  • Syvitski JPM (Ed.) (1991) Principles, Methods, and Application of Particle Size Analysis, Cambridge University Press, Cambridge, ISBN 0-521-36472-8
