Particle Analysis
Once you have chosen your segmentation parameters and prepared a parameters object, you can use it to perform Particle Analysis.
>>> import hyperspy.api as hs
>>> import particlespy.api as ps
>>> data = hs.load('particlespy/data/JEOL HAADF Image.dm4')
>>> params = ps.parameters()
>>> params.load()
>>> particles = ps.particle_analysis(data, params)
Particle Analysis will run the segmentation on your data and calculate a number of parameters for each particle.
The calculated parameters include:
Area (“area”)
Equivalent circular diameter (“equivalent circular diameter”)
Major and minor axes lengths (“major axis length” and “minor axis length”)
Circularity (“circularity”)
Eccentricity (“eccentricity”)
Solidity (“solidity”)
Total particle intensity (“intensity”)
Maximum particle intensity (“intensity_max”)
X and Y coordinates (“x” and “y”)
Bounding box area and diagonal length (“bbox_area” and “bbox_length”)
>>> #Syntax for accessing particle properties.
>>> particles.list[0].properties['area']
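As a plain-Python sketch (not a dedicated ParticleSpy API), you can collect a single property across all particles with a list comprehension; the exact storage format of each property may differ between versions, so check one entry first as above.
>>> #Illustrative only: gather one property for every particle in the list,
>>> #assuming each property is stored as a plain value under its key.
>>> areas = [p.properties['area'] for p in particles.list]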
For more information on particle properties, see the Properties documentation.
Combining Particles from Multiple Images
It is possible to analyse particles from multiple images by passing a previously populated Particle_list object to particle_analysis(), which then appends to that list instead of returning a new Particle_list.
For example:
>>> ps.particle_analysis(data, params, particles=particles)
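A hedged sketch of a multi-image workflow is shown below; the file names are placeholders, and the key point is that the same Particle_list object is passed to every subsequent call so the results accumulate in one place.
>>> #Illustrative workflow (file names are placeholders): pass the same
>>> #Particle_list to every call so that results accumulate in one object.
>>> particles = ps.particle_analysis(hs.load('image1.dm4'), params)
>>> for filename in ['image2.dm4', 'image3.dm4']:
...     ps.particle_analysis(hs.load(filename), params, particles=particles)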
Spectrum Analysis
Particle Analysis will also segment and process simultaneously acquired data if it is supplied as an additional dataset in the data list. This could include EDS, EELS or scanning diffraction data. The only requirement is that the navigation dimensions of the additional data match the signal dimensions of the image used for segmentation.
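If you want to check this requirement before running the analysis, a short HyperSpy sketch is given below; the variable names image and eds_si are assumptions.
>>> #Illustrative check (variable names are assumptions): the additional
>>> #data's navigation shape should equal the image's signal shape.
>>> assert image.axes_manager.signal_shape == eds_si.axes_manager.navigation_shape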
In addition, Particle Analysis can do the following processing on EDS data:
Obtain the EDS spectrum of each particle.
Obtain elemental maps of each particle.
Get the composition of each particle if k-factors or zeta-factors are supplied in the parameters object.
>>> data = [image,eds_si]
>>> params = ps.parameters()
>>> params.load()
>>> params.generate_eds(eds_method='CL',elements=['Pt','Au'],factors=[1.7,1.9],store_maps=False)
>>> particles = ps.particle_analysis(data, params)
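If you are unsure where the per-particle spectra, maps or compositions end up, a plain-Python way to inspect a particle object (not a dedicated ParticleSpy API) is:
>>> #Plain-Python inspection of what was stored on the first particle;
>>> #attribute names for spectra, maps and compositions may vary, so list them.
>>> first = particles.list[0]
>>> [name for name in dir(first) if not name.startswith('_')]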
Particle Segmentation with a Pre-Generated Mask
particle_analysis() will also accept pre-generated masks, either generated externally or created through the manual option of seg_ui().
To use a pre-generated mask, pass it to particle_analysis() via the mask argument.
>>> generated_mask = hs.load('maskfile')
>>> params = ps.parameters() # The parameters aren't used when a pre-generated mask is supplied, but you still have to pass them.
>>> params.load()
>>> particles = ps.particle_analysis(data, params, mask=generated_mask)
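For a mask generated externally, a minimal sketch using Otsu thresholding from scikit-image is shown below; it assumes that data is a single HyperSpy image signal and that a boolean numpy array with the same shape as the image is accepted as the mask argument. The threshold choice is purely illustrative.
>>> #Illustrative external mask via Otsu thresholding (scikit-image); assumes
>>> #a boolean array with the image's shape is accepted as the mask argument.
>>> from skimage.filters import threshold_otsu
>>> image_data = data.data
>>> generated_mask = image_data > threshold_otsu(image_data)
>>> particles = ps.particle_analysis(data, params, mask=generated_mask)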
If you have used the manual segmentation editor in seg_ui(), you can simply pass 'UI' as the mask argument.
>>> particles = ps.particle_analysis(data, params, mask='UI')
Cluster Particles Based on Properties
It is possible to cluster particles based on their properties using the cluster_particles() function.
Clustering uses the scikit-learn package and supports the KMeans, DBSCAN and OPTICS methods.
For example, if you wanted to separate the particles into two clusters based on their area and ADF intensity, you could do:
>>> clustered_particles = particles.cluster_particles(properties=['area','intensity'],n_clusters=2)
>>> ps.plot(clustered_particles,properties=['area','intensity'])
The variable clustered_particles now contains two separate particle lists.
Clustering can be done for an arbitrary number of properties, including manually added parameters.
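As a plain-Python sketch of working with the result, you can iterate over the returned lists; this assumes each element of clustered_particles is itself a Particle_list with a list attribute, as used above.
>>> #Illustrative only: report how many particles ended up in each cluster,
>>> #assuming each element of clustered_particles is itself a Particle_list.
>>> for i, cluster in enumerate(clustered_particles):
...     print(i, len(cluster.list))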
Normalize Particle Image Sizes
Sometimes further processing requires that all particle images have the same dimensions.
In ParticleSpy this can be readily achieved using the normalize_boxing() function.
The function will set all image dimensions to the largest x and y values in the particle list.
>>> particles.normalize_boxing()
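As an illustrative check, and assuming each particle stores its cropped image as a numpy array under an image attribute (an assumption, not confirmed here), you could verify that all particle images now share a single shape:
>>> #Illustrative check (the image attribute name is an assumption): after
>>> #normalization all particle images should share a single shape.
>>> {p.image.shape for p in particles.list}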