Chronic lymphocytic leukemia (CLL) screening and abnormality detection based on multi-layer fluorescence imaging signal enhancement and compensation

Chronic lymphocytic leukemia (CLL) is characterized by slow progression, high clinical heterogeneity, and distinct genetic and epigenetic features, making it an ideal model for studying disease progression, personalized treatment, and cancer biomarkers. CLL samples typically exhibit weak and uneven fluorescence signals under fluorescence microscopy, along with high background noise and out-of-focus blur. These samples are therefore suitable for testing and validating methods for signal enhancement, denoising, and feature compensation, allowing the applicability and stability of the proposed methods to be assessed in clinical environments. Using XL RB1/DLEU/LAMP1 deletion probes and fluorescence in situ hybridization (FISH), three distinct loci in the 13q14 region of human chromosome 13 are targeted to investigate whether the pathogenesis of CLL is associated with fluorescence signal loss (Nelson et al. 2007) (Supplementary Information note 1).

Automated fluorescence microscope imaging process

In traditional image acquisition processes, operational errors and insufficient reproducibility often lead to inconsistent experimental results. Additionally, weak fluorescence signals are prone to blurring or even loss when out of focus, significantly degrading image quality. Manual adjustments of focus and image clarity assessment are also inefficient, failing to meet the demands for high-throughput and rapid processing. To address these issues, this paper proposes an automated imaging process for fluorescence microscopes, which includes four key steps: (1) Sample Positioning: The microscope moves to the target area without low-magnification scanning. (2) Focus Adjustment: The stage descends in 0.8 µm increments, capturing 150 images to determine the optimal focus using an energy gradient algorithm. (3) Image Acquisition: 11 images are taken at 0.44 µm intervals within a 2.2 µm z-stack range for each field of view. (4) Field Repetition: The process is repeated for all sample areas to ensure complete coverage. This automated process improves efficiency, reproducibility, and image quality, meeting high-throughput demands (Fig. 1 and Supplementary Information note 2).
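The focus-adjustment step above can be sketched with a simple energy-gradient focus measure that scores each z-slice and keeps the sharpest one. The helper names and the toy two-slice comparison below are illustrative, not from the paper:

```python
import numpy as np

def energy_gradient(img):
    """Energy-gradient focus score: sum of squared horizontal
    and vertical first differences (higher = sharper)."""
    gx = np.diff(img.astype(float), axis=1)
    gy = np.diff(img.astype(float), axis=0)
    return (gx ** 2).sum() + (gy ** 2).sum()

def best_focus_index(z_stack):
    """Return the index of the sharpest slice in a z-stack."""
    return int(np.argmax([energy_gradient(s) for s in z_stack]))

# Toy example: a sharp random pattern and a smoothed copy of it.
rng = np.random.default_rng(0)
sharp = rng.integers(0, 255, (64, 64)).astype(float)
blurred = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)) / 3
stack = [blurred, sharp, blurred]
best = best_focus_index(stack)  # picks the sharp middle slice
```

In the described process the same scoring would be applied to the 150 images captured at 0.8 µm increments, with the stage returned to the position of the highest-scoring frame.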

Fig. 1

Fluorescence microscope. a Image acquisition. b Collection principle. Scale bar: 50 μm

Fusion of regional and local feature analysis methods

To address common issues in fluorescence imaging, such as uneven brightness distribution, loss of detail, and background noise, this section proposes a region-weighted and local feature analysis method; the overall structure is shown in Fig. 2. The method begins by segmenting the nuclei from the 11-layer (z-stack) fluorescence feature images, generating individual nuclear images (256 × 256 pixels). Global average and maximum values are dynamically calculated to generate weights, adjusting the brightness distribution and achieving a naturally balanced visual effect. Spatial weight maps are generated using local feature maps and convolution, highlighting key areas while suppressing background noise. By combining 16-neighborhood averaging to fine-tune the brightness range, the method compensates for dark areas, midtones, and highlights separately, balancing overall brightness and enhancing local details.

(1)

Global dynamic feature optimization

Fig. 2

Fluorescence Signal Enhancement Method

The purpose of global feature optimization is to enhance the overall brightness characteristics of grayscale images, making them more balanced and suitable for subsequent processing. The specific method involves calculating the global average and maximum brightness of the entire image using the following formulas:

$$G_{avg}=\frac{1}{\left|\Omega \right|}\int_{\Omega }I(x,y)\,dx\,dy$$

(1)

$$G_{max}=\text{sup}\{I(x,y)\mid (x,y)\in \Omega \}$$

(2)

Here, \(\left|\Omega \right|\) represents the total number of pixels in the image, and \(I\left(x,y\right)\) is the pixel grayscale value at \(\left(x,y\right)\) in a continuous grayscale distribution scenario. \(G_{avg}\) is the global average value, and \(\text{sup}\) denotes the least upper bound of the pixel grayscale values, i.e., the global maximum value \(G_{max}\).

The average value \(G_{avg}\) reflects the overall brightness trend, while the maximum value \(G_{max}\) represents the brightest part of the brightness range. A dynamic adjustment coefficient \(W_{g}\) is generated for the entire image, which is used to control image enhancement, compression, or brightness range mapping. This ensures the visual effect of the image appears more natural.

$$W_{g}=\alpha \bullet G_{avg}+\beta \bullet G_{max}+\gamma$$

(3)

A representative set of image datasets was selected for analysis, and under each parameter setting, both subjective perceptions and objective indicators (such as image sharpness and signal-to-noise ratio) were calculated for the enhanced images. Through multiple experiments, the parameters α (brightness adjustment), β (contrast enhancement), and γ (detail enhancement) were adjusted to optimize the enhancement effect of the fluorescence image. Professional doctors were invited to subjectively score the fluorescence images under each parameter combination. After multiple experiments and data analysis, the results showed that with α = 0.6, β = 0.4, and γ = 0.1, the enhanced image was closest to the real visual effect in terms of fluorescence characteristics. At these settings, the image reached a relatively balanced state in brightness, contrast, and detail expression, retaining sufficient image detail while avoiding the distortion that excessive enhancement may cause. The generated \(W_{g}\) was ultimately compressed to the range [0,1] for adjusting image brightness or contrast.

After dynamic weight adjustments, the pixel values ensured balanced overall image brightness, making the images more suitable for visual observation or subsequent processing.

$$I_{g}(x,y)=W_{g}\bullet I(x,y)$$

(4)
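The global optimization of Eqs. (1)–(4) can be sketched as below, assuming grayscale images normalized to [0, 1]; the function names are illustrative, and the α, β, γ values are those reported above:

```python
import numpy as np

ALPHA, BETA, GAMMA = 0.6, 0.4, 0.1  # tuned values reported in the text

def global_dynamic_weight(img):
    """Eqs. (1)-(3): global mean G_avg, global max G_max, and the
    dynamic coefficient W_g compressed to [0, 1]."""
    w_g = ALPHA * img.mean() + BETA * img.max() + GAMMA
    return float(np.clip(w_g, 0.0, 1.0))

def apply_global_weight(img):
    """Eq. (4): scale every pixel by W_g."""
    return global_dynamic_weight(img) * img

# Toy grayscale image normalized to [0, 1].
img = np.array([[0.1, 0.5],
                [0.2, 0.8]])
w = global_dynamic_weight(img)   # 0.6*0.4 + 0.4*0.8 + 0.1 = 0.66
out = apply_global_weight(img)
```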

(2)

Local dynamic feature enhancement

The purpose of local feature optimization is to highlight prominent local regions in the image (such as edges and high-brightness areas) while suppressing the background or less significant regions.

$$\mu (i,j)=\frac{1}{\left|N(i,j)\right|}\int_{N(i,j)}I(x,y)\,dx\,dy$$

(5)

$$Max(i,j)=\text{sup}\{I(x,y)\mid (x,y)\in N(i,j)\}$$

(6)

For each value at position \((i,j)\), the local mean \(\mu (i,j)\) and local maximum \(Max(i,j)\) are calculated.

In this process, \(N\left(i,j\right)\) represents a local (3 × 3) neighborhood window centered on the pixel \((i,j)\). The mean and maximum values from this neighborhood are extracted and concatenated to form a feature map. This feature map, denoted as \(F\left(i,j\right)\), is then processed through a convolution operation:

$$W_{s}^{map}=Conv(F(i,j))\bullet W_{g}$$

(7)

\(Conv\) represents the convolution kernel function, which is responsible for generating the spatial weight map. This operation ensures a smoother weight distribution across the feature map. By utilizing the local weight map, the weights of individual pixels are adaptively adjusted to emphasize prominent regions and enhance local details. This refinement ultimately leads to the generation of the final output feature map, preserving crucial information while improving spatial consistency.
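The local mean/max extraction and the convolutional weighting of Eqs. (5)–(7) can be sketched as follows. The zero padding, the averaging of the two feature channels, and the 3 × 3 mean filter standing in for the paper's unspecified convolution kernel are all assumptions:

```python
import numpy as np

def local_mean_max(img, k=3):
    """Eqs. (5)-(6): k x k neighborhood mean and max at each pixel
    (edges handled by zero padding)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad)
    h, w = img.shape
    windows = np.stack([p[i:i + h, j:j + w]
                        for i in range(k) for j in range(k)])
    return windows.mean(axis=0), windows.max(axis=0)

def spatial_weight_map(img, w_g):
    """Eq. (7) sketch: fuse the local mean and max maps into a
    feature map F(i,j), smooth it with a 3x3 averaging convolution,
    and scale by the global coefficient W_g."""
    mu, mx = local_mean_max(img)
    feat = (mu + mx) / 2.0               # F(i,j): fused feature map
    smoothed, _ = local_mean_max(feat)   # 3x3 mean as the convolution
    return smoothed * w_g

# Toy input: a 4x4 identity pattern.
img = np.eye(4)
wmap = spatial_weight_map(img, w_g=0.66)
```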

(3)

Segmented signal region optimization

The local maximum \(I_{max}(i,j)\) and local minimum \(I_{min}\left(i,j\right)\) represent the least upper bound and greatest lower bound of grayscale values, respectively, within the 3 × 3 neighborhood window centered at \((i,j)\).

The global minimum \(I_{min}\) is defined as the greatest lower bound (infimum) of the local minimum values, and the global maximum \(I_{max}\) as the least upper bound (supremum) of the local maximum values:

$$I_{min}=\text{inf}\{I_{min}(i,j)\mid (i,j)\in \Omega \}$$

(10)

$$I_{max}=\text{sup}\{I_{max}(i,j)\mid (i,j)\in \Omega \}$$

(11)

By using a heatmap, high-intensity signal regions in the image (prominent fluorescence signal areas) can be quickly located. The average value of the (4 × 4) neighborhood around the maximum fluorescence signal edge is then calculated, enabling a more precise analysis of the local environment of fluorescence characteristics. Based on this, the parameters of the piecewise linear function can be adjusted accordingly.

$$avg\left(p\right)=\frac{1}{\left|N_{p}\right|}\int_{N_{p}}I(x,y)\,dx\,dy$$

(12)

Let \(N_{p}\) represent the 16-pixel (4 × 4) neighborhood centered at pixel \(p\), and \(avg\left(p\right)\) denote the average pixel value within this neighborhood.
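A minimal sketch of the 16-neighborhood average of Eq. (12); anchoring the 4 × 4 window at its top-left corner (a 4 × 4 window has no exact center pixel) is an assumption here:

```python
import numpy as np

def avg_p(img, r, c):
    """Eq. (12): mean over the 16-pixel (4 x 4) neighborhood whose
    top-left corner sits at (r, c)."""
    return float(img[r:r + 4, c:c + 4].mean())

# Toy image: a 6x6 ramp of grayscale values 0..35.
img = np.arange(36, dtype=float).reshape(6, 6)
```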

(4)

Dynamic optimization of piecewise linear enhancement

Based on the neighborhood average value range of different images \(I(x,y)\), three piecewise intervals are defined by the thresholds \(R_{1}\) and \(R_{2}\), where \(R_{1}\) corresponds to the local minimum grayscale value \(I_{min}\) of the image and \(R_{2}\) to the local maximum grayscale value \(I_{max}\).

According to the value range of \(avg\left(p\right)\), the corresponding enhancement slopes (Zhang et al. 2023) \(k_{1}=S_{1}/R_{1}\), \(k_{2}=\left(S_{2}-S_{1}\right)/\left(R_{2}-R_{1}\right)\), and \(k_{3}=\left(255-S_{2}\right)/\left(255-R_{2}\right)\) are selected to enhance the image dynamically. All gray values higher than \(R_{2}\) are mapped to a gray value of 255 in the output image (\(S_{2}=255\)); gray values below \(R_{1}\) are compressed toward 0, removing background noise other than the fluorescent features; and gray values between \(R_{1}\) and \(R_{2}\) are linearly mapped and amplified.

$$O\left(x,y\right)=\left\{\begin{array}{ll}k_{1}\bullet I(x,y)& \text{if } avg\left(p\right)<R_{1} \\ k_{2}\bullet \left(I\left(x,y\right)-R_{1}\right)+S_{1}& \text{if } R_{1}\le avg\left(p\right)<R_{2}\\ k_{3}\bullet \left(I\left(x,y\right)-R_{2}\right)+S_{2}& \text{if } avg\left(p\right)\ge R_{2}\end{array}\right.$$

(13)

Here: \(k_{1}\) is used to enhance the brightness of signal regions. \(k_{2}\) ensures a smooth transition in medium-brightness areas, enhancing gradient details in the signal. \(k_{3}\) suppresses overly bright regions while preserving details in high dynamic range areas.

After piecewise linear transformation, the final compensated image is generated.

$$R(x,y)=O\left(x,y\right)\bullet W_{s}^{map}$$

(14)
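The piecewise mapping of Eq. (13) can be sketched as below. The breakpoint values R1, R2, and S1 are hypothetical illustrations; only S2 = 255 is stated in the text:

```python
# Hypothetical breakpoints: R1/R2 split the input grayscale range,
# S1/S2 are their mapped outputs (S2 = 255 as stated in the text).
R1, R2, S1, S2 = 50.0, 200.0, 30.0, 255.0
K1 = S1 / R1                      # dark-region slope (compression)
K2 = (S2 - S1) / (R2 - R1)        # mid-tone slope (amplification)
K3 = (255.0 - S2) / (255.0 - R2)  # highlight slope (0 when S2 = 255)

def piecewise_enhance(i_val, avg_val):
    """Eq. (13): pick the slope from the neighborhood average
    avg(p), then map the pixel value linearly."""
    if avg_val < R1:
        return K1 * i_val
    if avg_val < R2:
        return K2 * (i_val - R1) + S1
    return K3 * (i_val - R2) + S2
```

With these values K2 > 1, so mid-range fluorescence signal is stretched, while the dark branch (K1 < 1) compresses background toward 0 and the highlight branch clamps at 255.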

Fluorescence signal compensation network based on cycle-GAN

In order to better compensate for the intensity of the fluorescence signal, this study builds on the Cycle-Consistency Generative Adversarial Network (Cycle-GAN) introduced by Zhu et al. (Zhu et al. 2017), an extension of generative adversarial networks (Goodfellow et al. 2014), which performs image translation on unpaired data using a generator and a discriminator. Cycle-GAN employs cycle-consistency loss to ensure that the generated image remains consistent with the original during inverse transformations, thereby preserving fluorescence features. Its independence from paired data, strong detail retention, and flexibility make it a promising solution for enhancing insufficient fluorescence signals.

However, Cycle-GAN has certain limitations, including long training times, limited compensation capability in extremely weak signal regions, and insufficient optimization of fine-grained features. This study therefore optimizes Cycle-GAN with the following enhancements: (1) Preprocessing with multi-layer fluorescence image feature fusion and precise segmentation, improving signal intensity characteristics using both regional and local feature analysis. (2) A combination of unsupervised and supervised learning, incorporating a layer-wise supervision mechanism in the generator that utilizes limited real paired data to optimize generation results. (3) A Transformer module in the generator, designed with residual blocks to retain fine details and enhance feature extraction capabilities. (4) An enhanced discriminator, adding multi-layer convolution operations and a fully convolutional classification layer to improve the discrimination of fluorescence features. (5) Multiple loss constraints, including generation loss, cycle-consistency loss, and adversarial loss, ensuring greater realism and consistency when generating compensated images. The overall network structure is shown in Fig. 3.

1.

Hybrid unsupervised-supervised learning framework

Fig. 3

Overall Network Structure

Integrates a layer-wise supervision mechanism within the generator, effectively utilizing a limited set of real paired data to guide and refine the generation of fluorescence-enhanced images. The training process uses the Adam optimizer, which dynamically adjusts learning rates based on first and second moment estimates, ensuring faster convergence, improved training stability, and overall efficiency. By balancing unsupervised learning (for broader feature adaptation) with supervised refinement (for precise correction), the model enhances fidelity in fluorescence signal reconstruction.
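A single Adam update with the bias-corrected first and second moment estimates described above can be sketched as follows; the learning rate of 2e-4 is a common Cycle-GAN default and an assumption here, not a value from the paper:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=2e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential first/second moment estimates
    with bias correction, then a scaled parameter step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# First update from zero-initialized moments: the step is ~lr * sign(grad),
# which is what makes early training stable regardless of gradient scale.
theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
grad = np.array([0.5, -1.0, 2.0])
theta, m, v = adam_step(theta, grad, m, v, t=1)
```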

2.

Transformer-based feature enhancement in the generator

Incorporates residual blocks into the generator to preserve fine-grained image details and improve feature extraction, preventing the loss of important fluorescence information. This allows the generator to focus on critical fluorescence regions and improve spatial relationships between varying fluorescence intensities. Optimized using Adam, which dynamically refines parameter updates, ensuring stable feature learning while mitigating gradient vanishing and over-smoothing.

3.

Enhanced discriminator

Integrates multi-layer convolutional operations, enabling the model to progressively analyze fluorescence intensity variations across different scales, leading to more robust discrimination of signal artifacts and noise. This enhances the model’s ability to distinguish fluorescence signal patterns while maintaining computational efficiency. The Adam optimizer is used to iteratively refine discrimination accuracy, adapting to complex fluorescence distributions and improving robustness against noise.

4.

Multiple loss constraints

Includes generation loss, cycle-consistency loss, and adversarial loss, ensuring enhanced realism, consistency, and accuracy in the generated compensated images.

The proposed optimizations significantly improve the compensation performance of Cycle-GAN, particularly in weak signal regions, while maintaining fine-grained feature details (Supplementary Information note 3 and Fig. 1).

To enhance fluorescence signal features, a multi-layer fluorescence image dataset is selected as the original image domain \(X\). After regional and local feature analysis, an enhanced fluorescence feature image \(x\) is generated, a sample library is constructed, and the network is trained end-to-end. \(Lossfunction\_X\) measures the difference between the enhanced fluorescence image \(x\) and the fluorescence feature compensation image \(G(x)\) generated by the generator \(G\). \(Lcyc\_X\) represents the difference between the enhanced image \(x\) and the reconstructed image \(F(G(x))\) produced by the generator \(F\), and \(L_{adv}\) represents the distribution difference between the generated fluorescence feature compensation image \(G(x)\) and the multi-layer fluorescence feature fusion image domain \(Y\).

The generator \(G\) is trained to learn the mapping \(G:X\to Y\), ensuring that the generated sample \(\widehat{y}=G\left(x\right)\) aligns as closely as possible with the distribution of the real fluorescence image domain \(Y\). Meanwhile, the inverse mapping \(F:Y\to X\) ensures that \(\widehat{x}=F\left(y\right)\) maintains consistency with the distribution of the multilayer fluorescence feature image domain \(X\).

Discriminators \(D_{X}\) and \(D_{Y}\) are introduced to differentiate between real and generated fluorescence images. These discriminators evaluate whether the generated fluorescence feature images are real or synthetic. Through adversarial training between the generator and discriminators, the system gradually approaches a dynamic equilibrium point (Supplementary Information Fig. 2).

In the supervised learning phase, the model is further optimized with limited paired data. By applying z-stack supervision to the generator, utilizing a small number of real multilayer fused fluorescence images (see Supplementary Information Fig. 3), the model can more accurately capture fluorescence features. This enhances the realism and consistency of the generated images. The robust supervision process alleviates the challenges posed by the limited labeled data, significantly improving the model's ability to learn detailed fluorescence signals and thereby achieving the goal of generating compensated fluorescence images. A small set of feature images serves as dataset X, while real multilayer fused fluorescence feature images (Z) are used for step-by-step training of the generative network, as shown in (Supplementary Information note 4).
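The three loss terms can be sketched numerically as below. The L1 form for the generation and cycle losses, the least-squares adversarial form, and the λ = 10 weighting follow common Cycle-GAN defaults and are assumptions here; the toy arrays stand in for \(x\), \(G(x)\), \(F(G(x))\), a real paired target, and the discriminator scores:

```python
import numpy as np

def l1_loss(a, b):
    """Mean absolute error, used here for both the supervised
    generation loss and the cycle-consistency loss ||F(G(x)) - x||_1."""
    return float(np.abs(a - b).mean())

def adversarial_loss(d_fake):
    """Least-squares adversarial term for the generator: push the
    discriminator's scores on generated images toward 1."""
    return float(((d_fake - 1.0) ** 2).mean())

# Toy arrays standing in for x, G(x), F(G(x)), a paired target, and D_Y scores.
x = np.array([0.2, 0.4, 0.6])
fake_y = np.array([0.25, 0.35, 0.65])   # G(x)
recon_x = np.array([0.2, 0.45, 0.55])   # F(G(x))
y_pair = np.array([0.3, 0.3, 0.7])      # limited real paired target
d_scores = np.array([0.8, 0.9])         # D_Y(G(x))

lambda_cyc = 10.0  # common Cycle-GAN cycle-loss weighting (assumed)
total = (adversarial_loss(d_scores)
         + l1_loss(fake_y, y_pair)              # generation loss
         + lambda_cyc * l1_loss(recon_x, x))    # cycle-consistency loss
```

Minimizing the combined objective trades off realism (adversarial term), fidelity to the limited paired data (generation term), and preservation of the original fluorescence features (cycle term).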
