
Blog Post

10 Key Improvements in TruAI™ Technology Advancing Life Science Image Analysis

Author: Manoel Veiga

TruAI™ technology is a powerful AI-driven image analysis tool for life science and materials science software. Using deep neural networks, it enables the development of AI models tailored to specific imaging applications. It was introduced by Evident in 2019 with the scanR high-content screening station and later extended to more products.

TruAI technology requires no coding or programming skills, making it usable by non-programmers. Trained AI models are saved as files that can be exchanged easily across Evident’s analysis packages, including scanR Analysis, cellSens™ Count and Measure, and VS200-Detect.

TruAI technology offers advantages in three key application areas:

  • Object segmentation and classification in image analysis
  • Sample detection in image acquisition workflows
  • Image processing, including denoising and enhancement

In this blog post, we review the evolution of TruAI technology across these life science software packages, showcasing its continuous improvements over the past six years.

1. A Flexible Toolbox (2019)

The first releases of TruAI technology—scanR in 2019 and cellSens in 2020—focused on providing researchers with a flexible toolbox for creating AI-powered semantic segmentation models.

Training an AI model begins with users annotating the ground truth in the images, then loading the annotated images into a training interface. The model analyzes the annotated images, optimizing its predictions to the annotations in an iterative process.

Key features of the early TruAI toolbox included:

  • Manual annotations: Labeling tools to freely annotate ground truth in the images.
  • Automatic annotations: Converts existing segmentations into annotations.
  • Partial labeling: Limits training to the labeled regions, eliminating the need to label entire images.
  • Flexible channel and Z-layer combinations: Enables the use of a single channel or Z-layer for simple tasks while allowing free combinations for complex applications.
  • Training progress monitoring: The training interface automatically splits data into training and validation datasets, with AI predictions viewable in real time.

2. Semantic Segmentation (2019)

Once trained, AI models can be applied to new images (inference) across various Evident analysis packages. In semantic segmentation, the AI model creates a pixel probability map, indicating pixels with a high probability of belonging to the foreground. For final object detection, a threshold is applied to this probability map, followed by classical splitting algorithms such as watershed segmentation.

Figure 1. a) In the first step, AI detects nuclei and highlights them with probability intensity (red). b) In the second step, the probability map is segmented by thresholding the probability intensity. c) In the final step, the objects are split.
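As a minimal illustration of steps (a)–(c), the sketch below thresholds a toy probability map and labels the resulting objects with NumPy and SciPy. This is not the actual TruAI implementation; for touching objects, a watershed split would replace the plain connected-component labeling used here.

```python
import numpy as np
from scipy import ndimage as ndi

def segment_from_probability(prob_map, threshold=0.5):
    """Threshold an AI probability map and label the resulting objects.

    In the full workflow, a watershed split would follow to separate
    touching objects; connected-component labeling stands in here.
    """
    mask = prob_map >= threshold      # step b: threshold the probability map
    labels, n = ndi.label(mask)       # step c: split/label the objects
    return labels, n

# Toy probability map with two well-separated "nuclei"
prob = np.zeros((10, 10))
prob[1:4, 1:4] = 0.9
prob[6:9, 6:9] = 0.8
labels, n = segment_from_probability(prob)
print(n)  # 2
```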

3. Classification (2021)

A single AI model can segment all objects in an image, with classification based on the measured parameters of these objects, such as area and fluorescence intensity.

Alternatively, multiple AI models can be applied to the same image to perform classification without extracting measurement parameters—for example, one AI model detecting only cells in state 1 and another detecting only cells in state 2.

Both capabilities were possible in the first releases of TruAI technology.

In contrast, classification models refer specifically to a single AI model’s ability to differentiate multiple foreground classes from the background. Final object detection is achieved by applying a threshold to each of the two or more probability maps produced by the single AI model.

To maintain flexibility, users can decide during training whether the classes may overlap.

The main advantage of using classification models is to simplify the analysis when detecting many classes or when there are no clear measurable features for the classification.
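A rough sketch of this idea (illustrative only; the software's internal logic is not public) is to threshold the per-class probability maps produced by a single model, with an option for exclusive or overlapping classes:

```python
import numpy as np

def classify_pixels(class_probs, threshold=0.5, exclusive=True):
    """Turn per-class probability maps of shape (C, H, W) into class masks.

    exclusive=True mimics non-overlapping classes (the highest-probability
    class wins); exclusive=False lets a pixel belong to several classes.
    """
    if exclusive:
        winner = class_probs.argmax(axis=0)
        return [(winner == c) & (class_probs[c] >= threshold)
                for c in range(class_probs.shape[0])]
    return [p >= threshold for p in class_probs]

# Toy 2x2 image with two classes
probs = np.stack([
    np.array([[0.9, 0.2], [0.1, 0.1]]),   # class 0 probabilities
    np.array([[0.3, 0.8], [0.1, 0.6]]),   # class 1 probabilities
])
masks = classify_pixels(probs)
print(masks[0].sum(), masks[1].sum())  # 1 2
```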

Figure 2. Cell compartment localization of fluorescently tagged proteins in yeast. A single TruAI model can classify the cells according to the protein localization. Learn more in our application note, Yeast Protein Localization Classified Using TruAI Deep-Learning Technology.

Figure 3. TruAI technology enables the creation of classification AI models in scenarios where classification is difficult for the human eye, such as when selecting high-quality oocytes from unstained samples for in vitro fertilization. Learn more in this article.

4. Instance Segmentation (2021)

Unlike semantic segmentation AI models, instance segmentation models can directly segment the final objects in one step, eliminating the need for probability map thresholding and additional splitting.

This method simplifies the workflow (compare Figure 4 to Figure 1) and is particularly useful in cases where classical segmentation algorithms struggle to separate the detected objects, such as when cell density is high.

Figure 4. A single instance segmentation model can identify the pixels belonging to the foreground and determine the boundaries of the final objects (nuclei) in one step.

Figure 5. Instance segmentation models are particularly useful in high confluency scenarios involving non-round objects. In this example, a single AI model uses two input channels (blue and gray) to classify overlapping objects as nuclei or cells while defining their boundaries.
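The difference between the two output types can be shown with a toy example (NumPy arrays standing in for model outputs): connected-component labeling of a semantic mask fuses touching objects, while an instance label map keeps them apart with no extra splitting step.

```python
import numpy as np
from scipy import ndimage as ndi

# Two touching nuclei. A semantic model only says "foreground here":
semantic_mask = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=bool)
merged, n_semantic = ndi.label(semantic_mask)  # touching objects fuse
print(n_semantic)  # 1

# An instance model emits a label map directly, one id per object:
instance_map = np.array([
    [1, 1, 0, 0],
    [1, 1, 2, 2],
    [0, 0, 2, 2],
])
print(instance_map.max())  # 2
```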

5. Scaling (2021, 2023)

AI models have a limited field of view, typically a few hundred pixels, making them sensitive to pixel resolution. Scaling techniques mitigate these limitations.

Scaling during training (2021)

When objects exceed the AI model’s field of view, the model may fail to recognize complete object borders, leading to unreliable detection. To address this, training can incorporate a scaling factor that effectively expands the model’s field of view. This training approach is particularly useful for detecting entire large objects such as zebrafish, organoids, or sizable regions within tissue samples.

Figure 6. Training to detect large organoids in transmission in a well plate. The AI model’s field of view is shown in green. Left: training without scaling yields poor results, even after thousands of iterations. Right: training with 25% scaling achieves high-quality results in just a few hundred iterations.

Scaling during inference (2023)

If an AI model is trained on images from a 10X objective lens but applied to an image taken with a 40X objective lens, detection will be unreliable. The scaling function of TruAI technology enables users to adapt the image resolution to the AI model’s resolution for optimal performance. In addition, when instance segmentation models cause over-splitting, reducing the resolution often improves the detection of whole objects.

Figure 7. a) Fluorescence image of living cells. b) An instance segmentation AI model causes over-splitting in some cells. c) The same instance segmentation model applied with 50% scaling, resulting in accurate cell detection without over-splitting.
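The idea can be sketched with SciPy's `zoom`: rescale the image so its pixel size matches the pixel size the model was trained on. The pixel sizes below are hypothetical, and this is not the software's actual scaling routine.

```python
import numpy as np
from scipy import ndimage as ndi

def match_model_resolution(image, image_um_per_px, model_um_per_px):
    """Rescale an image so its pixel size matches the pixel size the
    AI model was trained on (bilinear interpolation)."""
    factor = image_um_per_px / model_um_per_px
    return ndi.zoom(image, zoom=factor, order=1)

# Hypothetical pixel sizes: 0.16 um/px at 40X, 0.65 um/px at 10X
img_40x = np.random.rand(400, 400)
img_for_model = match_model_resolution(img_40x, 0.16, 0.65)
# The 40X image is downscaled roughly 4x before inference
```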

6. Live AI (2021)

Live AI is a feature in cellSens software that applies AI models in real time without requiring image acquisition. The AI probability map, or the final object segmentation, is displayed directly on the live image. Live AI supports fast quality control checks and streamlines tasks such as counting and target identification (for example, selecting the best candidate cell, sperm, or oocyte).

Figure 8. The inference workflow of Live AI. An instance segmentation model with classification capabilities is applied to a well plate. As the user navigates through the well plate, the probability map and the counts of the three cell classes are shown in the lower left of the live image.

7. Pretrained Models (2021–2024)

For users who do not have time to train their own AI models, Evident introduced pretrained models that can be used out of the box. Different types of pretrained AI models are available in the following analysis software packages:

  • scanR Analysis: models for nuclei, cells, spots, structures, and nuclei in brightfield
  • cellSens Count and Measure and VS200-Detect: models for nuclei, cells, IHC cell classification (Ki-67 assay), and faint whole sample recognition
  • cellSens FV (FV4000): models for denoising

Furthermore, the pretrained models can be used as a starting point for annotations. Predictions from the pretrained models can be converted into annotations, and users can apply corrections to them. This workflow works especially well in combination with the interactive training (see section 9).

Figure 9. The pretrained IHC classification model detects the center of mass in brown cells and blue cells. When used in cellSens Count and Measure or VS200-Detect, the output is the total cell count together with the percent of each cell type.

Figure 10. Resonant scanner images captured with a FLUOVIEW™ FV4000 confocal microscope (left side of each image) and enhanced with TruAI noise reduction (right side of each image). Resonant scanner imaging effectively captures cellular dynamics at high speeds with low damage. However, this usually comes at the cost of a lower signal-to-noise ratio. TruAI noise reduction is designed to enhance images while maintaining time resolution by using pretrained AI models based on the noise patterns of the SilVIR™ detector. These pretrained TruAI noise reduction algorithms can be applied both in real time and during post-processing.

8. Automatic Sample Detection (2022)

AI models can be integrated into acquisition workflows for automatic sample detection, saving time in whole tissue scanning, specific region recognition, or rare event identification. The workflow typically begins with an overview scan at low magnification, followed by a detailed scan at higher magnification with more channels, additional Z-layers, or even a modality shift from widefield to confocal spinning disk microscopy.

Figure 11. Video showing the macro to micro imaging workflow in cellSens software for a widefield microscope.
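The detection step of such a workflow can be sketched as follows (a hypothetical helper; real acquisition software would drive the stage): label the AI's foreground mask from the overview scan and convert object centroids into stage positions for the detailed scan.

```python
import numpy as np
from scipy import ndimage as ndi

def detect_scan_regions(overview_mask, um_per_px=10.0):
    """From an AI foreground mask of the low-mag overview, derive stage
    positions (object centroids, in micrometers) to revisit at high
    magnification."""
    labels, n = ndi.label(overview_mask)
    centroids = ndi.center_of_mass(overview_mask, labels, list(range(1, n + 1)))
    return [(y * um_per_px, x * um_per_px) for y, x in centroids]

mask = np.zeros((20, 20), bool)
mask[2:5, 2:5] = True          # one detected sample in the overview
positions = detect_scan_regions(mask)
# positions holds one stage coordinate near (30, 30) um
```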

9. Interactive Training (2023)

A new interactive workflow for training AI models was introduced in 2023. This workflow enables users to annotate images interactively, train the model, review AI predictions, make corrections, and use those corrections as new labels for further training. Users can then continue annotating new images, refining the model iteratively. This approach enables the rapid development of AI models for straightforward applications.

Figure 12. Example of the interactive training workflow.

10. Image Enhancement (2023)

Users can train AI models for both segmentation tasks and image processing operations. For example, a model can be taught to perform denoising or processing techniques such as deconvolution.

Since training occurs across channels, generating ground truth data for the training is straightforward. For example, creating ground truth for a denoising model is as simple as collecting two channels: one with a high signal-to-noise ratio (serving as the ground truth) and another with short exposure and low-light excitation.

A key advantage of this approach is that researchers can train AI models using their own samples. This AI-powered image enhancement is designed to improve performance and reduce common image artifacts such as hallucinations (false, unexplainable structures).
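The paired-channel ground truth described above can be sketched as follows. Here the low-SNR channel is simulated with Poisson shot noise rather than acquired, as a stand-in for real data:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_denoising_pair(clean, photon_scale=5.0):
    """Build one (input, target) training pair: the high-SNR channel is
    the ground truth; a short-exposure, low-light acquisition is
    simulated here with Poisson shot noise."""
    noisy = rng.poisson(clean * photon_scale) / photon_scale
    return noisy.astype(float), clean

clean = rng.random((64, 64)) * 100.0   # stand-in for a high-SNR channel
noisy, target = make_denoising_pair(clean)
# (noisy, target) pairs like this would feed the training interface
```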

Figure 13. TruAI models can be trained for image processing tasks such as noise reduction (right).

The Continuous Evolution of TruAI Technology

Over the past six years, TruAI technology has continuously evolved, delivering improvements in segmentation, classification, scaling, live analysis, and image enhancement. Its flexibility, ease of use, and integration across Evident’s software platforms make it a powerful tool for AI-driven image analysis in life science.

To learn more about how TruAI models work in practice, reach out to our team of microscopy professionals for a personalized demo.

Related Content

20 Examples of Effortless Nucleus and Cell Segmentation Using Pretrained Deep-Learning Models

Instance Segmentation of Cells and Nuclei Made Simple Using Deep Learning

Predicting Multi-Class Nuclei Phenotypes for Drug Testing Using Deep Learning

Application Specialist, Life Science Research

Manoel Veiga received his PhD in physical chemistry from the University of Santiago de Compostela, where he worked on picosecond and femtosecond time-resolved spectroscopy. After two postdoctoral positions at Complutense University of Madrid and the University of Münster, he joined PicoQuant as a senior scientist, working on time-resolved spectroscopy, fluorescence lifetime imaging microscopy (FLIM), and fluorescence correlation spectroscopy (FCS). Manoel is now a global application specialist at Evident in Germany, focusing on high-content analysis (HCA) and deep learning.

March 25, 2025