Research

Learning hidden relationship between environment and control variables for direct control of automated greenhouse using Transformer-based model

Climate change poses a significant threat to agricultural sustainability and food security. Automated greenhouse systems, which provide stable and controlled environments for crop cultivation, have emerged as a promising solution. However, traditional rule-based greenhouse control algorithms struggle to determine optimal control variables because of the complex relationships among environmental variables. In response, we propose a Transformer-based model, Trans-Farmer, which predicts control variables by considering the complex interactions among environmental variables. Trans-Farmer leverages the attention mechanism to learn the intricate relationships among the environmental variables, and its encoder-decoder structure translates the environmental variables into the corresponding control variables, analogous to language translation. Experimental results demonstrate that Trans-Farmer outperforms baseline models across all evaluation metrics, achieving superior accuracy and predictive performance. The encoder's attention maps visualize how Trans-Farmer comprehends the complex interactions among the environmental variables. Additionally, Trans-Farmer's compact size makes it suitable for deployment in general greenhouses equipped with resource-constrained microcontroller units. This approach contributes to the development of automated greenhouse management systems and highlights the potential of artificial intelligence applications in agriculture.

Computers and Electronics in Agriculture
2025
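The core operation behind the attention maps described above is scaled dot-product self-attention. The following is only a generic NumPy sketch of that operation, not the authors' implementation; the variable count and embedding size are illustrative:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarity between variables
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

# Toy self-attention over 4 hypothetical environmental variables
# (e.g. temperature, humidity, CO2, radiation), each embedded into 8 dims.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out, attn = scaled_dot_product_attention(X, X, X)
```

Each row of `attn` sums to 1 and can be read as how strongly one variable attends to the others, which is what the encoder attention maps visualize.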
Rapid diagnosis of herbicidal activity and mode of action using spectral image analysis and machine learning

Herbicide screening requires a substantial amount of time, effort, and cost, making the discovery of new herbicides expensive and time-consuming. Various diagnostic methods have been developed, but most are destructive and require significant time and effort to identify herbicidal activity. This study was therefore conducted to apply spectral image analysis for early and rapid diagnosis of herbicidal activity and modes of action (MOAs). RGB, chlorophyll fluorescence (CF), and infrared (IR) thermal images were acquired after treating a model plant, oilseed rape (Brassica napus), with herbicides of different MOAs, and were analyzed using MATLAB 2021b to quantify NDI, ExG, Fd/Fm, and plant leaf temperature. Machine learning with a Subspace Discriminant algorithm on spectral indices acquired at 6 h enabled diagnosis of herbicide MOAs with 89.6% accuracy, which gradually increased as spectral indices acquired at later time points were added, reaching 100% validation accuracy at 3 DAT. Overall test accuracy was 87.5%, verifying the feasibility of diagnosing herbicide MOAs from spectral indices.

Plant Phenomics
2025
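NDI and ExG, two of the indices quantified above, are commonly computed from normalized RGB chromatic coordinates. The sketch below uses those standard textbook definitions; the paper's exact formulas and preprocessing may differ:

```python
import numpy as np

def vegetation_indices(rgb):
    """ExG and NDI per pixel from an RGB image of shape (H, W, 3), values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-9                       # avoid division by zero
    rn, gn, bn = r / total, g / total, b / total   # chromatic coordinates
    exg = 2 * gn - rn - bn                         # Excess Green index
    ndi = (gn - rn) / (gn + rn + 1e-9)             # Normalized Difference Index
    return exg, ndi

leaf = np.array([[[0.1, 0.6, 0.1]]])               # a single greenish pixel
exg, ndi = vegetation_indices(leaf)
```

Healthy green tissue yields high ExG and positive NDI; herbicide-induced chlorosis pushes both toward zero, which is what makes these indices usable as early-damage features.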
Volumetric Deep Learning-Based Precision Phenotyping of Gene-Edited Tomato for Vertical Farming

Global climate change and urbanization have posed challenges to sustainable food production and resource management in agriculture. Vertical farming, in particular, allows for high-density cultivation on limited land but requires precise control of crop height to suit vertical farming systems. Tomato, a globally significant vegetable crop, urgently requires mutant varieties that suppress indeterminate growth for effective cultivation in vertical farming systems. In this study, we utilized the CRISPR-Cas9 system to develop a new tomato cultivar optimized for vertical farming by editing the Gibberellin 20-oxidase (SlGA20ox) genes, which are well known for their roles in the “Green Revolution”. Additionally, we proposed a volumetric model to effectively identify mutants through non-destructive analysis of chlorophyll fluorescence. The proposed model achieved over 84% classification accuracy in distinguishing triple-determinate and slga20ox gene-edited plants, outperforming traditional machine learning methods and 1D-CNN approaches. Unlike previous studies that primarily relied on manual feature extraction from chlorophyll fluorescence data, this research introduced a deep learning framework capable of automating feature extraction in three dimensions while learning the temporal characteristics of chlorophyll fluorescence imaging data. The study demonstrated the potential to classify tomato plants customized for vertical farming, leveraging advanced phenotypic analysis methods. Our approach explores new analytical methods for chlorophyll fluorescence imaging data within AI-based phenotyping and can be extended to other crops and traits, accelerating breeding programs and enhancing the efficiency of genetic resource management.

Plant Phenomics
2025
AraDQ: an automated digital phenotyping software for quantifying disease symptoms of flood-inoculated Arabidopsis seedlings

Plant scientists have largely relied on pathogen growth assays and/or transcript analysis of stress-responsive genes for quantification of disease severity and susceptibility. These methods are destructive to plants, labor-intensive, and time-consuming, thereby limiting their application in real-time, large-scale studies. Here, we present the Arabidopsis Disease Quantification (AraDQ) image analysis tool for examination of flood-inoculated Arabidopsis seedlings grown on plates containing plant growth media. AraDQ offers a simple, fast, and accurate approach for image-based quantification of plant disease symptoms using various parameters.

Plant Methods
2024
Development of a deep-learning phenotyping tool for analyzing image-based strawberry phenotypes

In strawberry farming, measurement of phenotypic traits (such as crown diameter, petiole length, plant height, and flower, leaf, and fruit size) is essential, as it serves as a decision-making tool for plant monitoring and management. An image-based Strawberry Phenotyping Tool (SPT) was developed using two deep-learning (DL) architectures, "YOLOv4" and "U-net", integrated into a single system. The results demonstrate the efficiency of our system in recognizing the aforementioned six strawberry phenotypic traits regardless of the complexity of the strawberry plant's environment. This tool could help farmers and researchers make accurate and efficient decisions related to strawberry plant management, potentially increasing productivity and yield.

Frontiers in Plant Science
2024
NeRF-based 3D reconstruction pipeline for acquisition and analysis of tomato crop morphology

Recent advancements in digital phenotypic analysis have revolutionized the morphological analysis of crops, offering new insights into genetic trait expression. This manuscript presents a novel 3D phenotyping pipeline utilizing cutting-edge Neural Radiance Fields (NeRF) technology, aimed at overcoming the limitations of traditional 2D imaging methods. Our approach combines automated RGB image acquisition by unmanned greenhouse robots with NeRF-based dense point cloud generation, enabling non-destructive, accurate measurement of crop parameters such as node length, leaf area, and fruit volume. Results from applying this methodology to tomato crops in greenhouse conditions demonstrate a high correlation with traditional human growth surveys. The manuscript highlights the system's ability to achieve detailed morphological analysis from limited camera viewpoints, proving its suitability and practicality for greenhouse environments. Inter-node length measurements yielded an R-squared value of 0.973 and a Mean Absolute Percentage Error (MAPE) of 0.089, while segmented leaf point clouds and reconstructed meshes showed an R-squared value of 0.953 and a MAPE of 0.090 for leaf area. Additionally, segmented tomato fruit analysis yielded an R-squared value of 0.96 and a MAPE of 0.135 for fruit volume. These metrics underscore the precision and reliability of our 3D phenotyping pipeline, making it a highly promising tool for modern agriculture.

Frontiers in Plant Science
2024
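The R-squared and MAPE figures reported above follow their standard definitions. A small NumPy sketch with illustrative data (the arrays below are not from the paper):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, as a fraction (0.089 = 8.9%),
    matching how the abstract reports it."""
    return np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical inter-node lengths (cm): manual survey vs. pipeline estimate.
y = np.array([10.0, 12.0, 9.0, 11.0])
yhat = np.array([10.5, 11.5, 9.2, 11.3])
```

Comparing pipeline estimates against manual measurements with these two metrics is exactly the validation the abstract describes for node length, leaf area, and fruit volume.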
Development of a Low-Cost Plant Growth Chamber for Improved Phenotyping Research

This study aimed to develop a research-use plant growth chamber from which researchers can easily acquire data and in which plants can be grown effectively, achieved by improving the chamber's sealing and control performance. The low-cost plant growth chamber presented in this paper enables control of internal temperature, LED lighting, and aeroponics, and features easy remote operation using open-source technology. This prototype is advantageous for researchers who cannot invest in commercial phenotyping equipment, and it facilitates access to phenotype data processing through a programming approach using canopy images.

Journal of Biosystems Engineering
2023
A Deep Learning Model to Predict Evapotranspiration and Relative Humidity for Moisture Control in Tomato Greenhouses

The greenhouse industry achieves stable agricultural production worldwide. Various information and communication technology techniques for modeling and controlling the environment have been applied, as data from environmental sensors and actuators in greenhouses are monitored in real time. The current study designed data-based deep learning models for evapotranspiration (ET) and humidity in tomato greenhouses. Using time-series data and long short-term memory (LSTM) modeling, an ET prediction model was developed and validated against the Stanghellini model. Training with 20-day and testing with 3-day data resulted in RMSEs of 0.00317 and 0.00356 kg m−2 s−1, respectively. The standard error of prediction indicated errors of 5.76% and 6.45% in training and testing, respectively. For humidity, input variables were passed through a two-dimensional convolution layer to produce a feature map, which was transferred to a subsequent layer and finally connected to the LSTM structure. The RMSE of humidity prediction on the test dataset was 2.87, a performance better than conventional RNN-LSTM models. Irrigation planning and humidity control may be conducted more accurately in greenhouse cultivation using this model.

Agronomy
2022
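The ET model above is built on LSTM cells unrolled over time-series sensor data. A single-time-step LSTM cell in NumPy, with hypothetical shapes (six sensor channels, eight hidden units) that are not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. x: input (d,); h, c: hidden and cell states (n,);
    W: (4n, d) input weights; U: (4n, n) recurrent weights; b: (4n,) bias."""
    z = W @ x + U @ h + b
    n = h.size
    i = sigmoid(z[0 * n:1 * n])    # input gate
    f = sigmoid(z[1 * n:2 * n])    # forget gate
    o = sigmoid(z[2 * n:3 * n])    # output gate
    g = np.tanh(z[3 * n:4 * n])    # candidate cell state
    c_new = f * c + i * g          # cell state carries long-term memory
    h_new = o * np.tanh(c_new)     # hidden state is the step's output
    return h_new, c_new

# Unroll over a toy sequence of greenhouse sensor readings.
rng = np.random.default_rng(1)
d, n, T = 6, 8, 20                 # channels, hidden units, time steps
W, U, b = rng.normal(size=(4 * n, d)), rng.normal(size=(4 * n, n)), np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for x in rng.normal(size=(T, d)):
    h, c = lstm_step(x, h, c, W, U, b)
```

A regression head on the final hidden state would then map it to the predicted ET or humidity value; in practice such models are trained with a deep-learning framework rather than hand-rolled NumPy.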
A Hyperspectral Data 3D Convolutional Neural Network Classification Model for Diagnosis of Gray Mold Disease in Strawberry Leaves

Gray mold disease is one of the most frequently occurring diseases in strawberries. Given that it spreads rapidly, rapid countermeasures through the development of early diagnosis technology are necessary. In this study, hyperspectral images were taken of strawberry leaves inoculated with gray mold fungus; these images were classified into healthy and infected areas as seen by the naked eye. Areas where the infection spread after time elapsed were classified as the asymptomatic class. Square regions of interest (ROIs) with a dimensionality of 16 × 16 × 150 were acquired as training data covering infected, asymptomatic, and healthy areas. Then, 2D and 3D data were used to develop convolutional neural network (CNN) classification models, preceded by an effective wavelength analysis. The classification model developed with 2D training data showed a classification accuracy of 0.74, while the model using 3D data achieved 0.84, indicating that the 3D data produced slightly better performance. When classifying healthy versus asymptomatic areas for early diagnosis, the two CNN models showed a classification accuracy of 0.73 on asymptomatic areas. To increase accuracy in classifying asymptomatic areas, a model was developed by smoothing the spectral data and augmenting it with its first and second derivatives; the results showed that this increased the asymptomatic classification accuracy to 0.77 and reduced the misclassification of asymptomatic areas as healthy. Based on these results, the proposed 3D CNN classification model can serve as an early diagnosis sensor for gray mold disease, since it produces immediate on-site analysis of hyperspectral leaf images.

Frontiers in Plant Science
2022
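The derivative preprocessing described above can be illustrated with a simple moving-average smooth followed by numerical derivatives. This is only a schematic sketch; the paper's actual smoothing method and window are not specified here, and the 150-band length merely matches the ROI depth in the abstract:

```python
import numpy as np

def preprocess_spectrum(spectrum, window=5):
    """Smooth a 1-D reflectance spectrum and append its first and second
    derivatives, tripling the feature length."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(spectrum, kernel, mode="same")  # moving-average smoothing
    d1 = np.gradient(smoothed)                             # first derivative
    d2 = np.gradient(d1)                                   # second derivative
    return np.concatenate([smoothed, d1, d2])

# A synthetic 150-band spectrum with mild noise.
rng = np.random.default_rng(2)
spec = np.sin(np.linspace(0, 3, 150)) + 0.05 * rng.normal(size=150)
features = preprocess_spectrum(spec)
```

Derivative features emphasize the shape of absorption edges rather than absolute reflectance, which is one plausible reason they helped separate asymptomatic from healthy tissue.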
Depth image conversion model based on CycleGAN for growing tomato truss identification

On tomato plants, the flowering truss is a group or cluster of smaller stems where flowers and fruit develop, while the growing truss is the most extended part of the stem. Because the state of the growing truss reacts sensitively to the surrounding environment, it is essential to control its growth in the early stages. With the recent development of information and artificial intelligence technology in agriculture, a previous study developed a robot-based method for real-time image acquisition and evaluation. Building on this, we used image processing to locate the growing truss and extract growth information. Among the available vision algorithms, CycleGAN was used because it can learn image-to-image transformation from unpaired training images. In this study, we developed a robot-based system that simultaneously acquires RGB and depth images of the growing truss of the tomato plant. Segmentation performance on approximately 35 samples was compared via false negative (FN) and false positive (FP) indicators. For the depth camera image, the FN and FP values were 17.55 ± 3.01% and 17.76 ± 3.55%, respectively; for the CycleGAN algorithm, they were 19.24 ± 1.45% and 18.24 ± 1.54%, respectively. When segmentation was performed via image processing on the depth and CycleGAN images, the mean intersection over union (mIoU) was 63.56 ± 8.44% and 69.25 ± 4.42%, respectively, indicating that the CycleGAN algorithm can identify the desired growing truss of the tomato plant with high precision. The on-site applicability of the CycleGAN-based image extraction technique was confirmed by driving the image scanning robot in a straight line through a tomato greenhouse. In the future, the proposed approach is expected to be used as vision technology for scanning tomato growth indicators in greenhouses with an unmanned robot platform.

Plant Methods
2022
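The FN, FP, and mIoU figures reported above can be computed from binary segmentation masks. A NumPy sketch using standard definitions; the exact normalization the paper uses for its FN/FP percentages is an assumption here:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """FN rate, FP rate, and IoU for binary masks (True = truss pixel)."""
    tp = np.sum(pred & gt)            # correctly detected truss pixels
    fn = np.sum(~pred & gt)           # truss pixels the model missed
    fp = np.sum(pred & ~gt)           # background pixels wrongly labeled truss
    fn_rate = fn / (tp + fn)          # fraction of ground-truth pixels missed
    fp_rate = fp / (tp + fp)          # fraction of predicted pixels that are wrong
    iou = tp / (tp + fp + fn)         # intersection over union
    return fn_rate, fp_rate, iou

# Toy 4x4 masks: prediction overshoots the ground truth by one column.
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True
fn_rate, fp_rate, iou = segmentation_metrics(pred, gt)
```

Averaging the per-sample IoU over all test images gives the mIoU values the abstract compares between the depth-image and CycleGAN pipelines.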