Abstract: Orchids are highly valued ornamental plants whose growth conditions directly impact the economic returns of the horticultural industry. The substrate, acting both as a physical support and a nutrient reservoir, is critical for orchid development. Therefore, the careful selection of an appropriate growth substrate is of paramount importance. However, existing research on the relationship between orchid growth and substrate properties relies mainly on manual measurements of physiological indicators, with limited application of high-throughput phenotyping (HTP) platforms. In this study, we evaluated three distinct substrate types (peat soil mixed with perlite, pine bark, and river sand) applied to two orchid species, Cymbidium goeringii and Cymbidium faberi. Using the high-throughput Plantarray lysimetric system, we continuously recorded environmental parameters (photosynthetically active radiation, humidity, and temperature) as well as key growth metrics (biomass accumulation, canopy conductance, and transpiration rate). This platform enabled precise and rapid quantification of orchid growth indicators. The results show that substrate type significantly affects orchid growth: under controlled conditions, mixed substrates providing balanced nutrition and excellent drainage enhanced orchid growth relative to the other substrates. Additionally, when the data obtained from the HTP platform were compared with those from traditional manual measurements, the automated system showed higher reliability and accuracy. This study not only provides practical guidance for selecting cultivation substrates for orchids, but also establishes a robust scientific framework for integrating advanced phenotyping technologies into orchid cultivation practices.
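The abstract's two core lysimetric metrics can be illustrated with a minimal sketch: transpiration is derived from the rate of pot-weight loss, and canopy conductance is commonly approximated as transpiration normalized by vapor pressure deficit (VPD). The column names, units, and use of the Tetens VPD formula below are illustrative assumptions, not the Plantarray system's actual API.

```python
import numpy as np
import pandas as pd

def vpd_kpa(temp_c, rh_pct):
    """Vapor pressure deficit (kPa) via the Tetens saturation-vapor formula."""
    svp = 0.6108 * np.exp(17.27 * temp_c / (temp_c + 237.3))
    return svp * (1.0 - rh_pct / 100.0)

def lysimeter_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Derive transpiration and a conductance proxy from a pot-weight series.

    Expects columns (hypothetical names): 'time' (datetime), 'weight_g',
    'temp_c', 'rh_pct'. Weight loss between readings is attributed to
    transpiration; conductance is approximated as transpiration / VPD.
    """
    df = df.sort_values("time").copy()
    dt_h = df["time"].diff().dt.total_seconds() / 3600.0   # step length, hours
    df["transpiration_g_h"] = -df["weight_g"].diff() / dt_h
    df["canopy_conductance"] = df["transpiration_g_h"] / vpd_kpa(
        df["temp_c"], df["rh_pct"]
    )
    return df
```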
Abstract: Fusarium head blight (FHB), a frequent disease in wheat cultivation, can lead to substantial yield losses and the production of mycotoxins in grains. Therefore, the development of wheat varieties resistant to FHB is an important strategy to reduce related losses. However, manual surveys of FHB are time-consuming and labor-intensive. To overcome this issue, this paper proposes a method for detecting and evaluating wheat FHB using color imaging and deep learning. Initially, a lightweight convolutional neural network model based on the You Only Look Once (YOLO) v8s artificial intelligence (AI) model was designed to detect wheat spikes in color images. Testing revealed that the model’s mean average precision in spike detection reached 0.964. Moreover, another lightweight model was developed for detecting wheat spikelets and FHB. To enhance the model’s ability to detect small objects, space-to-depth convolution (SPD-Conv) and BiFormer attention modules were integrated. The results indicated that the model can accurately detect spikelets and FHB, with a mean average precision of 0.936. Finally, based on the wheat spikelet detection results, the rate of diseased wheat spikes (RD_S) and the disease index for wheat (DI_W) were calculated to evaluate the severity of wheat FHB. For RD_S and DI_W, the coefficients of determination between phytologists’ evaluations and the estimates derived from the proposed method were 0.71 and 0.93, respectively. These results demonstrate that the proposed method facilitates the accurate and efficient detection of wheat FHB and contributes to the quantitative evaluation of FHB in the field.
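The abstract names RD_S and DI_W but not their formulas, so the sketch below is a plausible reconstruction from standard FHB survey practice: RD_S as the fraction of spikes with at least one diseased spikelet, and DI_W as a severity-graded index on an assumed 0-4 scale. All function names and grading thresholds are hypothetical.

```python
def rd_s(diseased_per_spike):
    """Rate of diseased spikes: share of spikes with >= 1 FHB spikelet."""
    return sum(1 for d in diseased_per_spike if d > 0) / len(diseased_per_spike)

def di_w(diseased_per_spike, total_per_spike, max_grade=4):
    """Disease index (%): grade each spike by its infected-spikelet ratio
    on an assumed 0-4 scale, then normalize by the maximum possible grade."""
    def grade(ratio):
        # Assumed thresholds; real surveys may use different cut-offs.
        if ratio == 0:
            return 0
        if ratio <= 0.25:
            return 1
        if ratio <= 0.50:
            return 2
        if ratio <= 0.75:
            return 3
        return 4
    grades = [grade(d / t) for d, t in zip(diseased_per_spike, total_per_spike)]
    return 100.0 * sum(grades) / (max_grade * len(grades))

# Example: 3 spikes with 0, 4, and 12 diseased spikelets out of 16 each.
print(rd_s([0, 4, 12]))                 # 0.667
print(di_w([0, 4, 12], [16, 16, 16]))   # (0 + 1 + 3) / 12 * 100 = 33.3
```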
Wang ZHANG, Yi REN, Zidi GUO, Han LI, Man ZHANG, Jie LIU, Ruicheng QIU
Abstract: The automated assessment of tomato ripeness is vital for modern greenhouse operations, yet challenges remain due to variable environmental conditions. To provide a solution, we propose rank-aware You Only Look Once (YOLO), a novel detection framework that incorporates the biological prior of top-to-bottom ripening within fruit clusters. This is achieved through two key innovations: an efficient position-aware head that regresses the relative height of each fruit, and a dynamic margin-aware ranking loss (DM-RankLoss) that enforces the correct spatial sequence. Evaluated on a 3500-image dataset from a solar greenhouse, our plug-and-play module boosted the mean average precision (mAP) at an intersection over union (IoU) threshold of 0.50 (mAP50) of multiple YOLO architectures by up to 5.66 percentage points. The model effectively learns the cluster topology, achieving a height mean absolute error (H-MAE) of 0.107 (normalized) and a pairwise ranking accuracy (PRA) of 84.59%, while reducing the parameter count by over 10% compared to the baseline for efficient deployment. Visualizations confirm that the model leverages spatial context to resolve color ambiguities. Our work offers a sensor-free, accurate, and efficient solution for in situ phenotyping in agricultural robotics.
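The two reported metrics follow directly from their names and can be computed from per-fruit predictions as in the sketch below (array names are hypothetical): H-MAE is the mean absolute error of the normalized height regression, and PRA is the fraction of same-cluster fruit pairs whose predicted vertical order matches the ground truth.

```python
import numpy as np
from itertools import combinations

def h_mae(pred_h, true_h):
    """Mean absolute error of normalized (0-1) relative fruit heights."""
    return float(np.mean(np.abs(np.asarray(pred_h) - np.asarray(true_h))))

def pairwise_ranking_accuracy(pred_h, true_h, cluster_ids):
    """Share of same-cluster fruit pairs ranked in the correct vertical order."""
    pred_h, true_h = np.asarray(pred_h), np.asarray(true_h)
    cluster_ids = np.asarray(cluster_ids)
    correct = total = 0
    for c in np.unique(cluster_ids):
        idx = np.flatnonzero(cluster_ids == c)
        for i, j in combinations(idx, 2):
            if true_h[i] == true_h[j]:
                continue          # ties carry no ordering information
            total += 1
            # Matching sign of predicted and true height difference = correct order.
            if (pred_h[i] - pred_h[j]) * (true_h[i] - true_h[j]) > 0:
                correct += 1
    return correct / total if total else 0.0
```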
Abstract: Tea diseases, including brown and gray blight, result in significant yield and quality losses, especially in Longjing tea production. Traditional detection methods are prone to errors, while existing deep learning models often struggle to remain robust under natural field conditions. To address these challenges, an improved lightweight You Only Look Once (YOLO) detection model, ADS-YOLO, combining an asymmetric multi-level (AML) mechanism, dynamic snake convolution (DSC), and the scalable intersection over union (SIoU) loss function, was developed and validated. For this method, a dataset comprising 5694 smartphone-captured images of tea leaves under natural lighting was established. The YOLO11n baseline algorithm was enhanced by incorporating the SIoU loss function for better bounding-box regression, DSC, which realizes adaptive feature extraction based on the dynamic spatial context, and an AML mechanism, which achieves lightweight feature fusion via an adaptive multi-scale design. The results showed that ADS-YOLO achieved a precision of 0.935 and a recall of 0.870, compared to 0.894 and 0.818, respectively, for the baseline YOLO11n. Importantly, ADS-YOLO demonstrated real-time performance of 137.1 frames per second (FPS), coupled with reduced computational costs. ADS-YOLO improved the mean average precision (mAP) at an intersection over union threshold of 0.5 (mAP@0.5) by 6.4% compared with YOLOv5n and achieved up to 44.6% higher accuracy than YOLOv7t. In conclusion, ADS-YOLO achieved high accuracy, providing a scalable solution for real-time crop health monitoring and sustainable precision agriculture in tea production.
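The reported 137.1 FPS corresponds to a per-image inference latency of roughly 7.3 ms. Throughput figures like this are typically measured with a warm-up phase followed by timed repeated inference, as in the generic sketch below; the `model` callable, input shape, and iteration counts are placeholders, not the paper's setup.

```python
import time
import numpy as np

def measure_fps(model, input_shape=(1, 3, 640, 640), warmup=20, runs=200):
    """Benchmark a detector callable and return frames per second.

    `model` is any callable taking one image batch; the shape and
    iteration counts are illustrative defaults, not the paper's values.
    """
    x = np.random.rand(*input_shape).astype(np.float32)
    for _ in range(warmup):              # exclude one-off setup costs
        model(x)
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    elapsed = time.perf_counter() - start
    return runs * input_shape[0] / elapsed

# Example with a dummy stand-in for the real detector:
print(f"{measure_fps(lambda x: x.mean()):.1f} FPS")
```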
Jinxian TAO, Xiaoli LI, Jingfei ZHANG, Muhammad SHOAIB, Muhammad Adnan ISLAM, Ibrar AHMAD, Yong HE, Sitan YE, Yujie WANG, Binhui LIAO, Mostafa GOUDA
Abstract: Accurate rapeseed yield and biomass estimation at the meter scale prior to harvest is crucial for precision harvesting. However, there is a scarcity of structured research on the estimation of rapeseed biomass and yield. This study aims to address this gap by focusing on rapeseed in Jiangsu Province. Multispectral and RGB images were captured by unmanned aerial vehicles (UAVs) during key growth stages (budding, flowering, and podding). Using the extracted multidimensional features, we developed biomass and yield estimation models using four machine learning techniques. Subsequently, we employed ensemble learning with multidimensional, multi-stage data and used Shapley additive explanation (SHAP) for feature contribution analysis, thereby constructing a framework for predicting rapeseed harvest characteristics with high estimation accuracy and interpretability. Our analysis indicates that spectral‒texture is the most effective feature combination for biomass estimation, whereas the optimal combination for yield estimation includes three-dimensional (3D) spectral‒textural‒structural features. The synergy of these features, coupled with an ensemble learning model, significantly enhanced the accuracy of rapeseed biomass and yield estimation (biomass: coefficient of determination (R2)=0.72, relative root mean square error (rRMSE)=14.35%; yield: R2=0.68, rRMSE=13.67%). The proposed model also achieved stable prediction results across the variety‒density interaction. Overall, this study presents an accurate and generalizable approach for estimating rapeseed biomass and yield across various planting patterns, offering new insights for precision harvesting.
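A minimal sketch of the ensemble-plus-interpretation pipeline described here, assuming scikit-learn base learners and the shap package; the placeholder data, choice of learners, and the rRMSE definition (RMSE as a percentage of the observed mean) are illustrative, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Placeholder data: rows = plots, columns = spectral/textural/structural features.
X, y = np.random.rand(200, 30), 10 * np.random.rand(200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("gbr", GradientBoostingRegressor(random_state=0))],
    final_estimator=Ridge(),
)
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)

r2 = r2_score(y_te, pred)
rrmse = 100.0 * mean_squared_error(y_te, pred) ** 0.5 / y_te.mean()  # % of mean
print(f"R2={r2:.2f}, rRMSE={rrmse:.2f}%")

# Feature-contribution analysis with SHAP on a fitted tree-based base learner:
# import shap
# shap_values = shap.TreeExplainer(ensemble.named_estimators_["rf"]).shap_values(X_te)
```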
Abstract: Accurate quantification of crop residue cover (CRC) is crucial for monitoring and evaluating conservation tillage practices, yet it poses a significant image segmentation challenge. The subtle visual distinctions between fragmented residue and soil, compounded by variable illumination and shadows in field imagery, often lead to poor segmentation performance. To overcome these limitations, we introduce RCTUnet, a novel deep learning architecture designed for robust crop-residue-soil segmentation and precise CRC estimation. RCTUnet’s architecture synergistically integrates three key components: (1) a ResNet50 backbone for deep, multi-scale feature extraction; (2) a convolutional block attention module (CBAM) to adaptively focus on salient residue features across both channel and spatial dimensions; and (3) a transformer-based global context fusion module (GCFM) to model long-range spatial dependencies, which is critical for interpreting heterogeneous residue patterns. We evaluated RCTUnet on a dataset of 1220 field-acquired images spanning four typical crop rotations. Experimental results show that, compared to traditional models: (1) RCTUnet achieves significantly higher crop-residue-soil segmentation accuracy than classic models including Unet, Unet++, DeepLabV3, segmentation network (SegNet), and fully convolutional network (FCN), with improvements of 3.24%, 3.42%, 4.88%, 8.28%, and 6.05% in overall accuracy, respectively; (2) RCTUnet yields superior residue-soil segmentation performance, with increases in residue recall of 7.67%, 7.37%, 14.09%, 27.05%, and 16.91%, respectively; (3) RCTUnet shows enhanced CRC estimation accuracy, achieving a root mean square error (RMSE) of 4.875, representing a 45.5% improvement over Unet (RMSE=8.941). These results demonstrate the efficacy of our hybrid approach, which combines deep hierarchical features, dual-domain attention, and global context modeling. RCTUnet provides a robust and reliable tool for automated CRC assessment, advancing the capabilities of in-field agricultural monitoring.
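Once a crop-residue-soil segmentation mask is available, CRC itself reduces to a pixel ratio, and the reported RMSE compares per-image estimates against reference cover values. A minimal sketch, assuming residue pixels carry class code 1 (the codes and names are hypothetical):

```python
import numpy as np

def crc_percent(mask, residue_class=1):
    """Crop residue cover (%): share of pixels labeled as residue."""
    mask = np.asarray(mask)
    return 100.0 * np.count_nonzero(mask == residue_class) / mask.size

def crc_rmse(pred_masks, ref_crc):
    """RMSE between estimated and reference CRC over a set of images."""
    est = np.array([crc_percent(m) for m in pred_masks])
    return float(np.sqrt(np.mean((est - np.asarray(ref_crc)) ** 2)))
```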
Ting LI, Yang LIU, Haikuan FENG, Meiyan SHU, Hao YANG, Yuanyuan FU, Xin XU, Yinghao LIN, Hongbo QIAO, Wei GUO, Xinming MA, Lei SHI, Jibo YUE
Abstract: Multi-temporal remote sensing data have become increasingly utilized for large-scale crop phenology identification and classification, particularly for precision management in arid oasis agricultural regions with complex cropping systems. In this study, we developed a deep learning framework integrating Sentinel-2 multi-temporal imagery and normalized difference vegetation index (NDVI) time series for mapping cotton, winter jujube, and tiger nut crops in Tumushuke City, Xinjiang Uygur Autonomous Region, China. We employed the minimum redundancy maximum relevance (mRMR) algorithm for spectral and vegetation index feature selection, followed by Savitzky-Golay (S-G) filtering and double logistic function fitting, to automatically extract the key phenological parameters (start of season (SOS), peak of season (POS), and end of season (EOS)), significantly improving phenological feature extraction accuracy. By incorporating multi-temporal Sentinel-2 data and a multi-scale feature fusion approach, we systematically compared five classification models (multi-layer perceptron (MLP), residual network-18 (ResNet-18), convolutional long short-term memory (ConvLSTM), Transformer, and random forest classifier (RFC)), demonstrating that high-resolution spatial details substantially enhance crop boundary delineation and classification consistency in complex environments. Further optimization of the Transformer’s spatial representation through multi-scale window analysis revealed that the use of 1×1+3×3+5×5 convolutional windows achieves an optimal balance between accuracy and computational efficiency. Independent validation on unseen areas confirmed robust model transferability, with F1 scores of 94.37%, 87.75%, and 86.35% for winter jujube, cotton, and tiger nut, respectively. This study validates the high-precision identification potential of Sentinel-2 temporal data and deep neural networks for multi-crop environments, enabling precise spatial mapping of crop distributions and providing methodological support for smart agricultural decision-making in arid oasis regions.
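The phenology-extraction step can be sketched with standard tools: Savitzky-Golay smoothing of the NDVI series, a double logistic fit, and threshold-based season markers. The double logistic form and the 50%-amplitude rule for SOS/EOS used below are common conventions assumed for illustration, not necessarily the paper's exact settings.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import savgol_filter

def double_logistic(t, vmin, vamp, s1, t1, s2, t2):
    """Common double-logistic NDVI model: green-up minus senescence sigmoid."""
    return vmin + vamp * (1 / (1 + np.exp(-s1 * (t - t1)))
                          - 1 / (1 + np.exp(-s2 * (t - t2))))

def extract_phenology(doy, ndvi):
    """Return (SOS, POS, EOS) in day-of-year from one NDVI time series."""
    smooth = savgol_filter(ndvi, window_length=7, polyorder=2)
    p0 = [smooth.min(), np.ptp(smooth), 0.1,
          doy[len(doy) // 3], 0.1, doy[2 * len(doy) // 3]]   # rough initial guess
    params, _ = curve_fit(double_logistic, doy, smooth, p0=p0, maxfev=10000)
    grid = np.linspace(doy[0], doy[-1], 365)
    curve = double_logistic(grid, *params)
    pos = grid[np.argmax(curve)]                  # peak of season
    half = curve.min() + 0.5 * np.ptp(curve)      # 50%-amplitude threshold
    above = grid[curve >= half]
    return above[0], pos, above[-1]               # SOS, POS, EOS
```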