Journal articles on the topic 'Plant segmentation'

Listed below are the top 50 journal articles on the topic 'Plant segmentation'.

1

Murray, Carl, and Mark O'Malley. "Segmentation of plant cell pictures." Image and Vision Computing 11, no. 3 (April 1993): 155–62. http://dx.doi.org/10.1016/0262-8856(93)90054-k.

2

Mahajan, Vatsal, Dilip Jain, and Abhinav Dua. "Plant Leaf Segmentation Invariant of Background." International Journal of Computer & Organization Trends 12, no. 1 (September 25, 2014): 24–26. http://dx.doi.org/10.14445/22492593/ijcot-v12p305.

3

Diao, Zhi Hua, Yin Mao Song, Huan Wang, and Yun Peng Wang. "Study Surveys on Image Segmentation of Plant Disease Spot." Advanced Materials Research 542-543 (June 2012): 1047–50. http://dx.doi.org/10.4028/www.scientific.net/amr.542-543.1047.

Abstract:
Segmentation is a fundamental component of many image-processing applications. Various algorithms were proposed so far for segmentation of plant disease images. The researchers raised some corresponding solutions to different characteristics of disease spot, and these algorithms are continually improved to enhance the speed and veracity. Based on current progress, this paper gives a study on the image segmentation classification. In addition, this article also makes a comprehensive expatiation on how to solve the problem of plant disease spot by using image segmentation techniques. In the end, open problems and future trend of segmentation algorithm were discussed.
4

Zhong, Xusheng. "Plant Point Cloud Segmentation Based on Dynamic Graph Convolution Network." Computer Science and Application 12, no. 03 (2022): 690–96. http://dx.doi.org/10.12677/csa.2022.123070.

5

Cao, Liying, Hongda Li, Helong Yu, Guifen Chen, and Heshu Wang. "Plant Leaf Segmentation and Phenotypic Analysis Based on Fully Convolutional Neural Network." Applied Engineering in Agriculture 37, no. 5 (2021): 929–40. http://dx.doi.org/10.13031/aea.14495.

Abstract:
Highlights: The U-Net segmentation network is modified to reduce the loss of segmentation accuracy; the number of U-Net layers is reduced, the loss function is modified, and dropout is added to the output layer; leaf morphological and color features can be extracted well after segmentation. Abstract: From the perspective of computer vision, the shortcut to extracting phenotypic information from a single crop in the field is image segmentation. Plant segmentation is affected by the background environment and illumination. Using deep learning to combine depth maps with multi-view images can achieve high-throughput image processing. This article proposes an improved U-Net segmentation network based on small-sample data enhancement, reconstructing the U-Net model by optimizing the model framework, activation function, and loss function. It is used to realize automatic segmentation of plant leaf images and to extract relevant feature parameters. Experimental results show that the improved model provides reliable segmentation results across different leaf sizes, lighting conditions, backgrounds, and plant leaves. The pixel-by-pixel segmentation accuracy reaches 0.94. Compared with traditional methods, this network achieves robust and high-throughput image segmentation. The method is expected to provide key technical support and practical tools for top-view image processing, Unmanned Aerial Vehicle phenotype extraction, and field phenotyping platforms. Keywords: Deep learning, Fully convolutional neural network, Image segmentation, Phenotype analysis, U-Net.
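
For orientation, a minimal PyTorch sketch of a U-Net-style encoder-decoder with dropout near the output layer is shown below. It is an illustrative toy under assumed layer sizes, not the authors' modified architecture, and the shape check at the end uses a dummy input.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # two 3x3 convolutions with ReLU, the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(3, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.drop = nn.Dropout2d(0.5)            # dropout near the output, as the highlights suggest
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(self.drop(d1))

# quick shape check on a dummy RGB image
out = TinyUNet()(torch.randn(1, 3, 128, 128))
print(out.shape)  # torch.Size([1, 2, 128, 128])
```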
6

Bellapu, Rajendra Prasad, et al. "Performance Comparison of Unsupervised Segmentation Algorithms on Rice, Groundnut, and Apple Plant Leaf Images." Information Technology in Industry 9, no. 2 (April 13, 2021): 1090–105. http://dx.doi.org/10.17762/itii.v9i2.457.

Abstract:
This paper focuses on plant leaf image segmentation by considering the aspects of various unsupervised segmentation techniques for automatic plant leaf disease detection. The segmented plant leaves are crucial in the process of automatic disease detection, quantification, and classification of plant diseases. Accurate and efficient assessment of plant diseases is required to avoid economic, social, and ecological losses. This may not be easy to achieve in practice due to multiple factors, and it is challenging to segment the affected area from images with complex backgrounds. Thus, robust semantic segmentation for automatic recognition and analysis of plant leaf disease is strongly demanded in precision agriculture, which calls for an accurate and reliable technique for plant leaf segmentation. We propose a hybrid variant that incorporates Graph Cut (GC) and Multi-Level Otsu (MOTSU) in this paper. We compare the segmentation performance of various unsupervised segmentation algorithms on rice, groundnut, and apple plant leaf images. Boundary Displacement error (BDe), Global Consistency error (GCe), Variation of Information (VoI), and Probability Rand index (PRi) are the index metrics used to evaluate the performance of the proposed model. Comparison of the simulation outcomes demonstrates that our proposed technique, Graph Cut based Multi-level Otsu (GCMO), provides better segmentation results than other existing unsupervised algorithms.
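
For reference, the multi-level Otsu component named above can be reproduced in a few lines with scikit-image. The sketch below covers only plain multi-level Otsu thresholding, not the authors' hybrid graph-cut variant (GCMO); the file name and class count are placeholders.

```python
import numpy as np
from skimage import io, color
from skimage.filters import threshold_multiotsu

# Load a leaf image (placeholder path, assumed RGB) and work on a grayscale version.
image = io.imread("leaf.png")
gray = color.rgb2gray(image[..., :3])

# Multi-level Otsu: find two thresholds that split the histogram into three classes
# (e.g., background, healthy tissue, lesion).
thresholds = threshold_multiotsu(gray, classes=3)
regions = np.digitize(gray, bins=thresholds)  # label map with values 0, 1, 2
print(thresholds, regions.shape)
```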
7

Sun, Guiling, Xinglong Jia, and Tianyu Geng. "Plant Diseases Recognition Based on Image Processing Technology." Journal of Electrical and Computer Engineering 2018 (2018): 1–7. http://dx.doi.org/10.1155/2018/6070129.

Abstract:
A new image recognition system based on multiple linear regression is proposed. In particular, there are a number of innovations in the image segmentation and recognition system. In image segmentation, an improved histogram segmentation method that can calculate the threshold automatically and accurately is proposed. Meanwhile, the region growing method and true color image processing are combined with this system to improve its accuracy and intelligence. In creating the recognition system, multiple linear regression and image feature extraction are utilized. After evaluating the results on different image training libraries, the system is shown to have effective image recognition ability, high precision, and reliability.
8

Wang, Yi, and Lihong Xu. "Unsupervised segmentation of greenhouse plant images based on modified Latent Dirichlet Allocation." PeerJ 6 (June 28, 2018): e5036. http://dx.doi.org/10.7717/peerj.5036.

Abstract:
Agricultural greenhouse plant images with complicated scenes are difficult to precisely manually label. The appearance of leaf disease spots and mosses increases the difficulty in plant segmentation. Considering these problems, this paper proposed a statistical image segmentation algorithm MSBS-LDA (Mean-shift Bandwidths Searching Latent Dirichlet Allocation), which can perform unsupervised segmentation of greenhouse plants. The main idea of the algorithm is to take advantage of the language model LDA (Latent Dirichlet Allocation) to deal with image segmentation based on the design of spatial documents. The maximum points of probability density function in image space are mapped as documents and Mean-shift is utilized to fulfill the word-document assignment. The proportion of the first major word in word frequency statistics determines the coordinate space bandwidth, and the spatial LDA segmentation procedure iteratively searches for optimal color space bandwidth in the light of the LUV distances between classes. In view of the fruits in plant segmentation result and the ever-changing illumination condition in greenhouses, an improved leaf segmentation method based on watershed is proposed to further segment the leaves. Experiment results show that the proposed methods can segment greenhouse plants and leaves in an unsupervised way and obtain a high segmentation accuracy together with an effective extraction of the fruit part.
9

Guadarrama, Lili, Carlos Paredes, and Omar Mercado. "Plant Disease Diagnosis in the Visible Spectrum." Applied Sciences 12, no. 4 (February 20, 2022): 2199. http://dx.doi.org/10.3390/app12042199.

Abstract:
A simple and robust methodology for plant disease diagnosis using images in the visible spectrum of plants, even in uncontrolled environments, is presented for possible use in mobile applications. This strategy is divided into two main parts: on the one hand, the segmentation of the plant, and on the other hand, the identification of color associated with diseases. Gaussian mixture models and probabilistic saliency segmentation are used to accurately segment the plant from the background of an image, and HSV thresholds are used in order to achieve the identification and quantification of the colors associated with the diseases. Proper identification of the colors associated with diseases of interest combined with adequate segmentation of the plant and the background produces a robust diagnosis in a wide range of scenarios.
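
A rough Python sketch of the two-stage idea described in this abstract follows, assuming OpenCV and scikit-learn. The Gaussian mixture is fit directly on pixel colors and the HSV bounds are placeholder values, not the saliency-based model or calibrated thresholds from the paper.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

img = cv2.imread("plant.jpg")                      # placeholder path, BGR image
pixels = img.reshape(-1, 3).astype(np.float32)

# Stage 1: model pixel colors with a 2-component GMM and take the greener
# component as "plant" (a crude stand-in for the paper's plant/background model).
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
labels = gmm.predict(pixels).reshape(img.shape[:2])
plant_comp = int(np.argmax(gmm.means_[:, 1]))      # component with the higher green (channel 1) mean
plant_mask = (labels == plant_comp).astype(np.uint8)

# Stage 2: inside the plant mask, count pixels whose HSV color falls in a
# "disease" range (yellow/brown placeholders, not the paper's bounds).
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
disease = cv2.inRange(hsv, (10, 60, 60), (30, 255, 255))
disease_ratio = (disease > 0)[plant_mask == 1].mean()
print(f"approx. diseased fraction of plant area: {disease_ratio:.2%}")
```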
10

Li, Dawei, Jinsheng Li, Shiyu Xiang, and Anqi Pan. "PSegNet: Simultaneous Semantic and Instance Segmentation for Point Clouds of Plants." Plant Phenomics 2022 (May 23, 2022): 1–20. http://dx.doi.org/10.34133/2022/9787643.

Abstract:
Phenotyping of plant growth improves the understanding of complex genetic traits and eventually expedites the development of modern breeding and intelligent agriculture. In phenotyping, segmentation of 3D point clouds of plant organs such as leaves and stems contributes to automatic growth monitoring and reflects the extent of stress received by the plant. In this work, we first proposed the Voxelized Farthest Point Sampling (VFPS), a novel point cloud downsampling strategy, to prepare our plant dataset for training of deep neural networks. Then, a deep learning network, PSegNet, was specially designed for segmenting point clouds of several species of plants. The effectiveness of PSegNet originates from three new modules including the Double-Neighborhood Feature Extraction Block (DNFEB), the Double-Granularity Feature Fusion Module (DGFFM), and the Attention Module (AM). After training on the plant dataset prepared with VFPS, the network can simultaneously realize the semantic segmentation and the leaf instance segmentation for three plant species. Compared with several mainstream networks such as PointNet++, ASIS, SGPN, and PlantNet, PSegNet obtained the best segmentation results quantitatively and qualitatively. In semantic segmentation, PSegNet achieved 95.23%, 93.85%, 94.52%, and 89.90% for the mean Prec, Rec, F1, and IoU, respectively. In instance segmentation, PSegNet achieved 88.13%, 79.28%, 83.35%, and 89.54% for the mPrec, mRec, mCov, and mWCov, respectively.
11

Thesma, Vaishnavi, and Javad Mohammadpour Velni. "Plant Root Phenotyping Using Deep Conditional GANs and Binary Semantic Segmentation." Sensors 23, no. 1 (December 28, 2022): 309. http://dx.doi.org/10.3390/s23010309.

Abstract:
This paper develops an approach to perform binary semantic segmentation on Arabidopsis thaliana root images for plant root phenotyping using a conditional generative adversarial network (cGAN) to address pixel-wise class imbalance. Specifically, we use Pix2PixHD, an image-to-image translation cGAN, to generate realistic and high resolution images of plant roots and annotations similar to the original dataset. Furthermore, we use our trained cGAN to triple the size of our original root dataset to reduce pixel-wise class imbalance. We then feed both the original and generated datasets into SegNet to semantically segment the root pixels from the background. Furthermore, we postprocess our segmentation results to close small, apparent gaps along the main and lateral roots. Lastly, we present a comparison of our binary semantic segmentation approach with the state-of-the-art in root segmentation. Our efforts demonstrate that cGAN can produce realistic and high resolution root images, reduce pixel-wise class imbalance, and our segmentation model yields high testing accuracy (of over 99%), low cross entropy error (of less than 2%), high Dice Score (of near 0.80), and low inference time for near real-time processing.
12

Scharr, Hanno, Massimo Minervini, Andrew P. French, Christian Klukas, David M. Kramer, Xiaoming Liu, Imanol Luengo, et al. "Leaf segmentation in plant phenotyping: a collation study." Machine Vision and Applications 27, no. 4 (December 12, 2015): 585–606. http://dx.doi.org/10.1007/s00138-015-0737-3.

13

Quiñones, Rubi, Francisco Munoz-Arriola, Sruti Das Choudhury, and Ashok Samal. "Multi-feature data repository development and analytics for image cosegmentation in high-throughput plant phenotyping." PLOS ONE 16, no. 9 (September 2, 2021): e0257001. http://dx.doi.org/10.1371/journal.pone.0257001.

Abstract:
Cosegmentation is a newly emerging computer vision technique used to segment an object from the background by processing multiple images at the same time. Traditional plant phenotyping analysis uses thresholding segmentation methods which result in high segmentation accuracy. Although there are proposed machine learning and deep learning algorithms for plant segmentation, predictions rely on the specific features being present in the training set. The need for a multi-featured dataset and analytics for cosegmentation becomes critical to better understand and predict plants’ responses to the environment. High-throughput phenotyping produces an abundance of data that can be leveraged to improve segmentation accuracy and plant phenotyping. This paper introduces four datasets consisting of two plant species, Buckwheat and Sunflower, each split into control and drought conditions. Each dataset has three modalities (Fluorescence, Infrared, and Visible) with 7 to 14 temporal images that are collected in a high-throughput facility at the University of Nebraska-Lincoln. The four datasets (which will be collected under the CosegPP data repository in this paper) are evaluated using three cosegmentation algorithms: Markov random fields-based, Clustering-based, and Deep learning-based cosegmentation, and one commonly used segmentation approach in plant phenotyping. The integration of CosegPP with advanced cosegmentation methods will be the latest benchmark in comparing segmentation accuracy and finding areas of improvement for cosegmentation methodology.
14

Moëll, Mattias K., and Lloyd A. Donaldson. "COMPARISON OF SEGMENTATION METHODS FOR DIGITAL IMAGE ANALYSIS OF CONFOCAL MICROSCOPE IMAGES TO MEASURE TRACHEID CELL DIMENSIONS." IAWA Journal 22, no. 3 (2001): 267–88. http://dx.doi.org/10.1163/22941932-90000284.

Abstract:
Image analysis is a common tool for measuring tracheid cell dimensions. When analyzing a digital image of a transverse cross section of wood, one of the initial procedures is that of segmentation. This involves classifying a picture element (pixel) as either cell wall or lumen. The accuracy of tracheid measurements is dependent on how well the result of the segmentation procedure corresponds to the true distributions of cell wall or lumen pixels. In this paper a comparison of segmentation methods is given. The effect of segmentation method on measurements is investigated and the performance of each method is discussed. We demonstrate that automated segmentation methods remove observer bias and are thus capable of more reproducible results. The contrast for confocal microscope images is of such quality that one of the fastest and simplest automatic segmentation methods may be used.
15

Li, Dawei, Yan Cao, Xue-song Tang, Siyuan Yan, and Xin Cai. "Leaf Segmentation on Dense Plant Point Clouds with Facet Region Growing." Sensors 18, no. 11 (October 25, 2018): 3625. http://dx.doi.org/10.3390/s18113625.

Abstract:
Leaves account for the largest proportion of all organ areas for most kinds of plants, and comprise the main part of the photosynthetically active material in a plant. Observation of individual leaves can help to recognize their growth status and measure complex phenotypic traits. Current image-based leaf segmentation methods have problems due to highly restricted species and vulnerability toward canopy occlusion. In this work, we propose an individual leaf segmentation approach for dense plant point clouds using facet over-segmentation and facet region growing. The approach can be divided into three steps: (1) point cloud pre-processing, (2) facet over-segmentation, and (3) facet region growing for individual leaf segmentation. The experimental results show that the proposed method is effective and efficient in segmenting individual leaves from 3D point clouds of greenhouse ornamentals such as Epipremnum aureum, Monstera deliciosa, and Calathea makoyana, and the average precision and recall are both above 90%. The results also reveal the wide applicability of the proposed methodology for point clouds scanned from different kinds of 3D imaging systems, such as stereo vision and Kinect v2. Moreover, our method is potentially applicable in a broad range of applications that aim at segmenting regular surfaces and objects from a point cloud.
16

Guo, Jingwei, and Lihong Xu. "Automatic Segmentation for Plant Leaves via Multiview Stereo Reconstruction." Mathematical Problems in Engineering 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/9845815.

Abstract:
This paper presents a new method for automatic plant point cloud acquisition and leaf segmentation. A quasi-dense point cloud of the plant is obtained from multiview stereo reconstruction based on surface expansion. In order to overcome the negative effects of complex natural light changes and to obtain a more accurate plant point cloud, the Adaptive Normalized Cross-Correlation algorithm is used in calculating the matching cost between two images, which is robust to radiometric factors and can reduce the fattening effect around boundaries. In the stage of segmentation for each single leaf, an improved region growing method based on a fully connected conditional random field (CRF) is proposed to separate connected leaves with similar color. The method has three steps: boundary erosion, initial segmentation, and segmentation refinement. First, the edge of each leaf point cloud is eroded to remove the connectivity between leaves. Then the leaves are initially segmented by a region growing algorithm based on local surface normal and curvature. Finally, an efficient CRF inference method based on mean field approximation is applied to remove small isolated regions. Experimental results show that our multiview stereo reconstruction method is robust to illumination changes and can obtain accurate color point clouds, and the improved region growing method based on CRF can effectively separate connected leaves in the obtained point cloud.
17

Li, Yinglun, Weiliang Wen, Xinyu Guo, Zetao Yu, Shenghao Gu, Haipeng Yan, and Chunjiang Zhao. "High-throughput phenotyping analysis of maize at the seedling stage using end-to-end segmentation network." PLOS ONE 16, no. 1 (January 12, 2021): e0241528. http://dx.doi.org/10.1371/journal.pone.0241528.

Abstract:
Image processing technologies are available for high-throughput acquisition and analysis of phenotypes for crop populations, which is of great significance for crop growth monitoring, evaluation of seedling condition, and cultivation management. However, existing methods rely on empirical segmentation thresholds, and thus can have insufficient accuracy of extracted phenotypes. Taking maize as an example crop, we propose a phenotype extraction approach from top-view images at the seedling stage. An end-to-end segmentation network, named PlantU-net, which uses a small amount of training data, was explored to realize automatic segmentation of top-view images of a maize population at the seedling stage. Morphological and color-related phenotypes were automatically extracted, including maize shoot coverage, circumscribed radius, aspect ratio, and plant azimuth plane angle. The results show that the approach can segment the shoots at the seedling stage from top-view images, obtained either from the UAV or tractor-based high-throughput phenotyping platform. The average segmentation accuracy, recall rate, and F1 score are 0.96, 0.98, and 0.97, respectively. The extracted phenotypes, including maize shoot coverage, circumscribed radius, aspect ratio, and plant azimuth plane angle, are highly correlated with manual measurements (R2 = 0.96–0.99). This approach requires less training data and thus has better expansibility. It provides practical means for high-throughput phenotyping analysis of early growth stage crop populations.
18

Li, Bin, and Chenhua Guo. "MASPC_Transform: A Plant Point Cloud Segmentation Network Based on Multi-Head Attention Separation and Position Code." Sensors 22, no. 23 (November 27, 2022): 9225. http://dx.doi.org/10.3390/s22239225.

Abstract:
Plant point cloud segmentation is an important step in 3D plant phenotype research. Because the stems, leaves, flowers, and other organs of plants are often intertwined and small in size, plant point cloud segmentation is more challenging than other segmentation tasks. In this paper, we propose MASPC_Transform, a novel plant point cloud segmentation network based on multi-head attention separation and position code. The proposed MASPC_Transform establishes connections for similar point clouds scattered in different areas of the point cloud space through multiple attention heads. In order to avoid the aggregation of multiple attention heads, we propose a multi-head attention separation loss based on spatial similarity, so that the attention positions of different attention heads can be dispersed as much as possible. In order to reduce the impact of point cloud disorder and irregularity on feature extraction, we propose a new point cloud position coding method, and use the position coding network based on this method in the local and global feature extraction modules of MASPC_Transform. We evaluate our MASPC_Transform on the ROSE_X dataset. Compared with the state-of-the-art approaches, the proposed MASPC_Transform achieved better segmentation results.
19

Altukhov, V. G. "Plant disease severity estimation by computer vision methods." Siberian Herald of Agricultural Science 51, no. 2 (June 7, 2021): 107–12. http://dx.doi.org/10.26898/0370-8799-2021-2-13.

Abstract:
The first stage results within the framework of the thesis “Investigation of computer vision methods and algorithms in the field of plant diseases detection” are presented. The analysis of the work related to the automatic assessment of plant disease severity was carried out. It was established that for solving problems in this field, convolution neural networks are promising methods, which are currently superior to classical methods of computer vision in terms of accuracy. To assess the severity degree, classification and segmentation architectures of convolutional neural networks are used. Classification architectures are able to take into account disease visual features at different stages of the disease development, but information about the actual affected area is unavailable. On the other hand, solutions based on segmentation architectures provide actual data on the lesion area, but do not grade severity levels according to disease visual features. Based on the result of the research into the application of convolutional neural networks and options for their use, the goal of this study was determined, which is to develop an automatic system capable of determining the lesion area, as well as to take into account disease visual features and the type of immunological reaction of the plant at different stages of disease progress. It is planned to build a system based on the segmentation architecture of a convolutional neural network, which will produce multi-class image segmentation. Such a network is able to divide image pixels into several classes: background, healthy leaf area, affected leaf area. In turn, the class "affected leaf area" will include several subclasses corresponding to the disease visual features at different stages of disease progress.
20

Itakura, Kenta, and Fumiki Hosoi. "Automatic Leaf Segmentation for Estimating Leaf Area and Leaf Inclination Angle in 3D Plant Images." Sensors 18, no. 10 (October 22, 2018): 3576. http://dx.doi.org/10.3390/s18103576.

Abstract:
Automatic and efficient plant monitoring offers accurate plant management. Construction of three-dimensional (3D) models of plants and acquisition of their spatial information is an effective method for obtaining plant structural parameters. Here, 3D images of leaves constructed from multiple scenes taken from different positions were segmented automatically for the automatic retrieval of leaf areas and inclination angles. First, for the initial segmentation, leaf images were viewed from the top, and the leaves in the top-view images were segmented using the distance transform and the watershed algorithm. Next, the images of leaves after the initial segmentation were reduced by 90%, and the seed regions for each leaf were produced. The seed region was re-projected onto the 3D images, and each leaf was segmented by expanding the seed region with the 3D information. After leaf segmentation, the leaf area of each leaf and its inclination angle were estimated accurately via a voxel-based calculation. As a result, leaf area and leaf inclination angle were estimated accurately after automatic leaf segmentation. This method for automatic plant structure analysis allows accurate and efficient plant breeding and growth management.
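
The distance-transform-plus-watershed step used for the initial top-view segmentation is a standard recipe; a compact 2D illustration with scikit-image is sketched below. It is a simplification of the 3D pipeline, with an assumed green-dominance mask and arbitrary parameter values.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import io
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Binary plant mask from a top-view image (placeholder rule: green-dominant pixels).
rgb = io.imread("topview.png")[..., :3].astype(float)
mask = (rgb[..., 1] > rgb[..., 0]) & (rgb[..., 1] > rgb[..., 2])

# Distance transform: pixels far from the background are likely leaf centres.
distance = ndi.distance_transform_edt(mask)

# Use local maxima of the distance map as one seed per leaf, then flood with watershed.
coords = peak_local_max(distance, min_distance=20, labels=mask)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
leaf_labels = watershed(-distance, markers, mask=mask)
print(leaf_labels.max(), "leaf regions found")
```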
21

Kitzler, Florian, Helmut Wagentristl, Reinhard W. Neugschwandtner, Andreas Gronauer, and Viktoria Motsch. "Influence of Selected Modeling Parameters on Plant Segmentation Quality Using Decision Tree Classifiers." Agriculture 12, no. 9 (September 6, 2022): 1408. http://dx.doi.org/10.3390/agriculture12091408.

Abstract:
Modern precision agriculture applications increasingly rely on stable computer vision outputs. An important computer vision task is to discriminate between soil and plant pixels, which is called plant segmentation. For this task, supervised learning techniques, such as decision tree classifiers (DTC), support vector machines (SVM), or artificial neural networks (ANN) are increasing in popularity. The selection of training data is of utmost importance in these approaches as it influences the quality of the resulting models. We investigated the influence of three modeling parameters, namely proportion of plant pixels (plant cover), criteria on what pixel to choose (pixel selection), and number/type of features (input features) on the segmentation quality using DTCs. Our findings show that plant cover and, to a minor degree, input features have a significant impact on segmentation quality. We can state that the overperformance of multi-feature input decision tree classifiers over threshold-based color index methods can be explained to a high degree by the more balanced training data. Single-feature input decision tree classifiers can compete with state-of-the-art models when the same training data are provided. This study is the first step in a systematic analysis of influence parameters of such plant segmentation models.
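
To make the pixel-classification setting concrete, here is a small scikit-learn sketch of a decision tree classifier trained on per-pixel color features. The feature set, file names, and labelling protocol are assumptions for illustration, not the study's actual training procedure.

```python
import numpy as np
import cv2
from sklearn.tree import DecisionTreeClassifier

def pixel_features(bgr):
    """Per-pixel features: B, G, R and the excess-green index 2G - R - B."""
    b, g, r = [bgr[..., i].astype(float) for i in range(3)]
    exg = 2 * g - r - b
    return np.stack([b, g, r, exg], axis=-1).reshape(-1, 4)

# Assumed inputs: a training image and a hand-made plant/soil mask of the same size.
train_img = cv2.imread("train.jpg")
train_mask = cv2.imread("train_mask.png", cv2.IMREAD_GRAYSCALE) > 0  # True = plant

X = pixel_features(train_img)
y = train_mask.reshape(-1)

# Multi-feature DTC; for a single-feature variant, keep only the ExG column X[:, 3:].
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

# Apply to a new image to obtain a plant segmentation mask.
test_img = cv2.imread("test.jpg")
pred_mask = clf.predict(pixel_features(test_img)).reshape(test_img.shape[:2])
```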
22

Bande, Priyanka, and Mr Kranti Dewangan. "Plant Disease Detection using Deep Learning." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 858–65. http://dx.doi.org/10.22214/ijraset.2022.43900.

Abstract:
Farming is critical to the nation's economy and progress. Precision Farming (PA), on the other hand, is still in its development when it comes to technology-driven growth. Various plant diseases have caused pain to untold millions of people around the world over the years, with an estimated annual yield loss of 14% globally. Computerized disease segmentation and diagnosis based on leaf photos has the potential to be more effective than the current method. Image capture, preprocessing, and segmentation are followed by augmentation, feature extraction, and classification using models for automatic plant disease diagnosis. This project employs the VGG-16, ResNet-50, AlexNet, DenseNet-169, and InceptionV3 deep learning models to identify plant illnesses from photos in the Plant Village Dataset and reliably classify them into two classes. The results of the experiment revealed that ResNet-50 achieved the highest accuracy of 97.80% compared to the other deep learning models applied for disease classification. Keywords: VGG16, ResNet50, Inception V3, CNN, GoogleNet, AlexNet
23

Kakran, Nandini, Pratik Singh, and K. P. "Detection of Disease in Plant Leaf using Image Segmentation." International Journal of Computer Applications 178, no. 35 (July 18, 2019): 29–32. http://dx.doi.org/10.5120/ijca2019919229.

24

Fathu Nisha, M., P. Vasuki, and S. Mohamed Mansoor Roomi. "Fabric Defect Detection Using the Sensitive Plant Segmentation Algorithm." Fibres and Textiles in Eastern Europe 28, no. 3(141) (June 30, 2020): 84–87. http://dx.doi.org/10.5604/01.3001.0013.9025.

Abstract:
Fabric quality control and defect detection are playing a crucial role in the textile industry with the development of high customer demand in the fashion market. This work presents fabric defect detection using a sensitive plant segmentation algorithm (SPSA), which is developed from the sensitive behaviour of the sensitive plant, biologically named Mimosa pudica. This method consists of two stages: the first stage enhances the contrast of the defective fabric image, and the second stage segments the fabric defects with the aid of the SPSA. The SPSA proposed was developed for defective pixel identification in non-uniform patterns of fabrics. In this paper, the SPSA was built through checking with devised conditions, correlation and error probability. Every pixel was checked with the algorithm developed to be marked as either a defective or non-defective pixel. The SPSA proposed was tested on different types of fabric defect databases, showing a much improved performance over existing methods.
25

DAYANG, Paul, and Armandine Sorel KOUYIM MELI. "Evaluation of Image Segmentation Algorithms for Plant Disease Detection." International Journal of Image, Graphics and Signal Processing 13, no. 5 (October 8, 2021): 14–26. http://dx.doi.org/10.5815/ijigsp.2021.05.02.

26

Xiang, Rong. "Image segmentation for whole tomato plant recognition at night." Computers and Electronics in Agriculture 154 (November 2018): 434–42. http://dx.doi.org/10.1016/j.compag.2018.09.034.

27

Thimmegowda, Thirthe Gowda Mallinathapura, and Chandrika Jayaramaiah. "Cluster-based segmentation for tobacco plant detection and classification." Bulletin of Electrical Engineering and Informatics 12, no. 1 (February 1, 2023): 75–85. http://dx.doi.org/10.11591/eei.v12i1.4388.

Abstract:
Tobacco is one of the major economic crops in the agriculture sector. It is essential to detect tobacco plants using unmanned aerial vehicle (UAV) images for improved crop yield, and this plays an important role in the early treatment of tobacco plants. The proposed research work is carried out in three phases: in the first phase, we collect images from UAVs and apply the CIE (Commission Internationale de l'Éclairage) L*a*b* colour space model for pre-processing and segmentation. Then two prominent motion descriptors, namely the histogram of flow (HOF) and the motion boundary histogram (MBH), are combined with the optimal histogram of oriented gradients (HOG) descriptor for exploring optimal motion trajectory and spatial measurements. Finally, the spatial variations with respect to scale and illumination changes are incorporated using the optimal HOG descriptor. Here both dense motion patterns and HOG are refined using hierarchical feature selection based on principal component analysis (PCA). The proposed model is trained and evaluated on different tobacco UAV image datasets, and a comparative analysis of different machine learning (ML) algorithms is performed. The proposed model achieves good performance with 95% accuracy and 92% sensitivity.
28

Zhang, Shanwen, and Chuanlei Zhang. "Modified U-Net for plant diseased leaf image segmentation." Computers and Electronics in Agriculture 204 (January 2023): 107511. http://dx.doi.org/10.1016/j.compag.2022.107511.

29

Saxena, Onkar, Shikha Agrawal, and Sanjay Silakari. "Plant Disease Detection Techniques based on Deep Learning Models: A Review." Computer Science & Engineering: An International Journal 12, no. 1 (February 28, 2022): 147–56. http://dx.doi.org/10.5121/cseij.2022.12115.

Abstract:
Plants must be checked at an early stage of their life cycle in order to avoid illnesses. Visual observation, which takes longer, and costly expertise are the conventional approach utilised for this monitoring. Therefore, illness detection systems need to be automated in order to speed up this procedure. This study analyses the possibility of technologies for the identification of pest leaf diseases in plants to support agricultural growth. It covers many processes, such as image retrieval, image segmentation, extraction of features and classification. Two key phases comprise plant disease detection technology: segmentation of an open input to detect the ill portion and an extraction approach to extract the image feature and classify the functionality that is removed using different classifiers. The technology consists of two important steps. In this study, segmentation, characteristic removal, and classification approaches are examined and clarified from the perspective of different parameters.
30

Altukhov, V. G. "Plant leaf images computerized segmentation." IOP Conference Series: Earth and Environmental Science 957, no. 1 (January 1, 2022): 012002. http://dx.doi.org/10.1088/1755-1315/957/1/012002.

Abstract:
In this paper the comparison of RGB, HSV and CIELab color spaces is considered in view of diseased leaf image segmentation by the color thresholding method. In such tasks HSV and CIELab outperform RGB. A thresholding method based upon the HSV or CIELab color space can be applied to measuring total leaf area and the areas of diseased and healthy surfaces, as well as to composing datasets for machine learning.
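
A minimal OpenCV illustration of the color-threshold comparison discussed here is given below; the HSV ranges and the L*a*b* rule are placeholder values for a green leaf with brownish lesions, not thresholds from the paper.

```python
import cv2
import numpy as np

img = cv2.imread("diseased_leaf.jpg")                       # placeholder path, BGR image

# HSV thresholding: green hues for healthy tissue, yellow/brown hues for lesions.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
healthy_hsv = cv2.inRange(hsv, (25, 40, 40), (95, 255, 255))
lesion_hsv = cv2.inRange(hsv, (8, 40, 40), (24, 255, 255))

# CIELab alternative: in 8-bit OpenCV Lab, a* < 128 indicates green-ish pixels.
lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)
green_lab = (lab[..., 1] < 128).astype(np.uint8) * 255

# Simple area statistics of the kind used for measuring diseased/healthy surfaces.
leaf_area = int(np.count_nonzero(healthy_hsv | lesion_hsv))
lesion_area = int(np.count_nonzero(lesion_hsv))
print("lesion fraction:", lesion_area / max(leaf_area, 1))
```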
31

I., Chandra. "Optimization Techniques for Detection and Recognition of Plant Leaf Diseases Using IOT and Image Segmentation." Revista Gestão Inovação e Tecnologias 11, no. 4 (July 10, 2021): 297–308. http://dx.doi.org/10.47059/revistageintec.v11i4.2108.

32

Yasrab, Robail, Jincheng Zhang, Polina Smyth, and Michael P. Pound. "Predicting Plant Growth from Time-Series Data Using Deep Learning." Remote Sensing 13, no. 3 (January 20, 2021): 331. http://dx.doi.org/10.3390/rs13030331.

Abstract:
Phenotyping involves the quantitative assessment of the anatomical, biochemical, and physiological plant traits. Natural plant growth cycles can be extremely slow, hindering the experimental processes of phenotyping. Deep learning offers a great deal of support for automating and addressing key plant phenotyping research issues. Machine learning-based high-throughput phenotyping is a potential solution to the phenotyping bottleneck, promising to accelerate the experimental cycles within phenomic research. This research presents a study of deep networks’ potential to predict plants’ expected growth, by generating segmentation masks of root and shoot systems into the future. We adapt an existing generative adversarial predictive network into this new domain. The results show an efficient plant leaf and root segmentation network that provides predictive segmentation of what a leaf and root system will look like at a future time, based on time-series data of plant growth. We present benchmark results on two public datasets of Arabidopsis (A. thaliana) and Brassica rapa (Komatsuna) plants. The experimental results show strong performance, and the capability of proposed methods to match expert annotation. The proposed method is highly adaptable, trainable (transfer learning/domain adaptation) on different plant species and mutations.
33

Rawat, Shivangana, Akshay L. Chandra, Sai Vikas Desai, Vineeth N. Balasubramanian, Seishi Ninomiya, and Wei Guo. "How Useful Is Image-Based Active Learning for Plant Organ Segmentation?" Plant Phenomics 2022 (February 24, 2022): 1–11. http://dx.doi.org/10.34133/2022/9795275.

Abstract:
Training deep learning models typically requires a huge amount of labeled data which is expensive to acquire, especially in dense prediction tasks such as semantic segmentation. Moreover, plant phenotyping datasets pose additional challenges of heavy occlusion and varied lighting conditions which makes annotations more time-consuming to obtain. Active learning helps in reducing the annotation cost by selecting samples for labeling which are most informative to the model, thus improving model performance with fewer annotations. Active learning for semantic segmentation has been well studied on datasets such as PASCAL VOC and Cityscapes. However, its effectiveness on plant datasets has not received much importance. To bridge this gap, we empirically study and benchmark the effectiveness of four uncertainty-based active learning strategies on three natural plant organ segmentation datasets. We also study their behaviour in response to variations in training configurations in terms of augmentations used, the scale of training images, active learning batch sizes, and train-validation set splits.
34

Luo, Zifei, Wenzhu Yang, Ruru Gou, and Yunfeng Yuan. "TransAttention U-Net for Semantic Segmentation of Poppy." Electronics 12, no. 3 (January 17, 2023): 487. http://dx.doi.org/10.3390/electronics12030487.

Abstract:
This work represents a new attempt to use drone aerial photography to detect the illegal cultivation of opium poppy. The key to this task is the precise segmentation of the poppy plant from the captured image. To achieve a segmentation mask close to the real data, it is necessary to extract target areas according to the different morphological characteristics of the poppy plant and reduce complex environmental interference. Based on RGB images, poppy plants, weeds, and background regions are separated individually. Firstly, the pixel features of the poppy plant are enhanced using a hybrid-strategy approach to augment the too-small samples. Secondly, a U-shaped network incorporating the self-attention mechanism is improved to segment the enhanced dataset. In this process, the multi-head self-attention module is enhanced by using relative position encoding to deal with the special morphological characteristics between poppy stem and fruit. The results indicate that the proposed method can segment the poppy plant precisely.
35

Miao, Chenyong, Alejandro Pages, Zheng Xu, Eric Rodene, Jinliang Yang, and James C. Schnable. "Semantic Segmentation of Sorghum Using Hyperspectral Data Identifies Genetic Associations." Plant Phenomics 2020 (February 4, 2020): 1–11. http://dx.doi.org/10.34133/2020/4216373.

Abstract:
This study describes the evaluation of a range of approaches to semantic segmentation of hyperspectral images of sorghum plants, classifying each pixel as either nonplant or belonging to one of the three organ types (leaf, stalk, panicle). While many current methods for segmentation focus on separating plant pixels from background, organ-specific segmentation makes it feasible to measure a wider range of plant properties. Manually scored training data for a set of hyperspectral images collected from a sorghum association population was used to train and evaluate a set of supervised classification models. Many algorithms show acceptable accuracy for this classification task. Algorithms trained on sorghum data are able to accurately classify maize leaves and stalks, but fail to accurately classify maize reproductive organs which are not directly equivalent to sorghum panicles. Trait measurements extracted from semantic segmentation of sorghum organs can be used to identify both genes known to be controlling variation in a previously measured phenotypes (e.g., panicle size and plant height) as well as identify signals for genes controlling traits not previously quantified in this population (e.g., stalk/leaf ratio). Organ level semantic segmentation provides opportunities to identify genes controlling variation in a wide range of morphological phenotypes in sorghum, maize, and other related grain crops.
36

Cai, Maodong, Xiaomei Yi, Guoying Wang, Lufeng Mo, Peng Wu, Christine Mwanza, and Kasanda Ernest Kapula. "Image Segmentation Method for Sweetgum Leaf Spots Based on an Improved DeeplabV3+ Network." Forests 13, no. 12 (December 8, 2022): 2095. http://dx.doi.org/10.3390/f13122095.

Abstract:
This paper discusses a sweetgum leaf-spot image segmentation method based on an improved DeeplabV3+ network to address the low accuracy in plant leaf spot segmentation, problems with the recognition model, insufficient datasets, and slow training speeds. We replaced the backbone feature extraction network of the model's encoder with the MobileNetV2 network, which greatly reduced the amount of calculation performed in the model and improved its calculation speed. Then, the attention mechanism module was introduced into the backbone feature extraction network and the decoder, which further optimized the model's edge recognition effect and improved the model's segmentation accuracy. Given the category imbalance in the sweetgum leaf spot dataset (SLSD), a weighted loss function was introduced, assigning different weights to spots and to the background, to improve the segmentation of disease spot regions in the model. Finally, we graded the degree of the lesions. The experimental results show that the PA, mRecall, and mIoU of the improved model were 94.5%, 85.4%, and 81.3%, respectively, which is superior to the traditional DeeplabV3+, Unet, and Segnet models and other commonly used plant disease semantic segmentation methods. The model shows excellent performance for different degrees of speckle segmentation, demonstrating that this method can effectively improve the model's segmentation performance for sweetgum leaf spots.
37

Baum, Daniel, James C. Weaver, Igor Zlotnikov, David Knötel, Lara Tomholt, and Mason N. Dean. "High-Throughput Segmentation of Tiled Biological Structures using Random-Walk Distance Transforms." Integrative and Comparative Biology 59, no. 6 (July 8, 2019): 1700–1712. http://dx.doi.org/10.1093/icb/icz117.

Abstract:
Various 3D imaging techniques are routinely used to examine biological materials, the results of which are usually a stack of grayscale images. In order to quantify structural aspects of the biological materials, however, they must first be extracted from the dataset in a process called segmentation. If the individual structures to be extracted are in contact or very close to each other, distance-based segmentation methods utilizing the Euclidean distance transform are commonly employed. Major disadvantages of the Euclidean distance transform, however, are its susceptibility to noise (very common in biological data), which often leads to incorrect segmentations (i.e., poor separation of objects of interest), and its limitation of being only effective for roundish objects. In the present work, we propose an alternative distance transform method, the random-walk distance transform, and demonstrate its effectiveness in high-throughput segmentation of three microCT datasets of biological tilings (i.e., structures composed of a large number of similar repeating units). In contrast to the Euclidean distance transform, the random-walk approach represents the global, rather than the local, geometric character of the objects to be segmented and, thus, is less susceptible to noise. In addition, it is directly applicable to structures with anisotropic shape characteristics. Using three case studies—tessellated cartilage from a stingray, the dermal endoskeleton of a starfish, and the prismatic layer of a bivalve mollusc shell—we provide a typical workflow for the segmentation of tiled structures, describe core image processing concepts that are underused in biological research, and show that for each study system, large amounts of biologically-relevant data can be rapidly segmented, visualized, and analyzed.
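
As a loosely related, off-the-shelf illustration of random-walk-based separation of touching objects, the sketch below uses scikit-image's random_walker with seeds taken from distance-map maxima. This is not the authors' random-walk distance transform; the file name and parameters are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import io
from skimage.feature import peak_local_max
from skimage.segmentation import random_walker

# Binary mask of touching units (placeholder: a thresholded microCT slice).
vol = io.imread("slice.png", as_gray=True)
mask = vol > 0.5

# Seeds: one positive label per unit from distance-map maxima; 0 = unlabelled.
distance = ndi.distance_transform_edt(mask)
coords = peak_local_max(distance, min_distance=15, labels=mask)
seeds = np.zeros(mask.shape, dtype=int)
seeds[tuple(coords.T)] = np.arange(1, len(coords) + 1)
seeds[~mask] = -1  # mark background pixels as inactive for the walk

# Random-walker assignment of every in-mask pixel to its most "reachable" seed.
labels = random_walker(mask.astype(float), seeds, beta=130, mode='bf')
```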
38

Apostolidis, Kyriakos D., Theofanis Kalampokas, Theodore P. Pachidis, and Vassilis G. Kaburlasos. "Grapevine Plant Image Dataset for Pruning." Data 7, no. 8 (August 9, 2022): 110. http://dx.doi.org/10.3390/data7080110.

Abstract:
Grapevine pruning is conducted during winter, and it is a very important and expensive task for wine producers managing their vineyard. During grapevine pruning every year, the past year’s canes should be removed and should provide the possibility for new canes to grow and produce grapes. It is a difficult procedure, and it is not yet fully automated. However, some attempts have been made by the research community. Based on the literature, grapevine pruning automation is approximated with the help of computer vision and image processing methods. Despite the attempts that have been made to automate grapevine pruning, the task remains hard for the abovementioned domains. The reason for this is that several challenges such as cane overlapping or complex backgrounds appear. Additionally, there is no public image dataset for this problem which makes it difficult for the research community to approach it. Motivated by the above facts, an image dataset is proposed for grapevine canes’ segmentation for a pruning task. An experimental analysis is also conducted in the proposed dataset, achieving a 67% IoU and 78% F1 score in grapevine cane semantic segmentation with the U-net model.
39

Henke, Michael, Kerstin Neumann, Thomas Altmann, and Evgeny Gladilin. "Semi-Automated Ground Truth Segmentation and Phenotyping of Plant Structures Using k-Means Clustering of Eigen-Colors (kmSeg)." Agriculture 11, no. 11 (November 4, 2021): 1098. http://dx.doi.org/10.3390/agriculture11111098.

Abstract:
Background. Efficient analysis of large image data produced in greenhouse phenotyping experiments is often challenged by a large variability of optical plant and background appearance, which requires advanced classification model methods and reliable ground truth data for their training. In the absence of appropriate computational tools, generation of ground truth data has to be performed manually, which represents a time-consuming task. Methods. Here, we present an efficient GUI-based software solution which reduces the task of plant image segmentation to manual annotation of a small number of image regions automatically pre-segmented using k-means clustering of Eigen-colors (kmSeg). Results. Our experimental results show that in contrast to other supervised clustering techniques k-means enables a computationally efficient pre-segmentation of large plant images in their original resolution. Thereby, the binary segmentation of plant images into fore- and background regions is performed within a few minutes with an average accuracy of 96–99%, validated by a direct comparison with ground truth data. Conclusions. Primarily developed for efficient ground truth segmentation and phenotyping of greenhouse-grown plants, the kmSeg tool can be applied for efficient labeling and quantitative analysis of arbitrary images exhibiting distinctive differences between the colors of fore- and background structures.
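
A toy version of the pre-segmentation idea (k-means clustering in a PCA-rotated "eigen-color" space) is sketched below with scikit-learn; the cluster count and the automatic plant-selection rule are arbitrary stand-ins for the manual cluster annotation performed in the kmSeg tool.

```python
import numpy as np
from skimage import io
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

img = io.imread("tray.png")[..., :3].astype(float) / 255.0   # placeholder path
pixels = img.reshape(-1, 3)

# "Eigen-colors": rotate RGB into its principal components before clustering.
eigen = PCA(n_components=3).fit_transform(pixels)
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(eigen)
label_map = clusters.reshape(img.shape[:2])

# In kmSeg the user assigns clusters to plant/background by hand; here we fake it
# by calling every cluster whose mean green exceeds its mean red "plant".
means = np.array([pixels[clusters == k].mean(axis=0) for k in range(8)])
plant_clusters = np.where(means[:, 1] > means[:, 0])[0]
plant_mask = np.isin(label_map, plant_clusters)
```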
40

Kumar, Sanjay, Sahil Kansal, Monagi H. Alkinani, Ahmed Elaraby, Saksham Garg, Shanthi Natarajan, and Vishnu Sharma. "Segmentation of Spectral Plant Images Using Generative Adversary Network Techniques." Electronics 11, no. 16 (August 20, 2022): 2611. http://dx.doi.org/10.3390/electronics11162611.

Abstract:
The spectral image analysis of complex analytic systems is usually performed in analytical chemistry. Signals associated with the key analytes present in an image scene are extracted during spectral image analysis. Accordingly, the first step in spectral image analysis is to segment the image in order to extract the applicable signals for analysis. In contrast, using traditional methods of image segmentation in chemometrics makes it difficult to extract the relevant signals. None of those approaches incorporate contextual information present in an image scene; therefore, the classification is limited to thresholds or pixels only. An image translation pixel-to-pixel (p2p) method for segmenting spectral images using a generative adversarial network (GAN) is presented in this paper. The p2p GAN comprises two neural network models. During the generation and detection processes, the representation learns how to segment spectral images precisely. For the evaluation of the results, partial least-squares discriminant analysis was used to classify the images based on thresholds and pixels. From the experimental results, it was determined that the GAN-based p2p segmentation performs the best segmentation, with an overall accuracy of 0.98 ± 0.06. This result shows that image processing techniques using deep learning contribute to enhanced spectral image processing.
41

Ma, Lianbo, Kunyuan Hu, Yunlong Zhu, Hanning Chen, and Maowei He. "A Novel Plant Root Foraging Algorithm for Image Segmentation Problems." Mathematical Problems in Engineering 2014 (2014): 1–16. http://dx.doi.org/10.1155/2014/471209.

Abstract:
This paper presents a new type of biologically-inspired global optimization methodology for image segmentation based on plant root foraging behavior, namely the artificial root foraging algorithm (ARFO). The essential motive of ARFO is to imitate the significant characteristics of plant root foraging behavior, including branching, regrowing, and tropisms, in order to construct a heuristic algorithm for multidimensional and multimodal problems. A mathematical model is first designed to abstract various plant root foraging patterns. Then, the basic process of the ARFO algorithm derived from the model is described in detail. When tested against ten benchmark functions, ARFO shows superiority over other state-of-the-art algorithms on several of them. Further, we employed the ARFO algorithm to deal with the multilevel threshold image segmentation problem. Experimental results of the new algorithm on a variety of images demonstrate the suitability of the proposed method for solving such problems.
42

Fuentes-Pacheco, Jorge, Juan Torres-Olivares, Edgar Roman-Rangel, Salvador Cervantes, Porfirio Juarez-Lopez, Jorge Hermosillo-Valadez, and Juan Manuel Rendón-Mancha. "Fig Plant Segmentation from Aerial Images Using a Deep Convolutional Encoder-Decoder Network." Remote Sensing 11, no. 10 (May 15, 2019): 1157. http://dx.doi.org/10.3390/rs11101157.

Abstract:
Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown under open-field difficult circumstances of complex lighting conditions and non-ideal crop maintenance practices defined by local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and a highly variable visual appearance of the leaves. We make available our CNN code to the research community, as well as the aerial image data set and a hand-made ground truth segmentation with pixel precision to facilitate the comparison among different algorithms.
43

Angayarkann, D. "A Plant Disease Detection and Classification using Image Segmentation Technique." International Journal of Advanced Trends in Computer Science and Engineering 9, no. 4 (August 25, 2020): 4972–76. http://dx.doi.org/10.30534/ijatcse/2020/112942020.

44

Eid, Heba F. "Performance Improvement of Plant Identification Model based on PSO Segmentation." International Journal of Intelligent Systems and Applications 8, no. 2 (February 8, 2016): 53–58. http://dx.doi.org/10.5815/ijisa.2016.02.07.

45

Vijayalakshmi, S., and D. Murugan. "Comparative Analysis on Segmentation Approaches for Plant Leaf Disease Detection." International Journal of Computer Sciences and Engineering 6, no. 5 (May 31, 2018): 412–18. http://dx.doi.org/10.26438/ijcse/v6i5.412418.

46

Karthik, Pullalarevu, Mansi Parashar, S. Sofana Reka, Kumar T. Rajamani, and Mattias P. Heinrich. "Semantic segmentation for plant phenotyping using advanced deep learning pipelines." Multimedia Tools and Applications 81, no. 3 (December 14, 2021): 4535–47. http://dx.doi.org/10.1007/s11042-021-11770-7.

47

Lai, Yibin, Shenglian Lu, Tingting Qian, Ming Chen, Song Zhen, and Li Guo. "Segmentation of Plant Point Cloud based on Deep Learning Method." Computer-Aided Design and Applications 19, no. 6 (March 9, 2022): 1117–29. http://dx.doi.org/10.14733/cadaps.2022.1117-1129.

48

Golzarian, M. R., M.-K. Lee, and J. M. A. Desbiolles. "Evaluation of Color Indices for Improved Segmentation of Plant Images." Transactions of the ASABE 55, no. 1 (2012): 261–73. http://dx.doi.org/10.13031/2013.41236.

49

Alenya, Guillem, Babette Dellen, Sergi Foix, and Carme Torras. "Robotized Plant Probing: Leaf Segmentation Utilizing Time-of-Flight Data." IEEE Robotics & Automation Magazine 20, no. 3 (September 2013): 50–59. http://dx.doi.org/10.1109/mra.2012.2230118.

50

Tsuda, M., and M. T. Tyree. "Whole-plant hydraulic resistance and vulnerability segmentation in Acer saccharinum." Tree Physiology 17, no. 6 (June 1, 1997): 351–57. http://dx.doi.org/10.1093/treephys/17.6.351.
