Journal articles on the topic "Classifications des images"

To view other types of publications on this topic, follow the link: Classifications des images.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Classifications des images."

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in .pdf format and read its abstract online, provided that the relevant details are present in the metadata.

Browse journal articles from a wide range of disciplines and format your bibliography correctly.

1

Furtado, Luiz Felipe de Almeida, Thiago Sanna Freire Silva, Pedro José Farias Fernandes, and Evelyn Márcia Leão de Moraes Novo. "Land cover classification of Lago Grande de Curuai floodplain (Amazon, Brazil) using multi-sensor and image fusion techniques." Acta Amazonica 45, no. 2 (June 2015): 195–202. http://dx.doi.org/10.1590/1809-4392201401439.

Abstract:
Given the limitations of different types of remote sensing images, automated land-cover classifications of the Amazon várzea may yield poor accuracy indexes. One way to improve accuracy is through the combination of images from different sensors, by either image fusion or multi-sensor classifications. Therefore, the objective of this study was to determine which classification method is more efficient in improving land cover classification accuracies for the Amazon várzea and similar wetland environments - (a) synthetically fused optical and SAR images or (b) multi-sensor classification of paired SAR and optical images. Land cover classifications based on images from a single sensor (Landsat TM or Radarsat-2) are compared with multi-sensor and image fusion classifications. Object-based image analyses (OBIA) and the J.48 data-mining algorithm were used for automated classification, and classification accuracies were assessed using the kappa index of agreement and the recently proposed allocation and quantity disagreement measures. Overall, optical-based classifications had better accuracy than SAR-based classifications. Once both datasets were combined using the multi-sensor approach, there was a 2% decrease in allocation disagreement, as the method was able to overcome part of the limitations present in both images. Accuracy decreased when image fusion methods were used, however. We therefore concluded that the multi-sensor classification method is more appropriate for classifying land cover in the Amazon várzea.
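The kappa index and the allocation/quantity disagreement measures used above are simple functions of a classification's confusion matrix. A minimal sketch (not code from the paper; the confusion matrix is invented), assuming the Pontius and Millones (2011) definitions:

```python
import numpy as np

def accuracy_measures(cm):
    """Kappa plus the quantity/allocation disagreement of Pontius & Millones (2011)."""
    p = cm / cm.sum()                     # confusion matrix as proportions
    po = np.trace(p)                      # observed agreement
    pe = p.sum(axis=1) @ p.sum(axis=0)    # chance agreement from the marginals
    kappa = (po - pe) / (1 - pe)
    quantity = np.abs(p.sum(axis=1) - p.sum(axis=0)).sum() / 2
    allocation = (1 - po) - quantity      # total disagreement minus quantity part
    return kappa, quantity, allocation

cm = np.array([[50, 3, 2],                # rows: mapped class, columns: reference
               [4, 40, 6],
               [1, 5, 39]])
print(accuracy_measures(cm))
```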
2

Shi, Li Jun, Xian Cheng Mao, and Zheng Lin Peng. "Method for Classification of Remote Sensing Images Based on Multiple Classifiers Combination." Applied Mechanics and Materials 263-266 (December 2012): 2561–65. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.2561.

Abstract:
This paper presents a new method for the classification of remote sensing images based on the combination of multiple classifiers. In this method, three supervised classifiers, Mahalanobis Distance, Maximum Likelihood, and SVM, are selected to serve as the sub-classifiers. The simple vote method, the maximum probability category method, and the fuzzy integral method are combined according to certain rules. Color infrared aerial images of Huairen County were adopted as the experimental object. The results show that the overall classification accuracy improved by 12% and the Kappa coefficient increased by 0.12 compared with the SVM classifier, which had the highest accuracy among the single sub-classifiers.
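The simple-vote combination scheme described here reduces, in its basic form, to a per-pixel majority vote across sub-classifiers. A hedged sketch with hypothetical label maps (not the authors' implementation):

```python
import numpy as np
from scipy import stats

# Per-pixel labels from three hypothetical sub-classifiers
# (e.g., Mahalanobis Distance, Maximum Likelihood, SVM).
preds = np.array([
    [0, 1, 2, 2, 1],   # classifier 1
    [0, 1, 1, 2, 1],   # classifier 2
    [0, 2, 2, 2, 0],   # classifier 3
])

# Majority vote along the classifier axis; ties resolve to the smallest label.
combined = stats.mode(preds, axis=0, keepdims=False).mode
print(combined)        # -> [0 1 2 2 1]
```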
3

Ghimire, Santosh. "On the Image Pixels Classification Methods." Journal of the Institute of Engineering 15, no. 2 (July 31, 2019): 202–9. http://dx.doi.org/10.3126/jie.v15i2.27667.

Abstract:
In this article, we first discuss images and the classification of image pixels. We then briefly discuss the importance of image classification, and finally focus on the various methods that can be implemented to classify image pixels.
4

Klose, Christian D., Alexander D. Klose, Uwe Netz, Juergen Beuthan, and Andreas H. Hielscher. "Multiparameter classifications of optical tomographic images." Journal of Biomedical Optics 13, no. 5 (2008): 050503. http://dx.doi.org/10.1117/1.2981806.

5

Khalil, Turkan A., and Almas A. Khalil. "Fuzzy Rule Base-Multispectral Images Classifications." Iraqi National Journal of Earth Sciences 5, no. 2 (December 28, 2005): 32–40. http://dx.doi.org/10.33899/earth.2005.41243.

6

Li, Mengmeng, and Alfred Stein. "Mapping Land Use from High Resolution Satellite Images by Exploiting the Spatial Arrangement of Land Cover Objects." Remote Sensing 12, no. 24 (December 18, 2020): 4158. http://dx.doi.org/10.3390/rs12244158.

Abstract:
Spatial information regarding the arrangement of land cover objects plays an important role in distinguishing land use types at the land parcel or local neighborhood level. This study investigates the use of graph convolutional networks (GCNs) to characterize spatial arrangement features for land use classification from high resolution remote sensing images, with particular interest in comparing land use classifications between different graph-based methods and between different remote sensing images. We examine three kinds of graph-based methods, i.e., feature engineering, graph kernels, and GCNs. Based upon the extracted arrangement features and features regarding the spatial composition of land cover objects, we formulated ten land use classifications. We tested those on two different remote sensing images, acquired from the GaoFen-2 (with a spatial resolution of 0.8 m) and ZiYuan-3 (of 2.5 m) satellites in 2020 over Fuzhou City, China. Our results showed that land use classifications based on the arrangement features derived from GCNs achieved higher classification accuracy than those using graph kernels and handcrafted graph features for both images. We also found that the contribution of arrangement features to separating land use types varies between the GaoFen-2 and ZiYuan-3 images, due to the difference in spatial resolution. This study offers a set of approaches for effectively mapping land use types from (very) high resolution satellite images.
7

Molina, P. C., M. P. Castro, and C. S. Anjos. "ASSESSMENT OF PCA AND MNF INFLUENCE IN THE VHR SATELLITE IMAGE CLASSIFICATIONS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W12-2020 (November 4, 2020): 55–60. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w12-2020-55-2020.

Abstract:
Orbital images have become increasingly refined both spatially and spectrally, as is the case with those provided by the WorldView-3 Earth observation satellite used in this paper. However, such images are susceptible to noise interference, which makes it difficult to identify and characterize objects, so it is essential to use techniques that minimize noise. With increasingly innovative processing, it is possible to carry out a detailed characterization, mainly of urban areas. This work aims to classify WorldView-3 images using the advanced Random Forest and Deep Learning classification methods for the region of Botafogo in the municipality of Rio de Janeiro, Brazil. The classifications were performed for four different data sets, including the spectral bands and the transformations (MNF and PCA) derived from the original images. The results demonstrate that using the transformations derived from the original images as input data for attribute extraction, in conjunction with the spectral bands, improves the accuracy of the classifications generated by the Random Forest and Deep Learning methods.
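As a rough illustration of this workflow, appending components of a transformation such as PCA to the spectral bands before a Random Forest classification might look as follows; the arrays are random stand-ins, and MNF (a noise-whitened analogue of PCA) would slot in the same way:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

bands = np.random.rand(2000, 8)          # pixels x spectral bands (synthetic)
labels = np.random.randint(0, 4, 2000)   # synthetic training labels

pcs = PCA(n_components=3).fit_transform(bands)   # decorrelated components
features = np.hstack([bands, pcs])               # spectral bands + transformation

clf = RandomForestClassifier(n_estimators=200).fit(features, labels)
print(clf.score(features, labels))
```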
8

Kijima, Hiroaki, Shin Yamada, Natsuo Konishi, Hitoshi Kubota, Hiroshi Tazawa, Takayuki Tani, Norio Suzuki, et al. "The Reliability of Classifications of Proximal Femoral Fractures with 3-Dimensional Computed Tomography: The New Concept of Comprehensive Classification." Advances in Orthopedics 2014 (2014): 1–5. http://dx.doi.org/10.1155/2014/359689.

Abstract:
The reliability of proximal femoral fracture classifications using 3DCT was evaluated, and a comprehensive “area classification” was developed. Eleven orthopedists (5–26 years from graduation) classified 27 proximal femoral fractures at one hospital from June 2013 to July 2014 based on preoperative images. Various classifications were compared to “area classification.” In “area classification,” the proximal femur is divided into 4 areas with 3 boundary lines: Line-1 is the center of the neck, Line-2 is the border between the neck and the trochanteric zone, and Line-3 links the inferior borders of the greater and lesser trochanters. A fracture only in the first area was classified as a pure first area fracture; one in the first and second area was classified as a 1-2 type fracture. In the same way, fractures were classified as pure 2, 3-4, 1-2-3, and so on. “Area classification” reliability was highest when orthopedists with varying experience classified proximal femoral fractures using 3DCT. Other classifications cannot classify proximal femoral fractures if they exceed each classification’s particular zones. However, fractures that exceed the target zones are “dangerous” fractures. “Area classification” can classify such fractures, and it is therefore useful for selecting osteosynthesis methods.
9

Cracknell, Matthew, Anita Parbhakar-Fox, Laura Jackson, and Ekaterina Savinova. "Automated Acid Rock Drainage Indexing from Drill Core Imagery." Minerals 8, no. 12 (December 4, 2018): 571. http://dx.doi.org/10.3390/min8120571.

Abstract:
The automated classification of acid rock drainage (ARD) potential developed in this study is based on a manual ARD Index (ARDI) logging code. Several components of the ARDI require accurate identification of sulfide minerals that hyperspectral drill core scanning technologies cannot yet report. To overcome this, a new methodology was developed that uses red–green–blue (RGB) true color images generated by Corescan® to determine the presence or absence of sulfides using supervised classification. The output images were then recombined with Corescan® visible to near infrared-shortwave infrared (VNIR-SWIR) mineral classifications to obtain information that allowed an automated ARDI (A-ARDI) assessment to be performed. To test this, A-ARDI estimations and the resulting acid-forming potential classifications for 22 drill core samples obtained from a porphyry Cu–Au deposit were compared to ARDI classifications made from manual observations and geochemical and mineralogical analyses. Results indicated overall agreement between automated and manual ARD potential classifications and those from geochemical and mineralogical analyses. Major differences between manual and automated ARDI results were a function of differences in estimates of sulfide and neutralizer mineral concentrations, likely due to the subjective nature of manual estimates of mineral content and automated classification image resolution limitations. The automated approach presented here for the classification of ARD potential offers rapid and repeatable outcomes that complement manual and analysis-derived classifications. Methods for automated ARD classification from digital drill core data represent a step-change for geoenvironmental management practices in the mining industry.
10

Cherici, Céline. "Dossier thématique : images et classifications du vivant." Bulletin d’histoire et d’épistémologie des sciences de la vie Volume 23, no. 2 (2016): 119. http://dx.doi.org/10.3917/bhesv.232.0119.

11

Moreira, Noeli Aline Particcelli, Mariane Souza Reis, Thales Sehn Körting, Luciano Vieira Dutra, Emiliano Ferreira Castejon, and Egidio Arai. "Subpixel Analysis of MODIS Imagery Time Series using Transfer Learning and Relative Calibration." Revista Brasileira de Cartografia 72, no. 4 (November 14, 2020): 558–73. http://dx.doi.org/10.14393/rbcv72n4-54044.

Abstract:
Transfer learning reuses a pre-trained model on a new related problem, which can be useful for monitoring large areas such as the Amazon biome. A given object must have similar spectral characteristics in the data used for this type of analysis, which can be achieved using relative calibration techniques. In this article, we present a relative calibration process in multitemporal images and evaluate its impacts on a subpixel classification process. MODIS images from the Amazon region, collected between 2013 and 2017, were relatively calibrated using a 2012 image as reference and classified by transfer learning. Classifications of calibrated and uncalibrated images were compared with data from the PRODES project, focusing on forest areas. A great variation was observed in the spectral responses of the forest class, even in images of proximate dates and from the same sensor. These variations significantly impacted the land cover classifications in the subpixel, with cases of agreement between the uncalibrated data maps and PRODES of 0%. For calibrated data, the agreement values were greater than 70%. The results indicate that the method used, although quite simple, is adequate and necessary for the subpixel classification of MODIS images by transfer learning.
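In its simplest linear form, relative calibration regresses a target-date band against the reference image and applies the fitted gain and offset. A sketch of that general idea with synthetic data (not the authors' specific procedure):

```python
import numpy as np

ref = np.random.rand(1000)                                    # reference-image band values
target = 0.8 * ref + 0.05 + np.random.normal(0, 0.01, 1000)   # synthetic target-date band

gain, offset = np.polyfit(target, ref, 1)   # linear map: target -> reference scale
calibrated = gain * target + offset
print(gain, offset)
```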
12

Liu, J., J. Heiskanen, E. Aynekulu, and P. K. E. Pellikka. "Seasonal variation of land cover classification accuracy of Landsat 8 images in Burkina Faso." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W3 (April 29, 2015): 455–60. http://dx.doi.org/10.5194/isprsarchives-xl-7-w3-455-2015.

Abstract:
In the seasonal tropics, vegetation shows large reflectance variation because of phenology, which complicates land cover change monitoring. Ideally, multi-temporal images for change monitoring should be from the same season, but the availability of cloud-free images is limited in the wet season in comparison to the dry season. Our aim was to investigate how land cover classification accuracy depends on the season in southern Burkina Faso by analyzing 14 Landsat 8 OLI images from April 2013 to April 2014. Because all the images were acquired within one year, we assumed that most of the observed variation between the images was due to phenology. All the images were cloud masked and atmospherically corrected. Field data were collected from 160 field plots located within a 10 km × 10 km study area between December 2013 and February 2014. The plots were classified as closed forest, open forest, or cropland, and used as training and validation data. A random forest classifier was employed for the classifications. According to the results, there is a tendency toward higher classification accuracy in the dry season. The highest classification accuracy was provided by an image from December, which corresponds to the dry season and minimum NDVI period. In contrast, an image from October, which corresponds to the wet season and maximum NDVI period, provided the lowest accuracy. Furthermore, the multi-temporal classification based on dry and wet season images had higher accuracy than the single image classifications, but the improvement was small because seasonal changes affect the different land cover classes similarly.
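The experiment described, training the same classifier on each single-date image and watching accuracy vary with season, can be schematized as below; the data and dates are placeholders, not the study's 14 Landsat scenes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
dates = ["2013-04", "2013-10", "2013-12", "2014-04"]   # hypothetical acquisition dates
labels = rng.integers(0, 3, 160)    # closed forest / open forest / cropland plots

for date in dates:
    X = rng.random((160, 7))        # stand-in for the plots' spectra on that date
    acc = cross_val_score(RandomForestClassifier(), X, labels, cv=5).mean()
    print(date, round(acc, 3))      # accuracy per single-date image
```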
13

Ni, Jin-xia, Si-hua Gao, Yu-hang Li, Shi-lei Ma, Tian Tian, Fang-fang Mo, Liu-qing Wang, and Wen-zeng Zhu. "Clinical Trial on the Characteristics of Zheng Classification of Pulmonary Diseases Based on Infrared Thermal Imaging Technology." Evidence-Based Complementary and Alternative Medicine 2013 (2013): 1–6. http://dx.doi.org/10.1155/2013/218909.

Abstract:
A Zheng classification study based on infrared thermal imaging technology has not been reported before. To detect the relative temperature of the viscera and bowels of pulmonary disease patients with different syndromes, and to summarize the characteristics of the different Zheng classifications, infrared thermal imaging technology was used in this clinical trial. The results showed that the infrared thermal image characteristics of the different Zheng classifications of pulmonary disease were distinctly different. The influence on the viscera and bowels was deeper in the phlegm-heat obstructing lung syndrome group than in the cold-phlegm obstructing lung syndrome group. Analyzing the infrared thermal images of patients is helpful for diagnosing the Zheng classification and improving the diagnosis rate. The application of infrared thermal imaging technology provides objective measures for medical diagnosis and treatment in the field of Zheng studies and a new methodology for Zheng classification.
14

Martínez Prentice, Ricardo, Miguel Villoslada Peciña, Raymond D. Ward, Thaisa F. Bergamo, Chris B. Joyce, and Kalev Sepp. "Machine Learning Classification and Accuracy Assessment from High-Resolution Images of Coastal Wetlands." Remote Sensing 13, no. 18 (September 14, 2021): 3669. http://dx.doi.org/10.3390/rs13183669.

Abstract:
High-resolution images obtained by multispectral cameras mounted on Unmanned Aerial Vehicles (UAVs) are helping to capture the heterogeneity of the environment in images that can be discretized into categories during a classification process. There is currently an increasing use of supervised machine learning (ML) classifiers to retrieve accurate results from scarce datasets whose samples have non-linear relationships. We compared the accuracies of two ML classifiers using pixel- and object-based analysis approaches in six coastal wetland sites. The results show that Random Forest (RF) performs better than the K-Nearest Neighbors (KNN) algorithm in the classification of both pixels and objects, and that the classification based on pixel analysis is slightly better than the object-based analysis. The agreement between the object and pixel classifications is higher for Random Forest. This is likely due to the heterogeneity of the study areas, where pixel-based classifications are most appropriate. In addition, from an ecological perspective, as these wetlands are heterogeneous, the pixel-based classification reflects a more realistic interpretation of plant community distribution.
15

Hizukuri, Akiyoshi, and Ryohei Nakayama. "Computer-Aided Diagnosis Scheme for Determining Histological Classification of Breast Lesions on Ultrasonographic Images Using Convolutional Neural Network." Diagnostics 8, no. 3 (July 25, 2018): 48. http://dx.doi.org/10.3390/diagnostics8030048.

Abstract:
It can be difficult for clinicians to accurately discriminate among histological classifications of breast lesions on ultrasonographic images. The purpose of this study was to develop a computer-aided diagnosis (CADx) scheme for determining histological classifications of breast lesions using a convolutional neural network (CNN). Our database consisted of 578 breast ultrasonographic images. It included 287 malignant (217 invasive carcinomas and 70 noninvasive carcinomas) and 291 benign lesions (111 cysts and 180 fibroadenomas). In this study, the CNN constructed from four convolutional layers, three batch-normalization layers, four pooling layers, and two fully connected layers was employed for distinguishing between the four different types of histological classifications for lesions. The classification accuracies for histological classifications with our CNN model were 83.9–87.6%, which were substantially higher than those with our previous method (55.7–79.3%) using hand-crafted features and a classifier. The area under the curve with our CNN model was 0.976, whereas that with our previous method was 0.939 (p = 0.0001). Our CNN model would be useful in differential diagnoses of breast lesions as a diagnostic aid.
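A minimal PyTorch sketch of a network with the layer counts described (four convolutional, three batch-normalization, four pooling, and two fully connected layers, with four output classes) is shown below. It is an interpretation, not the authors' exact architecture; the input size and channel widths are assumptions:

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
    nn.Linear(128, 4),            # cyst / fibroadenoma / noninvasive / invasive
)

x = torch.randn(1, 1, 128, 128)   # one synthetic grayscale ultrasound patch
print(net(x).shape)               # -> torch.Size([1, 4])
```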
16

Tian, Chunwei, Guanglu Sun, Qi Zhang, Weibing Wang, Teng Chen, and Yuan Sun. "Integrating Sparse and Collaborative Representation Classifications for Image Classification." International Journal of Image and Graphics 17, no. 02 (April 2017): 1750007. http://dx.doi.org/10.1142/s0219467817500073.

Abstract:
Collaborative representation classification (CRC) is an important sparse method; it is easy to carry out and uses a linear combination of training samples to represent a test sample. The CRC method utilizes the offset between the representation result of each class and the test sample to implement classification. However, the offset usually cannot express well the difference between every class and the test sample. In this paper, we propose a novel representation method for image recognition to address this problem. The method not only fuses sparse representation and the CRC method to improve the accuracy of image recognition, but also has a novel fusion mechanism to classify images. The proposed method is implemented in the following steps. First, it produces the collaborative representation of the test sample; that is, a linear combination of all the training samples is determined to represent the test sample. Then, it obtains the sparse representation classification (SRC) of the test sample. Finally, the proposed method uses the CRC and SRC representations to obtain two kinds of scores for the test sample and fuses them to recognize the image. Face recognition experiments show that the combination of CRC and SRC has satisfactory performance for image classification.
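The fusion mechanism summarized here, scoring each class by its reconstruction residual under both a collaborative (ridge) coding and a sparse coding and then combining the scores, can be sketched as follows. This is an interpretation with random data; the additive fusion stands in for the paper's fusion rule:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(1)
A = rng.random((50, 60))              # columns = training samples (dictionary)
cls = np.repeat(np.arange(3), 20)     # class of each training sample
y = rng.random(50)                    # test sample

def residuals(coef):
    # Per-class reconstruction residual using only that class's atoms.
    return np.array([np.linalg.norm(y - A[:, cls == c] @ coef[cls == c])
                     for c in range(3)])

crc = Ridge(alpha=0.01, fit_intercept=False).fit(A, y).coef_   # collaborative coding
src = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000).fit(A, y).coef_  # sparse coding

score = residuals(crc) + residuals(src)   # simple additive fusion of the two scores
print(score.argmin())                     # predicted class: smallest fused residual
```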
17

Loey, Mohamed, Mukdad Naman, and Hala Zayed. "Deep Transfer Learning in Diagnosing Leukemia in Blood Cells." Computers 9, no. 2 (April 15, 2020): 29. http://dx.doi.org/10.3390/computers9020029.

Abstract:
Leukemia is a fatal disease that threatens the lives of many patients. Early detection can effectively improve its rate of remission. This paper proposes two automated classification models based on blood microscopic images to detect leukemia by employing transfer learning, rather than traditional approaches that have several disadvantages. In the first model, the blood microscopic images are pre-processed; features are then extracted by a pre-trained deep convolutional neural network, AlexNet, and classified by several well-known classifiers. In the second model, after pre-processing the images, AlexNet is fine-tuned for both feature extraction and classification. Experiments were conducted on a dataset consisting of 2820 images, confirming that the second model performs better than the first, as it achieved 100% classification accuracy.
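Both models can be sketched with torchvision's pretrained AlexNet. This is a schematic rather than the authors' code; the two-class output layer (leukemia vs. normal) and the feature-extractor wiring are assumptions:

```python
import torch.nn as nn
from torchvision import models

# Model 1: AlexNet as a frozen feature extractor; the 4096-d features
# would then feed a separate, conventional classifier.
m1 = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in m1.parameters():
    p.requires_grad = False
feature_extractor = nn.Sequential(
    m1.features, m1.avgpool, nn.Flatten(), *list(m1.classifier[:-1]))

# Model 2: AlexNet fine-tuned end to end; the final layer is replaced
# with a two-class head (leukemia vs. normal -- an assumed labeling).
m2 = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
m2.classifier[-1] = nn.Linear(4096, 2)
```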
18

Lukic, Vesna, and Marcus Brüggen. "Galaxy Classifications with Deep Learning." Proceedings of the International Astronomical Union 12, S325 (October 2016): 217–20. http://dx.doi.org/10.1017/s1743921316012771.

Abstract:
Machine learning techniques have proven to be increasingly useful in astronomical applications over the last few years, for example in object classification, estimating redshifts and data mining. One example of object classification is classifying galaxy morphology. This is a tedious task to do manually, especially as the datasets become larger with surveys that have a broader and deeper search-space. The Kaggle Galaxy Zoo competition presented the challenge of writing an algorithm to find the probability that a galaxy belongs in a particular class, based on SDSS optical spectroscopy data. The use of convolutional neural networks (convnets) proved to be a popular solution to the problem, as they have also produced unprecedented classification accuracies in other image databases such as the database of handwritten digits (MNIST) and the large database of images (CIFAR). We experiment with the convnets that comprised the winning solution, but using broad classifications. The effect of changing the number of layers is explored, as well as using a different activation function, to help in developing an intuition of how the networks function and to see how they can be applied to radio galaxy images.
19

Leng, Xubo, Margot Wohl, Kenichi Ishii, Pavan Nayak, and Kenta Asahina. "Quantifying influence of human choice on the automated detection of Drosophila behavior by a supervised machine learning algorithm." PLOS ONE 15, no. 12 (December 16, 2020): e0241696. http://dx.doi.org/10.1371/journal.pone.0241696.

Abstract:
Automated quantification of behavior is increasingly prevalent in neuroscience research. Human judgments can influence machine-learning-based behavior classification at multiple steps in the process, for both supervised and unsupervised approaches. Such steps include the design of the algorithm for machine learning, the methods used for animal tracking, the choice of training images, and the benchmarking of classification outcomes. However, how these design choices contribute to the interpretation of automated behavioral classifications has not been extensively characterized. Here, we quantify the effects of experimenter choices on the outputs of automated classifiers of Drosophila social behaviors. Drosophila behaviors contain a considerable degree of variability, which was reflected in the confidence levels associated with both human and computer classifications. We found that a diversity of sex combinations and tracking features was important for robust performance of the automated classifiers. In particular, features concerning the relative position of flies contained useful information for training a machine-learning algorithm. These observations shed light on the importance of human influence on tracking algorithms, the selection of training images, and the quality of annotated sample images used to benchmark the performance of a classifier (the ‘ground truth’). Evaluation of these factors is necessary for researchers to accurately interpret behavioral data quantified by a machine-learning algorithm and to further improve automated classifications.
20

Gonçalves, Luan Celso, Alberto Ofenhejm Gotfryd, Maria Fernanda Silber Caffaro, Nelson Astur, Rodrigo Goes Medéa de Mendonça, Mariana Kei Toma, and Robert Meves. "Analysis of the Reliability of the Lee Classification for Lumbar Disc Herniations." Coluna/Columna 19, no. 4 (December 2020): 258–61. http://dx.doi.org/10.1590/s1808-185120201904221700.

Abstract:
Objective: To evaluate the intra- and interobserver reliability of the Lee et al. classification for migrated lumbar disc herniations. Methods: In 2018, Ahn Y. et al. demonstrated the accuracy of this classification for radiologists. However, magnetic resonance images are often interpreted by orthopedists. Thus, a cross-sectional study was conducted by evaluating the magnetic resonance images of 82 patients diagnosed with lumbar disc herniation. The images were evaluated by 4 physicians, 3 of whom were spinal orthopedic specialists and 1 of whom was a radiologist. The intra- and interobserver analysis was conducted using the percentage of concordance and the Kappa method. Results: The classifications reported by the four observers showed a higher proportion of "zone 3" and "zone 4" types at both evaluation moments. The most affected anatomical levels were L5-S1 (48.2%) and L4-L5 (41.4%). The intra- and interobserver concordance, comparing both evaluation moments of the participants' complementary examinations, was classified as moderate to very good. Conclusions: Lee's classification presented moderate to very good intra- and interobserver reliability for the evaluation of migrated lumbar disc herniation. Level of evidence II; Retrospective Study.
21

Manimegalai, S. M., and T. Ramaprabha. "Image Classifications on COVID 19 CXR Images Using Auto Color Correlogram Filter." Indian Journal of Computer Science and Engineering 12, no. 5 (October 20, 2021): 1288–301. http://dx.doi.org/10.21817/indjcse/2021/v12i5/211205174.

22

Lanjewar, M. G., and O. L. Gurav. "Convolutional Neural Networks based classifications of soil images." Multimedia Tools and Applications 81, no. 7 (February 14, 2022): 10313–36. http://dx.doi.org/10.1007/s11042-022-12200-y.

23

Lee, Jacky, Jeffrey Cardille, and Michael Coe. "BULC-U: Sharpening Resolution and Improving Accuracy of Land-Use/Land-Cover Classifications in Google Earth Engine." Remote Sensing 10, no. 9 (September 12, 2018): 1455. http://dx.doi.org/10.3390/rs10091455.

Abstract:
Remote sensing is undergoing a fundamental paradigm shift, in which approaches interpreting one or two images are giving way to a wide array of data-rich applications. These include assessing global forest loss, tracking water resources across Earth’s surface, determining disturbance frequency across decades, and many more. These advances have been greatly facilitated by Google Earth Engine, which provides both image access and a platform for advanced analysis techniques. Within the realm of land-use/land-cover (LULC) classifications, Earth Engine provides the ability to create new classifications and to access major existing data sets that have already been created, particularly at global extents. By overlaying global LULC classifications—the 300-m GlobCover 2009 LULC data set for example—with sharper images like those from Landsat, one can see the promise and limits of these global data sets and platforms to fuse them. Despite the promise in a global classification covering all of the terrestrial surface, GlobCover 2009 may be too coarse for some applications. We asked whether the LULC labeling provided by GlobCover 2009 could be combined with the spatial granularity of the Landsat platform to produce a hybrid classification having the best features of both resources with high accuracy. Here we apply an improvement of the Bayesian Updating of Land Cover (BULC) algorithm that fused unsupervised Landsat classifications to GlobCover 2009, sharpening the result from a 300-m to a 30-m classification. Working with four clear categories in Mato Grosso, Brazil, we refined the resolution of the LULC classification by an order of magnitude while improving the overall accuracy from 69.1 to 97.5%. This “BULC-U” mode, because it uses unsupervised classifications as inputs, demands less region-specific knowledge from analysts and may be significantly easier for non-specialists to use. This technique can provide new information to land managers and others interested in highly accurate classifications at finer scales.
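The Bayesian updating at the core of BULC can be illustrated per pixel as the textbook recursion below; the class-conditional table and event labels are invented, and this is not the BULC-U implementation:

```python
import numpy as np

prior = np.array([0.25, 0.25, 0.25, 0.25])   # 4 LULC classes, flat prior

# P(event label | true class) for one classifier -- hypothetical numbers.
likelihood = np.array([[0.7, 0.1, 0.1, 0.1],
                       [0.1, 0.7, 0.1, 0.1],
                       [0.1, 0.1, 0.7, 0.1],
                       [0.1, 0.1, 0.1, 0.7]])

for observed in [0, 0, 1, 0]:                # labels from successive classifications
    post = prior * likelihood[:, observed]   # Bayes numerator
    prior = post / post.sum()                # normalize -> prior for the next event

print(prior.round(3))                        # the pixel ends up mostly class 0
```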
24

Teng, Yugang, Yuanzhen Zhang, and Zhenyu Wang. "Medical Image Analysis and Correlation Between Ankle Fracture Classification and Ankle Computed Tomography." Journal of Medical Imaging and Health Informatics 10, no. 12 (December 1, 2020): 2935–39. http://dx.doi.org/10.1166/jmihi.2020.3235.

Abstract:
Objective: In this paper, we summarize the computed tomography (CT) manifestations and characteristics of ankle fractures, and analyze the relationship between CT images and common ankle fracture classifications. Methods: A retrospective survey of 369 adult ankle fractures was performed. CT images of the 1 cm horizontal cross-section above the ankle point and their characteristics were analyzed. Ankle fracture X-ray classification was performed, and the relationship between the CT images and the X-ray fracture classification was analyzed. Results: There is a correlation between CT images and the Danis-Weber classification. The incidence of IOL fractures varies with the severity of the Danis-Weber classification; after a rank correlation test, the difference is statistically significant (Spearman R = 0.781, P < 0.001). CT images can detect IOL fractures that cannot be judged from the X-ray fracture classification, with an incidence rate of 5.9%. Conclusions: The 1 cm horizontal cross-section CT image above the ankle point can clearly determine combined tibiofibular IOL injury before surgery; it correlates well with the Danis-Weber fracture classification and can detect IOL fractures that are unexplainable in some radiographs.
25

Katz, Daniel S. W., Stuart A. Batterman, and Shannon J. Brines. "Improved Classification of Urban Trees Using a Widespread Multi-Temporal Aerial Image Dataset." Remote Sensing 12, no. 15 (August 1, 2020): 2475. http://dx.doi.org/10.3390/rs12152475.

Abstract:
Urban tree identification is often limited by the accessibility of remote sensing imagery but has not yet been attempted with the multi-temporal commercial aerial photography that is now widely available. In this study, trees in Detroit, Michigan, USA are identified using eight high resolution red, green, and blue (RGB) aerial images from a commercial vendor and publicly available LiDAR data. Classifications based on these data were compared with classifications based on World View 2 satellite imagery, which is commonly used for this task but also more expensive. An object-based classification approach was used whereby tree canopies were segmented using LiDAR, and a street tree database was used for generating training and testing datasets. Overall accuracy using multi-temporal aerial images and LiDAR was 70%, which was higher than the accuracy achieved with World View 2 imagery and LiDAR (63%). When all data were used, classification accuracy increased to 74%. Taxa identified with high accuracy included Acer platanoides and Gleditsia, and taxa that were identified with good accuracy included Acer, Platanus, Quercus, and Tilia. Our results show that this large catalogue of multi-temporal aerial images can be leveraged for urban tree identification. While classification accuracy rates vary between taxa, the approach demonstrated can have practical value for socially or ecologically important taxa.
26

Kliangsuwan, Thitinan, and Apichat Heednacram. "Classifiers for Ground-Based Cloud Images Using Texture Features." Advanced Materials Research 931-932 (May 2014): 1392–96. http://dx.doi.org/10.4028/www.scientific.net/amr.931-932.1392.

Abstract:
The classification of ground-based cloud images has received more attention recently. The results of this work apply to the analysis of climate change; a correct classification is, therefore, important. In this paper, we used 18 texture features to distinguish 7 sky conditions. The important parameters of two classifiers, k-nearest neighbor (k-NN) and artificial neural network (ANN), were fine-tuned in the experiment, and the performances of the two classifiers were compared. Advantages and limitations of both classifiers are discussed. Our results revealed that the k-NN model performed at 72.99% accuracy, while the ANN model had higher performance, at 86.93% accuracy. We showed that our results are better than those of previous studies. Finally, the seven most effective texture features are recommended for use in the field of cloud type classification.
27

Mohan, D. V. R., I. Rambabu, and B. Harish. "Denoising and SAR Image Classification with K-SVD Algorithm." International Journal of Engineering & Technology 7, no. 3.3 (June 21, 2018): 36. http://dx.doi.org/10.14419/ijet.v7i3.3.14481.

Abstract:
Synthetic Aperture Radar (SAR) not only acquires images day and night in all weather conditions, but also provides object information that is distinct from that of visible and infrared sensors. However, SAR images contain more speckle noise and have fewer bands. This paper proposes a method for the denoising, feature extraction, and classification of SAR images. Initially, the image is denoised using the K-Singular Value Decomposition (K-SVD) algorithm. The Gray Level Histogram (GLH) and Gray Level Co-occurrence Matrix (GLCM) are then used for feature extraction. Secondly, the feature vectors extracted in the first step are combined using correlation analysis to decrease the dimensionality of the feature space. Thirdly, the SAR images are classified with Sparse Representation Classification (SRC) and Support Vector Machines (SVMs). The results indicate that the performance of the introduced SAR classification method is good. The classification techniques mentioned above are implemented and their performance parameters computed using MATLAB 2014a software.
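The GLCM texture step mentioned above is available off the shelf in scikit-image; a small sketch follows, with a random stand-in patch and parameters that are placeholders rather than the paper's settings:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in SAR patch
glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)

# A few standard GLCM texture descriptors derived from the co-occurrence matrix.
features = [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]
print(features)
```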
28

Latha, S., P. Muthu, Samiappan Dhanalakshmi, R. Kumar, Khin Wee Lai, and Xiang Wu. "Emerging Feature Extraction Techniques for Machine Learning-Based Classification of Carotid Artery Ultrasound Images." Computational Intelligence and Neuroscience 2022 (May 12, 2022): 1–14. http://dx.doi.org/10.1155/2022/1847981.

Abstract:
Plaque deposits in the carotid artery are the major cause of stroke and atherosclerosis. Ultrasound imaging is used as an early indicator of disease progression. Classification of the images to identify plaque presence and intima-media thickness (IMT) by machine learning algorithms requires features extracted from the images. A total of 361 images were used for feature extraction, which will assist in the further classification of the carotid artery. This study presents the extraction of 65 features, comprising shape, texture, histogram, correlogram, and morphology features. Principal component analysis (PCA)-based feature selection is performed, and the 22 most significant features, which improve the classification accuracy, are selected. Machine learning classification with the Naive Bayes algorithm and dynamic learning vector quantization (DLVQ) is then performed with the extracted and selected features, and the results are analyzed.
29

Jiang, Tingxuan, Harald van der Werff, and Freek van der Meer. "Classification Endmember Selection with Multi-Temporal Hyperspectral Data." Remote Sensing 12, no. 10 (May 15, 2020): 1575. http://dx.doi.org/10.3390/rs12101575.

Abstract:
In hyperspectral image classification, so-called spectral endmembers are used as reference data. These endmembers are either extracted from an image or taken from another source. Research has shown that endmembers extracted from an image usually perform best when classifying a single image. However, it is unclear if this also holds when classifying multi-temporal hyperspectral datasets. In this paper, we use spectral angle mapper, which is a frequently used classifier for hyperspectral datasets to classify multi-temporal airborne visible/infrared imaging spectrometer (AVIRIS) hyperspectral imagery. Three classifications are done on each of the images with endmembers being extracted from the corresponding image, and three more classifications are done on the three images while using averaged endmembers. We apply image-to-image registration and change detection to analyze the consistency of the classification results. We show that the consistency of classification accuracy using the averaged endmembers (around 65%) outperforms the classification results generated using endmembers that are extracted from each image separately (around 40%). We conclude that, for multi-temporal datasets, it is better to have an endmember collection that is not directly from the image, but is processed to a representative average.
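The spectral angle mapper itself is compact enough to restate. A minimal sketch with illustrative spectra (not the AVIRIS data): a pixel is assigned to the endmember subtending the smallest spectral angle.

```python
import numpy as np

def spectral_angle(pixel, endmember):
    cos = pixel @ endmember / (np.linalg.norm(pixel) * np.linalg.norm(endmember))
    return np.arccos(np.clip(cos, -1.0, 1.0))   # angle in radians

pixel = np.array([0.12, 0.35, 0.40, 0.22])        # synthetic 4-band spectrum
endmembers = {"water": np.array([0.05, 0.04, 0.03, 0.01]),
              "vegetation": np.array([0.10, 0.30, 0.45, 0.25])}

label = min(endmembers, key=lambda k: spectral_angle(pixel, endmembers[k]))
print(label)                                      # -> vegetation
```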
30

Mohammed, Nora Ahmed, Mohammed Hamzah Abed, and Alaa Taima Albu-Salih. "Convolutional neural network for color images classification." Bulletin of Electrical Engineering and Informatics 11, no. 3 (June 1, 2022): 1343–49. http://dx.doi.org/10.11591/eei.v11i3.3730.

Abstract:
Artificial intelligence and computer vision applications have been an exciting topic in the last few years, and they are key to many real-time applications such as video summarization, image retrieval, and image classification. One of the most popular methods in deep learning is the convolutional neural network, used for many image processing and computer vision applications. In this work, a convolutional neural network (CNN) model is proposed for color image classification; the proposed model is built using the MATLAB deep learning tools. In addition, the suggested model is tested on three datasets of different sizes. The proposed model achieved its highest accuracy, precision, and sensitivity on the largest dataset, compared with other models: an accuracy of 0.9924, a precision of 0.9947, and a sensitivity of 0.9931.
31

Michalska, Magdalena, and Oksana Boyko. "AN OVERVIEW OF CLASSIFICATION METHODS FROM DERMOSCOPY IMAGES IN SKIN LESION DIAGNOSTIC." Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska 10, no. 2 (June 30, 2020): 36–39. http://dx.doi.org/10.35784/iapgos.1569.

Abstract:
The article reviews selected classification methods for dermatoscopic images of human skin lesions, taking into account various stages of dermatological disease. The algorithms described, such as artificial neural networks (CNN, DCNN), random forests, SVM, the kNN classifier, AdaBoost MC, and their modifications, are widely used in the diagnosis of skin lesions. The effectiveness, specificity, and accuracy of classifications based on the same data sets are also compared and analyzed.
32

Movia, A., A. Beinat, and T. Sandri. "LAND USE CLASSIFICATION FROM VHR AERIAL IMAGES USING INVARIANT COLOUR COMPONENTS AND TEXTURE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 21, 2016): 311–17. http://dx.doi.org/10.5194/isprs-archives-xli-b7-311-2016.

Abstract:
Very high resolution (VHR) aerial images can provide detailed analysis of landscape and environment; nowadays, thanks to rapidly growing airborne data acquisition technology, an increasing number of high resolution datasets are freely available. In a VHR image the essential information is contained in the red-green-blue colour components (RGB) and in the texture, so a preliminary step in image analysis concerns classification, in order to detect pixels having similar characteristics and to group them into distinct classes. Common land use classification approaches use colour at a first stage, followed by texture analysis, particularly for the evaluation of landscape patterns. Unfortunately, RGB-based classifications are significantly influenced by image settings, such as contrast, saturation, and brightness, and by the presence of shadows in the scene. The classification methods analysed in this work aim to mitigate these effects. The procedures developed considered the use of invariant colour components, image resampling, and the evaluation of an RGB texture parameter for various increasing sizes of a structuring element. To identify the most efficient solution, the classification vectors obtained were then processed by a K-means unsupervised classifier using different metrics, and the results were compared with corresponding user-supervised classifications. The experiments performed and discussed in the paper let us evaluate the effective contribution of texture information, and compare the most suitable vector components and metrics for the automatic classification of very high resolution RGB aerial images.
33

Knapp, Kenneth R., Jessica L. Matthews, James P. Kossin, and Christopher C. Hennon. "Identification of Tropical Cyclone Storm Types Using Crowdsourcing." Monthly Weather Review 144, no. 10 (October 2016): 3783–98. http://dx.doi.org/10.1175/mwr-d-16-0022.1.

Abstract:
The Cyclone Center project maintains a website that allows visitors to answer questions based on tropical cyclone satellite imagery. The goal is to provide a reanalysis of satellite-derived tropical cyclone characteristics from a homogeneous historical database composed of satellite imagery with a common spatial resolution for use in long-term, global analyses. The determination of the cyclone “type” (curved band, eye, shear, etc.) is a starting point for this process. This analysis shows how multiple classifications of a single image are combined to provide probabilities of a particular image’s type using an expectation–maximization (EM) algorithm. Analysis suggests that the project needs about 10 classifications of an image to adequately determine the storm type. The algorithm is capable of characterizing classifiers with varying levels of expertise, though the project needs about 200 classifications to quantify an individual’s precision. The EM classifications are compared with an objective algorithm, satellite fix data, and the classifications of a known classifier. The EM classifications compare well, with best agreement for eye and embedded center storm types and less agreement for shear and when convection is too weak (termed no-storm images). Both the EM algorithm and the known classifier showed similar tendencies when compared against an objective algorithm. The EM algorithm also fared well when compared to tropical cyclone fix datasets, having higher agreement with embedded centers and less agreement for eye images. The results were used to show the distribution of storm types versus wind speed during a storm’s lifetime.
34

Cejudo, Jose E., Akhilanand Chaurasia, Ben Feldberg, Joachim Krois, and Falk Schwendicke. "Classification of Dental Radiographs Using Deep Learning." Journal of Clinical Medicine 10, no. 7 (April 3, 2021): 1496. http://dx.doi.org/10.3390/jcm10071496.

Abstract:
Objectives: To retrospectively assess radiographic data and to prospectively classify radiographs (namely, panoramic, bitewing, periapical, and cephalometric images), we compared three deep learning architectures for their classification performance. Methods: Our dataset consisted of 31,288 panoramic, 43,598 periapical, 14,326 bitewing, and 1176 cephalometric radiographs from two centers (Berlin/Germany; Lucknow/India). For a subset of images L (32,381 images), image classifications were available and manually validated by an expert. The remaining subset of images U was iteratively annotated using active learning, with ResNet-34 being trained on L, least-confidence informative sampling being performed on U, and the most uncertain image classifications from U being reviewed by a human expert and iteratively used for re-training. We then employed a baseline convolutional neural network (CNN), a residual network (another ResNet-34, pretrained on ImageNet), and a capsule network (CapsNet) for classification. Early stopping was used to prevent overfitting. Evaluation of the model performances followed stratified k-fold cross-validation. Gradient-weighted Class Activation Mapping (Grad-CAM) was used to provide visualizations of the weighted activation maps. Results: All three models showed high accuracy (>98%), with significantly higher accuracy, F1-score, precision, and sensitivity for ResNet than for the baseline CNN and CapsNet (p < 0.05). Specificity was not significantly different. ResNet achieved the best performance at small variance and fastest convergence. Misclassification was most common between bitewings and periapicals. Model activation was most notable in the inter-arch space for bitewings, interdentally for periapicals, on the bony structures of the maxilla and mandible for panoramics, and on the viscerocranium for cephalometrics. Conclusions: Regardless of the model, high classification accuracies were achieved. The image features considered for classification were consistent with expert reasoning.
35

Castillejo-González, Isabel, Cristina Angueira, Alfonso García-Ferrer, and Manuel Sánchez de la Orden. "Combining Object-Based Image Analysis with Topographic Data for Landform Mapping: A Case Study in the Semi-Arid Chaco Ecosystem, Argentina." ISPRS International Journal of Geo-Information 8, no. 3 (March 7, 2019): 132. http://dx.doi.org/10.3390/ijgi8030132.

Abstract:
This paper presents an object-based approach to mapping a set of landforms located in the fluvio-eolian plain of Rio Dulce and the alluvial plain of Rio Salado (Dry Chaco, Argentina), using two Landsat 8 images collected in summer and winter combined with topographic data. The research was conducted in two stages. The first stage focused on basic spectral landform classifications, where both pixel- and object-based image analyses were tested with five classification algorithms: Mahalanobis Distance (MD), Spectral Angle Mapper (SAM), Maximum Likelihood (ML), Support Vector Machine (SVM) and Decision Tree (DT). The results obtained indicate that object-based analyses clearly outperform pixel-based classifications, with an increase in accuracy of up to 35%. The second stage focused on advanced object-based derived variables with topographic ancillary data classifications. Combinations of variables were tested in order to obtain the most accurate map of landforms based on the most successful classifiers identified in the previous stage (ML, SVM and DT). The results indicate that DT is the most accurate classifier, exhibiting the highest overall accuracies, with values greater than 72% in both the winter and summer images. Future work could combine the most appropriate methodologies and variable combinations obtained in this study with sampled physico-chemical variables to improve the classification of landforms and even of soil types.
36

Saufi Mohd Kassim, Muhamad, Wan Ishak Wan Ismail, and Lim Ho Teik. "Oil Palm Fruit Classifications by using Near Infrared Images." Research Journal of Applied Sciences, Engineering and Technology 7, no. 11 (March 20, 2014): 2200–2207. http://dx.doi.org/10.19026/rjaset.7.517.

37

Akkasaligar, Prema T., Sunanda Biradar, Sharan Badiger, and Rohini Pujari. "Review on Classifications of Medical Ultrasound Images of Kidney." International Journal of Computer Sciences and Engineering 6, no. 7 (July 31, 2018): 1565–68. http://dx.doi.org/10.26438/ijcse/v6i7.15651568.

38

Zhang, David, Bo Pang, Naimin Li, Kuanquan Wang, and Hongzhi Zhang. "Computerized Diagnosis from Tongue Appearance Using Quantitative Feature Classification." American Journal of Chinese Medicine 33, no. 06 (January 2005): 859–66. http://dx.doi.org/10.1142/s0192415x05003466.

Abstract:
This study investigates relationships between diseases and the appearance of the human tongue in terms of quantitative features. The experimental samples are digital tongue images captured from three groups of candidates: one group in normal health, one suffering with appendicitis, and a third suffering with pancreatitis. For the purposes of diagnostic classification, we first extract chromatic and textural measurements from original tongue images. A feature selection procedure then identifies the measures most relevant to the classifications, based on which of the three tongue image categories are clearly separated. This study validates the use of tongue inspection by means of quantitative feature classification in medical diagnosis.
39

Holloway, Jacinta, Kate J. Helmstedt, Kerrie Mengersen, and Michael Schmidt. "A Decision Tree Approach for Spatially Interpolating Missing Land Cover Data and Classifying Satellite Images." Remote Sensing 11, no. 15 (July 31, 2019): 1796. http://dx.doi.org/10.3390/rs11151796.

Abstract:
Sustainable Development Goals (SDGs) are a set of priorities the United Nations and World Bank have set for countries to reach in order to improve quality of life and environment globally by 2030. Free satellite images have been identified as a key resource that can be used to produce official statistics and analysis to measure progress towards SDGs, especially those that are concerned with the physical environment, such as forest, water, and crops. Satellite images can often be unusable due to missing data from cloud cover, particularly in tropical areas where the deforestation rates are high. There are existing methods for filling in image gaps; however, these are often computationally expensive in image classification or not effective at pixel scale. To address this, we use two machine learning methods—gradient boosted machine and random forest algorithms—to classify the observed and simulated ‘missing’ pixels in satellite images as either grassland or woodland. We also predict a continuous biophysical variable, Foliage Projective Cover (FPC), which was derived from satellite images, and perform accurate binary classification and prediction using only the latitude and longitude of the pixels. We compare the performance of these methods against each other and inverse distance weighted interpolation, which is a well-established spatial interpolation method. We find both of the machine learning methods, particularly random forest, perform fast and accurate classifications of both observed and missing pixels, with up to 0.90 accuracy for the binary classification of pixels as grassland or woodland. The results show that the random forest method is more accurate than inverse distance weighted interpolation and gradient boosted machine for prediction of FPC for observed and missing data. Based on the case study results from a sub-tropical site in Australia, we show that our approach provides an efficient alternative for interpolating images and performing land cover classifications.
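For comparison with the machine learning methods, the inverse distance weighted baseline can be written directly. A generic sketch with made-up coordinates and FPC values (not the study's data):

```python
import numpy as np

def idw(xy_known, values, xy_missing, power=2.0):
    # Weight each observed pixel by the inverse of its distance raised to `power`;
    # assumes the missing location does not coincide with an observed one.
    d = np.linalg.norm(xy_known - xy_missing, axis=1)
    w = 1.0 / d**power
    return (w * values).sum() / w.sum()

xy_known = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [2.0, 2.0]])
fpc = np.array([0.62, 0.58, 0.70, 0.20])   # observed FPC at those pixels
print(idw(xy_known, fpc, np.array([0.5, 0.5])))
```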
40

Movia, A., A. Beinat, and T. Sandri. "LAND USE CLASSIFICATION FROM VHR AERIAL IMAGES USING INVARIANT COLOUR COMPONENTS AND TEXTURE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 21, 2016): 311–17. http://dx.doi.org/10.5194/isprsarchives-xli-b7-311-2016.

Повний текст джерела
Анотація:
Very high resolution (VHR) aerial images can provide detailed analysis about landscape and environment; nowadays, thanks to the rapid growing airborne data acquisition technology an increasing number of high resolution datasets are freely available. &lt;br&gt;&lt;br&gt; In a VHR image the essential information is contained in the red-green-blue colour components (RGB) and in the texture, therefore a preliminary step in image analysis concerns the classification in order to detect pixels having similar characteristics and to group them in distinct classes. Common land use classification approaches use colour at a first stage, followed by texture analysis, particularly for the evaluation of landscape patterns. Unfortunately RGB-based classifications are significantly influenced by image setting, as contrast, saturation, and brightness, and by the presence of shadows in the scene. The classification methods analysed in this work aim to mitigate these effects. The procedures developed considered the use of invariant colour components, image resampling, and the evaluation of a RGB texture parameter for various increasing sizes of a structuring element. &lt;br&gt;&lt;br&gt; To identify the most efficient solution, the classification vectors obtained were then processed by a K-means unsupervised classifier using different metrics, and the results were compared with respect to corresponding user supervised classifications. &lt;br&gt;&lt;br&gt; The experiments performed and discussed in the paper let us evaluate the effective contribution of texture information, and compare the most suitable vector components and metrics for automatic classification of very high resolution RGB aerial images.
Styles: APA, Harvard, Vancouver, ISO, etc.
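The following minimal sketch illustrates the kind of pipeline this abstract outlines: brightness-invariant colour components plus a local-variance texture measure computed over a structuring element, fed to a K-means unsupervised classifier. The image data, window radius, and number of clusters are assumed placeholders, not the authors' settings.

```python
# Minimal sketch (assumed workflow, not the authors' implementation):
# cluster pixels of an RGB aerial image by invariant colour components
# plus a simple local-variance texture measure, then label with K-means.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

img = np.random.rand(128, 128, 3)                  # stand-in for a VHR RGB tile
rgb_sum = img.sum(axis=2, keepdims=True) + 1e-6
chromaticity = img / rgb_sum                       # brightness-invariant colour
# Texture: local variance of intensity inside a (2r+1)x(2r+1) structuring element.
intensity = img.mean(axis=2)
r = 3
local_mean = uniform_filter(intensity, size=2 * r + 1)
local_var = uniform_filter(intensity**2, size=2 * r + 1) - local_mean**2
features = np.dstack([chromaticity, local_var[..., None]]).reshape(-1, 4)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)
label_map = labels.reshape(img.shape[:2])          # per-pixel class image
```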
41

Bonniaud, Paul, Jérémie Jacques, Thomas Lambin, Jean-Michel Gonzalez, Xavier Dray, Emmanuel Coron, Sarah Leblanc, et al. "Endoscopic characterization of colorectal neoplasia with different published classifications: comparative study involving CONECCT classification." Endoscopy International Open 10, no. 01 (January 2022): E145–E153. http://dx.doi.org/10.1055/a-1613-5328.

Full text source
Abstract:
Background and study aims The aim of this study was to validate the COlorectal NEoplasia Classification to Choose the Treatment (CONECCT) classification, which groups all published criteria (including covert signs of carcinoma) in a single table. Patients and methods For this multicenter comparative study, an expert endoscopist created an image library (n = 206 lesions, from hyperplastic polyps to deeply invasive cancers) with at least white light imaging and chromoendoscopy images (virtual ± dye-based). Lesions were resected or biopsied to assess histology. Participants characterized lesions using the Paris, Laterally Spreading Tumours, Kudo, Sano, NBI International Colorectal Endoscopic Classification (NICE), Workgroup serrAted polypS and Polyposis (WASP), and CONECCT classifications, and assessed the quality of images on a web-based platform. Krippendorff's alpha and Cohen's kappa were used to assess interobserver and intra-observer agreement, respectively. Answers were cross-referenced with histology. Results Eleven experts, 19 non-experts, and 10 gastroenterology fellows participated. The CONECCT classification had higher interobserver agreement (Krippendorff's alpha = 0.738) than all the other classifications, and agreement increased with expertise and with the quality of the pictures. The CONECCT classification also had higher intra-observer agreement than all other existing classifications except WASP (which only describes sessile serrated adenomas/polyps). The specificity of CONECCT IIA (89.2, 95% CI [80.4;94.9]) for diagnosing adenomas was higher than that of the NICE 2 category (71.1, 95% CI [60.1;80.5]). The sensitivities of Kudo Vi, Sano IIIa, NICE 2, and CONECCT IIC for detecting adenocarcinoma were statistically different (P < 0.001): the highest sensitivities were for NICE 2 (84.2%) and CONECCT IIC (78.9%), and the lowest for Kudo Vi (31.6%). Conclusions The CONECCT classification currently offers the best interobserver and intra-observer agreement, including between experts and non-experts. CONECCT IIA is the best class for excluding the presence of adenocarcinoma in a colorectal lesion, and CONECCT IIC offers the best compromise for diagnosing superficial adenocarcinoma.
Styles: APA, Harvard, Vancouver, ISO, etc.
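For readers wanting to reproduce these agreement statistics, the sketch below computes Cohen's kappa with scikit-learn and Krippendorff's alpha with the third-party `krippendorff` package (assumed installed via `pip install krippendorff`); the ratings matrix is hypothetical, not the study's data.

```python
# Illustrative only: the two agreement statistics named in the abstract.
import numpy as np
import krippendorff                       # third-party package (assumed installed)
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings: 3 observers x 10 lesions, nominal category codes.
ratings = np.array([[1, 2, 2, 3, 1, 2, 3, 3, 1, 2],
                    [1, 2, 3, 3, 1, 2, 3, 2, 1, 2],
                    [1, 2, 2, 3, 1, 1, 3, 3, 1, 2]])

# Interobserver agreement across all raters (Krippendorff's alpha).
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")

# Intra-observer agreement: one rater scoring the same lesions twice.
round1 = np.array([1, 2, 2, 3, 1, 2, 3, 3, 1, 2])   # rater A, first pass
round2 = np.array([1, 2, 3, 3, 1, 2, 3, 2, 1, 2])   # rater A, second pass
kappa = cohen_kappa_score(round1, round2)
print(f"alpha={alpha:.3f}, kappa={kappa:.3f}")
```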
42

Sekertekin, A., A. M. Marangoz, and H. Akcin. "PIXEL-BASED CLASSIFICATION ANALYSIS OF LAND USE LAND COVER USING SENTINEL-2 AND LANDSAT-8 DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W6 (November 13, 2017): 91–93. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w6-91-2017.

Full text source
Abstract:
The aim of this study is to conduct accuracy analyses of Land Use Land Cover (LULC) classifications derived from Sentinel-2 and Landsat-8 data and to reveal which dataset presents better accuracy results. Zonguldak city and its near surroundings were selected as the study area for this case study. Sentinel-2 Multispectral Instrument (MSI) and Landsat-8 Operational Land Imager (OLI) data, acquired on 6 April 2016 and 3 April 2016 respectively, were used as satellite imagery in the study. The RGB and NIR bands of Sentinel-2 and Landsat-8 were used for classification and comparison. A pan-sharpening process was carried out for the Landsat-8 data before classification, because the spatial resolution of Landsat-8 (30 m) is much coarser than that of the Sentinel-2 RGB and NIR bands (10 m). LULC images were generated using the pixel-based Maximum Likelihood (MLC) supervised classification method. As a result of the accuracy assessment, kappa statistics for Sentinel-2 and Landsat-8 data were 0.78 and 0.85, respectively. The obtained results showed that Sentinel-2 MSI produces more satisfying LULC images than Landsat-8 OLI data. However, in some areas of the Sea class, Landsat-8 presented better results than Sentinel-2.
Styles: APA, Harvard, Vancouver, ISO, etc.
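A bare-bones version of the pixel-based maximum likelihood rule used here can be written in a few lines: fit a Gaussian (mean and covariance) per class on training pixels, then assign each pixel to the class with the highest log-likelihood. The band values and class layout below are synthetic stand-ins, not the study's imagery.

```python
# Minimal Gaussian maximum likelihood classifier (MLC); illustrative only.
import numpy as np
from scipy.stats import multivariate_normal

def mlc_fit(X, y):
    """Estimate a mean vector and covariance matrix per class."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def mlc_predict(X, params):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    ll = np.column_stack([multivariate_normal.logpdf(X, mean=m, cov=S)
                          for m, S in params.values()])
    classes = np.array(list(params.keys()))
    return classes[np.argmax(ll, axis=1)]

# Hypothetical training pixels: 4 bands (R, G, B, NIR), 3 LULC classes.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(300, 4)) + np.repeat(np.arange(3), 100)[:, None]
y_train = np.repeat(np.arange(3), 100)
params = mlc_fit(X_train, y_train)
print(mlc_predict(X_train[:5], params))
```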
43

Astawa, I. Nyoman Gede Arya, Made Leo Radhitya, I. Wayan Raka Ardana, and Felix Andika Dwiyanto. "Face Images Classification using VGG-CNN." Knowledge Engineering and Data Science 4, no. 1 (August 1, 2021): 49. http://dx.doi.org/10.17977/um018v4i12021p49-54.

Full text source
Abstract:
Image classification is a fundamental problem in computer vision. In facial recognition, image classification can speed up the training process and also significantly improve accuracy. Deep learning methods are commonly used in facial recognition; one of them is the Convolutional Neural Network (CNN), which achieves high accuracy. This study combines a CNN for facial recognition with VGG for the classification process. The process begins by inputting the face image. Then a pre-trained feature extractor is used for transfer learning. The study uses a VGG-face model as the transfer learning optimization model, with a pre-trained model architecture. Specifically, the features extracted from an image are numeric vectors, which the model uses to describe specific features in the image. The face images are split in two: 17% test data and 83% training data. The results show that the values of validation accuracy (val_accuracy), loss, and validation loss (val_loss) are excellent. The best training results, however, are obtained for images produced from digital cameras with modified classifications. The resulting val_accuracy is very high (99.84%), not far from the accuracy value (94.69%). This slight difference indicates a good model, since too large a difference would indicate underfitting, whereas an accuracy value higher than the validation accuracy would indicate overfitting. Likewise for loss and val_loss, the two values are val_loss (0.69%) and loss (10.41%).
Styles: APA, Harvard, Vancouver, ISO, etc.
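The transfer learning recipe described, a frozen VGG convolutional base used as a feature extractor with a small trainable head, can be sketched as follows. The snippet uses Keras' stock ImageNet VGG16 weights because the VGG-face weights are distributed separately; the number of identities and the 17% validation split mirror the abstract, while everything else is an assumed placeholder.

```python
# Sketch of VGG-based transfer learning: frozen pre-trained base + small head.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False,
             pooling="avg", input_shape=(224, 224, 3))
base.trainable = False                      # freeze the pre-trained features

model = models.Sequential([
    base,
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # hypothetical: 10 identities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

faces = np.random.rand(8, 224, 224, 3).astype("float32")  # stand-in images
ids = np.random.randint(0, 10, size=8)
model.fit(faces, ids, epochs=1, validation_split=0.17)    # 17% held out
```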
44

Cavanagh, Mitchell K., Kenji Bekki, and Brent A. Groves. "Morphological classification of galaxies with deep learning: comparing 3-way and 4-way CNNs." Monthly Notices of the Royal Astronomical Society 506, no. 1 (June 2, 2021): 659–76. http://dx.doi.org/10.1093/mnras/stab1552.

Full text source
Abstract:
Classifying the morphologies of galaxies is an important step in understanding their physical properties and evolutionary histories. The advent of large-scale surveys has hastened the need to develop techniques for automated morphological classification. We train and test several convolutional neural network (CNN) architectures to classify the morphologies of galaxies in both a 3-class (elliptical, lenticular, and spiral) and a 4-class (+irregular/miscellaneous) schema with a data set of 14 034 visually classified SDSS images. We develop a new CNN architecture that outperforms existing models in both 3-way and 4-way classifications, with overall classification accuracies of 83 and 81 per cent, respectively. We also compare the accuracies of 2-way/binary classifications between all four classes, showing that ellipticals and spirals are most easily distinguished (>98 per cent accuracy), while spirals and irregulars are hardest to differentiate (78 per cent accuracy). Through an analysis of all classified samples, we find tentative evidence that misclassifications are physically meaningful, with lenticulars misclassified as ellipticals tending to be more massive, among other trends. We further combine our binary CNN classifiers to perform a hierarchical classification of samples, obtaining comparable accuracies (81 per cent) to the direct 3-class CNN, but considerably worse accuracies in the 4-way case (65 per cent). As an additional verification, we apply our networks to a small sample of Galaxy Zoo images, obtaining accuracies of 92, 82, and 77 per cent for the binary, 3-way, and 4-way classifications, respectively.
Styles: APA, Harvard, Vancouver, ISO, etc.
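One way to read the hierarchical scheme is as a decision tree over binary classifiers, first elliptical versus disc, then lenticular versus spiral; the sketch below is this reading in miniature, with placeholder functions standing in for the trained binary CNNs (the paper's actual tree may differ).

```python
# Illustrative hierarchical classification from binary classifiers.
# `p_ell` and `p_len` stand for assumed binary CNNs returning P(first class);
# they are placeholders, not the paper's networks.
def hierarchical_classify(image, p_ell, p_len):
    """Return 'E', 'S0' or 'Sp' by descending a tree of binary decisions."""
    if p_ell(image) > 0.5:        # stage 1: elliptical vs. disc galaxy
        return "E"
    if p_len(image) > 0.5:        # stage 2: lenticular vs. spiral
        return "S0"
    return "Sp"

# Toy stand-ins for trained binary CNNs (Keras models in practice).
fake_ell = lambda img: 0.2
fake_len = lambda img: 0.7
print(hierarchical_classify(None, fake_ell, fake_len))   # -> 'S0'
```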
45

Mansur, Henrique, Lucas Sacramento Ramos, and Anderson Freitas. "TL 18167 - Reproducibility assessment of the Lauge-Hansen, Danis-Weber and AO classifications of ankle fractures." Scientific Journal of the Foot & Ankle 13, Supl 1 (November 11, 2019): 98S. http://dx.doi.org/10.30795/scijfootankle.2019.v13.1093.

Full text source
Abstract:
Introduction: Although there are some studies on the reproducibility of various classifications of ankle fractures, they are controversial, and there is no consensus on which classification is the most appropriate. Thus, the objective of this study was to identify which of the 3 main ankle fracture classifications has the highest intra- and interobserver reproducibility and to assess whether the medical training stage of the participants affects the evaluation. Methods: Radiographs of 30 patients with ankle fractures in anteroposterior (AP), lateral, and true AP views were selected. All images were evaluated by 11 participants at different stages of their medical training (5 residents and 6 orthopedic surgeons) at 2 different times. Intra- and interobserver agreement was analyzed using the weighted Cohen's kappa coefficient. Paired Student's t-tests were performed to assess whether the degree of interobserver agreement significantly differed between classification methods. Results: There was significant agreement in all classifications when analyzing intraobserver agreement alone. The Danis-Weber classification showed highly significant (p<0.0001) moderate-to-excellent interobserver agreement. The Danis-Weber classification also had, on average, a significantly higher degree of agreement than the other classification methods (p<0.0001). Conclusion: The Danis-Weber classification had the highest reproducibility among the classification methods evaluated in this study.
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Labuzzetta, Charles, Zhengyuan Zhu, Xinyue Chang, and Yuyu Zhou. "A Submonthly Surface Water Classification Framework via Gap-Fill Imputation and Random Forest Classifiers of Landsat Imagery." Remote Sensing 13, no. 9 (April 30, 2021): 1742. http://dx.doi.org/10.3390/rs13091742.

Full text source
Abstract:
Global surface water classification layers, such as the European Joint Research Centre’s (JRC) Monthly Water History dataset, provide a starting point for accurate and large scale analyses of trends in waterbody extents. On the local scale, there is an opportunity to increase the accuracy and temporal frequency of these surface water maps by using locally trained classifiers and gap-filling missing values via imputation in all available satellite images. We developed the Surface Water IMputation (SWIM) classification framework using R and the Google Earth Engine computing platform to improve water classification compared to the JRC study. The novel contributions of the SWIM classification framework include (1) a cluster-based algorithm to improve classification sensitivity to a variety of surface water conditions and produce approximately unbiased estimation of surface water area, (2) a method to gap-fill every available Landsat image for a region of interest to generate submonthly classifications at the highest possible temporal frequency, and (3) an outlier detection method for identifying images that contain classification errors due to failures in cloud masking. Validation and several case studies demonstrate that the SWIM classification framework outperforms the JRC dataset in spatiotemporal analyses of small waterbody dynamics with previously unattainable sensitivity and temporal frequency. Most importantly, this study shows that reliable surface water classifications can be obtained for all pixels in every available Landsat image, even those containing cloud cover, after performing gap-fill imputation. By using this technique, the SWIM framework supports monitoring water extent on a submonthly basis, which is especially applicable to assessing the impact of short-term flood and drought events. Additionally, our results contribute to addressing the challenges of training machine learning classifiers with biased ground truth data and identifying images that contain regions of anomalous classification errors.
Styles: APA, Harvard, Vancouver, ISO, etc.
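SWIM itself runs in R and Google Earth Engine, but the gap-fill idea can be illustrated in a few lines of Python: train a random forest on pixels that are clear in the target image, using coordinates plus the same pixel's class in temporally adjacent images, then impute the cloud-masked pixels. All data and rates below are synthetic assumptions, not the framework's inputs.

```python
# Conceptual gap-fill imputation sketch (not the SWIM code itself).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 10_000
coords = rng.uniform(0, 100, size=(n, 2))
true_class = (coords[:, 0] + coords[:, 1] > 100).astype(int)   # toy water mask
# Same pixel's class in the previous/next image, with 10% label noise.
prev_img = np.where(rng.random(n) < 0.9, true_class, 1 - true_class)
next_img = np.where(rng.random(n) < 0.9, true_class, 1 - true_class)
cloudy = rng.random(n) < 0.2                    # pixels masked in the target

X = np.column_stack([coords, prev_img, next_img])
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X[~cloudy], true_class[~cloudy])         # train on clear pixels only
filled = true_class.copy()
filled[cloudy] = rf.predict(X[cloudy])          # impute the cloudy pixels
print((filled[cloudy] == true_class[cloudy]).mean())   # imputation accuracy
```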
47

Yang, Na, and Yongtao Zhang. "A Gaussian Process Classification and Target Recognition Algorithm for SAR Images." Scientific Programming 2022 (January 20, 2022): 1–10. http://dx.doi.org/10.1155/2022/9212856.

Full text source
Abstract:
Synthetic aperture radar (SAR) uses the relative movement of the radar and the target to pick up echoes from the surveyed area and image it. In contrast to optical imaging, SAR imaging systems are not affected by weather or time of day and can detect targets in harsh conditions. SAR images therefore have important application value for military and civilian purposes. This paper studies Gaussian process classification, a probabilistic classification algorithm grounded in the Bayesian framework that yields a complete probabilistic formulation. Building on Gaussian processes and SAR data, we study a Gaussian process classification algorithm for SAR images. We introduce the basic principles of Gaussian processes, briefly analyze the basic theory of classification and the characteristics of SAR images, provide an evaluation index system for image classification, and give a Gaussian process classification model for SAR. Taking the Laplace approximation as an example, several direct binary classification algorithms are introduced. Building on binary classification, we propose an indirect multi-class classification method (one-versus-rest) and a pairwise (one-versus-one) method for Gaussian processes. The SAR image algorithm based on binary classification is relatively simple and achieves good results.
Styles: APA, Harvard, Vancouver, ISO, etc.
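For orientation, scikit-learn's GaussianProcessClassifier follows the same recipe this abstract sketches: a Laplace approximation to the non-Gaussian posterior for binary problems, extended to multiple classes via one-versus-rest. The features and labels below are toy stand-ins for SAR-derived vectors.

```python
# Hedged illustration of Gaussian process classification (Laplace approximation).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))                 # stand-in SAR feature vectors
y = (X[:, :2].sum(axis=1) > 0).astype(int)    # toy two-class target labels

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                random_state=0).fit(X[:150], y[:150])
print("accuracy:", gpc.score(X[150:], y[150:]))
print("class probabilities:", gpc.predict_proba(X[150:153]))
```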
48

Wang, Da-Han, Wei Zhou, Jianmin Li, Yun Wu, and Shunzhi Zhu. "Exploring Misclassification Information for Fine-Grained Image Classification." Sensors 21, no. 12 (June 18, 2021): 4176. http://dx.doi.org/10.3390/s21124176.

Full text source
Abstract:
Fine-grained image classification is a hot topic that has been widely studied recently. Many fine-grained image classification methods ignore misclassification information, which is important for improving classification accuracy. To make use of this information, in this paper we propose a novel fine-grained image classification method that explores the misclassification information (FGMI) of pre-learned models. For each class, we harvest the confusion information from several pre-learned fine-grained image classification models. For one particular class, we select a number of classes that are likely to be misclassified as this class. The images of the selected classes are then used to train classifiers. In this way, we can reduce the influence of irrelevant images to some extent. We use the misclassification information for all the classes by training a number of confusion classifiers. The outputs of these trained classifiers are combined to represent images and produce classifications. To evaluate the effectiveness of the proposed FGMI method, we conduct fine-grained classification experiments on several public image datasets. Experimental results demonstrate the usefulness of the proposed method.
Styles: APA, Harvard, Vancouver, ISO, etc.
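One plausible reading of the class-selection step, not the authors' code, is to mine a pre-learned model's confusion matrix for the classes most often confused with a target class and train a dedicated classifier on just those; the helper below shows that selection on toy labels.

```python
# Sketch of confusion-driven class selection (an assumed reading of FGMI).
import numpy as np
from sklearn.metrics import confusion_matrix

def confusable_classes(y_true, y_pred, target, k=3):
    """Return the k classes most frequently predicted for `target` samples."""
    cm = confusion_matrix(y_true, y_pred)
    row = cm[target].astype(float)
    row[target] = -1                      # exclude the class itself
    return np.argsort(row)[::-1][:k]

# Toy validation labels from a hypothetical pre-learned model.
y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3])
y_pred = np.array([0, 1, 1, 1, 1, 3, 2, 0, 2, 3, 1, 3])
print(confusable_classes(y_true, y_pred, target=0))   # classes confused with 0
```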
49

Khorasani, Nioosha E., Gabriel Thomas, Simone Balocco, and Danny Mann. "Agricultural Harvester Sound Classification using Convolutional Neural Networks and Spectrograms." Applied Engineering in Agriculture 38, no. 2 (2022): 455–59. http://dx.doi.org/10.13031/aea.14668.

Full text source
Abstract:
Highlights: automatic classification of harvester sounds; final classification obtained from three convolutional neural networks; network outputs combined via stacking and voting to achieve 100% accuracy. Abstract. The use of deep learning in agricultural tasks has recently become popular. Deep learning networks have been used for analyzing images of crops, identifying paddy areas, and distinguishing sick plants from healthy ones, to name a few applications. Besides visual systems, sound analysis of agricultural machinery is a time-sensitive task that can also be incorporated into decision making and can be done with the help of deep learning models. We propose a method that generates spectrogram images from the sound of a harvester and classifies them into three working modes in real time. We use three convolutional neural networks and feed their outputs into a stacking ensemble method to improve the accuracy of the system. To achieve 100% classification accuracy, a final decision is made by voting over several consecutive classifications produced by the stacking step. We were able to perform classifications in less than 1 s, which was considered a safe response time for the harvester. Keywords: Convolutional neural networks, Deep learning, Spectrograms, Stacking, Voting.
Styles: APA, Harvard, Vancouver, ISO, etc.
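The front end of such a pipeline, sound frame to spectrogram image to majority vote over consecutive classifications, can be sketched as follows; the sample rate, window sizes, and mode labels are assumed placeholders rather than the paper's settings.

```python
# Illustrative pipeline: 1-second audio frame -> spectrogram -> vote.
import numpy as np
from scipy.signal import spectrogram
from collections import Counter

fs = 16_000                                   # assumed sample rate, Hz
audio = np.random.randn(fs)                   # stand-in 1 s harvester clip
f, t, S = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
img = 10 * np.log10(S + 1e-10)                # dB-scaled spectrogram "image"

def vote(labels):
    """Final working-mode decision from consecutive frame classifications."""
    mode, count = Counter(labels).most_common(1)[0]
    return mode if count > len(labels) // 2 else None   # None = undecided

print(img.shape)                    # feature image a CNN would consume
print(vote(["idle", "harvest", "harvest", "harvest", "unload"]))
```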
50

Foglini, Federica, Valentina Grande, Fabio Marchese, Valentina A. Bracchi, Mariacristina Prampolini, Lorenzo Angeletti, Giorgio Castellan, et al. "Application of Hyperspectral Imaging to Underwater Habitat Mapping, Southern Adriatic Sea." Sensors 19, no. 10 (May 16, 2019): 2261. http://dx.doi.org/10.3390/s19102261.

Full text source
Abstract:
Hyperspectral imagers enable the collection of high-resolution spectral images exploitable for the supervised classification of habitats and objects of interest (OOI). Although this is a well-established technology for the study of subaerial environments, Ecotone AS has developed an underwater hyperspectral imager (UHI) system to explore the properties of the seafloor. The aim of the project is to evaluate the potential of this instrument for mapping and monitoring benthic habitats in shallow and deep-water environments. For the first time, we tested this system at two sites in the Southern Adriatic Sea (Mediterranean Sea): the cold-water coral (CWC) habitat in the Bari Canyon and the Coralligenous habitat off Brindisi. We created a spectral library for each site, considering the different substrates and the main OOI, reaching the lowest taxonomic rank where possible. We applied the spectral angle mapper (SAM) supervised classification to map the areal extent of the Coralligenous habitat and to recognize the major CWC habitat-formers. Despite some technical problems, these first results demonstrate the suitability of the UHI camera for habitat mapping and seabed monitoring, through the achievement of quantifiable and repeatable classifications.
Styles: APA, Harvard, Vancouver, ISO, etc.
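The spectral angle mapper rule itself is compact: treat each pixel spectrum and each library spectrum as vectors and assign the pixel to the library entry with the smallest angle between them. A minimal NumPy version, with illustrative shapes and class names, follows.

```python
# Minimal spectral angle mapper (SAM); shapes and classes are illustrative.
import numpy as np

def sam_classify(cube, library):
    """cube: (H, W, B) hyperspectral image; library: (C, B) reference spectra.
    Returns an (H, W) map of best-matching library indices."""
    px = cube.reshape(-1, cube.shape[-1])
    cos = (px @ library.T) / (np.linalg.norm(px, axis=1, keepdims=True)
                              * np.linalg.norm(library, axis=1) + 1e-12)
    angles = np.arccos(np.clip(cos, -1.0, 1.0))        # (N_pixels, C)
    return angles.argmin(axis=1).reshape(cube.shape[:2])

cube = np.random.rand(64, 64, 40)            # stand-in UHI scene, 40 bands
library = np.random.rand(3, 40)              # e.g. coral, sediment, rock
print(sam_classify(cube, library).shape)     # -> (64, 64)
```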
