To see the other types of publications on this topic, follow the link: Detectron2.

Journal articles on the topic 'Detectron2'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Detectron2.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Abdusalomov, Akmalbek Bobomirzaevich, Bappy MD Siful Islam, Rashid Nasimov, Mukhriddin Mukhiddinov, and Taeg Keun Whangbo. "An Improved Forest Fire Detection Method based on the Detectron2 Model and a Deep Learning Approach." Sensors 23, no. 3 (January 29, 2023): 1512. http://dx.doi.org/10.3390/s23031512.

Abstract:
With an increase in both global warming and the human population, forest fires have become a major global concern. This can lead to climatic shifts and the greenhouse effect, among other adverse outcomes. Surprisingly, human activities have caused a disproportionate number of forest fires. Fast detection with high accuracy is the key to controlling this unexpected event. To address this, we proposed an improved forest fire detection method to classify fires based on a new version of the Detectron2 platform (a ground-up rewrite of the Detectron library) using deep learning approaches. Furthermore, a custom dataset was created and labeled for model training, and it achieved higher precision than the other models. This robust result was achieved by improving the Detectron2 model in various experimental scenarios with a custom dataset of 5200 images. The proposed model can detect small fires over long distances during the day and night. The advantage of using the Detectron2 algorithm is its long-distance detection of the object of interest. The experimental results proved that the proposed forest fire detection method successfully detected fires with an improved precision of 99.3%.
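As a concrete illustration of the kind of workflow this abstract describes, the sketch below fine-tunes a Detectron2 model-zoo detector on a custom COCO-format dataset. The dataset name, file paths, class count, and solver settings are placeholders, not the authors' configuration.

```python
# A minimal Detectron2 fine-tuning sketch (all paths and values hypothetical).
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Register a COCO-format dataset of labeled fire images.
register_coco_instances("fire_train", {}, "fire_annotations.json", "fire_images/")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("fire_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1   # a single "fire" class (assumption)
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 3000            # iteration budget is a placeholder

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```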
2

Kouvaras, Loukas, and George P. Petropoulos. "A Novel Technique Based on Machine Learning for Detecting and Segmenting Trees in Very High Resolution Digital Images from Unmanned Aerial Vehicles." Drones 8, no. 2 (February 1, 2024): 43. http://dx.doi.org/10.3390/drones8020043.

Abstract:
The present study proposes a technique for automated tree crown detection and segmentation in digital images derived from unmanned aerial vehicles (UAVs) using a machine learning (ML) algorithm named Detectron2. The technique, which was developed in the Python programming language, receives as input images with object boundary information. After training on sets of data, it is able to set its own object boundaries. In the present study, the algorithm was trained for tree crown detection and segmentation. The test bed consisted of UAV imagery of an agricultural field of tangerine trees in the city of Palermo in Sicily, Italy. The algorithm’s output was the accurate boundary of each tree. The output from the developed algorithm was compared against the results of tree boundary segmentation generated by the Support Vector Machine (SVM) supervised classifier, which has proven to be a very promising object segmentation method. The results from the two methods were compared with the most accurate yet time-consuming method, direct digitalization. For accuracy assessment purposes, the detected area efficiency, skipped area rate, and false area rate were estimated for both methods. The results showed that the Detectron2 algorithm is more efficient in segmenting the relevant data when compared to the SVM model in two out of the three indices. Specifically, the Detectron2 algorithm exhibited a 0.959% and 0.041% fidelity rate on the common detected and skipped area rate, respectively, when compared with the digitalization method. The SVM exhibited 0.902% and 0.097%, respectively. On the other hand, the SVM classification generated better false detected area results, with 0.035% accuracy, compared to the Detectron2 algorithm’s 0.056%. Having an accurate estimation of the tree boundaries from the Detectron2 algorithm, the tree health assessment was evaluated last. For this to happen, three different vegetation indices were produced (NDVI, GLI and VARI). All those indices showed tree health as average. All in all, the results demonstrated the ability of the technique to detect and segment trees from UAV imagery.
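As a rough illustration of the three accuracy indices used above, the hedged sketch below computes detected, skipped, and false area rates from binary masks; the authors' exact definitions may differ, and the NumPy masks stand in for rasterized tree-crown polygons.

```python
import numpy as np

def area_indices(pred: np.ndarray, ref: np.ndarray):
    """Detected, skipped, and false area rates of a predicted mask against a
    reference (manually digitized) mask; inputs are boolean arrays."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    ref_area = ref.sum()
    detected = (pred & ref).sum() / ref_area   # share of reference area found
    skipped = (ref & ~pred).sum() / ref_area   # share of reference area missed
    false = (pred & ~ref).sum() / ref_area     # predicted area outside reference
    return detected, skipped, false
```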
3

Bhaddurgatte, Vishesh R., Supreeth S. Koushik, Shushruth S, and Kiran Y. C. "Detection Of Adulterants in Pistachio Using Machine Learning Technique." International Journal of Scientific Research in Engineering and Management 09, no. 01 (January 6, 2025): 1–9. https://doi.org/10.55041/ijsrem40523.

Abstract:
This study addresses the issue of adulteration in pistachios, with a focus on the use of green peas to compromise product purity and maximize profits. Leveraging the capabilities of YOLOv5s, a state-of-the-art real-time image processing model, we developed a robust system for identifying adulterants in pistachios. Comparative evaluations showed that other deep learning models, including Detectron2 and Scaled YOLOv4, performed worse in terms of accuracy and speed than our YOLOv5s-based solution. The YOLOv5s model also provides an instant estimate of the percentages of adulterants and pistachios. Keywords - YOLOv5s, Scaled YOLOv4, Detectron2
4

Shi, Zijing. "The distinguish between cats and dogs based on Detectron2 for automatic feeding." Applied and Computational Engineering 38, no. 1 (January 22, 2024): 35–40. http://dx.doi.org/10.54254/2755-2721/38/20230526.

Abstract:
With the rapid growth of urbanization, the problem of stray animals on the streets is particularly prominent, especially the shortage of food for cats and dogs. This study introduces an automatic feeding system based on the Detectron2 deep learning framework, aiming to accurately identify and provide suitable food for these stray animals. Through training using Detectron2 with a large amount of image data, the system shows extremely high recognition accuracy in single-object images. When dealing with multi-object images, Detectron2 can generate independent recognition frames for each target and make corresponding feeding decisions. Despite the outstanding performance of the model, its potential uncertainties and errors still need to be considered. This research not only offers a practical solution to meet the basic needs of stray animals but also provides a new perspective for urban management and animal welfare. By combining technology with social responsibility, this innovative solution opens up a new path for solving the stray animal problem in cities, with broad application prospects and profound social significance.
5

Chincholi, Farheen, and Harald Koestler. "Detectron2 for Lesion Detection in Diabetic Retinopathy." Algorithms 16, no. 3 (March 7, 2023): 147. http://dx.doi.org/10.3390/a16030147.

Abstract:
Hemorrhages in the retinal fundus are a common symptom of both diabetic retinopathy and diabetic macular edema, making their detection crucial for early diagnosis and treatment. For this task, the aim is to evaluate the performance of two pre-trained and additionally fine-tuned models from the Detectron2 model zoo, Faster R-CNN (R50-FPN) and Mask R-CNN (R50-FPN). Experiments show that the Mask R-CNN (R50-FPN) model provides highly accurate segmentation masks for each detected hemorrhage, with an accuracy of 99.34%. The Faster R-CNN (R50-FPN) model detects hemorrhages with an accuracy of 99.22%. The results of both models are compared using a publicly available image database with ground truth marked by experts. Overall, this study demonstrates that current models are valuable tools for early diagnosis and treatment of diabetic retinopathy and diabetic macular edema.
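The two model-zoo baselines named here can be loaded in a few lines of Detectron2. The sketch below uses the stock COCO weights as a starting point (the paper additionally fine-tunes them on retinal data); the image path and score threshold are placeholders.

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

def make_predictor(config_path: str) -> DefaultPredictor:
    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(config_path))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(config_path)
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence cutoff (assumption)
    return DefaultPredictor(cfg)

faster_rcnn = make_predictor("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
mask_rcnn = make_predictor("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")

image = cv2.imread("fundus.png")                    # hypothetical fundus image
boxes = faster_rcnn(image)["instances"].pred_boxes  # hemorrhage bounding boxes
masks = mask_rcnn(image)["instances"].pred_masks    # per-hemorrhage masks
```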
6

Mullins, Connor C., Travis J. Esau, Qamar U. Zaman, Ahmad A. Al-Mallahi, and Aitazaz A. Farooque. "Exploiting 2D Neural Network Frameworks for 3D Segmentation Through Depth Map Analytics of Harvested Wild Blueberries (Vaccinium angustifolium Ait.)." Journal of Imaging 10, no. 12 (December 15, 2024): 324. https://doi.org/10.3390/jimaging10120324.

Abstract:
This study introduced a novel approach to 3D image segmentation utilizing a neural network framework applied to 2D depth map imagery, with Z-axis values visualized through color gradation. This research involved comprehensive data collection from mechanically harvested wild blueberries to populate 3D and red–green–blue (RGB) images of filled totes through time-of-flight and RGB cameras, respectively. Advanced neural network models from the YOLOv8 and Detectron2 frameworks were assessed for their segmentation capabilities. Notably, the YOLOv8 models, particularly YOLOv8n-seg, demonstrated superior processing efficiency, with an average time of 18.10 ms, significantly faster than the Detectron2 models, which exceeded 57 ms, while maintaining high performance with a mean intersection over union (IoU) of 0.944 and a Matthews correlation coefficient (MCC) of 0.957. A qualitative comparison of segmentation masks indicated that the YOLO models produced smoother and more accurate object boundaries, whereas Detectron2 showed jagged edges and under-segmentation. Statistical analyses, including ANOVA and Tukey’s HSD test (α = 0.05), confirmed the superior segmentation performance of models on depth maps over RGB images (p < 0.001). This study concludes by recommending the YOLOv8n-seg model for real-time 3D segmentation in precision agriculture, providing insights that can enhance volume estimation, yield prediction, and resource management practices.
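Both headline metrics here are simple functions of pixel-level confusion counts; the hedged sketch below shows one way to compute them for a pair of binary masks (this is illustrative, not the authors' evaluation code).

```python
import numpy as np

def iou_and_mcc(pred: np.ndarray, ref: np.ndarray):
    """Intersection over union and Matthews correlation coefficient
    for binary segmentation masks."""
    p, r = pred.astype(bool), ref.astype(bool)
    tp = float((p & r).sum()); fp = float((p & ~r).sum())
    fn = float((~p & r).sum()); tn = float((~p & ~r).sum())
    iou = tp / (tp + fp + fn)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return iou, mcc
```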
7

Castillo, Darwin, María José Rodríguez-Álvarez, René Samaniego, and Vasudevan Lakshminarayanan. "Models to Identify Small Brain White Matter Hyperintensity Lesions." Applied Sciences 15, no. 5 (March 6, 2025): 2830. https://doi.org/10.3390/app15052830.

Abstract:
According to the World Health Organization (WHO), peripheral and central neurological disorders affect approximately one billion people worldwide. Ischemic stroke and Alzheimer’s disease and other dementias are the second and fifth leading causes of death, respectively. In this context, detecting and classifying brain lesions constitutes a critical area of research in medical image processing, significantly impacting clinical practice. Traditional lesion detection, segmentation, and feature extraction methods are time-consuming and observer-dependent. In this sense, research into machine and deep learning methods applied to medical image processing constitutes one of the crucial tools for automatically learning hierarchical features, enabling better accuracy and quicker diagnosis, treatment, and prognosis of diseases. This project aims to develop and implement deep learning models for detecting and classifying small brain white matter hyperintensity (WMH) lesions in magnetic resonance images (MRI), specifically lesions concerning ischemic and demyelination diseases. The methods applied were U-Net and the Segment Anything Model (SAM) for segmentation, while YOLOv8 and Detectron2 (based on Mask R-CNN) were applied to detect and classify the lesions. Experimental results show a Dice coefficient (DSC) of 0.94, 0.50, 0.241, and 0.88 for segmentation of WMH lesions using U-Net, SAM, YOLOv8, and Detectron2, respectively. The Detectron2 model demonstrated an accuracy of 0.94 in detecting and 0.98 in classifying lesions, including small lesions where other models often fail. The methods developed give an outline for the detection, segmentation, and classification of small and irregular-morphology brain lesions and could significantly aid clinical diagnostics, providing reliable support for physicians and improving patient outcomes.
8

Rani, Anju, Daniel Ortiz-Arroyo, and Petar Durdevic. "Defect Detection in Synthetic Fibre Ropes using Detectron2 Framework." Applied Ocean Research 150 (September 2024): 104109. http://dx.doi.org/10.1016/j.apor.2024.104109.

9

Wen, Hao, Chang Huang, and Shengmin Guo. "The Application of Convolutional Neural Networks (CNNs) to Recognize Defects in 3D-Printed Parts." Materials 14, no. 10 (May 15, 2021): 2575. http://dx.doi.org/10.3390/ma14102575.

Abstract:
Cracks and pores are two common defects in metallic additive manufacturing (AM) parts. In this paper, deep learning-based image analysis is performed for defect (crack and pore) classification/detection based on SEM images of metallic AM parts. Three different levels of complexity, namely, defect classification, defect detection and defect image segmentation, are successfully achieved using a simple CNN model, the YOLOv4 model and the Detectron2 object detection library, respectively. The tuned CNN model can classify any single defect as either a crack or pore at almost 100% accuracy. The other two models can identify more than 90% of the cracks and pores in the testing images. In addition to static image analysis, defect detection is also successfully applied to a video that mimics AM process-control images. The trained Detectron2 model can identify almost all the pores and cracks that exist in the original video. This study lays a foundation for future in situ process monitoring of the 3D printing process.
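Applying a trained detector to process-control footage, as described above, reduces to a frame loop; here is a hedged sketch that assumes a Detectron2 config cfg from a trained model and a hypothetical video file.

```python
import cv2
from detectron2.engine import DefaultPredictor

def count_defects(cfg, video_path: str = "am_process.mp4") -> int:
    """Total cracks/pores detected across all frames of a video (sketch)."""
    predictor = DefaultPredictor(cfg)
    cap = cv2.VideoCapture(video_path)
    total = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        total += len(predictor(frame)["instances"])  # detections in this frame
    cap.release()
    return total
```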
10

Sankar, Aravinthan, Kunal Chaturvedi, Al-Akhir Nayan, Mohammad Hesam Hesamian, Ali Braytee, and Mukesh Prasad. "Utilizing Generative Adversarial Networks for Acne Dataset Generation in Dermatology." BioMedInformatics 4, no. 2 (April 9, 2024): 1059–70. http://dx.doi.org/10.3390/biomedinformatics4020059.

Abstract:
Background: In recent years, computer-aided diagnosis for skin conditions has made significant strides, primarily driven by artificial intelligence (AI) solutions. However, despite this progress, the efficiency of AI-enabled systems remains hindered by the scarcity of high-quality and large-scale datasets, primarily due to privacy concerns. Methods: This research circumvents privacy issues associated with real-world acne datasets by creating a synthetic dataset of human faces with varying acne severity levels (mild, moderate, and severe) using Generative Adversarial Networks (GANs). Further, three object detection models—YOLOv5, YOLOv8, and Detectron2—are used to evaluate the efficacy of the augmented dataset for detecting acne. Results: Integrating StyleGAN with these models, the results demonstrate the mean average precision (mAP) scores: YOLOv5: 73.5%, YOLOv8: 73.6%, and Detectron2: 37.7%. These scores surpass the mAP achieved without GANs. Conclusions: This study underscores the effectiveness of GANs in generating synthetic facial acne images and emphasizes the importance of utilizing GANs and convolutional neural network (CNN) models for accurate acne detection.
11

Миљуш, Сара. "ДЕТЕКЦИЈА СТЕНОЗА И ОКЛУЗИЈА КОРОНАРНИХ АРТЕРИЈА" [Detection of stenoses and occlusions of the coronary arteries]. Zbornik radova Fakulteta tehničkih nauka u Novom Sadu 39, no. 10 (October 9, 2024): 1532–35. http://dx.doi.org/10.24867/28rb04miljus.

Abstract:
The paper is based on the detection of stenoses and occlusions of the coronary arteries of the heart, as well as on determining their percentage of blockage, with the help of machine learning algorithms, namely the YOLOv4 and YOLOv8 algorithms and the Detectron2 library. The database used for the project consists of 3,850 images of coronary arteries in which stenoses and/or occlusions are clearly marked.
12

Abiamamela Obi-Obuoha, Victor Samuel Rizama, Ifeanyichukwu Okafor, Haggai Edore Ovwenkekpere, Kehinde Obe, and Jeremiah Ekundayo. "Real-time traffic object detection using detectron 2 with faster R-CNN." World Journal of Advanced Research and Reviews 24, no. 2 (November 30, 2024): 2173–89. http://dx.doi.org/10.30574/wjarr.2024.24.2.3559.

Abstract:
Object detection is becoming more and more important in daily life, especially in applications like advanced traffic analysis, intelligent driver assistance systems, and driverless cars. The accurate identification of objects from real-time video is crucial for effective traffic analysis. These systems play a vital role in giving drivers and authorities a comprehensive understanding of the road and the surrounding environment. Modern algorithms and neural network-based architectures with extremely high detection accuracy, like Faster R-CNN, are crucial to achieving this. This study investigates an advanced object detection system designed for urban traffic applications using an interactive Gradio interface and Detectron2’s Faster R-CNN model. The research focuses on developing a model capable of identifying key traffic objects such as traffic lights, vehicles, buses, and crossroads with high accuracy and precision. A significant contribution of this study is the integration of a Gradio-based interface that enables users to upload images or videos from their local storage or webcam and view the results in real time, making the model both accessible and practical. Our findings demonstrate that the Detectron2 framework, paired with Gradio’s interactive interface, offers a reliable and scalable solution for traffic monitoring and safety applications.
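The Gradio wiring described in this abstract is only a few lines in practice; below is a hedged sketch in which predictor is assumed to be a Detectron2 DefaultPredictor and metadata the matching dataset metadata (both placeholders).

```python
import gradio as gr
from detectron2.utils.visualizer import Visualizer

def detect(image):
    """Gradio supplies an RGB numpy array; Detectron2 expects BGR input."""
    outputs = predictor(image[:, :, ::-1])
    v = Visualizer(image, metadata)  # draws boxes, labels, and scores
    out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    return out.get_image()

gr.Interface(fn=detect, inputs=gr.Image(), outputs=gr.Image()).launch()
```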
13

Evangelista, Ivan Roy S., Lenmar T. Catajay, Maria Gemel B. Palconit, Mary Grace Ann C. Bautista, Ronnie S. Concepcion II, Edwin Sybingco, Argel A. Bandala, and Elmer P. Dadios. "Detection of Japanese Quails (Coturnix japonica) in Poultry Farms Using YOLOv5 and Detectron2 Faster R-CNN." Journal of Advanced Computational Intelligence and Intelligent Informatics 26, no. 6 (November 20, 2022): 930–36. http://dx.doi.org/10.20965/jaciii.2022.p0930.

Abstract:
Poultry, like quails, is sensitive to stressful environments. Too much stress can adversely affect birds’ health, causing meat quality, egg production, and reproduction to degrade. Posture and behavioral activities can be indicators of poultry wellness and health condition. Animal welfare is one of the aims of precision livestock farming. Computer vision, with its real-time, non-invasive, and accurate monitoring capability, and its ability to obtain a myriad of information, is best for livestock monitoring. This paper introduces a quail detection mechanism based on computer vision and deep learning using YOLOv5 and Detectron2 (Faster R-CNN) models. An RGB camera installed 3 ft above the quail cages was used for video recording. The annotation was done in MATLAB Video Labeler using the temporal interpolator algorithm. A total of 898 ground truth images were extracted from the annotated videos. Augmentation of the images by changing orientation, adding noise, and manipulating hue, saturation, and brightness was performed in Roboflow. Training, validation, and testing of the models were done in Google Colab. YOLOv5 and Detectron2 reached average precision (AP) values of 85.07 and 67.15, respectively. Both models performed satisfactorily in detecting quails in different backgrounds and lighting conditions.
14

Moysiadis, Vasileios, Ilias Siniosoglou, Georgios Kokkonis, Vasileios Argyriou, Thomas Lagkas, Sotirios K. Goudos, and Panagiotis Sarigiannidis. "Cherry Tree Crown Extraction Using Machine Learning Based on Images from UAVs." Agriculture 14, no. 2 (February 18, 2024): 322. http://dx.doi.org/10.3390/agriculture14020322.

Abstract:
Remote sensing stands out as one of the most widely used operations in the field. In this research area, UAVs offer full coverage of large cultivation areas in a few minutes and provide orthomosaic images with valuable information based on multispectral cameras. Especially for orchards, it is helpful to isolate each tree and then calculate the preferred vegetation indices separately. Thus, tree detection and crown extraction is another important research area in the domain of Smart Farming. In this paper, we propose an innovative tree detection method based on machine learning, designed to isolate each individual tree in an orchard. First, we evaluate the effectiveness of Detectron2 and YOLOv8 object detection algorithms in identifying individual trees and generating corresponding masks. Both algorithms yield satisfactory results in cherry tree detection, with the best F1-Score up to 94.85%. In the second stage, we apply a method based on OTSU thresholding to improve the provided masks and precisely cover the crowns of the detected trees. The proposed method achieves 85.30% on IoU while Detectron2 gives 79.83% and YOLOv8 has 75.36%. Our work uses cherry trees, but it is easy to apply to any other tree species. We believe that our approach will be a key factor in enabling health monitoring for each individual tree.
15

Mg, Wai Hnin Eaindrar, Pyke Tin, Masaru Aikawa, Ikuo Kobayashi, Yoichiro Horii, Kazuyuki Honkawa, and Thi Thi Zin. "Customized Tracking Algorithm for Robust Cattle Detection and Tracking in Occlusion Environments." Sensors 24, no. 4 (February 11, 2024): 1181. http://dx.doi.org/10.3390/s24041181.

Abstract:
Ensuring precise calving time prediction necessitates the adoption of an automatic and precisely accurate cattle tracking system. Nowadays, cattle tracking can be challenging due to the complexity of their environment and the potential for missed or false detections. Most existing deep-learning tracking algorithms face challenges when dealing with track-ID switch cases caused by cattle occlusion. To address these concerns, the proposed research endeavors to create an automatic cattle detection and tracking system by leveraging the remarkable capabilities of Detectron2 while embedding tailored modifications to make it even more effective and efficient for a variety of applications. Additionally, the study conducts a comprehensive comparison of eight distinct deep-learning tracking algorithms, with the objective of identifying the most optimal algorithm for achieving precise and efficient individual cattle tracking. This research focuses on tackling occlusion conditions and track-ID increments caused by missed detections. Through a comparison of various tracking algorithms, we discovered that Detectron2, coupled with our customized tracking algorithm (CTA), achieves 99% in detecting and tracking individual cows while handling occlusion challenges. Our algorithm stands out by successfully overcoming the challenges of missed detections and occlusion, making it highly reliable even during extended periods in a crowded calving pen.
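To make the track-ID problem concrete, here is a deliberately simplified, hedged sketch of IoU-based ID assignment of the kind tracking-by-detection methods build on; it is not the authors' CTA, and all names are illustrative.

```python
def box_iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def assign_ids(tracks: dict, detections: list, thresh: float = 0.3) -> dict:
    """Greedily match detections to existing track boxes by IoU; unmatched
    detections open new IDs instead of switching existing ones."""
    next_id = max(tracks, default=-1) + 1
    updated = {}
    for det in detections:
        best = max(tracks, key=lambda t: box_iou(tracks[t], det), default=None)
        if best is not None and box_iou(tracks[best], det) >= thresh:
            updated[best] = det        # same cow, keep its ID
        else:
            updated[next_id] = det     # treat as a new cow
            next_id += 1
    return updated
```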
16

Dong, Changhao. "Exploring visual techniques for indoor intrusion detection using detectron2 and Faster RCNN." Applied and Computational Engineering 36, no. 1 (January 22, 2024): 230–36. http://dx.doi.org/10.54254/2755-2721/36/20230452.

Abstract:
As societal evolution marches forward, there is an escalating emphasis on indoor security concerns. Within the safety realm, indoor intrusion detection emerges as a pivotal challenge, given its direct implications for safeguarding human lives and assets. Leveraging visual methods for indoor intrusion detection holds promising potential, particularly due to its straightforward deployment advantages. This study zeroes in on surveillance video footage, a prevalent medium in the security domain, as its experimental material. After an initial preprocessing phase utilizing a video difference map, the research introduces a pedestrian detection algorithm hinged on the synergy of Detectron2 and Faster R-CNN. The insights gleaned reveal that this combined algorithm, when augmented with the video difference map preprocessing, exhibits commendable accuracy, robustness, and real-time efficacy on surveillance footage, especially in scenarios without significant target occlusion. Moreover, the algorithm is adept at discerning diminutively sized targets, demonstrating resilience against varying light magnitudes and maintaining accuracy amidst intricate lighting conditions. By harnessing this methodology, indoor environment safety monitoring can be enhanced, thereby bolstering the provision of dependable protection for individuals.
17

Bourassa, Albert, Philippe Apparicio, Jérémy Gelb, and Geneviève Boisjoly. "Canopy Assessment of Cycling Routes: Comparison of Videos from a Bicycle-Mounted Camera and GPS and Satellite Imagery." ISPRS International Journal of Geo-Information 12, no. 1 (December 27, 2022): 6. http://dx.doi.org/10.3390/ijgi12010006.

Abstract:
Many studies have proven that urban greenness is an important factor when cyclists choose a route. Thus, detecting trees along a cycling route is a major key to assessing the quality of cycling routes and providing further arguments to improve ridership and the better design of cycling routes. The rise in the use of video recordings in data collection provides access to a new point of view of a city, with data recorded at eye level. This method may be superior to the commonly used normalized difference vegetation index (NDVI) from satellite imagery because satellite images are costly to obtain and cloud cover sometimes obscures the view. This study has two objectives: (1) to assess the number of trees along a cycling route using software object detection on videos, particularly the Detectron2 library, and (2) to compare the detected canopy on the videos to other canopy data to determine if they are comparable. Using bicycles installed with cameras and GPS, four participants cycled on 141 predefined routes in Montréal over 87 h for a total of 1199 km. More than 300,000 images were extracted and analyzed using Detectron2. The results show that the detection of trees using the software is accurate. Moreover, the comparison reveals a strong correlation (>0.75) between the two datasets. This means that the canopy data could be replaced by video-detected trees, which is particularly relevant in cities where open GIS data on street vegetation are not available.
18

Murugesan, Ramasamy, Gokul Adethya T., Ventesh A., B. Azhaganathan, and P. D. D. Domnic. "Instance segmentation of neuronal cells using U-Net, mask R-CNN, and Detectron2." International Journal of Biomedical Engineering and Technology 45, no. 2 (2024): 129–49. http://dx.doi.org/10.1504/ijbet.2024.138735.

19

Purnadi, N. F., I. Jaya, and M. Iqbal. "Detection and identification of Red Snapper (Lutjanus gibbus and Lutjanus malabaricus) and grouper (Plectropomus leopardus and Plectropomus maculatus) with deep learning." IOP Conference Series: Earth and Environmental Science 1251, no. 1 (October 1, 2023): 012043. http://dx.doi.org/10.1088/1755-1315/1251/1/012043.

Abstract:
The Indonesian fishing industry is a major global contributor, particularly in the production and export of snapper and grouper fish. These two groups of fish are known for their high diversity, but distinguishing between the different species can be challenging due to their morphological similarities. In order to overcome this challenge, this paper proposes the application of deep learning methods to identify the two major species of snapper and grouper accurately. The deep learning framework utilized in this study is Detectron2, a state-of-the-art object detection and segmentation model. The datasets used in the study consist of 500 images each for the snapper species Lutjanus gibbus and Lutjanus malabaricus, and the grouper species Plectropomus leopardus and Plectropomus maculatus, totaling 2000 images. The datasets were labeled using the COCO Annotator software with a focus on species segmentation. The labeled datasets were then trained using Google Colaboratory, resulting in an accuracy value of 89.51%, a precision of 87.69%, a recall of 99.85%, and an F1-score of 93.38%. These results demonstrate that, even with a relatively limited number of datasets, it is possible to accurately identify the red snapper Lutjanus gibbus and Lutjanus malabaricus, as well as the grouper species Plectropomus leopardus and Plectropomus maculatus, using deep learning methods. In conclusion, this paper demonstrates the potential of deep learning methods, specifically Detectron2, in accurately identifying snapper and grouper species. The results of this study suggest that this technology can be used to improve the accuracy and efficiency of fish species identification, which could have significant implications for the fishing industry and marine conservation efforts.
20

Wianzah, Dastin Arjuna, and Ahmad Ridha. "PROTOTIPE ALAT IDENTIFIKASI POLA NERVE RING PADA IRIS MATA MENGGUNAKAN RASPBERRY PI DAN DETECTRON2" [Prototype of a device for identifying nerve ring patterns in the iris using Raspberry Pi and Detectron2]. Kurawal - Jurnal Teknologi, Informasi dan Industri 7, no. 2 (October 27, 2024): 35–44. http://dx.doi.org/10.33479/kurawal.v7i2.1101.

Abstract:
Advances in image processing and artificial intelligence technology have opened up major opportunities in a range of medical applications, including iris analysis. This study developed a Raspberry Pi-based prototype device for identifying nerve ring patterns in the iris. The prototype design covered requirement analysis, system and hardware design, implementation, and testing. The device captures an image of the iris, which is then processed with the Detectron2 library to identify the nerve ring pattern, and the result is shown on a display. The prototype successfully recognized the presence of a nerve ring in 93.33% of the data, with an MSE of 3.27 at an NMS value of 0.2. This indicates good performance in effectively identifying nerve rings for use in further analysis.
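The NMS value reported above corresponds to a single Detectron2 config knob; as a hedged illustration (the surrounding configuration is assumed, not taken from the paper):

```python
from detectron2.config import get_cfg

cfg = get_cfg()
# ... model, weights, and dataset would be configured here as usual ...
cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.2    # NMS value reported in the abstract
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence cutoff (assumption)
```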
21

Lim, Jae-Jun, Dae-Won Kim, Woon-Hee Hong, Min Kim, Dong-Hoon Lee, Sun-Young Kim, and Jae-Hoon Jeong. "Application of Convolutional Neural Network (CNN) to Recognize Ship Structures." Sensors 22, no. 10 (May 18, 2022): 3824. http://dx.doi.org/10.3390/s22103824.

Abstract:
The purpose of this paper is to study the recognition of ships and their structures to improve the safety of drone operations engaged in shore-to-ship drone delivery service. This study has developed a system that can distinguish between ships and their structures by using a convolutional neural network (CNN). First, the dataset of the Marine Traffic Management Net is described and CNN’s object sensing based on the Detectron2 platform is discussed. There will also be a description of the experiment and performance. In addition, this study has been conducted based on actual drone delivery operations—the first air delivery service by drones in Korea.
22

Amerikanos, Paris, and Ilias Maglogiannis. "Image Analysis in Digital Pathology Utilizing Machine Learning and Deep Neural Networks." Journal of Personalized Medicine 12, no. 9 (September 1, 2022): 1444. http://dx.doi.org/10.3390/jpm12091444.

Abstract:
Detection of regions of interest (ROIs) in whole slide images (WSIs) in a clinical setting is a highly subjective and a labor-intensive task. In this work, recent developments in machine learning and computer vision algorithms are presented to assess their possible usage and performance to enhance and accelerate clinical pathology procedures, such as ROI detection in WSIs. In this context, a state-of-the-art deep learning framework (Detectron2) was trained on two cases linked to the TUPAC16 dataset for object detection and on the JPATHOL dataset for instance segmentation. The predictions were evaluated against competing models and further possible improvements are discussed.
23

Wang, Nan, Hongbo Liu, Yicheng Li, Weijun Zhou, and Mingquan Ding. "Segmentation and Phenotype Calculation of Rapeseed Pods Based on YOLO v8 and Mask R-Convolution Neural Networks." Plants 12, no. 18 (September 20, 2023): 3328. http://dx.doi.org/10.3390/plants12183328.

Abstract:
Rapeseed is a significant oil crop, and the size and length of its pods affect its productivity. However, manually counting the number of rapeseed pods and measuring the length, width, and area of the pod takes time and effort, especially when there are hundreds of rapeseed resources to be assessed. This work created two state-of-the-art deep learning-based methods to identify rapeseed pods and related pod attributes, which are then implemented in rapeseed pots to improve the accuracy of the rapeseed yield estimate. One of these methods is YOLO v8, and the other is the two-stage model Mask R-CNN based on the framework Detectron2. The YOLO v8n model and the Mask R-CNN model with a Resnet101 backbone in Detectron2 both achieve precision rates exceeding 90%. The recognition results demonstrated that both models perform well when graphic images of rapeseed pods are segmented. In light of this, we developed a coin-based approach for estimating the size of rapeseed pods and tested it on a test dataset made up of nine different species of Brassica napus and one of Brassica campestris L. The correlation coefficients between manual measurement and machine vision measurement of length and width were calculated using statistical methods. The length regression coefficient of both methods was 0.991, and the width regression coefficient was 0.989. In conclusion, for the first time, we utilized deep learning techniques to identify the characteristics of rapeseed pods while concurrently establishing a dataset for rapeseed pods. Our suggested approaches were successful in segmenting and counting rapeseed pods precisely. Our approach offers breeders an effective strategy for digitally analyzing phenotypes and automating the identification and screening process, not only in rapeseed germplasm resources but also in leguminous plants, like soybeans that possess pods.
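The coin-based size estimation described above reduces to a pixel-to-millimetre scale factor; the sketch below is a hedged illustration in which the 25 mm coin diameter and the box format are assumptions, not the authors' values.

```python
COIN_DIAMETER_MM = 25.0  # physical diameter of the reference coin (placeholder)

def pod_size_mm(pod_box, coin_box):
    """Convert a pod's pixel box (x1, y1, x2, y2) to millimetres using a
    detected coin of known diameter as the scale reference."""
    coin_px = max(coin_box[2] - coin_box[0], coin_box[3] - coin_box[1])
    mm_per_px = COIN_DIAMETER_MM / coin_px
    length_mm = (pod_box[2] - pod_box[0]) * mm_per_px
    width_mm = (pod_box[3] - pod_box[1]) * mm_per_px
    return length_mm, width_mm
```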
24

Kothari, Kushal, Ajay Arjunwadkar, Hitesh Bhalerao, and Savita Lade. "Fine-Grained Identification of Clothing Apparels." International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (April 30, 2022): 3168–71. http://dx.doi.org/10.22214/ijraset.2022.42022.

Abstract:
With the rapid adoption of smartphones and tablet computers, search is no longer limited to text but has moved to other modalities such as voice and image. Extracting and matching these attributes still remains a daunting task due to the high deformability and variability of clothing items. Visual analysis of clothing is a topic that has received attention due to the tremendous growth of e-commerce fashion stores. Analyzing fashion attributes is also crucial in the fashion design process. This paper addresses the recognition of clothes and related fashion attributes using improved R-CNN-based image segmentation algorithms. Keywords: Computer Vision, Fine grained identification, Clothing apparel detection, Convolutional Neural Network, Mask RCNN, Detectron2
25

Butt, Marya, Nick Glas, Jaimy Monsuur, Ruben Stoop, and Ander de Keijzer. "Application of YOLOv8 and Detectron2 for Bullet Hole Detection and Score Calculation from Shooting Cards." AI 5, no. 1 (December 22, 2023): 72–90. http://dx.doi.org/10.3390/ai5010005.

Abstract:
Scoring targets in shooting sports is a crucial and time-consuming task that relies on manually counting bullet holes. This paper introduces an automatic score detection model using object detection techniques. The study contributes to the field of computer vision by comparing the performance of seven models (belonging to two different architectural setups) and by making the dataset publicly available. Another value-added aspect is the inclusion of three variants of the object detection model, YOLOv8, recently released in 2023 (at the time of writing). Five of the used models are single-shot detectors, while two belong to the two-shot detectors category. The dataset was manually captured from the shooting range and expanded by generating more versatile data using Python code. Before the dataset was trained to develop models, it was resized (640 × 640) and augmented using the Roboflow API. The trained models were then assessed on the test dataset, and their performance was compared using metrics such as mAP50, mAP50-90, precision, and recall. The results showed that YOLOv8 models can detect multiple objects with good confidence scores. Among these models, YOLOv8m performed the best, with the highest mAP50 value of 96.7%, followed by the performance of YOLOv8s with the mAP50 value of 96.5%. It is suggested that if the system is to be implemented in a real-time environment, YOLOv8s is a better choice since it took significantly less inference time (2.3 ms) than YOLOv8m (5.7 ms) and yet generated a competitive mAP50 of 96.5%.
26

Fernandes, Samuel, Alice Fialho, and Isabel Patriarca. "Deteção e delimitação de corpos de água em imagens de satélite de alta resolução com aprendizagem profunda" [Detection and delimitation of water bodies in high-resolution satellite images with deep learning]. REVISTA INTERNACIONAL MAPPING 32, no. 214 (January 15, 2024): 10–24. http://dx.doi.org/10.59192/mapping.442.

Abstract:
The delimitation of water bodies using satellite images plays a crucial role in various applications, such as environmental monitoring, water resources planning, fire defence planning, and the analysis of climate change. In this work, we explore the application of deep learning, based on the Detectron2 framework, to the automatic generation of polygons representing water bodies such as small reservoirs, lakes, ponds and impoundments. Efficient characterization of the water availability of reservoirs and dams allows better and more efficient monitoring of water bodies, as well as sound management of those resources. The geographic study area and the methodologies developed fall within the areas of jurisdiction of the Administração da Região Hidrográfica do Alentejo, decentralized departments of the Agência Portuguesa do Ambiente, I.P. A comprehensive, customized dataset was developed, composed of high-resolution satellite images and manually annotated labels identifying the areas corresponding to water bodies, to train the model. The ResNet-50 architecture combined with Mask R-CNN, available in Detectron2, was used to perform object detection and segmentation, respectively. We then trained the deep learning model using our dataset on the Google Colab platform, taking advantage of the computational power of graphics processing units (GPUs). The advantage of using the Detectron2 framework is its fast and efficient delimitation of water bodies in large volumes of data, compared to the traditional method, which involves the manual analysis and marking of polygons on satellite images by specialized personnel, at high cost in human and economic resources and with long turnaround times. Figure 1 shows two water bodies correctly segmented using the proposed method. This approach can drive the development of more precise and efficient techniques for the detection and delimitation of hydrological features in satellite images, since we were able to segment water bodies as small as 121 m². The approach implemented in this work can also be applied to other thematic areas, such as fire detection, algal blooms, identification of urban structures, and delimitation of forests and agricultural crops.
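Turning predicted masks into surface areas, as the 121 m² figure implies, is a one-line conversion once the imagery's ground sampling distance is known; a hedged sketch follows (the GSD value is a placeholder).

```python
GSD_M = 0.5  # metres per pixel of the satellite imagery (assumption)

def water_area_m2(instances, i: int) -> float:
    """Surface area of the i-th predicted water-body mask, in square metres."""
    return float(instances.pred_masks[i].sum()) * GSD_M ** 2
```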
27

Ali Husnain, Aftab Ahmad, and Ayesha Saeed. "Enhancing agricultural health with AI: Drone-based machine learning for mango tree disease detection." World Journal of Advanced Research and Reviews 23, no. 2 (August 30, 2024): 1267–76. http://dx.doi.org/10.30574/wjarr.2024.23.2.2455.

Abstract:
In the agriculture sector, timely detection of diseases in fruit trees is a significant challenge, leading to substantial economic losses. Automated detection of diseases in fruit trees, particularly mango trees, is crucial to minimize these losses by enabling early intervention. This research explores the use of drone-captured multispectral images combined with deep learning and computer vision techniques to detect diseases in mango trees. The proposed system leverages various pre-trained Convolutional Neural Network (CNN) models, such as YOLOv5, Detectron2, and Faster R-CNN, to achieve optimal accuracy. Data augmentation techniques are employed to address data skewness and overfitting, while Generative Adversarial Networks (GANs) enhance image quality. The system aims to provide a scalable solution for early disease detection, thereby reducing economic losses and supporting the agricultural sector's growth.
28

Rashmi, S., S. Srinath, Seema Deshmukh, S. Prashanth, and Karthikeya Patil. "Cephalometric landmark annotation using transfer learning: Detectron2 and YOLOv8 baselines on a diverse cephalometric image dataset." Computers in Biology and Medicine 183 (December 2024): 109318. http://dx.doi.org/10.1016/j.compbiomed.2024.109318.

29

de Almeida, Guilherme Pires Silva, Leonardo Nazário Silva dos Santos, Leandro Rodrigues da Silva Souza, Pablo da Costa Gontijo, Ruy de Oliveira, Matheus Cândido Teixeira, Mario De Oliveira, Marconi Batista Teixeira, and Heyde Francielle do Carmo França. "Performance Analysis of YOLO and Detectron2 Models for Detecting Corn and Soybean Pests Employing Customized Dataset." Agronomy 14, no. 10 (September 24, 2024): 2194. http://dx.doi.org/10.3390/agronomy14102194.

Abstract:
One of the most challenging aspects of agricultural pest control is the accurate detection of insects in crops. Inadequate control measures for insect pests can seriously impact the production of corn and soybean plantations. In recent years, artificial intelligence (AI) algorithms have been extensively used for detecting insect pests in the field. In this line of research, this paper introduces a method to detect four key insect species that are predominant in Brazilian agriculture. Our model relies on computer vision techniques, including You Only Look Once (YOLO) and Detectron2, and adapts them to lightweight formats—TensorFlow Lite (TFLite) and Open Neural Network Exchange (ONNX)—for resource-constrained devices. Our method leverages two datasets: a comprehensive one and a smaller sample for comparison purposes. With this setup, the authors used the two datasets to evaluate the performance of the computer vision models and subsequently converted the best-performing models into TFLite and ONNX formats, facilitating their deployment on edge devices. The results are promising. Even in the worst-case scenario, where the ONNX model with the reduced dataset was compared to the YOLOv9-gelan model with the full dataset, the precision reached 87.3%, and the accuracy achieved was 95.0%.
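The ONNX conversion step mentioned above typically goes through PyTorch's exporter; the sketch below is a generic, hedged illustration in which the model object, input resolution, file name, and opset are all placeholders.

```python
import torch

model.eval()                          # trained detector in eval mode (assumption)
dummy = torch.randn(1, 3, 640, 640)   # one RGB image at the network input size
torch.onnx.export(
    model, dummy, "pest_detector.onnx",
    input_names=["image"], output_names=["detections"],
    opset_version=12,
)
```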
30

Naik, Shivam, Khurram Azeem Hashmi, Alain Pagani, Marcus Liwicki, Didier Stricker, and Muhammad Zeshan Afzal. "Investigating Attention Mechanism for Page Object Detection in Document Images." Applied Sciences 12, no. 15 (July 26, 2022): 7486. http://dx.doi.org/10.3390/app12157486.

Abstract:
Page object detection in scanned document images is a complex task due to varying document layouts and diverse page objects. In the past, traditional methods such as Optical Character Recognition (OCR)-based techniques have been employed to extract textual information. However, these methods fail to comprehend complex page objects such as tables and figures. This paper addresses the localization and classification of graphical objects that visually summarize vital information in documents. Furthermore, this work examines the benefit of incorporating attention mechanisms in different object detection networks to perform page object detection on scanned document images. The model is designed with a PyTorch-based framework called Detectron2. The proposed pipelines can be optimized end-to-end and are exhaustively evaluated on publicly available datasets such as DocBank, PubLayNet, and IIIT-AR-13K. The achieved results reflect the effectiveness of incorporating the attention mechanism for page object detection in documents.
31

Sousa Júnior, Pedro Cavalcante, Luís Fabrício de Freitas Souza, José Jerovane da Costa Nascimento, Lucas Oliveira Santos, Adriell Gomes Marques, Francisco Eduardo Sales Ribeiro, and Pedro Pedrosa Rebouças Filho. "Detection and Segmentation of Lungs Regions Using CNN Combined with Levelset." Learning and Nonlinear Models 19, no. 1 (December 31, 2021): 45–54. http://dx.doi.org/10.21528/lnlm-vol19-no1-art4.

Abstract:
Lung diseases are among the leading causes of death globally. A quick and accurate diagnosis made by a specialist doctor facilitates the treatment of the disease and can save lives. In recent decades, computer-aided medical diagnosis has gained strength as a research area, and several techniques have been created to help health professionals in their work using computer vision and machine learning. This work presents a lung segmentation method based on deep learning and computer vision techniques to aid in the medical diagnosis of lung diseases. The method uses the Detectron2 convolutional neural network for detection, which obtained 99.89% accuracy in detecting the pulmonary region. It was then combined with the level-set method for segmentation, which achieved 99.32% segmentation accuracy on lung computed tomography images, on par with the state of the art and surpassing several deep learning models for segmentation.
32

Souza, Luís Fabrício de Freitas, Tassiana Marinho Castro, Lucas de Oliveira Santos, Adriell Gomes Marques, José Jerovane da Costa Nascimento, Matheus Araújo Santos, Guilherme F. Brilhante Severiano, and Pedro Pedrosa Rebouças Filho. "Detection and Segmentation of Damaged Photovoltaic Panels Using Deep Learning and Fine-tuning in Images Captured by Drone." Learning and Nonlinear Models 19, no. 2 (December 31, 2021): 4–14. http://dx.doi.org/10.21528/lnlm-vol19-no2-art1.

Abstract:
Energy consumption is a direct impact factor in various sectors of society. Different technologies for energy generation are based on renewable sources and used as alternatives to the consumption of finite resources. Among these technologies, photovoltaic panels represent an efficient solution for energy generation and an option for sustainable consumption. Damaged panels bring numerous problems in energy generation, from the interruption of generation to financial losses. The proposed study presents an efficient deep learning-based model for detection and different fine-tuning-based models for the segmentation of damaged photovoltaic panels. The Detectron2 convolutional network obtained 78% accuracy for detection and 95% precision on the detectable panels, and the best model generated in this study reached 99.91% on the photovoltaic panel segmentation problem. The proposed model showed great effectiveness for panel detection and segmentation, surpassing works found in the literature.
33

Agarwal, Palak, Somya Goel, Simran Bhagat, and Rahul Singh. "Autonomous Navigation Using Deep Learning." International Journal for Research in Applied Science and Engineering Technology 13, no. 3 (March 31, 2025): 1891–97. https://doi.org/10.22214/ijraset.2025.67672.

Abstract:
With uses in robotics, industrial automation, autonomous vehicles, and surveillance, object detection is a basic computer vision problem. Within the context of the COCO dataset, this work compares the performance of several state-of-the-art object detection models, including Mask R-CNN (Detectron2), YOLOv8s, YOLOv8l, and YOLOv11s. Significant parameters such as mean Average Precision (mAP), precision, recall, and inference speed are used to compare the models. The results indicate that while Mask R-CNN is accurate, its computational cost makes it less suitable for real-time use. The YOLO models, particularly YOLOv8s, offer a compromise between accuracy and speed and are thus ideal for real-time detection. YOLOv8l is more computationally demanding but offers somewhat higher accuracy. Due to its speed and accuracy, YOLOv8s is the most suitable model for real-time application. In selecting the most suitable object detection models for various applications, researchers and developers can learn a lot from this study.
34

Alkhalis, Naufal, Husaini Husaini, Haekal Azief Haridhi, Cut Nadilla Maretna, Nur Fadli, Yudi Haditiar, Muhammad Nanda, et al. "Implementasi Mask R-CNN pada Perhutungan Tinggi dan Lebar Karang untuk Memantau Pertumbuhan Transplantasi Karang" [Implementation of Mask R-CNN for calculating coral height and width to monitor coral transplant growth]. Jurnal Teknologi Informasi dan Ilmu Komputer 11, no. 3 (July 31, 2024): 603–14. http://dx.doi.org/10.25126/jtiik.938374.

Abstract:
Coral reefs are ecosystems that play an important role in the sea and are very vulnerable to damage. Coral transplantation has become one of the approaches taken to preserve coral reefs. Post-transplant, monitoring needs to be done to see coral growth. In monitoring efforts, divers must carry diving equipment, rulers and books to measure and record, one by one, the corals that have been transplanted. The process consumes a huge investment of finance, time and effort. Monitoring can be optimized by implementing the Mask Region Convolutional Neural Network (Mask R-CNN) algorithm through the Detectron2 library on coral transplant images. The implementation process produces a model that can segment coral objects. The segmentation can be used to calculate the height and width of corals as growth indicators. The model implementation involves seven instance segmentation backbones with a 3x learning rate schedule. Based on the results of the study, the resulting model was successfully used to measure the height and width of transplanted corals. Comparison between measurements using the Mask R-CNN model and direct measurements showed good consistency. Thus, divers only need to maximize their resources to capture coral images at a predetermined distance, shortening dive time.
35

Bunnell, Arianna, Dustin Valdez, Thomas Wolfgruber, Aleen Altamirano, Brenda Hernandez, Peter Sadowski, and John Shepherd. "Abstract P3-04-05: Artificial Intelligence Detects, Classifies, and Describes Lesions in Clinical Breast Ultrasound Images." Cancer Research 83, no. 5_Supplement (March 1, 2023): P3–04–05—P3–04–05. http://dx.doi.org/10.1158/1538-7445.sabcs22-p3-04-05.

Abstract:
Purpose: Many low- and middle-income countries (LMICs) suffer from chronic shortages of resources that inhibit the implementation of effective breast cancer screening programs. Advanced breast cancer rates in the U.S. Affiliated Pacific Islands substantially exceed those of the United States. We propose the use of portable breast ultrasound coupled with artificial intelligence (AI) algorithms to assist non-radiologist field personnel in real-time field lesion detection, classification, and biopsy, as well as determination of breast density for risk assessment. In this study, we examine the ability of an AI algorithm to detect and describe breast cancer lesions in clinically acquired breast ultrasound images from 40,000 women participating in a Hawaii screening program.
Materials and Methods: The Hawaii and Pacific Islands Mammography Registry (HIPIMR) collects breast health questionnaires and breast imaging (mammography, ultrasound, and MRI) from participating centers in Hawaii and the Pacific and links this information to the Hawaii Tumor Registry for cancer findings. From the women with either screening or diagnostic B-mode breast ultrasound exams, we selected 3 negative cases (no cancer) for every positive case, matched by age, excluding Doppler and elastography images. The blinded images were read by the study radiologist, who delineated all lesions and described them in terms of the BI-RADS lexicon. These images were split by woman into training (70%), validation and hyperparameter selection (20%), and testing (20%) subsets. An AI model was fine-tuned for lesion detection and BI-RADS category classification from a Detectron2 framework [1] pre-trained on the COCO Instance Segmentation Dataset [2]. Model performance was evaluated by computing precision and sensitivity percentages, as well as the area under the receiver operating characteristic curve (AUROC). Detections were considered positive if they overlapped a ground-truth lesion delineation by at least 50% (Intersection over Union = 0.5), and a maximum of 4 detections were generated for each image. Timing experiments were run on a GPU-enabled (Nvidia Tesla V100) machine on unbatched images.
Results: Over the 10-year observation period, we identified 5,214 women with US images meeting our criteria. Of these, 111 were diagnosed with malignant breast cancer and 333 were selected as non-cases, for a total of 444 women. These 444 women had a total of 4,623 ultrasound images, with 2,028 benign and 1,431 malignant lesions identified by the study radiologist. For cancerous lesions, the AI algorithm had 8% precision at a sensitivity of 90% on the testing set. For benign lesions, a sensitivity of 90% resulted in 5% precision on the testing set. The AUROC for bounding box detections of cancerous lesions was 0.90; for benign lesions, it was 0.87. The model made predictions at a rate of approximately 25 frames per second (38.7 milliseconds per image).
Conclusion: Detection, segmentation, and cancer classification of breast lesions are possible in clinically acquired ultrasound images using AI. Based on our timing experiments, the model is capable of detecting and classifying lesions in real time during ultrasound capture. Model performance is expected to improve as more data become available for training. Future work will involve further fine-tuning of the model on portable breast ultrasound images and increasing model evaluation speed in order to assess utility in low-resource populations.
References: [1] Wu Y, Kirillov A, Massa F, Lo W-Y, Girshick R. Detectron2. https://github.com/facebookresearch/detectron2. [2] Lin T-Y, Maire M, Belongie S, et al. Microsoft COCO: Common Objects in Context. Computer Vision – ECCV 2014. Springer International Publishing; 2014:740-755.
Citation Format: Arianna Bunnell, Dustin Valdez, Thomas Wolfgruber, Aleen Altamirano, Brenda Hernandez, Peter Sadowski, John Shepherd. Artificial Intelligence Detects, Classifies, and Describes Lesions in Clinical Breast Ultrasound Images [abstract]. In: Proceedings of the 2022 San Antonio Breast Cancer Symposium; 2022 Dec 6-10; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2023;83(5 Suppl):Abstract nr P3-04-05.
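The matching rule used in this abstract (a detection counts as positive when it overlaps a ground-truth delineation at an Intersection over Union of at least 0.5) can be made concrete with a short sketch; the box format and helper names below are illustrative assumptions, not taken from the study.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(detection, ground_truths, threshold=0.5):
    """A detection is positive if it overlaps any ground-truth lesion at IoU >= threshold."""
    return any(iou(detection, gt) >= threshold for gt in ground_truths)
```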
APA, Harvard, Vancouver, ISO, and other styles
36

Saxena, Saloni, Sneh Thorat, Prachi Jain, Rupal Mohanty, and Trupti Baraskar. "Analysis of object detection techniques for bird species identification." Journal of Physics: Conference Series 2325, no. 1 (August 1, 2022): 012054. http://dx.doi.org/10.1088/1742-6596/2325/1/012054.

Full text
Abstract:
The birdwatching community in India is extensive. Wildlife enthusiasts often face difficulty in identifying a particular bird. There exist, separately, object detection machine learning models as well as online directories for manual bird identification, but there is no approach combining the two for easy identification of birds in India. We present a technique that uses object detection algorithms such as Faster R-CNN and YOLOv5 to solve this challenge. Our dataset includes a total of 60 species of birds found in Maharashtra. Furthermore, these methods have been tested on datasets of various sizes to provide a thorough comparison of the two techniques and to better understand their behaviour as the dataset size and number of classes grow. The YOLOv5 model trained for 100 epochs on 3000 images achieved an mAP score of 0.78, whereas Detectron2, trained on the same dataset, achieved an IoU score of 0.001. These methods had not previously been tested on a dataset containing birds from peninsular India, particularly Maharashtra. As a result, we intend to extend the dataset and make bird identification easier for birdwatchers.
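For readers who want to try a comparable setup, a pretrained YOLOv5 model can be loaded through torch.hub; the sketch below is a minimal illustration (the image path is a placeholder, and a bird-specific checkpoint would replace the generic COCO weights).

```python
import torch

# Load a pretrained YOLOv5 model from the Ultralytics hub; a checkpoint
# fine-tuned on bird species would be loaded via the 'custom' variant instead.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Run detection on an image; 'bird.jpg' is a placeholder path.
results = model('bird.jpg')
results.print()          # summary of detected classes and confidences
boxes = results.xyxy[0]  # tensor of (x1, y1, x2, y2, confidence, class)
```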
APA, Harvard, Vancouver, ISO, and other styles
37

Byzkrovnyi, Oleksandr, Kyrylo Smelyakov, Anastasiya Chupryna, Loreta Savulioniene, and Paulius Sakalys. "COMPARISON OF POTENTIAL ROAD ACCIDENT DETECTION ALGORITHMS FOR MODERN MACHINE VISION SYSTEM." ENVIRONMENT. TECHNOLOGIES. RESOURCES. Proceedings of the International Scientific and Practical Conference 3 (June 13, 2023): 50–55. http://dx.doi.org/10.17770/etr2023vol3.7299.

Full text
Abstract:
Robotics is a rapidly developing industry nowadays. Robots are becoming more sophisticated, and this requires more sophisticated technologies. One of them is robot vision, which is needed for robots that interact with the environment using vision instead of a battery of sensors. These data are utilized to analyze the situation at hand and develop a real-time action plan for the given scenario. This article explores the most suitable algorithm for detecting potential road accidents, specifically focusing on the scenario of turning left across one or more oncoming lanes. The selection of the optimal algorithm is based on a comparative analysis of evaluation and testing results, including metrics such as the maximum frames per second for video processing during detection using the robot's hardware. The study categorises potential accidents into two classes: danger and not-danger. The YOLOv7 and Detectron2 algorithms are compared, and the article aims to create simple models with the potential for future refinement. The article also provides conclusions and recommendations regarding the practical implementation of the proposed models and algorithm.
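Since throughput on the robot's hardware is one of the comparison metrics here, a simple way to measure it is to time unbatched inference over a set of frames. The harness below is a generic sketch; the detect callable stands in for either model and is an assumption, not the authors' code.

```python
import time

def measure_fps(detect, frames):
    """Return average frames per second for a detection callable over a list of frames."""
    start = time.perf_counter()
    for frame in frames:
        detect(frame)  # one unbatched inference per frame, matching the paper's setting
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```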
APA, Harvard, Vancouver, ISO, and other styles
38

Moysiadis, Vasileios, Georgios Kokkonis, Stamatia Bibi, Ioannis Moscholios, Nikolaos Maropoulos, and Panagiotis Sarigiannidis. "Monitoring Mushroom Growth with Machine Learning." Agriculture 13, no. 1 (January 16, 2023): 223. http://dx.doi.org/10.3390/agriculture13010223.

Full text
Abstract:
Mushrooms contain valuable nutrients, proteins, minerals, and vitamins, and their inclusion in our diet is recommended. Many farmers grow mushrooms in restricted environments with specific atmospheric parameters in greenhouses. In addition, recent Internet of Things technologies aim to provide solutions for agriculture. In this paper, we evaluate the effectiveness of machine learning for monitoring the growth of mushrooms of the genus Pleurotus. We use YOLOv5 to detect the mushrooms' growing stage and indicate those ready to harvest. The results show that it can detect mushrooms in the greenhouse with an F1-score of up to 76.5%. The classification in the final stage of mushroom growth gives an accuracy of up to 70%, which is acceptable considering the complexity of the photos used. In addition, we propose a method for mushroom growth monitoring based on Detectron2. Our method shows that the average growth period of the mushrooms is 5.22 days. Moreover, our method is also adequate for indicating the harvesting day. The evaluation results show that it could improve the time to harvest for 14.04% of the mushrooms.
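An average growth period like the one reported here implies tracking each detected mushroom's segmented area day by day. The bookkeeping below is a hedged sketch under an assumed data layout; the plateau-of-area proxy for harvest readiness is an illustration, not the paper's criterion.

```python
def average_growth_period(tracks):
    """tracks: dict mapping mushroom id -> list of (day, mask_area) observations.

    The growth period is counted from first detection until the day the
    segmented area peaks, used here as a simple proxy for harvest readiness."""
    periods = []
    for observations in tracks.values():
        observations.sort()                    # order observations by day
        areas = [area for _, area in observations]
        peak = areas.index(max(areas))         # index of the largest area
        periods.append(observations[peak][0] - observations[0][0])
    return sum(periods) / len(periods) if periods else 0.0
```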
APA, Harvard, Vancouver, ISO, and other styles
39

Peterson, T., and R. Green. "ASSESSING RISK PARAMETERS OF ACL INJURY VIA HUMAN POSE ESTIMATION." Orthopaedic Proceedings 105-B, SUPP_3 (February 2023): 97. http://dx.doi.org/10.1302/1358-992x.2023.3.097.

Full text
Abstract:
A method is proposed to assess risk parameters of anterior cruciate ligament (ACL) injury using human pose estimation (HPE) and a single stereo depth camera. Detectron2 is used to identify key points of a subject performing a single leg jump test. This allows dynamic pivot of the knee to be assessed during landing using four risk parameters: knee valgus, knee translation in the coronal plane, pelvic tilt, and head-ankle alignment (body sway). Results show the model has an accuracy of 7° in angular measurements and 38 mm in linear measurements. Compared to previous studies, which only consider front-on analysis, this method has partially reduced accuracy in linear measurements and half the accuracy in angular measurements. Despite this, coupling information from multiple risk parameters reduces the accuracy required on any one parameter, and the use of a single depth camera enables reliable analysis at a subject orientation of ±45° relative to the camera. These factors create a novel solution, offering the potential for broad evaluation of ACL risk parameters in environments outside a testing laboratory, which has not been done before.
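A frontal-plane knee angle of the kind assessed here can be derived from three pose keypoints. The sketch below computes the hip-knee-ankle angle, whose deviation from a straight line suggests valgus; the keypoint layout is an assumption, not the authors' exact formulation.

```python
import numpy as np

def knee_angle(hip, knee, ankle):
    """Angle (degrees) at the knee between the knee-to-hip and knee-to-ankle vectors.

    In the frontal plane, values well below 180 degrees indicate valgus collapse."""
    hip, knee, ankle = map(np.asarray, (hip, knee, ankle))
    v1 = hip - knee
    v2 = ankle - knee
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```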
APA, Harvard, Vancouver, ISO, and other styles
40

Ahamed, Asif Shakil, Tchepseu Pateng Uriche Cabrel, MA Chuang, Md Naoroj Jaman, and Muhammad Maruf Billah. "Optimizing Automated Bone Fracture Detection Through Advanced Faster R-CNN Architectures Integrating Multi-Scale Feature Extraction and Data Augmentation Techniques." European Journal of Biology and Medical Science Research 13, no. 1 (January 15, 2025): 1–11. https://doi.org/10.37745/ejbmsr.2013/vol13n1111.

Full text
Abstract:
Bone fracture detection using X-ray images is a critical diagnostic task in the healthcare sector. This study investigates the efficacy of two state-of-the-art Faster R-CNN architectures, ResNeXt-101 Feature Pyramid Network (FPN) and ResNet-50 FPN, implemented using Detectron2. The dataset used includes COCO-style annotated X-ray images with various fracture categories, including shoulder, wrist, and humerus fractures. The models were trained using advanced data augmentation techniques such as rotation, scaling, and flipping to improve generalization. ResNeXt-101 FPN demonstrated superior feature extraction capabilities, achieving higher precision (18.91% AP at IoU=0.50:0.95) compared to ResNet-50 FPN (6.23% AP). However, challenges such as high false negatives and overlapping predictions were identified, highlighting areas for improvement. Experimental results reveal that ResNeXt-101 FPN not only achieves better localization accuracy but also demonstrates robustness in detecting subtle fracture patterns. The integration of these models into clinical workflows could potentially assist radiologists in reducing diagnostic errors. Future work aims to address the identified limitations and explore domain-specific pretraining for enhanced performance.
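Both backbones compared in this study ship with the Detectron2 model zoo, so a fine-tuning run can be configured in a few lines. The sketch below is a minimal illustration; the dataset names, class count, and iteration budget are assumptions, not the paper's settings.

```python
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
# Swap in "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml" to train the
# ResNet-50 FPN baseline instead of the ResNeXt-101 FPN backbone.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("fracture_train",)   # assumed registered COCO-style dataset
cfg.DATASETS.TEST = ("fracture_val",)      # assumed name
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3        # illustrative: shoulder, wrist, humerus
cfg.SOLVER.MAX_ITER = 5000                 # illustrative iteration budget

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```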
APA, Harvard, Vancouver, ISO, and other styles
41

Jabir, Brahim, Noureddine Falih, and Khalid Rahmani. "Accuracy and Efficiency Comparison of Object Detection Open-Source Models." International Journal of Online and Biomedical Engineering (iJOE) 17, no. 05 (May 20, 2021): 165. http://dx.doi.org/10.3991/ijoe.v17i05.21833.

Full text
Abstract:
In agriculture, weeds cause direct damage to the crop, primarily affecting the crop's yield potential. Manual and mechanical weeding methods consume a lot of energy and time and do not give efficient results. Chemical weed control is still the best way to control weeds. However, the widespread and large-scale use of herbicides is harmful to the environment. Our study's objective is to propose an efficient model for a smart system to detect weeds in crops in real time using computer vision. Our experimental dataset contains images of two weed species well known in our region, which has a temperate climate: the first is Phalaris paradoxa and the second is Convolvulus. The images were manually captured with a professional camera from fields under different lighting conditions (from morning to afternoon, in sunny and cloudy weather). Weed and crop detection was experimented with using four recent pre-configured open-source computer vision models for object detection: Detectron2, EfficientDet, YOLO, and Faster R-CNN. The performance comparison of the weed detection models was executed on the OpenCV and Keras platforms using the Python language.
APA, Harvard, Vancouver, ISO, and other styles
42

Strzępek, Krzysztof, Mateusz Salach, Bartosz Trybus, Karol Siwiec, Bartosz Pawłowicz, and Andrzej Paszkiewicz. "Quantitative and Qualitative Analysis of Agricultural Fields Based on Aerial Multispectral Images Using Neural Networks." Sensors 23, no. 22 (November 17, 2023): 9251. http://dx.doi.org/10.3390/s23229251.

Full text
Abstract:
This article presents an integrated system that uses the capabilities of unmanned aerial vehicles (UAVs) to perform a comprehensive crop analysis, combining qualitative and quantitative evaluations for efficient agricultural management. A convolutional neural network-based model, Detectron2, serves as the foundation for detecting and segmenting objects of interest in acquired aerial images. This model was trained on a dataset prepared using the COCO format, which features a variety of annotated objects. The system architecture comprises a frontend and a backend component. The frontend facilitates user interaction and annotation of objects on multispectral images. The backend involves image loading, project management, polygon handling, and multispectral image processing. For qualitative analysis, users can delineate regions of interest using polygons, which are then subjected to analysis using the Normalized Difference Vegetation Index (NDVI) or Optimized Soil Adjusted Vegetation Index (OSAVI). For quantitative analysis, the system deploys a pre-trained model capable of object detection, allowing for the counting and localization of specific objects, with a focus on young lettuce crops. The prediction quality of the model has been calculated using the AP (Average Precision) metric. The trained neural network exhibited robust performance in detecting objects, even within small images.
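Both vegetation indices named in this abstract are simple band arithmetic on the multispectral channels. The numpy sketch below is illustrative; the array names are assumptions, and the OSAVI form with the 0.16 soil-adjustment constant in the denominator is one common formulation.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-9)   # epsilon guards against division by zero

def osavi(nir, red, soil_factor=0.16):
    """Optimized Soil Adjusted Vegetation Index with the standard 0.16 constant."""
    return (nir - red) / (nir + red + soil_factor)
```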
APA, Harvard, Vancouver, ISO, and other styles
43

Kumar, P. Dimpul. "Vision-Based Asphalt Pavement Defect Detection Using Deep CNN with BIM Integration." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (April 30, 2024): 1471–75. http://dx.doi.org/10.22214/ijraset.2024.59840.

Full text
Abstract:
The major objective of this research is to predict defects on asphalt pavement, which are a main concern for the smooth movement of vehicles, using a deep CNN. CNNs support different operations, namely image classification, object detection, and image segmentation. We use Detectron2 to label the defective parts so that defects are detected accurately. A dataset consisting of images of pavement defects was collected and used for training the algorithm, with some of the collected images used for testing. The suggested approach makes use of deep learning to automatically detect and categorize a range of asphalt pavement defects, such as surface distress, potholes, and cracks. The training, validation, and testing of the model are conducted on an extensive dataset of annotated pavement photographs. The trained model shows resilience and the ability to generalize, which qualifies it for practical uses in infrastructure management and pavement upkeep. This study advances automated pavement inspection systems, which provide a practical and economical means of determining which maintenance tasks should be prioritized to maintain safer and smoother roads; this approach predicted the defects with an accuracy of 98%.
APA, Harvard, Vancouver, ISO, and other styles
44

Matadamas, Idarh, Erik Zamora, and Teodulfo Aquino-Bolaños. "Detection and Classification of Agave angustifolia Haw Using Deep Learning Models." Agriculture 14, no. 12 (December 2, 2024): 2199. https://doi.org/10.3390/agriculture14122199.

Full text
Abstract:
In Oaxaca, Mexico, there are more than 30 species of the Agave genus, and its cultivation is of great economic and social importance. The incidence of pests, diseases, and environmental stress causes significant losses to the crop. The identification of damage through non-invasive tools based on visual information is important for reducing economic losses. The objective of this study was to evaluate and compare five deep learning models: YOLO versions 7, 7-tiny, and 8, and two from the Detectron2 library, Faster R-CNN and RetinaNet, for the detection and classification of Agave angustifolia plants in digital images. In the town of Santiago Matatlán, Oaxaca, 333 images were taken in an open-air plantation, and 1317 plants were labeled into five classes: sick, yellow, healthy, small, and spotted. Models were trained with a 70% random partition, validated with 10%, and tested with the remaining 20%. The results indicate that YOLOv7 is the best-performing model on the test set, with a mAP of 0.616, outperforming YOLOv7-tiny and YOLOv8, both with a mAP of 0.606 on the same set, demonstrating that artificial intelligence for the detection and classification of Agave angustifolia plants under planting conditions is feasible using digital images.
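The 70/10/20 random partition described here is straightforward to reproduce. The sketch below is a generic illustration; the seed and function name are assumptions.

```python
import random

def split_dataset(items, seed=42):
    """Randomly partition items into 70% train, 10% validation, 20% test."""
    items = items[:]                       # copy so the caller's list is untouched
    random.Random(seed).shuffle(items)
    n = len(items)
    train = items[: int(0.7 * n)]
    val = items[int(0.7 * n): int(0.8 * n)]
    test = items[int(0.8 * n):]
    return train, val, test
```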
APA, Harvard, Vancouver, ISO, and other styles
45

Tun, San Chain, Tsubasa Onizuka, Pyke Tin, Masaru Aikawa, Ikuo Kobayashi, and Thi Thi Zin. "Revolutionizing Cow Welfare Monitoring: A Novel Top-View Perspective with Depth Camera-Based Lameness Classification." Journal of Imaging 10, no. 3 (March 8, 2024): 67. http://dx.doi.org/10.3390/jimaging10030067.

Full text
Abstract:
This study innovates livestock health management, utilizing a top-view depth camera for accurate cow lameness detection, classification, and precise segmentation through integration with a 3D depth camera and deep learning, distinguishing it from 2D systems. It underscores the importance of early lameness detection in cattle and focuses on extracting depth data from the cow’s body, with a specific emphasis on the back region’s maximum value. Precise cow detection and tracking are achieved through the Detectron2 framework and Intersection Over Union (IOU) techniques. Across a three-day testing period, with observations conducted twice daily with varying cow populations (ranging from 56 to 64 cows per day), the study consistently achieves an impressive average detection accuracy of 99.94%. Tracking accuracy remains at 99.92% over the same observation period. Subsequently, the research extracts the cow’s depth region using binary mask images derived from detection results and original depth images. Feature extraction generates a feature vector based on maximum height measurements from the cow’s backbone area. This feature vector is utilized for classification, evaluating three classifiers: Random Forest (RF), K-Nearest Neighbor (KNN), and Decision Tree (DT). The study highlights the potential of top-view depth video cameras for accurate cow lameness detection and classification, with significant implications for livestock health management.
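The feature extraction step described here, taking maximum height values along the cow's back from a masked top-view depth image, can be sketched with numpy as follows. The array conventions and the column-wise reduction are assumptions about the layout, not the authors' exact code.

```python
import numpy as np

def backbone_height_features(depth_image, mask):
    """Extract a feature vector of maximum height values along the cow's back.

    depth_image: 2D array of heights from the top-view depth camera.
    mask: binary array of the same shape from the Detectron2 detection.
    Returns the per-column maximum within the masked (cow) region."""
    cow_depth = np.where(mask.astype(bool), depth_image, 0.0)
    return cow_depth.max(axis=0)   # one maximum per image column
```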
APA, Harvard, Vancouver, ISO, and other styles
46

Silva, Josef Augusto Oberdan Souza, Vilson Soares de Siqueira, Marcio Mesquita, Luís Sérgio Rodrigues Vale, Thiago do Nascimento Borges Marques, Jhon Lennon Bezerra da Silva, Marcos Vinícius da Silva, et al. "Deep Learning for Weed Detection and Segmentation in Agricultural Crops Using Images Captured by an Unmanned Aerial Vehicle." Remote Sensing 16, no. 23 (November 24, 2024): 4394. http://dx.doi.org/10.3390/rs16234394.

Full text
Abstract:
Artificial Intelligence (AI) has changed how processes are developed and decisions are made in agriculture, replacing manual and repetitive processes with automated, more efficient ones. This study presents the application of deep learning techniques to detect and segment weeds in agricultural crops, applying models with different architectures to the analysis of images captured by an Unmanned Aerial Vehicle (UAV). This study contributes to the computer vision field by comparing the performance of the You Only Look Once (YOLOv8n, YOLOv8s, YOLOv8m, and YOLOv8l), Mask R-CNN (with the Detectron2 framework), and U-Net models, making public the dataset with aerial images of soybeans and beans. The models were trained using a dataset consisting of 3021 images, randomly divided into test, validation, and training sets, which were annotated, resized, and augmented using the Roboflow application interface. Evaluation metrics included training efficiency (mAP50 and mAP50-90), precision, accuracy, and recall in the models' evaluation and comparison. The YOLOv8s variant achieved the highest performance, with an mAP50 of 97%, precision of 99.7%, and recall of 99% when compared to the other models. The data from this manuscript show that deep learning models can generate efficient results for automatic weed detection when trained on a large, well-labeled dataset. Furthermore, this study demonstrated the great potential of using advanced object segmentation algorithms in detecting weeds in soybean and bean crops.
APA, Harvard, Vancouver, ISO, and other styles
47

Titu, Md Fahim Shahoriar, Mahir Afser Pavel, Goh Kah Ong Michael, Hisham Babar, Umama Aman, and Riasat Khan. "Real-Time Fire Detection: Integrating Lightweight Deep Learning Models on Drones with Edge Computing." Drones 8, no. 9 (September 13, 2024): 483. http://dx.doi.org/10.3390/drones8090483.

Full text
Abstract:
Fire accidents are life-threatening catastrophes leading to loss of life, financial damage, climate change, and ecological destruction. Promptly and efficiently detecting and extinguishing fires is essential to reduce the loss of lives and damage. This study uses drone, edge computing, and artificial intelligence (AI) techniques, presenting novel methods for real-time fire detection. The proposed work utilizes a comprehensive dataset of 7187 fire images and advanced deep learning models, e.g., Detection Transformer (DETR), Detectron2, You Only Look Once (YOLOv8), and Autodistill-based knowledge distillation techniques, to improve model performance. The knowledge distillation approach has been implemented with YOLOv8m (medium) as the teacher (base) model. The distilled (student) frameworks are developed employing the YOLOv8n (nano) and DETR techniques. YOLOv8n attains the best performance, with 95.21% detection accuracy and a 0.985 F1 score. A powerful hardware setup, including a Raspberry Pi 5 microcontroller, Pi Camera Module 3, and a DJI F450 custom-built drone, has been constructed. The distilled YOLOv8n model has been deployed in the proposed hardware setup for real-time fire identification. The YOLOv8n model achieves 89.23% accuracy and an approximate frame rate of 8 frames per second in the conducted live experiments. Integrating deep learning techniques with drone and edge devices demonstrates the proposed system's effectiveness and potential for practical applications in fire hazard mitigation.
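The teacher-student distillation described here follows the general pattern of training the small model against the large model's softened outputs. The loss below is a generic Hinton-style classification sketch, not Autodistill's API; detector distillation in practice also matches box regression outputs.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL divergence to the teacher's
    temperature-softened class distribution (Hinton-style distillation)."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)   # rescaling restores gradient magnitude after temperature division
    return alpha * hard + (1.0 - alpha) * soft
```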
APA, Harvard, Vancouver, ISO, and other styles
48

Guidi, Tommaso, Lorenzo Python, Matteo Forasassi, Costanza Cucci, Massimiliano Franci, Fabrizio Argenti, and Andrea Barucci. "Egyptian Hieroglyphs Segmentation with Convolutional Neural Networks." Algorithms 16, no. 2 (February 1, 2023): 79. http://dx.doi.org/10.3390/a16020079.

Full text
Abstract:
The objective of this work is to show the application of a deep learning algorithm able to perform segmentation of ancient Egyptian hieroglyphs present in an image, with the ambition of being as versatile as possible despite the variability of the image source. The problem is quite complex, the main obstacles being the considerable number of different classes of existing hieroglyphs, the differences related to the hand of the scribe, and the great differences among the various supports, such as papyri, stone, or wood, on which they are written. Furthermore, as in all archaeological finds, damage to the supports is frequent, with the consequence that hieroglyphs can be partially corrupted. To face this challenging problem, we leverage the well-known Detectron2 platform, developed by the Facebook AI Research group, focusing on the Mask R-CNN architecture to perform segmentation of image instances. As in many machine learning studies, one of the hardest challenges is the creation of a suitable dataset. In this paper, we describe a hieroglyph dataset that has been created for the purpose of segmentation, highlighting its pros and cons and the impact of different hyperparameters on the final results. Tests on the segmentation of images taken from public databases are also presented and discussed, along with the limitations of our study.
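In Detectron2, a custom instance-segmentation dataset such as a hieroglyph corpus is typically registered in COCO format before a Mask R-CNN is configured. The sketch below is minimal and the paths, dataset name, and class count are placeholders, not the study's values.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances

# Register COCO-style annotations; the paths are illustrative placeholders.
register_coco_instances("hieroglyphs_train", {},
                        "annotations/train.json", "images/train")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("hieroglyphs_train",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 200   # assumed number of hieroglyph classes
```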
APA, Harvard, Vancouver, ISO, and other styles
49

Park, Yu-Hyeon, Sung Hoon Choi, Yeon-Ju Kwon, Soon-Wook Kwon, Yang Jae Kang, and Tae-Hwan Jun. "Detection of Soybean Insect Pest and a Forecasting Platform Using Deep Learning with Unmanned Ground Vehicles." Agronomy 13, no. 2 (February 6, 2023): 477. http://dx.doi.org/10.3390/agronomy13020477.

Full text
Abstract:
Soybeans (Glycine max (L.) Merr.), a popular food resource worldwide, have various uses throughout industry, from everyday foods and health functional foods to cosmetics. Soybeans are vulnerable to pests such as stink bugs, beetles, mites, and moths, which reduce yields. Riptortus pedestris (R. pedestris) has been reported to cause damage to pods and leaves throughout the soybean growing season. In this study, an experiment was conducted to detect R. pedestris under three different environmental conditions (pod filling stage, maturity stage, artificial cage) by developing a surveillance platform based on an unmanned ground vehicle (UGV) with a GoPro camera. The deep learning models used in this experiment (Mask R-CNN, YOLOv3, Detectron2) can be quickly applied (i.e., built with lightweight parameters) through a web application. The image dataset was distributed by random selection for training, validation, and testing and then preprocessed by labeling the images for annotation. The deep learning models localized and classified the R. pedestris individuals through bounding boxes and masking in the image data. The models achieved high performance, with mean average precision (mAP) values of 0.952, 0.716, and 0.873, respectively. The resulting model will enable the identification of R. pedestris in the field and can be an effective tool for insect forecasting in the early stage of pest outbreaks in crop production.
APA, Harvard, Vancouver, ISO, and other styles
50

Chaiprasittikul, Natkritta, Bhornsawan Thanathornwong, Suchaya Pornprasertsuk-Damrongsri, Somchart Raocharernporn, Somporn Maponthong, and Somchai Manopatanakul. "Application of a Multi-Layer Perceptron in Preoperative Screening for Orthognathic Surgery." Healthcare Informatics Research 29, no. 1 (January 31, 2023): 16–22. http://dx.doi.org/10.4258/hir.2023.29.1.16.

Full text
Abstract:
Objectives: Orthognathic surgery is used to treat moderate to severe occlusal discrepancies. Examinations and measurements for preoperative screening are essential procedures. A careful analysis is needed to decide whether cases require orthognathic surgery. This study developed screening software using a multi-layer perceptron to determine whether orthognathic surgery is required. Methods: In total, 538 digital lateral cephalometric radiographs were retrospectively collected from a hospital data system. The input data consisted of seven cephalometric variables. All cephalograms were analyzed by the Detectron2 detection and segmentation algorithms. A keypoint region-based convolutional neural network (R-CNN) was used for object detection, and an artificial neural network (ANN) was used for classification. This novel neural network decision support system was created and validated using Keras software. The output data are shown as a number from 0 to 1, with cases requiring orthognathic surgery being indicated by a number approaching 1. Results: The screening software demonstrated a diagnostic agreement of 96.3% with specialists regarding the requirement for orthognathic surgery. A confusion matrix showed that only 2 out of 54 cases were misdiagnosed (accuracy = 0.963, sensitivity = 1, precision = 0.93, F-value = 0.963, area under the curve = 0.96). Conclusions: Orthognathic surgery screening with a keypoint R-CNN for object detection and an ANN for classification showed 96.3% diagnostic agreement in this study.
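The classification stage described here, an ANN mapping seven cephalometric variables to a score between 0 and 1, can be sketched in Keras as follows. The hidden layer sizes are assumptions, not those reported in the study.

```python
import tensorflow as tf

# Seven cephalometric measurements in, one surgery-probability score out.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(7,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # outputs approach 1 for surgical cases
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```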
APA, Harvard, Vancouver, ISO, and other styles