Scientific literature on the topic "Detectron2"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles

Choose a source type:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Detectron2".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Detectron2"

1

Abdusalomov, Akmalbek Bobomirzaevich, Bappy MD Siful Islam, Rashid Nasimov, Mukhriddin Mukhiddinov, and Taeg Keun Whangbo. "An Improved Forest Fire Detection Method based on the Detectron2 Model and a Deep Learning Approach." Sensors 23, no. 3 (January 29, 2023): 1512. http://dx.doi.org/10.3390/s23031512.

Abstract:
With an increase in both global warming and the human population, forest fires have become a major global concern. This can lead to climatic shifts and the greenhouse effect, among other adverse outcomes. Surprisingly, human activities have caused a disproportionate number of forest fires. Fast detection with high accuracy is the key to controlling this unexpected event. To address this, we proposed an improved forest fire detection method to classify fires based on a new version of the Detectron2 platform (a ground-up rewrite of the Detectron library) using deep learning approaches. Furthermore, a custom dataset was created and labeled for the training model, and it achieved higher precision than the other models. This robust result was achieved by improving the Detectron2 model in various experimental scenarios with a custom dataset and 5200 images. The proposed model can detect small fires over long distances during the day and night. The advantage of using the Detectron2 algorithm is its long-distance detection of the object of interest. The experimental results proved that the proposed forest fire detection method successfully detected fires with an improved precision of 99.3%.
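As a rough illustration of the workflow this abstract describes, the sketch below shows how a Detectron2 detector might be fine-tuned on a custom COCO-format fire dataset; the dataset names, annotation paths, single-class assumption, and solver settings are placeholders rather than the authors' actual configuration.

```python
# Minimal sketch: fine-tuning a Detectron2 baseline on a custom COCO-format dataset.
# All names, paths, and hyperparameters below are illustrative assumptions.
import os

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

register_coco_instances("fire_train", {}, "annotations/train.json", "images/train")
register_coco_instances("fire_val", {}, "annotations/val.json", "images/val")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("fire_train",)
cfg.DATASETS.TEST = ("fire_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1   # assumed single "fire" class
cfg.SOLVER.IMS_PER_BATCH = 4
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 3000

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```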
2

Kouvaras, Loukas, and George P. Petropoulos. "A Novel Technique Based on Machine Learning for Detecting and Segmenting Trees in Very High Resolution Digital Images from Unmanned Aerial Vehicles." Drones 8, no. 2 (February 1, 2024): 43. http://dx.doi.org/10.3390/drones8020043.

Abstract:
The present study proposes a technique for automated tree crown detection and segmentation in digital images derived from unmanned aerial vehicles (UAVs) using a machine learning (ML) algorithm named Detectron2. The technique, which was developed in the python programming language, receives as input images with object boundary information. After training on sets of data, it is able to set its own object boundaries. In the present study, the algorithm was trained for tree crown detection and segmentation. The test bed consisted of UAV imagery of an agricultural field of tangerine trees in the city of Palermo in Sicily, Italy. The algorithm’s output was the accurate boundary of each tree. The output from the developed algorithm was compared against the results of tree boundary segmentation generated by the Support Vector Machine (SVM) supervised classifier, which has proven to be a very promising object segmentation method. The results from the two methods were compared with the most accurate yet time-consuming method, direct digitalization. For accuracy assessment purposes, the detected area efficiency, skipped area rate, and false area rate were estimated for both methods. The results showed that the Detectron2 algorithm is more efficient in segmenting the relevant data when compared to the SVM model in two out of the three indices. Specifically, the Detectron2 algorithm exhibited a 0.959% and 0.041% fidelity rate on the common detected and skipped area rate, respectively, when compared with the digitalization method. The SVM exhibited 0.902% and 0.097%, respectively. On the other hand, the SVM classification generated better false detected area results, with 0.035% accuracy, compared to the Detectron2 algorithm’s 0.056%. Having an accurate estimation of the tree boundaries from the Detectron2 algorithm, the tree health assessment was evaluated last. For this to happen, three different vegetation indices were produced (NDVI, GLI and VARI). All those indices showed tree health as average. All in all, the results demonstrated the ability of the technique to detect and segment trees from UAV imagery.
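For reference, the three vegetation indices named in the abstract (NDVI, GLI, VARI) are simple per-pixel band combinations; a small sketch, assuming the UAV bands have already been loaded as floating-point NumPy arrays, could be:

```python
import numpy as np

def vegetation_indices(nir, red, green, blue, eps=1e-6):
    """Per-pixel NDVI, GLI, and VARI from float band arrays (illustrative sketch)."""
    ndvi = (nir - red) / (nir + red + eps)
    gli = (2 * green - red - blue) / (2 * green + red + blue + eps)
    vari = (green - red) / (green + red - blue + eps)
    return ndvi, gli, vari
```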
3

Bhaddurgatte, Vishesh R., Supreeth S. Koushik, Shushruth S, and Kiran Y. C. "Detection of Adulterants in Pistachio Using Machine Learning Technique." International Journal of Scientific Research in Engineering and Management 09, no. 01 (January 6, 2025): 1–9. https://doi.org/10.55041/ijsrem40523.

Abstract:
This study addresses the issue of adulteration in pistachios, focusing on the use of green peas to compromise product purity and maximize profits. Leveraging the capabilities of YOLOv5s, a state-of-the-art real-time image processing model, we developed a robust system for identifying adulterants in pistachios. Comparative evaluations showed that other deep learning models, including Detectron2 and Scaled YOLOv4, performed worse than our YOLOv5s-based solution in terms of accuracy and speed. The YOLOv5s model also provides an instant estimate of the percentage of adulterants and pistachios. Keywords: YOLOv5s, Scaled YOLOv4, Detectron2
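The abstract notes that the YOLOv5s model reports an instant percentage of adulterants; one hedged sketch of how such a percentage could be derived from per-class detection counts is shown below. The weights file "best.pt", the image path, and the class name "green_pea" are hypothetical placeholders, not the authors' artifacts.

```python
# Sketch: estimating an adulteration percentage from per-class detection counts.
# "best.pt", the image path, and the class names are hypothetical placeholders.
from collections import Counter

import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
results = model("tray_image.jpg")
detections = results.pandas().xyxy[0]            # one row per detected object
counts = Counter(detections["name"])
total = max(sum(counts.values()), 1)
adulterant_pct = 100.0 * counts.get("green_pea", 0) / total
print(f"adulterants: {adulterant_pct:.1f}% of {total} detected objects")
```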
4

Shi, Zijing. "The distinguish between cats and dogs based on Detectron2 for automatic feeding." Applied and Computational Engineering 38, no. 1 (January 22, 2024): 35–40. http://dx.doi.org/10.54254/2755-2721/38/20230526.

Abstract:
With the rapid growth of urbanization, the problem of stray animals on the streets is particularly prominent, especially the shortage of food for cats and dogs. This study introduces an automatic feeding system based on the Detectron2 deep learning framework, aiming to accurately identify and provide suitable food for these stray animals. Through training using Detectron2 with a large amount of image data, the system shows extremely high recognition accuracy in single-object images. When dealing with multi-object images, Detectron2 can generate independent recognition frames for each target and make corresponding feeding decisions. Despite the outstanding performance of the model, its potential uncertainties and errors still need to be considered. This research not only offers a practical solution to meet the basic needs of stray animals but also provides a new perspective for urban management and animal welfare. By combining technology with social responsibility, this innovative solution opens up a new path for solving the stray animal problem in cities, with broad application prospects and profound social significance.
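Since "cat" and "dog" are both COCO classes, the kind of per-detection feeding decision the abstract describes can be sketched with an off-the-shelf Detectron2 model; the image path and dispensing logic below are placeholders, not the author's implementation.

```python
# Sketch: per-animal feeding decision from a COCO-pretrained Detectron2 model.
import cv2

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data import MetadataCatalog
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7
predictor = DefaultPredictor(cfg)

class_names = MetadataCatalog.get("coco_2017_val").thing_classes
outputs = predictor(cv2.imread("feeder_camera.jpg"))          # placeholder image path
for class_id in outputs["instances"].pred_classes.tolist():
    label = class_names[class_id]
    if label in ("cat", "dog"):
        print(f"detected {label} -> dispense {label} food")   # placeholder feeding action
```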
5

Chincholi, Farheen, and Harald Koestler. "Detectron2 for Lesion Detection in Diabetic Retinopathy." Algorithms 16, no. 3 (March 7, 2023): 147. http://dx.doi.org/10.3390/a16030147.

Abstract:
Hemorrhages in the retinal fundus are a common symptom of both diabetic retinopathy and diabetic macular edema, making their detection crucial for early diagnosis and treatment. For this task, the aim is to evaluate the performance of two pre-trained and additionally fine-tuned models from the Detectron2 model zoo, Faster R-CNN (R50-FPN) and Mask R-CNN (R50-FPN). Experiments show that the Mask R-CNN (R50-FPN) model provides highly accurate segmentation masks for each detected hemorrhage, with an accuracy of 99.34%. The Faster R-CNN (R50-FPN) model detects hemorrhages with an accuracy of 99.22%. The results of both models are compared using a publicly available image database with ground truth marked by experts. Overall, this study demonstrates that current models are valuable tools for early diagnosis and treatment of diabetic retinopathy and diabetic macular edema.
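The two baselines named in the abstract correspond to standard Detectron2 model zoo configurations; a minimal sketch of how either one might be instantiated before fine-tuning follows (the single-class assumption is ours, not necessarily the paper's exact setup):

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

def load_baseline(config_path):
    """Build a config for a COCO-pretrained Detectron2 baseline (sketch only)."""
    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(config_path))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(config_path)
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1   # assumed single "hemorrhage" class
    return cfg

faster_rcnn_cfg = load_baseline("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
mask_rcnn_cfg = load_baseline("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
```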
6

Mullins, Connor C., Travis J. Esau, Qamar U. Zaman, Ahmad A. Al-Mallahi, and Aitazaz A. Farooque. "Exploiting 2D Neural Network Frameworks for 3D Segmentation Through Depth Map Analytics of Harvested Wild Blueberries (Vaccinium angustifolium Ait.)." Journal of Imaging 10, no. 12 (December 15, 2024): 324. https://doi.org/10.3390/jimaging10120324.

Abstract:
This study introduced a novel approach to 3D image segmentation utilizing a neural network framework applied to 2D depth map imagery, with Z axis values visualized through color gradation. This research involved comprehensive data collection from mechanically harvested wild blueberries to populate 3D and red–green–blue (RGB) images of filled totes through time-of-flight and RGB cameras, respectively. Advanced neural network models from the YOLOv8 and Detectron2 frameworks were assessed for their segmentation capabilities. Notably, the YOLOv8 models, particularly YOLOv8n-seg, demonstrated superior processing efficiency, with an average time of 18.10 ms, significantly faster than the Detectron2 models, which exceeded 57 ms, while maintaining high performance with a mean intersection over union (IoU) of 0.944 and a Matthew’s correlation coefficient (MCC) of 0.957. A qualitative comparison of segmentation masks indicated that the YOLO models produced smoother and more accurate object boundaries, whereas Detectron2 showed jagged edges and under-segmentation. Statistical analyses, including ANOVA and Tukey’s HSD test (α = 0.05), confirmed the superior segmentation performance of models on depth maps over RGB images (p < 0.001). This study concludes by recommending the YOLOv8n-seg model for real-time 3D segmentation in precision agriculture, providing insights that can enhance volume estimation, yield prediction, and resource management practices.
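The two mask-quality metrics reported here, IoU and Matthews correlation coefficient (MCC), can be computed directly from binary masks; a short illustrative sketch:

```python
import numpy as np

def iou_and_mcc(pred, truth):
    """IoU and Matthews correlation coefficient for two boolean masks (sketch)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = float(np.logical_and(pred, truth).sum())
    fp = float(np.logical_and(pred, ~truth).sum())
    fn = float(np.logical_and(~pred, truth).sum())
    tn = float(np.logical_and(~pred, ~truth).sum())
    iou = tp / max(tp + fp + fn, 1.0)
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return iou, mcc
```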
7

Castillo, Darwin, María José Rodríguez-Álvarez, René Samaniego, and Vasudevan Lakshminarayanan. "Models to Identify Small Brain White Matter Hyperintensity Lesions." Applied Sciences 15, no. 5 (March 6, 2025): 2830. https://doi.org/10.3390/app15052830.

Abstract:
According to the World Health Organization (WHO), peripheral and central neurological disorders affect approximately one billion people worldwide. Ischemic stroke and Alzheimer's disease and other dementias are the second and fifth leading causes of death, respectively. In this context, detecting and classifying brain lesions constitutes a critical area of research in medical image processing, with a significant impact on clinical practice. Traditional lesion detection, segmentation, and feature extraction methods are time-consuming and observer-dependent. Research on machine and deep learning methods applied to medical image processing therefore constitutes one of the crucial tools for automatically learning hierarchical features and achieving better accuracy and faster diagnosis, treatment, and prognosis of diseases. This project aims to develop and implement deep learning models for detecting and classifying small brain white matter hyperintensity (WMH) lesions in magnetic resonance images (MRI), specifically lesions related to ischemic and demyelinating diseases. The methods applied were UNet and the Segment Anything Model (SAM) for segmentation, while YOLOv8 and Detectron2 (based on Mask R-CNN) were applied to detect and classify the lesions. Experimental results show a Dice coefficient (DSC) of 0.94, 0.50, 0.241, and 0.88 for segmentation of WMH lesions using UNet, SAM, YOLOv8, and Detectron2, respectively. The Detectron2 model demonstrated an accuracy of 0.94 in detecting and 0.98 in classifying lesions, including small lesions where other models often fail. The methods developed provide an outline for the detection, segmentation, and classification of small brain lesions with irregular morphology and could significantly aid clinical diagnostics, providing reliable support for physicians and improving patient outcomes.
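The Dice coefficient used to compare the segmentation models is a simple overlap measure; for reference, a minimal sketch over two binary lesion masks:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary lesion masks (sketch)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```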
8

Rani, Anju, Daniel Ortiz-Arroyo, and Petar Durdevic. "Defect Detection in Synthetic Fibre Ropes using Detectron2 Framework." Applied Ocean Research 150 (September 2024): 104109. http://dx.doi.org/10.1016/j.apor.2024.104109.

9

Wen, Hao, Chang Huang, and Shengmin Guo. "The Application of Convolutional Neural Networks (CNNs) to Recognize Defects in 3D-Printed Parts." Materials 14, no. 10 (May 15, 2021): 2575. http://dx.doi.org/10.3390/ma14102575.

Abstract:
Cracks and pores are two common defects in metallic additive manufacturing (AM) parts. In this paper, deep learning-based image analysis is performed for defect (cracks and pores) classification/detection based on SEM images of metallic AM parts. Three different levels of complexities, namely, defect classification, defect detection and defect image segmentation, are successfully achieved using a simple CNN model, the YOLOv4 model and the Detectron2 object detection library, respectively. The tuned CNN model can classify any single defect as either a crack or pore at almost 100% accuracy. The other two models can identify more than 90% of the cracks and pores in the testing images. In addition to the application of static image analysis, defect detection is also successfully applied on a video which mimics the AM process control images. The trained Detectron2 model can identify almost all the pores and cracks that exist in the original video. This study lays a foundation for future in situ process monitoring of the 3D printing process.
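As a hedged sketch of the video application described above (not the authors' code), a trained Detectron2 predictor can be applied frame by frame with OpenCV; the config choice, weights path, class count, and video file are assumptions.

```python
# Sketch: frame-by-frame defect detection on a process-monitoring video.
import cv2

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "output/model_final.pth"   # placeholder: weights fine-tuned on crack/pore images
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2            # assumed classes: crack, pore
predictor = DefaultPredictor(cfg)

cap = cv2.VideoCapture("am_process.mp4")       # placeholder video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    instances = predictor(frame)["instances"]
    print(f"defects detected in frame: {len(instances)}")
cap.release()
```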
10

Sankar, Aravinthan, Kunal Chaturvedi, Al-Akhir Nayan, Mohammad Hesam Hesamian, Ali Braytee, and Mukesh Prasad. "Utilizing Generative Adversarial Networks for Acne Dataset Generation in Dermatology." BioMedInformatics 4, no. 2 (April 9, 2024): 1059–70. http://dx.doi.org/10.3390/biomedinformatics4020059.

Abstract:
Background: In recent years, computer-aided diagnosis for skin conditions has made significant strides, primarily driven by artificial intelligence (AI) solutions. However, despite this progress, the efficiency of AI-enabled systems remains hindered by the scarcity of high-quality and large-scale datasets, primarily due to privacy concerns. Methods: This research circumvents privacy issues associated with real-world acne datasets by creating a synthetic dataset of human faces with varying acne severity levels (mild, moderate, and severe) using Generative Adversarial Networks (GANs). Further, three object detection models—YOLOv5, YOLOv8, and Detectron2—are used to evaluate the efficacy of the augmented dataset for detecting acne. Results: Integrating StyleGAN with these models, the results demonstrate the mean average precision (mAP) scores: YOLOv5: 73.5%, YOLOv8: 73.6%, and Detectron2: 37.7%. These scores surpass the mAP achieved without GANs. Conclusions: This study underscores the effectiveness of GANs in generating synthetic facial acne images and emphasizes the importance of utilizing GANs and convolutional neural network (CNN) models for accurate acne detection.
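For context on how mAP scores like those reported above are typically obtained with Detectron2, the snippet below sketches COCO-style evaluation on a registered test split; the dataset name, config, class count, and weights are placeholders, and "acne_test" is assumed to have been registered beforehand (e.g., with register_coco_instances).

```python
# Sketch: COCO-style mAP evaluation of a Detectron2 model on a registered test set.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data import build_detection_test_loader
from detectron2.engine import DefaultPredictor
from detectron2.evaluation import COCOEvaluator, inference_on_dataset

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "output/model_final.pth"   # placeholder: weights trained on the acne data
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3            # assumed classes: mild, moderate, severe
cfg.DATASETS.TEST = ("acne_test",)             # placeholder: previously registered dataset

predictor = DefaultPredictor(cfg)
evaluator = COCOEvaluator("acne_test", output_dir="./eval")
loader = build_detection_test_loader(cfg, "acne_test")
metrics = inference_on_dataset(predictor.model, loader, evaluator)
print(metrics["bbox"]["AP"])                   # COCO mAP averaged over IoU thresholds
```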

Books on the topic "Detectron2"

1

Pham, Van Vung, and Tommy Dang. Hands-On Computer Vision with Detectron2: Develop Object Detection and Segmentation Models with a Code and Visualization Approach. Walter de Gruyter GmbH, 2023.


Book chapters on the topic "Detectron2"

1

Mujagić, Adnan, Amar Mujagić, and Dželila Mehanović. "Food Recognition and Segmentation Using Detectron2 Framework." In Lecture Notes in Networks and Systems, 409–19. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-71694-2_30.

2

Galli, Hugo, Michelli Loureiro, Felipe Loureiro, and Edimilson Santos. "Convolutional Neural Network-Based Brain Tumor Segmentation Using Detectron2." In Intelligent Systems Design and Applications, 80–89. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-64813-7_10.

3

Soltani, Hama, Mohamed Amroune, Issam Bendib, and Mohamed-Yassine Haouam. "Application of Faster-RCNN with Detectron2 for Effective Breast Tumor Detection in Mammography." In 13th International Conference on Information Systems and Advanced Technologies "ICISAT 2023", 57–63. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60594-9_7.

4

Priyanka, Amrita Mohan, K. N. Singh, A. K. Singh, and A. K. Agrawal. "D2StegE: Using Detectron2 to Segment Medical Image with Security Through Steganography and Encryption." In Communications in Computer and Information Science, 49–63. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-81336-8_4.

5

Ali, Ammar Alhaj, Rasin Katta, Roman Jasek, Bronislav Chramco, and Said Krayem. "COVID-19 Detection from Chest X-Ray Images Using Detectron2 and Faster R-CNN." In Data Science and Algorithms in Systems, 37–53. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-21438-7_3.

6

Magwili, Glenn, Michael Jherriecho Christiane Ayson, and Mae Anne Armas. "Admonishment System for Human Physical Distancing Violators Using Faster Region-Based Convolutional Neural Network with Detectron2 Library." In Lecture Notes in Networks and Systems, 356–71. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-90321-3_29.

7

Restrepo-Arias, Juan Felipe, Paulina Arregocés-Guerra, and John Willian Branch-Bedoya. "Crops Classification in Small Areas Using Unmanned Aerial Vehicles (UAV) and Deep Learning Pre-trained Models from Detectron2." In Handbook on Decision Making, 273–91. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-08246-7_12.

8

Kömeçoğlu, Yavuz, Serdar Akyol, Fethi Su, and Başak Buluz Kömeçoğlu. "Deep Learning for Information Extraction From Digital Documents." In Machine Learning for Societal Improvement, Modernization, and Progress, 180–99. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-4045-2.ch009.

Abstract:
Print-oriented PDF documents are excellent at preserving the position of text and other objects but are difficult to process. Processable PDF documents will provide solutions to the unique needs of different sectors by paving the way for many innovations, such as searching within documents, linking to other documents, or restructuring content in a format that improves the reading experience. In this chapter, a deep learning-based system design is presented that aims to export clean text content, separate all visual elements, and extract rich information from the content without losing the integrated structure of content types. The F-RCNN model from the Detectron2 library was used to extract the layout, cosine similarities between the word2vec representations of the texts were used to identify related clips, and transformer language models were used to classify the clip type. The performance values on the 200-sample dataset created by the researchers were 1.87 WER and 2.11 CER for headings and 0.22 WER and 0.21 CER for paragraphs.
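The clip-matching step relies on cosine similarity between word2vec-style text vectors; as a minimal illustration (the embedding vectors themselves are assumed to be precomputed):

```python
import numpy as np

def cosine_similarity(u, v, eps=1e-12):
    """Cosine similarity between two text embedding vectors, e.g. averaged
    word2vec vectors of two document clips (illustrative sketch)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))
```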
9

"A11*6 Element Pyroelectric Detectro Array Utilizing Self-Polarized PZT Thin Films Grown by Sputtering." In Science and Technology of Integrated Ferroelectrics, 563–70. CRC Press, 2001. http://dx.doi.org/10.1201/9781482283365-51.


Conference papers on the topic "Detectron2"

1

K V, Adith, Aidan Dsouza, B. M. Kripa, D. Sathvik Pai, and Gayana M N. "DStruct: Handwritten Data Structure Problem Solving Using Detectron2 and YOLO." In 2024 IEEE International Conference on Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/discover62353.2024.10750745.

2

Lytvyn, Anastasiia, Kateryna Posokhova, Maksym Tymkovych, Oleg Avrunin, Oleksandra Hubenia, and Birgit Glasmacher. "Object Detection for Virtual Assistant in Cryolaboratory Based on Detectron2 Framework." In 2024 IEEE 17th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (TCSET), 534–39. IEEE, 2024. http://dx.doi.org/10.1109/tcset64720.2024.10755685.

3

Ayadi, Nouamane, Abdelilah Et-Taleby, Yassine Chaibi, Cheikhelwely Elwely Salem, Mohamed Benslimane, and Zakaria Chalh. "Photovoltaic hotspot fault detection based on detectron2 with faster R-CNN." In 2024 3rd International Conference on Embedded Systems and Artificial Intelligence (ESAI), 1–12. IEEE, 2024. https://doi.org/10.1109/esai62891.2024.10913808.

4

Bansal, Aayushi, Rewa Sharma, and Mamta Kathuria. "A Comparative Study of Object Detection and Pose Detection for Fall Detection using Detectron2." In 2024 3rd Edition of IEEE Delhi Section Flagship Conference (DELCON), 1–8. IEEE, 2024. https://doi.org/10.1109/delcon64804.2024.10866659.

5

S, Raveena, and Surendran R. "Detectron2 Powered-Image Segmentation and Object Detection for Smart Weed Control Program in Coffee Plantation." In 2024 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), 812–19. IEEE, 2024. https://doi.org/10.1109/3ict64318.2024.10824261.

6

Singh, Ritik, Shubham Shetty, Gaurav Patil, and Pramod J. Bide. "Helmet Detection Using Detectron2 and EfficientDet." In 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2021. http://dx.doi.org/10.1109/icccnt51525.2021.9579953.

7

Heartlin Maria, H., A. Maria Jossy, S. Malarvizhi, and K. Saravanan. "Automated detection of ovarian tumors using Detectron2 network." In Proceeding of International Conference on Energy, Manufacture, Advanced Material and Mechatronics 2021. AIP Publishing, 2023. http://dx.doi.org/10.1063/5.0126164.

8

Choudhari, Rutvik, Shubham Goel, Yash Patel, and Sunil Ghane. "Traffic Rule Violation Detection using Detectron2 and Yolov7." In 2023 World Conference on Communication & Computing (WCONF). IEEE, 2023. http://dx.doi.org/10.1109/wconf58270.2023.10235130.

9

Mahammad, Farooq Sunar, K. V. Sai Phani, N. Ramadevi, G. Siva Nageswara Rao, O. Bhaskaru, and Parumanchala Bhaskar. "Detecting social distancing by using Detectron2 and OpenCV." In International Conference on Emerging Trends in Electronics and Communication Engineering - 2023. AIP Publishing, 2024. http://dx.doi.org/10.1063/5.0212686.

10

Abdallah, Asma Ben, Abdelaziz Kallel, Mouna Dammak, and Ahmed Ben Ali. "Olive tree and shadow instance segmentation based on Detectron2." In 2022 6th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP). IEEE, 2022. http://dx.doi.org/10.1109/atsip55956.2022.9805897.

