Selection of scientific literature on the topic "TensorFlow Object Detection API 2"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "TensorFlow Object Detection API 2".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, if the relevant parameters are available in the metadata.

Journal articles on the topic "TensorFlow Object Detection API 2"

1

Elshin, Konstantin A., Elena I. Molchanova, Marina V. Usoltseva, and Yelena V. Likhoshway. "Automatic accounting of Baikal diatomic algae: approaches and prospects". Issues of modern algology (Вопросы современной альгологии), no. 2(20) (2019): 295–99. http://dx.doi.org/10.33624/2311-0147-2019-2(20)-295-299.

Annotation:
Using the TensorFlow Object Detection API, an approach to identifying and registering the Baikal diatom species Synedra acus subsp. radians has been tested. A set of images was compiled and training was conducted. It is shown that after 15,000 training iterations the total value of the loss function was 0.04, while both the classification accuracy and the bounding-box localization accuracy were 95%.
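For readers unfamiliar with the workflow described above, the following is a minimal Python sketch of running inference with a detection model exported by the TensorFlow 2 Object Detection API (via its exporter_main_v2.py tool). The file names and the 0.5 score threshold are illustrative and are not taken from the paper.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Load a detection model exported with the TF2 Object Detection API's
# exporter_main_v2.py; the path is hypothetical.
detect_fn = tf.saved_model.load("exported_model/saved_model")

# Read one image and add a batch dimension; exported models expect uint8 tensors.
image = np.array(Image.open("diatom_sample.jpg"))
input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]

detections = detect_fn(input_tensor)

# Boxes are normalized [ymin, xmin, ymax, xmax]; keep those above a score threshold.
scores = detections["detection_scores"][0].numpy()
boxes = detections["detection_boxes"][0].numpy()
for box, score in zip(boxes[scores > 0.5], scores[scores > 0.5]):
    print(f"score={score:.2f}, box={box}")
```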
2

Sharma, Rishabh. "Blindfold: A Smartphone based Object Detection Application". International Journal for Research in Applied Science and Engineering Technology 9, no. VI (20.06.2021): 1268–73. http://dx.doi.org/10.22214/ijraset.2021.35091.

Annotation:
With the advancement of the computing power of smartphones, they seem to be a better option to be used as an assistive technology for the visually impaired. In this paper we have discussed an application which allows visually impaired users to detect objects of their choice in their environment. We have made use of the TensorFlow Lite Application Programming Interface (API), an API by TensorFlow which specifically runs models on an Android smartphone. We have discussed the architecture of the API and the application itself, as well as the performance of various types of models such as MobileNet, ResNet, and Inception. We have compared the results of the various models on their size, accuracy, and inference time (ms) and found that MobileNet has the best performance. We have also explained the working of our application in detail.
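As a rough illustration of the interpreter flow the paper builds on, the sketch below runs a TensorFlow Lite detection model with the Python tf.lite.Interpreter; on Android the equivalent calls go through the TensorFlow Lite Java/Kotlin API. The model file name, input size, and output tensor order are typical for SSD-MobileNet TFLite models but are assumptions here, not details from the paper.

```python
import numpy as np
import tensorflow as tf

# Load a TensorFlow Lite detection model; the file name is hypothetical.
interpreter = tf.lite.Interpreter(model_path="ssd_mobilenet.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy frame with the shape and dtype the model expects
# (a real app would resize a camera frame to this shape).
_, height, width, channels = input_details[0]["shape"]
frame = np.zeros((1, height, width, channels), dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

# Typical output order for SSD-MobileNet detection models:
# boxes, classes, scores, number of detections (order can vary by model).
boxes = interpreter.get_tensor(output_details[0]["index"])
scores = interpreter.get_tensor(output_details[2]["index"])
print(boxes.shape, scores.shape)
```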
3

Salunkhe, Akilesh, Manthan Raut, Shayantan Santra, and Sumedha Bhagwat. "Android-based object recognition application for visually impaired". ITM Web of Conferences 40 (2021): 03001. http://dx.doi.org/10.1051/itmconf/20214003001.

Annotation:
Detecting objects in real time and converting them into an audio output is a challenging task. Recent advancements in computer vision have allowed the development of various real-time object detection applications. This paper describes a simple Android app that helps visually impaired people understand their surroundings. Information about the surrounding environment is captured through a phone's camera, where real-time object recognition is performed with TensorFlow's Object Detection API. The detected objects are then converted into an audio output using Android's text-to-speech library. TensorFlow Lite made the offline processing of complex algorithms simple. The overall accuracy of the proposed system was found to be approximately 90%.
4

Ghifari, Hummam Ghassan, Denny Darlis, and Aris Hartaman. "Pendeteksi Golongan Darah Manusia Berbasis Tensorflow menggunakan ESP32-CAM". ELKOMIKA: Jurnal Teknik Energi Elektrik, Teknik Telekomunikasi, & Teknik Elektronika 9, no. 2 (04.04.2021): 359. http://dx.doi.org/10.26760/elkomika.v9i2.359.

Annotation:
Blood group detection is performed to determine a person's blood group. To date it still relies on health analysts inspecting samples with the naked eye. This paper presents a human blood group detection device based on the ESP32-CAM. The device uses the ESP32-CAM's OV2640 camera to capture an image of the blood sample, and the TensorFlow Object Detection API as the framework used to train the model and process the image. The ESP32-CAM captures an image of the blood sample and serves it via its IP address; a Python program accesses the image over this address and processes it with the previously trained model. The result is displayed in a program window together with the detected blood type and its confidence level. Testing was carried out with 20 datasets, varying the number of image samples and the measurement distance; the ideal distance between the ESP32-CAM and the blood group image was found to be 20 cm. During testing, the blood group detected most reliably was blood group AB. Keywords: ESP32-CAM, TensorFlow, Python, blood type, image processing
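The abstract describes a Python program that pulls frames from the ESP32-CAM over its IP address and runs them through a trained detector. A minimal sketch of that loop is shown below; the camera URL, endpoint, and model path are illustrative and not taken from the paper.

```python
import io

import numpy as np
import requests
import tensorflow as tf
from PIL import Image

# The ESP32-CAM serves JPEG frames over HTTP; the address and endpoint are illustrative.
ESP32_URL = "http://192.168.1.50/capture"

# Hypothetical blood-group model trained with the TF Object Detection API.
detect_fn = tf.saved_model.load("blood_type_model/saved_model")

# Fetch one frame from the camera and decode it into a numpy array.
response = requests.get(ESP32_URL, timeout=5)
image = np.array(Image.open(io.BytesIO(response.content)))

# Run detection and report the highest-scoring class.
detections = detect_fn(tf.convert_to_tensor(image)[tf.newaxis, ...])
best = int(tf.argmax(detections["detection_scores"][0]))
print("class id:", int(detections["detection_classes"][0][best]),
      "score:", float(detections["detection_scores"][0][best]))
```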
5

Sun, Chenfan, Wei Zhan, Jinhiu She, and Yangyang Zhang. "Object Detection from the Video Taken by Drone via Convolutional Neural Networks". Mathematical Problems in Engineering 2020 (13.10.2020): 1–10. http://dx.doi.org/10.1155/2020/4013647.

Annotation:
The aim of this research is to show the implementation of object detection on drone videos using the TensorFlow Object Detection API, and to examine the recognition quality and performance of popular detection algorithms and feature extractors for recognizing people, trees, cars, and buildings in real-world video frames taken by drones. The study found that different detection algorithms perform differently on "normal" images (from an ordinary camera) in terms of the number of detected instances, detection accuracy, and computational cost, and that their behaviour differs again when applied to image data acquired by a drone. Object detection is a key part of achieving full autonomy for any robot, and unmanned aerial vehicles (UAVs) are a very active area of this field. In order to explore the performance of state-of-the-art detection algorithms on image data captured by UAVs, we carried out extensive experiments and compared two representative state-of-the-art convolutional object detection systems, SSD and Faster R-CNN, with MobileNet, GoogLeNet/Inception, and ResNet50 base feature extractors.
6

Ghuli, Poonam, Shashank B. N, and Athri G. Rao. "Development of framework for detecting smoking scene in video clips". Indonesian Journal of Electrical Engineering and Computer Science 13, no. 1 (01.01.2019): 22. http://dx.doi.org/10.11591/ijeecs.v13.i1.pp22-26.

Annotation:
According to the Global Adult Tobacco Survey 2016–17, for 61.9% of the people who quit tobacco the reason was the warnings displayed on the product covers. The focus of this paper is to automatically display warning messages in video clips. This paper explains the development of a system that automatically detects smoking scenes in video clips using an image recognition approach and then adds a warning message for the viewer. The approach aims to detect the cigarette object using TensorFlow's Object Detection API. TensorFlow is an open-source software library for machine learning provided by Google which is broadly used in the field of image recognition. At present, Faster R-CNN with Inception ResNet is TensorFlow's slowest but most accurate model. The Faster R-CNN with Inception ResNet v2 model is used to detect smoking scenes by training the model with the cigarette as an object.
7

Trainor-Guitton, Whitney, Leo Turon, and Dominique Dubucq. "Python Earth Engine API as a new open-source ecosphere for characterizing offshore hydrocarbon seeps and spills". Leading Edge 40, no. 1 (January 2021): 35–44. http://dx.doi.org/10.1190/tle40010035.1.

Annotation:
The Python Earth Engine application programming interface (API) provides a new open-source ecosphere for testing hydrocarbon detection algorithms on large volumes of images curated with the Google Earth Engine. We specifically demonstrate the Python Earth Engine API by calculating three hydrocarbon indices: fluorescence, rotation absorption, and normalized fluorescence. The Python Earth Engine API provides an ideal environment for testing these indices with varied oil seeps and spills by (1) removing barriers of proprietary software formats and (2) providing an extensive library of data analysis tools (e.g., Pandas and Seaborn) and classification algorithms (e.g., Scikit-learn and TensorFlow). Our results demonstrate end-member cases in which fluorescence and normalized fluorescence indices of seawater and oil are statistically similar and different. As expected, predictive classification is more effective and the calculated probability of oil is more accurate for scenarios in which seawater and oil are well separated in the fluorescence space.
8

Mohd Ariff Brahin, Noor, Haslinah Mohd Nasir, Aiman Zakwan Jidin, Mohd Faizal Zulkifli, and Tole Sutikno. "Development of vocabulary learning application by using machine learning technique". Bulletin of Electrical Engineering and Informatics 9, no. 1 (01.02.2020): 362–69. http://dx.doi.org/10.11591/eei.v9i1.1616.

Annotation:
Nowadays, educational mobile applications are widely accepted and have opened new windows of opportunity to explore. With their flexibility and practicality, mobile applications can promote learning through play in an interactive environment, especially for children. This paper describes the development of a mobile learning application to help children above 4 years old learn English and Arabic in a playful and fun way. The application is developed with a combination of Android Studio and a machine learning technique, the TensorFlow Object Detection API, in order to predict the output result. The developed application, named "LearnWithIman", has been successfully implemented, and the results show that the application's predictions for captured images of the listed items are accurate. A user database for lesson tracking and new lessons will be added as future improvements.
9

Ou, Soobin, Huijin Park, and Jongwoo Lee. "Implementation of an Obstacle Recognition System for the Blind". Applied Sciences 10, no. 1 (30.12.2019): 282. http://dx.doi.org/10.3390/app10010282.

Annotation:
The blind encounter commuting risks, such as failing to recognize and avoid obstacles while walking, but protective support systems are lacking. Acoustic signals at crosswalk lights are activated by button or remote control; however, these signals are difficult to operate and not always available (i.e., broken). Bollards are posts installed for pedestrian safety, but they can create dangerous situations in that the blind cannot see them. Therefore, we proposed an obstacle recognition system to assist the blind in walking safely outdoors; this system can recognize and guide the blind through two obstacles (crosswalk lights and bollards) with image training from the Google Object Detection application program interface (API) based on TensorFlow. The recognized results notify the blind through voice guidance playback in real time. The single shot multibox detector (SSD) MobileNet and faster region-convolutional neural network (R-CNN) models were applied to evaluate the obstacle recognition system; the latter model demonstrated better performance. Crosswalk lights were evaluated and found to perform better during the day than night. They were also analyzed to determine if a client could cross at a crosswalk, while the locations of bollards were analyzed by algorithms to guide the client by voice guidance.
10

Balaniuk, Remis, Olga Isupova, and Steven Reece. "Mining and Tailings Dam Detection in Satellite Imagery Using Deep Learning". Sensors 20, no. 23 (04.12.2020): 6936. http://dx.doi.org/10.3390/s20236936.

Annotation:
This work explores the combination of free cloud computing, free open-source software, and deep learning methods to analyze a real, large-scale problem: the automatic country-wide identification and classification of surface mines and mining tailings dams in Brazil. Locations of officially registered mines and dams were obtained from the Brazilian government open data resource. Multispectral Sentinel-2 satellite imagery, obtained and processed at the Google Earth Engine platform, was used to train and test deep neural networks using the TensorFlow 2 application programming interface (API) and Google Colaboratory (Colab) platform. Fully convolutional neural networks were used in an innovative way to search for unregistered ore mines and tailing dams in large areas of the Brazilian territory. The efficacy of the approach is demonstrated by the discovery of 263 mines that do not have an official mining concession. This exploratory work highlights the potential of a set of new technologies, freely available, for the construction of low cost data science tools that have high social impact. At the same time, it discusses and seeks to suggest practical solutions for the complex and serious problem of illegal mining and the proliferation of tailings dams, which pose high risks to the population and the environment, especially in developing countries.
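As a loose illustration of the kind of fully convolutional classifier the abstract mentions, the sketch below builds a tiny model with the TensorFlow 2 Keras API. The layer sizes, class count, and use of all 13 Sentinel-2 bands are assumptions for illustration, not the authors' architecture.

```python
import tensorflow as tf

# A small fully convolutional classifier for multispectral patches.
# Input shape (None, None, 13) allows arbitrary patch sizes with 13 Sentinel-2 bands;
# all sizes and the three output classes here are illustrative.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, None, 13)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    # A 1x1 convolution produces a per-location class map
    # (e.g. mine / tailings dam / background).
    tf.keras.layers.Conv2D(3, 1, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```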

Dissertations on the topic "TensorFlow Object Detection API 2"

1

Černil, Martin. „Automatická detekce ovládacích prvků výtahu zpracováním digitálního obrazu“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-444987.

Annotation:
This thesis deals with the automatic detection of elevator controls in passenger elevators through digital image processing using computer vision. The theoretical part of the thesis covers methods of image processing with regard to object detection in images and reviews previous solutions, which leads to an investigation of convolutional neural networks. The practical part covers the creation of an elevator-controls image dataset, the selection, training and evaluation of the used models, and the implementation of a robust algorithm utilizing the detection of elevator controls. The conclusion of the work discusses the suitability of the detection approach for the given task.
2

Furundzic, Bojan, and Fabian Mathisson. "Dataset Evaluation Method for Vehicle Detection Using TensorFlow Object Detection API". Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43345.

Annotation:
Recent developments in the field of object detection have highlighted a significant variation in quality between visual datasets. As a result, there is a need for a standardized approach to validating visual dataset features and their contribution to performance. With a focus on vehicle detection, this thesis aims to develop an evaluation method for comparing visual datasets. This method was used to determine the dataset that gave the detection model the greatest ability to detect vehicles. The visual datasets compared in this research were BDD100K, KITTI and Udacity, each used to train an individual model. Applying the developed evaluation method, a strong indication of BDD100K's performance superiority was found. Further analysis and feature extraction of dataset size, label distribution and average labels per image were conducted. In addition, real-world experiments were conducted in order to validate the developed evaluation method. All features and experimental results pointed to BDD100K's superiority over the other datasets, validating the developed evaluation method. Furthermore, the TensorFlow Object Detection API's ability to improve the performance gained from a visual dataset was studied. Through the use of augmentations, it was concluded that the TensorFlow Object Detection API serves as a great tool to increase the performance gained from visual datasets.
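For context on the augmentation experiments mentioned in the abstract, the sketch below shows one way to add augmentation options to a TensorFlow Object Detection API pipeline configuration programmatically. The config path and the particular augmentation steps are illustrative; the thesis does not state which options were used.

```python
import os

from object_detection.utils import config_util

# Load an existing pipeline configuration; the path is hypothetical.
configs = config_util.get_configs_from_pipeline_file("pipeline.config")
train_config = configs["train_config"]

# Append two common augmentation options from the API's preprocessor config.
flip = train_config.data_augmentation_options.add()
flip.random_horizontal_flip.SetInParent()
crop = train_config.data_augmentation_options.add()
crop.random_crop_image.SetInParent()

# Write the modified configuration back out for a new training run.
os.makedirs("augmented_config", exist_ok=True)
pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, "augmented_config")
```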
3

Horák, Martin. „Sémantický popis obrazovky embedded zařízení“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413261.

Annotation:
This master's thesis deals with the detection of user interface elements in an image of a printer display using convolutional neural networks. The theoretical part surveys architectures currently used for object detection. The practical part covers the creation of an image gallery and the training and evaluation of the selected models using the TensorFlow Object Detection API. The conclusion of the thesis discusses the suitability of the trained models for the given task.
4

Agarwal, Kirti. „Object detection in refrigerators using Tensorflow“. Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/10464.

Annotation:
Object Detection is widely used in many applications such as face detection, detecting vehicles and pedestrians on streets, and autonomous vehicles. Object detection not only includes recognizing and classifying objects in an image, but also localizes those objects and draws bounding boxes around them. Therefore, most of the successful object detection networks make use of neural network based image classifiers in conjunction with object detection techniques. Tensorflow Object Detection API, an open source framework based on Google's TensorFlow, allows us to create, train and deploy object detection models. This thesis mainly focuses on detecting objects kept in a refrigerator. To facilitate the object detection in a refrigerator, we have used Tensorflow Object Detection API to train and evaluate models such as SSD-MobileNet-v2, Faster R-CNN-ResNet-101, and R-FCN-ResNet-101. The models are tested as a) a pre-trained model and b) a fine-tuned model devised by fine-tuning the existing models with a training dataset for eight food classes extracted from the ImageNet database. The models are evaluated on a test dataset for the same eight classes derived from the ImageNet database to infer which works best for our application. The results suggest that the performance of Faster R-CNN is the best on the test food dataset with a mAP score of 81.74%, followed by R-FCN with a mAP of 80.33% and SSD with a mAP of 76.39%. However, the time taken by SSD for detection is considerably less than the other two models which makes it a viable option for our objective. The results provide substantial evidence that the SSD model is the most suitable model for deploying object detection on mobile devices with an accuracy of 76.39%. Our methodology and results could potentially help other researchers to design a custom object detector and further enhance the precision for their datasets.
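As a point of reference for the fine-tuning workflow the abstract describes, the sketch below shows how a training run is typically launched with the TensorFlow 2 Object Detection API's bundled entry point. All paths and the config name are illustrative and are not taken from the thesis.

```python
import subprocess

# model_main_tf2.py ships with the TensorFlow Object Detection API repository
# (models/research/object_detection); every path below is illustrative.
subprocess.run(
    [
        "python", "models/research/object_detection/model_main_tf2.py",
        "--pipeline_config_path=configs/ssd_mobilenet_v2_food.config",
        "--model_dir=training/ssd_mobilenet_v2_food",
        "--alsologtostderr",
    ],
    check=True,
)
```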

Book chapters on the topic "TensorFlow Object Detection API 2"

1

Xin, Chen, Minh Nguyen, and Wei Qi Yan. "Multiple Flames Recognition Using Deep Learning". In Handbook of Research on Multimedia Cyber Security, 296–307. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2701-6.ch015.

Annotation:
Identifying fire flames is based on object recognition, which has valuable applications in intelligent surveillance. This chapter focuses on flame recognition using deep learning and its evaluation. To achieve this goal, the authors design a Multi-Flame Detection (MFD) scheme which utilises convolutional neural networks (CNNs). The authors make use of TensorFlow for deep learning with an NVIDIA GPU to train on an image dataset and construct a model for flame recognition. The contributions of this book chapter are: (1) data augmentation for flame recognition, (2) model construction for deep learning, and (3) result evaluations for flame recognition using deep learning.
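The chapter lists data augmentation as its first contribution, but the abstract does not say which transformations were used; the sketch below is a generic Keras augmentation pipeline, with all layer choices and parameters assumed for illustration.

```python
import tensorflow as tf

# Illustrative augmentation pipeline; the chapter does not state which
# transformations were applied, so these layers and parameters are assumptions.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomContrast(0.2),
])

# Applied on the fly during training, e.g. inside a tf.data pipeline:
# dataset = dataset.map(lambda x, y: (augment(x, training=True), y))
```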

Conference papers on the topic "TensorFlow Object Detection API 2"

1

Barba-Guaman, Luis, Jose Eugenio Naranjo, and Anthony Ortiz. "Object detection in rural roads using Tensorflow API". In 2020 International Conference of Digital Transformation and Innovation Technology (Incodtrin). IEEE, 2020. http://dx.doi.org/10.1109/incodtrin51881.2020.00028.

2

Kannan, Raadhesh, Chin Ji Jian, and XiaoNing Guo. "Adversarial Evasion Noise Attacks Against TensorFlow Object Detection API". In 2020 15th International Conference for Internet Technology and Secured Transactions (ICITST). IEEE, 2020. http://dx.doi.org/10.23919/icitst51030.2020.9351331.

3

Hsieh, Cheng-Hsiung, Dung-Ching Lin, Cheng-Jia Wang, Zong-Ting Chen, and Jiun-Jian Liaw. "Real-Time Car Detection and Driving Safety Alarm System With Google Tensorflow Object Detection API". In 2019 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2019. http://dx.doi.org/10.1109/icmlc48188.2019.8949265.

4

Kilic, Irfan, and Galip Aydin. "Traffic Sign Detection And Recognition Using TensorFlow's Object Detection API With A New Benchmark Dataset". In 2020 International Conference on Electrical Engineering (ICEE). IEEE, 2020. http://dx.doi.org/10.1109/icee49691.2020.9249914.

5

Rosol, Marcin. "Application of the TensorFlow object detection API to high speed videos of pyrotechnics for velocity calculations". In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, edited by Tien Pham, Latasha Solomon, and Katie Rainey. SPIE, 2020. http://dx.doi.org/10.1117/12.2557526.
