Selection of scientific literature on the topic "Tensorflow ObjectDetection API 2"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Tensorflow ObjectDetection API 2".

Next to every work in the bibliography there is an "Add to bibliography" option. Click it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, if the relevant parameters are available in the metadata.

Journal articles on the topic "Tensorflow ObjectDetection API 2"

1

Elshin, Konstantin A., Elena I. Molchanova, Marina V. Usoltseva and Yelena V. Likhoshway. "Automatic accounting of Baikal diatomic algae: approaches and prospects". Issues of Modern Algology (Вопросы современной альгологии), no. 2(20) (2019): 295–99. http://dx.doi.org/10.33624/2311-0147-2019-2(20)-295-299.

Abstract:
Using the TensorFlow Object Detection API, an approach to identifying and registering the Baikal diatom species Synedra acus subsp. radians has been tested. A set of images was compiled and training was conducted. After 15,000 training iterations, the total value of the loss function was 0.04, while the classification accuracy and the bounding-box localization accuracy were both equal to 95%.
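Bounding-box accuracy of the kind reported here is commonly scored by intersection-over-union (IoU) against a threshold; the following is only a minimal sketch, with the (x1, y1, x2, y2) box format and the 0.5 threshold being assumptions, not details taken from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero-sized if the boxes do not intersect).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union > 0 else 0.0

def box_correct(pred, truth, threshold=0.5):
    """Count a predicted box as correct if its IoU clears the threshold."""
    return iou(pred, truth) >= threshold
```

Under this kind of metric, a detection counts toward a box-accuracy figure only if its IoU with the ground-truth box exceeds the chosen threshold.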
2

Trainor-Guitton, Whitney, Leo Turon and Dominique Dubucq. "Python Earth Engine API as a new open-source ecosphere for characterizing offshore hydrocarbon seeps and spills". Leading Edge 40, no. 1 (January 2021): 35–44. http://dx.doi.org/10.1190/tle40010035.1.

Abstract:
The Python Earth Engine application programming interface (API) provides a new open-source ecosphere for testing hydrocarbon detection algorithms on large volumes of images curated with the Google Earth Engine. We specifically demonstrate the Python Earth Engine API by calculating three hydrocarbon indices: fluorescence, rotation absorption, and normalized fluorescence. The Python Earth Engine API provides an ideal environment for testing these indices with varied oil seeps and spills by (1) removing barriers of proprietary software formats and (2) providing an extensive library of data analysis tools (e.g., Pandas and Seaborn) and classification algorithms (e.g., Scikit-learn and TensorFlow). Our results demonstrate end-member cases in which fluorescence and normalized fluorescence indices of seawater and oil are statistically similar and different. As expected, predictive classification is more effective and the calculated probability of oil is more accurate for scenarios in which seawater and oil are well separated in the fluorescence space.
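Normalized spectral indices such as the normalized fluorescence mentioned above are generally band ratios of the form (a − b) / (a + b); the paper's exact band choices are not reproduced here, so this is only a generic sketch with illustrative names:

```python
def normalized_index(band_a, band_b):
    """Generic normalized band ratio (a - b) / (a + b), bounded in [-1, 1]."""
    denom = band_a + band_b
    return (band_a - band_b) / denom if denom != 0 else 0.0

def index_image(bands_a, bands_b):
    """Apply the index per pixel over two equally sized band rasters."""
    return [normalized_index(a, b) for a, b in zip(bands_a, bands_b)]
```

In the Earth Engine Python API itself, the same per-image computation is typically expressed with `ee.Image.normalizedDifference` rather than a pixel loop.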
3

Balaniuk, Remis, Olga Isupova and Steven Reece. "Mining and Tailings Dam Detection in Satellite Imagery Using Deep Learning". Sensors 20, no. 23 (December 4, 2020): 6936. http://dx.doi.org/10.3390/s20236936.

Abstract:
This work explores the combination of free cloud computing, free open-source software, and deep learning methods to analyze a real, large-scale problem: the automatic country-wide identification and classification of surface mines and mining tailings dams in Brazil. Locations of officially registered mines and dams were obtained from the Brazilian government open data resource. Multispectral Sentinel-2 satellite imagery, obtained and processed at the Google Earth Engine platform, was used to train and test deep neural networks using the TensorFlow 2 application programming interface (API) and Google Colaboratory (Colab) platform. Fully convolutional neural networks were used in an innovative way to search for unregistered ore mines and tailing dams in large areas of the Brazilian territory. The efficacy of the approach is demonstrated by the discovery of 263 mines that do not have an official mining concession. This exploratory work highlights the potential of a set of new technologies, freely available, for the construction of low cost data science tools that have high social impact. At the same time, it discusses and seeks to suggest practical solutions for the complex and serious problem of illegal mining and the proliferation of tailings dams, which pose high risks to the population and the environment, especially in developing countries.
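The "fully convolutional" property exploited above — a network trained on fixed-size patches can be slid over arbitrarily large scenes to yield a detection heatmap — can be illustrated with a bare valid-mode 2-D cross-correlation (pure Python, no TensorFlow; all names are illustrative):

```python
def conv2d_valid(image, kernel):
    """Valid-mode 2-D cross-correlation: output shrinks by (kernel size - 1)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    acc += image[y + dy][x + dx] * kernel[dy][dx]
            row.append(acc)
        out.append(row)
    return out
```

Because the output size is (H − kh + 1) × (W − kw + 1), a larger input simply produces a larger response map — which is what lets a patch-trained network scan country-scale Sentinel-2 mosaics for candidate mines and dams.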
4

Madandola, Olatunde, Altansuren Tumurbaatar, Liangyu Tan, Saitaja Abbu and Lauren E. Charles. "Camera-based, mobile disease surveillance using Convolutional Neural Networks". Online Journal of Public Health Informatics 11, no. 1 (May 30, 2019). http://dx.doi.org/10.5210/ojphi.v11i1.9849.

Abstract:
Objective. Automated syndromic surveillance using mobile devices is an emerging public health focus with high potential for enhanced disease tracking and prevention in areas with poor infrastructure. Pacific Northwest National Laboratory sought to develop an Android mobile application for syndromic biosurveillance that would (i) use the phone camera to take images of human faces to detect sick individuals through a machine learning (ML) model and (ii) collect image data to increase the training data available for ML models. The initial prototype use case is screening and tracking the health of soldiers for the Department of Defense's Defense Threat Reduction Agency.
Introduction. Infectious diseases present with multifarious factors requiring several efforts to detect, prevent, and break the chain of transmission. Recently, machine learning has shown promise for automated surveillance leading to rapid and early interventions, and for extraction of phenotypic features of human faces [3, 5]. In addition, mobile devices have become a promising tool for on-the-ground surveillance, especially in remote areas, and for geolocation mapping [4]. Pacific Northwest National Laboratory (PNNL) combines machine learning with mobile technology to provide a groundbreaking prototype of disease surveillance without the need for internet, just a camera. In this Android application, VisionDx, a machine learning algorithm analyzes human face images and within milliseconds notifies the user, with a confidence level, whether or not the person is sick. VisionDx comes with two modes, photo and video, and additional features of history, map, and statistics. This application is the first of its kind and provides a new way to think about the future of syndromic surveillance.
Methods. Data: healthy (n = 1096) and non-healthy (n = 1269) human facial images met the criteria for training the ML model after preprocessing. The healthy images were obtained from the Chicago face database [6] and the California Institute of Technology [2]. There are no known collections of disease facial images. Using open-source image collection/curation services, images were identified by a variety of keywords, including specific infectious diseases. The criteria for image inclusion were that (1) a frontal face was identified using the OpenCV library [1], and (2) the image contained signs of disease on visual inspection (e.g., abnormal color, texture, swelling). Model: to distinguish a sick face from a healthy one, we used transfer learning and experimented with various pretrained convolutional neural networks (CNN) from Google for mobile and embedded vision applications. Using MobileNet, we trained the final model with our data and deployed it to our prototype mobile app. The Google Mobile Vision API and TensorFlow Mobile were used to detect human faces and run predictions in the mobile app. Mobile application: the Android app was built using Android Studio to provide an easily navigable interface that connects every action between tabbed features. The app features (i.e., Map, Camera, History, and Statistics) are in tab-view format. The custom-made camera is the main feature of the app and contains face-detection capability. A real-time health-status detection function gives a level of confidence based on the algorithm results for detected faces in the camera image.
Results. PNNL's prototype Android application, VisionDx, was built with user-friendly tab views and functions to take camera images of human faces and classify them as sick or healthy through an inbuilt ML model. The major functions of the app are the camera, map, history, and statistics pages. The camera tab has a custom-made camera with a face-detection algorithm and a sick/healthy classification model. The camera has image and video modes, and the results of the algorithm are updated in milliseconds. The Statistics view provides a simple pie chart of sick/healthy images based on user-selected time and location. The Map shows pins representing all labeled images stored, and the History displays all the labeled images. Clicking on an image in either view shows the image with metadata, i.e., model confidence levels, geolocation, and datetime. The CNN model has ~98% validation accuracy and ~96% test accuracy. High model performance shows that deep learning could be a powerful tool to detect sickness. However, given the limited dataset, this high accuracy also means the model is most likely overfit to the data. The training set is limited: (a) the number of training images is small compared to the variability in facial expressions and skin coloring, and (b) the sick images only contained overt clinical signs. If trained on a larger, more diverse set of data, this prototype app could prove extremely useful in surveillance efforts ranging from individuals to large groups of people in remote areas, e.g., to identify individuals in need of medical attention or to get an overview of population health. In an effort to improve the model, VisionDx was developed as a data collection tool to build a more comprehensive dataset. Within the tool, users can override the model prediction, i.e., false positive or false negative, with a simple toggle button. Lastly, the app was built to protect privacy, so other phone apps cannot access the images unless shared by a user.
Conclusions. Developed at PNNL for the Defense Threat Reduction Agency, VisionDx is a novel, camera-based mobile application for real-time biosurveillance and early warning in the field without internet dependency. The prototype mobile app takes pictures of faces and analyzes them using a state-of-the-art machine learning model to give two confidence levels of the likelihood of being sick and healthy. With further development of a labeled dataset, such as by using the app as a data collection tool, the results of the algorithm will quickly improve, leading to a groundbreaking approach to public health surveillance.
References:
1. Bradski G. (n.d.). The OpenCV Library. Retrieved Sept 30, 2018 from http://www.drdobbs.com/open-source/the-opencv-library/184404319
2. Computational Vision: Archive. (1999). Retrieved Sept 22, 2018 from http://www.vision.caltech.edu/html-files/archive.html
3. Ferry Q, Steinberg J, Webber C, et al. (2014). Diagnostically relevant facial gestalt information from ordinary photos. eLife, 3, e02020.
4. Fornace KM, Surendra H, Abidin TR, et al. (2018). Use of mobile technology-based participatory mapping approaches to geolocate health facility attendees for disease surveillance in low resource settings. International Journal of Health Geographics, 17(1), 21. https://doi.org/10.1186/s12942-018-0141-0
5. Lopez DM, de Mello FL, Dias CMG, et al. (2017). Evaluating the Surveillance System for Spotted Fever in Brazil Using Machine-Learning Techniques. Frontiers in Public Health, 5. https://doi.org/10.3389/fpubh.2017.00323
6. Ma DS, Correll J, Wittenbrink B. (2015). The Chicago face database: A free stimulus set of faces and norming data. Behavior Research Methods, 47(4), 1122–1135. https://doi.org/10.3758/s13428-014-0532-5
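The per-face confidence levels such an app reports are, in a typical CNN classifier head, derived from raw logits via a softmax; the following is a generic sketch of that step, not VisionDx's actual code:

```python
import math

def softmax(logits):
    """Convert raw class scores to probabilities that sum to 1."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sick_confidence(logit_sick, logit_healthy):
    """Probability-style confidence for the hypothetical 'sick' class."""
    return softmax([logit_sick, logit_healthy])[0]
```

With only two classes, the sick and healthy confidences are complementary, which matches the app's reporting of two confidence levels per face.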

Dissertations on the topic "Tensorflow ObjectDetection API 2"

1

Černil, Martin. „Automatická detekce ovládacích prvků výtahu zpracováním digitálního obrazu“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-444987.

Abstract:
This thesis deals with the automatic detection of elevator controls in passenger elevators through digital imaging using computer vision. The theoretical part covers methods of image processing with regard to object detection in images and reviews previous solutions, leading to an investigation of convolutional neural networks. The practical part covers the creation of an elevator-controls image dataset; the selection, training, and evaluation of the models used; and the implementation of a robust algorithm utilizing the detection of elevator controls. The conclusion of the work discusses the suitability of the detection approach for the given task.

Conference papers on the topic "Tensorflow ObjectDetection API 2"

1

Zhang, Zejun, Yanming Yang, Xin Xia, David Lo, Xiaoxue Ren and John Grundy. "Unveiling the Mystery of API Evolution in Deep Learning Frameworks: A Case Study of Tensorflow 2". In 2021 IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). IEEE, 2021. http://dx.doi.org/10.1109/icse-seip52600.2021.00033.
