A selection of scholarly literature on the topic "Dataset VISION"

Format your source in APA, MLA, Chicago, Harvard, and other styles

Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Dataset VISION".

Next to every entry in the list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the source metadata.

Journal articles on the topic "Dataset VISION"

1

Scheuerman, Morgan Klaus, Alex Hanna, and Emily Denton. "Do Datasets Have Politics? Disciplinary Values in Computer Vision Dataset Development." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–37. http://dx.doi.org/10.1145/3476058.

Full text of the source
Abstract:
Data is a crucial component of machine learning. The field is reliant on data to train, validate, and test models. With increased technical capabilities, machine learning research has boomed in both academic and industry settings, and one major focus has been on computer vision. Computer vision is a popular domain of machine learning increasingly pertinent to real-world applications, from facial recognition in policing to object detection for autonomous vehicles. Given computer vision's propensity to shape machine learning research and impact human life, we seek to understand disciplinary practices around dataset documentation - how data is collected, curated, annotated, and packaged into datasets for computer vision researchers and practitioners to use for model tuning and development. Specifically, we examine what dataset documentation communicates about the underlying values of vision data and the larger practices and goals of computer vision as a field. To conduct this study, we collected a corpus of about 500 computer vision datasets, from which we sampled 114 dataset publications across different vision tasks. Through both a structured and thematic content analysis, we document a number of values around accepted data practices, what makes desirable data, and the treatment of humans in the dataset construction process. We discuss how computer vision dataset authors value efficiency at the expense of care; universality at the expense of contextuality; impartiality at the expense of positionality; and model work at the expense of data work. Many of the silenced values we identify sit in opposition with social computing practices. We conclude with suggestions on how to better incorporate silenced values into the dataset creation and curation process.
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Geiger, A., P. Lenz, C. Stiller, and R. Urtasun. "Vision meets robotics: The KITTI dataset." International Journal of Robotics Research 32, no. 11 (August 23, 2013): 1231–37. http://dx.doi.org/10.1177/0278364913491297.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Liew, Yu Liang, and Jeng Feng Chin. "Vision-based biomechanical markerless motion classification." Machine Graphics and Vision 32, no. 1 (February 16, 2023): 3–24. http://dx.doi.org/10.22630/mgv.2023.32.1.1.

Full text of the source
Abstract:
This study used stick model augmentation on single-camera motion video to create a markerless motion classification model of manual operations. All videos were augmented with a stick model composed of keypoints and lines by a program that incorporated the COCO dataset and the OpenCV and OpenPose modules to estimate the coordinates of body joints. The stick model data included the initial velocity, cumulative velocity, and acceleration for each body joint. The extracted motion vector data were normalized using three different techniques, and the resulting datasets were subjected to eight classifiers. The experiment involved four distinct motion sequences performed by eight participants. The random forest classifier performed best in classification accuracy on its min-max normalized dataset, obtaining a score of 81.80% on the dataset before random subsampling and a score of 92.37% on the resampled dataset. Meanwhile, the random subsampling method dramatically improved classification accuracy by removing noise data and replacing them with replicated instances to balance the classes. This research advances methodological and applied knowledge on the capture and classification of human motion using a single camera view.
Styles: APA, Harvard, Vancouver, ISO, etc.
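The feature pipeline this abstract describes (per-joint velocity, cumulative velocity, and acceleration derived from keypoint tracks, then min-max normalization) can be sketched in a few lines. A minimal sketch, assuming OpenPose has already produced a (frames, joints, 2) coordinate array; the function names are ours, not the paper's:

```python
import numpy as np

def joint_motion_features(keypoints, fps=30.0):
    """Velocity, cumulative velocity, and acceleration per joint from a
    (T, J, 2) array of pixel coordinates: T frames, J joints."""
    dt = 1.0 / fps
    velocity = np.diff(keypoints, axis=0) / dt            # (T-1, J, 2)
    acceleration = np.diff(velocity, axis=0) / dt         # (T-2, J, 2)
    speed = np.linalg.norm(velocity, axis=-1)             # scalar speed per joint
    cumulative = np.cumsum(speed, axis=0)                 # cumulative velocity
    return speed, cumulative, np.linalg.norm(acceleration, axis=-1)

def min_max_normalize(x):
    """Min-max normalization, one of the three techniques mentioned."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / np.where(hi > lo, hi - lo, 1.0)
```

The flattened, normalized features could then be fed to, e.g., sklearn's RandomForestClassifier, the best-performing of the eight classifiers tested.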
4

Alyami, Hashem, Abdullah Alharbi, and Irfan Uddin. "Lifelong Machine Learning for Regional-Based Image Classification in Open Datasets." Symmetry 12, no. 12 (December 16, 2020): 2094. http://dx.doi.org/10.3390/sym12122094.

Full text of the source
Abstract:
Deep learning algorithms are becoming common in solving different supervised and unsupervised learning problems. Over the last decade, different deep learning algorithms were developed to solve learning problems in domains such as computer vision, speech recognition, and machine translation. In computer vision research, deep learning has become overwhelmingly popular. When solving computer vision problems, we typically take a CNN (Convolutional Neural Network) that is trained from scratch, or sometimes a pre-trained model that is further fine-tuned on the available dataset. Training a model from scratch on new datasets suffers from catastrophic forgetting: when a new dataset is used to train the model, it forgets the knowledge it obtained from the previous dataset. In other words, additional datasets do not help the model increase its knowledge. The problem with pre-trained models is that most CNN models are trained on open datasets whose instances come from specific regions. This results in disturbing label predictions when the same model is applied to instances of datasets collected in a different region. There is therefore a need to reduce the geo-diversity gap in computer vision problems in the developing world. In this paper, we explore the problems of models trained from scratch, along with models pre-trained on a large dataset, using a dataset specifically developed to understand geo-diversity issues in open datasets. The dataset contains images of different wedding scenarios in South Asian countries. We developed a Lifelong CNN that can incrementally increase its knowledge, i.e., the CNN learns labels from the new dataset while retaining the existing knowledge of open datasets. The proposed model demonstrates the highest accuracy compared to models trained from scratch or pre-trained models.
Styles: APA, Harvard, Vancouver, ISO, etc.
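The paper does not detail the Lifelong CNN's mechanism, but rehearsal (replaying stored exemplars of the old dataset while training on the new one) is one standard remedy for the catastrophic forgetting described above. A minimal PyTorch sketch under that assumption; the function name and hyperparameters are illustrative, not the authors':

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader

def incremental_finetune(model, old_exemplars, new_dataset, epochs=5, lr=1e-4):
    """Fine-tune on a new dataset while replaying stored exemplars from the
    old one, so earlier knowledge is rehearsed rather than overwritten."""
    loader = DataLoader(ConcatDataset([old_exemplars, new_dataset]),
                        batch_size=32, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model
```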
5

Bai, Long, Liangyu Wang, Tong Chen, Yuanhao Zhao, and Hongliang Ren. "Transformer-Based Disease Identification for Small-Scale Imbalanced Capsule Endoscopy Dataset." Electronics 11, no. 17 (August 31, 2022): 2747. http://dx.doi.org/10.3390/electronics11172747.

Full text of the source
Abstract:
Vision Transformer (ViT) is emerging as a new leader in computer vision with its outstanding performance in many tasks (e.g., ImageNet-22k, JFT-300M). However, the success of ViT relies on pretraining on large datasets, and it is difficult to train ViT from scratch on a small-scale imbalanced capsule endoscopic image dataset. This paper adopts a Transformer neural network with a spatial pooling configuration. The Transformer's self-attention mechanism enables it to capture long-range information effectively, and exploring the spatial structure of ViT by pooling can further improve its performance on our small-scale capsule endoscopy dataset. We trained from scratch on two publicly available datasets for capsule endoscopy disease classification and obtained 79.15% accuracy on the multi-classification task of the Kvasir-Capsule dataset and 98.63% accuracy on the binary classification task of the Red Lesion Endoscopy dataset.
Styles: APA, Harvard, Vancouver, ISO, etc.
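One common form of the spatial pooling idea mentioned above is to classify from an average over all patch tokens instead of the single CLS token. A minimal sketch of that pattern (not the paper's exact configuration), where `tokens` is the (batch, patches, dim) output of any ViT backbone:

```python
import torch.nn as nn

class PooledViTHead(nn.Module):
    """Classify from spatially pooled patch tokens instead of the CLS token."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, tokens):                # tokens: (B, N, D)
        pooled = self.norm(tokens.mean(dim=1))  # average over the N patches
        return self.fc(pooled)
```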
6

Wang, Zhixue, Yu Zhang, Lin Luo, and Nan Wang. "AnoDFDNet: A Deep Feature Difference Network for Anomaly Detection." Journal of Sensors 2022 (August 16, 2022): 1–14. http://dx.doi.org/10.1155/2022/3538541.

Full text of the source
Abstract:
This paper proposes a novel anomaly detection (AD) approach for high-speed train images based on convolutional neural networks and the Vision Transformer. Unlike previous AD works, in which anomalies are identified in a single image using classification, segmentation, or object detection methods, the proposed method detects abnormal differences between two images taken at different times of the same region. In other words, we cast the anomaly detection problem on a single image into a difference detection problem on two images. The core idea of the proposed method is that an "anomaly" commonly represents an abnormal state rather than a specific object, and this state should be identified from a pair of images. In addition, we introduce a deep feature difference AD network (AnoDFDNet), which exploits the potential of the Vision Transformer and convolutional neural networks. To verify the effectiveness of the proposed AnoDFDNet, we gathered three datasets: a difference dataset (Diff dataset), a foreign body dataset (FB dataset), and an oil leakage dataset (OL dataset). Experimental results on the above datasets demonstrate the superiority of the proposed method. In terms of F1-score, AnoDFDNet obtained 76.24%, 81.04%, and 83.92% on the Diff, FB, and OL datasets, respectively.
Styles: APA, Harvard, Vancouver, ISO, etc.
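The generic shared-encoder feature-difference pattern the abstract describes can be sketched as below. This is not AnoDFDNet's exact architecture, only the pattern of encoding two images of the same region with one backbone and scoring their feature difference:

```python
import torch
import torch.nn as nn

class FeatureDifference(nn.Module):
    """Shared-backbone difference detector: the same encoder processes two
    images of one region taken at different times; the per-location feature
    difference is turned into an anomaly probability map."""
    def __init__(self, backbone, feat_dim):
        super().__init__()
        self.backbone = backbone                # any CNN/ViT feature extractor
        self.head = nn.Conv2d(feat_dim, 1, 1)   # per-location anomaly score

    def forward(self, img_t0, img_t1):
        diff = torch.abs(self.backbone(img_t0) - self.backbone(img_t1))
        return torch.sigmoid(self.head(diff))   # (B, 1, H, W) anomaly map
```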
7

Voytov, D. Y., S. B. Vasil’ev, and D. V. Kormilitsyn. "Technology development for determining tree species using computer vision." FORESTRY BULLETIN 27, no. 1 (February 2023): 60–66. http://dx.doi.org/10.18698/2542-1468-2023-1-60-66.

Full text of the source
Abstract:
A technology has been developed to identify the European white birch (Betula pendula Roth.) species in photographs. Differences among known neural-network classifiers with object detection were studied, and YOLOv4 was chosen as the most promising for further development of the technology. An image markup mechanism for forming training examples was studied, and an annotation method was developed. Two different datasets were formed to retrain the network, and the datasets were algorithmically enlarged by transforming images and applying filters. The difference in classifier results was measured: the accuracy when training exclusively on images containing European white birch was 35%, the accuracy when training on a dataset also containing other trees was 71%, and the accuracy when training on the entire dataset was 75%. To demonstrate the approach, birch trees were identified in photographs taken in the arboretum of the MF Bauman Moscow State Technical University. To improve the technology, additional training is recommended for the remaining tree species. The technology can be used for the inventory (taxation) of specific tree species; for the formation of labeled datasets for further development; and as the primary element in a tree image analysis system, to exclude third-party objects from the original image.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Ayana, Gelan, and Se-woon Choe. "BUViTNet: Breast Ultrasound Detection via Vision Transformers." Diagnostics 12, no. 11 (November 1, 2022): 2654. http://dx.doi.org/10.3390/diagnostics12112654.

Full text of the source
Abstract:
Convolutional neural networks (CNNs) have enhanced ultrasound image-based early breast cancer detection. Vision transformers (ViTs) have recently surpassed CNNs as the most effective method for natural image analysis. ViTs have proven their capability of incorporating more global information than CNNs at lower layers, and their skip connections are more powerful than those of CNNs, which endows ViTs with superior performance. However, the effectiveness of ViTs in breast ultrasound imaging has not yet been investigated. Here, we present BUViTNet, breast ultrasound detection via ViTs, where ViT-based multistage transfer learning is performed using ImageNet and cancer cell image datasets prior to transfer learning for classifying breast ultrasound images. We utilized two publicly available ultrasound breast image datasets, Mendeley and breast ultrasound images (BUSI), to train and evaluate our algorithm. The proposed method achieved the highest area under the receiver operating characteristics curve (AUC) of 1 ± 0, Matthew’s correlation coefficient (MCC) of 1 ± 0, and kappa score of 1 ± 0 on the Mendeley dataset. Furthermore, BUViTNet achieved the highest AUC of 0.968 ± 0.02, MCC of 0.961 ± 0.01, and kappa score of 0.959 ± 0.02 on the BUSI dataset. BUViTNet outperformed ViT trained from scratch, ViT-based conventional transfer learning, and CNN-based transfer learning in classifying breast ultrasound images (p < 0.01 in all cases). Our findings indicate that improved transformers are effective in analyzing breast images and can provide an improved diagnosis if used in clinical settings. Future work will consider the use of a wide range of datasets and parameters for optimized performance.
Styles: APA, Harvard, Vancouver, ISO, etc.
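The multistage transfer learning workflow described above can be sketched with the timm library, assuming it is available; the staging comments mark where each dataset would be used, and the exact training protocol of the paper is not reproduced here:

```python
import timm
import torch.nn as nn

# Stage 1: start from ImageNet-pretrained ViT weights (via the timm library).
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)

# ... fine-tune on the intermediate (cancer cell image) dataset here ...

# Stage 2: replace the classifier head and fine-tune again on ultrasound images,
# so the backbone carries knowledge forward from both earlier stages.
model.head = nn.Linear(model.head.in_features, 2)  # fresh head for the new task
# ... fine-tune on the Mendeley / BUSI breast ultrasound dataset here ...
```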
9

Hanji, Param, Muhammad Z. Alam, Nicola Giuliani, Hu Chen, and Rafał K. Mantiuk. "HDR4CV: High Dynamic Range Dataset with Adversarial Illumination for Testing Computer Vision Methods." Journal of Imaging Science and Technology 65, no. 4 (July 1, 2021): 40404–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2021.65.4.040404.

Full text of the source
Abstract:
Benchmark datasets used for testing computer vision (CV) methods often contain little variation in illumination. The methods that perform well on these datasets have been observed to fail under challenging illumination conditions encountered in the real world, in particular, when the dynamic range of a scene is high. The authors present a new dataset for evaluating CV methods in challenging illumination conditions such as low light, high dynamic range, and glare. The main feature of the dataset is that each scene has been captured in all the adversarial illuminations. Moreover, each scene includes an additional reference condition with uniform illumination, which can be used to automatically generate labels for the tested CV methods. We demonstrate the usefulness of the dataset in a preliminary study by evaluating the performance of popular face detection, optical flow, and object detection methods under adversarial illumination conditions. We further assess whether the performance of these applications can be improved if a different transfer function is used.
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Li, Jing, and Xueping Luo. "Malware Family Classification Based on Vision Transformer." 電腦學刊 34, no. 1 (February 2023): 87–99. http://dx.doi.org/10.53106/199115992023023401007.

Full text of the source
Abstract:
Cybersecurity worries intensify as Big Data, the Internet of Things, and 5G technologies develop. Based on code reuse technologies, malware creators are producing new malware quickly, and new malware is continually endangering the effectiveness of existing detection methods. We propose a vision transformer-based approach for malware image identification because, in contrast to CNN, the Transformer's self-attentive process is not constrained by local interactions and can simultaneously compute long-range relationships. We use ViT-B/16 weights pre-trained on the ImageNet21k dataset to improve model generalization capability and fine-tune them for the malware image classification task. This work demonstrates that (i) a pure attention mechanism applies to malware recognition, and (ii) the Transformer can be used instead of traditional CNN for malware image recognition. We train and assess our models using the MalImg dataset and the BIG2015 dataset in this paper. Our experimental evaluation found that the recognition accuracy of transfer learning-based ViT for MalImg samples and BIG2015 samples is 99.14% and 98.22%, respectively. This study shows that training ViT models using transfer learning can perform better than CNN in malware family classification.
Styles: APA, Harvard, Vancouver, ISO, etc.
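Work in this line rests on the MalImg-style byte-to-image representation: a binary's raw bytes are reshaped into a grayscale image that an image classifier such as ViT can consume. A minimal sketch (in the original scheme the image width varies with file size; a fixed width is assumed here for brevity):

```python
import numpy as np
from PIL import Image

def malware_to_image(path, width=256):
    """Render a binary's bytes as a grayscale image: each byte becomes
    one pixel intensity, rows are consecutive `width`-byte chunks."""
    data = np.fromfile(path, dtype=np.uint8)
    rows = len(data) // width
    return Image.fromarray(data[: rows * width].reshape(rows, width), mode="L")
```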

Dissertations on the topic "Dataset VISION"

1

Toll, Abigail. "Matrices of Vision : Sonic Disruption of a Dataset." Thesis, Kungl. Musikhögskolan, Institutionen för komposition, dirigering och musikteori, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kmh:diva-4152.

Full text of the source
Abstract:
Matrices of Vision is a sonic deconstruction of a higher education dataset compiled by the influential Swedish higher education authority Universitetskanslersämbetet (UKÄ). The title Matrices of Vision and project theme is inspired by Indigenous cyberfeminist, scholar and artist Tiara Roxanne’s work into data colonialism. The method explores how practical applications of sound and theory can be used to meditate on political struggles and envision emancipatory modes of creation that hold space through a music-making practice. The artistic approach uses just intonation as a system, or grid of fixed points, which it refuses. The pitch strategy diverges from this approach by way of its political motivations: it disobeys just intonation’s rigid structure through practice and breaks with its order as a way to explore its experiential qualities. The approach seeks to engage beyond the structures designed to regulate behaviors and ways of perceiving and rather hold space for a multiplicity of viewpoints which are explored through cacophony, emotion and deep listening techniques.
Styles: APA, Harvard, Vancouver, ISO, etc.
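As a concrete illustration of the "grid of fixed points" that just intonation provides (a worked example, not material from the thesis): a 5-limit just-intonation major scale is a set of whole-number frequency ratios applied to a tonic.

```python
# 5-limit just intonation ratios for a major scale, relative to the tonic.
RATIOS = {"1": 1/1, "2": 9/8, "3": 5/4, "4": 4/3,
          "5": 3/2, "6": 5/3, "7": 15/8, "8": 2/1}

def just_scale(tonic_hz=220.0):
    """Frequencies of a just-intonation scale: a grid of fixed pitch points."""
    return {degree: tonic_hz * ratio for degree, ratio in RATIOS.items()}

print(just_scale())  # e.g. degree "5" is a pure fifth at 330.0 Hz
```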
2

Berriel, Rodrigo Ferreira. "Vision-based ego-lane analysis system : dataset and algorithms." Mestrado em Informática, 2016. http://repositorio.ufes.br/handle/10/6775.

Full text of the source
Abstract:
Lane detection and analysis are important and challenging tasks in advanced driver assistance systems and autonomous driving. These tasks are required in order to help autonomous and semi-autonomous vehicles operate safely. Decreasing costs of vision sensors and advances in embedded hardware boosted lane-related research – detection, estimation, tracking, etc. – in the past two decades. The interest in this topic has increased even more with the demand for advanced driver assistance systems (ADAS) and self-driving cars. Although extensively studied independently, there is still a need for studies that propose a combined solution for the multiple problems related to the ego-lane, such as lane departure warning (LDW), lane change detection, lane marking type (LMT) classification, road markings detection and classification, and detection of adjacent lanes presence. This work proposes a real-time Ego-Lane Analysis System (ELAS) capable of estimating ego-lane position, classifying LMTs and road markings, performing LDW and detecting lane change events. The proposed vision-based system works on a temporal sequence of images. Lane marking features are extracted in perspective and Inverse Perspective Mapping (IPM) images and combined to increase robustness. The final estimated lane is modeled as a spline using a combination of methods (Hough lines, Kalman filter and Particle filter). Based on the estimated lane, all other events are detected. Moreover, the proposed system was integrated for experimentation into an autonomous car that is being developed by the High Performance Computing Laboratory of the Universidade Federal do Espírito Santo. To validate the proposed algorithms and cover the lack of lane datasets in the literature, a new dataset with more than 20 different scenes (in more than 15,000 frames) and considering a variety of scenarios (urban road, highways, traffic, shadows, etc.) was created. The dataset was manually annotated and made publicly available to enable evaluation of several events that are of interest for the research community (i.e. lane estimation, change, and centering; road markings; intersections; LMTs; crosswalks and adjacent lanes). Furthermore, the system was also validated qualitatively based on the integration with the autonomous vehicle. ELAS achieved high detection rates in all real-world events and proved to be ready for real-time applications.
Styles: APA, Harvard, Vancouver, ISO, etc.
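Of the method combination named above, the Hough-lines stage is the most self-contained. A minimal OpenCV sketch of extracting candidate lane-marking segments from a forward-facing frame (the region-of-interest triangle and thresholds are illustrative choices, not ELAS parameters):

```python
import cv2
import numpy as np

def candidate_lane_lines(bgr_frame):
    """Edge-detect the frame, mask a triangular road region, and return
    probabilistic Hough line segments as lane-marking candidates."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    roi = np.zeros_like(edges)
    triangle = np.array([(0, h), (w, h), (w // 2, h // 2)], dtype=np.int32)
    cv2.fillPoly(roi, [triangle], 255)
    return cv2.HoughLinesP(cv2.bitwise_and(edges, roi), 1, np.pi / 180,
                           threshold=40, minLineLength=30, maxLineGap=20)
```

In a full pipeline, segments like these would seed a Kalman or particle filter that tracks the lane spline over the temporal image sequence.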
3

Ragonesi, Ruggero. "Addressing Dataset Bias in Deep Neural Networks." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1069001.

Full text of the source
Abstract:
Deep Learning has achieved tremendous success in recent years in several areas such as image classification, text translation, and autonomous agents, to name a few. Deep Neural Networks are able to learn non-linear features in a data-driven fashion from complex, large scale datasets to solve tasks. However, some fundamental issues remain to be fixed: the kind of data that is provided to the neural network directly influences its capability to generalize. This is especially true when training and test data come from different distributions (the so-called domain gap or domain shift problem): in this case, the neural network may learn a data representation that is representative of the training data but not of the test data, thus performing poorly when deployed in actual scenarios. The domain gap problem is addressed by so-called Domain Adaptation, for which a large literature was recently developed. In this thesis, we first present a novel method to perform Unsupervised Domain Adaptation. Starting from the typical scenario in which we dispose of labeled source distributions and an unlabeled target distribution, we pursue a pseudo-labeling approach to assign a label to the target data, and then, in an iterative way, we refine them using Generative Adversarial Networks. Subsequently, we face the debiasing problem. Simply speaking, bias occurs when there are factors in the data which are spuriously correlated with the task label, e.g., the background, which might be a strong clue to guess what class is depicted in an image. When this happens, neural networks may erroneously learn such spurious correlations as predictive factors, and may therefore fail when deployed on different scenarios. Learning a debiased model can be done using supervision regarding the type of bias affecting the data, or it can be done without any annotation about what the spurious correlations are. We tackle the problem of supervised debiasing -- where a ground truth annotation for the bias is given -- under the lens of information theory. We design a neural network architecture that learns to solve the task while achieving, at the same time, statistical independence of the data embedding with respect to the bias label. We finally address the unsupervised debiasing problem, in which no bias annotation is available. We address this challenging problem with a two-stage approach: we first split the training dataset coarsely into two subsets, samples that exhibit spurious correlations and those that do not. Second, we learn a feature representation that can accommodate both subsets and an augmented version of them.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Xie, Shuang. "A Tiny Diagnostic Dataset and Diverse Modules for Learning-Based Optical Flow Estimation." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39634.

Full text of the source
Abstract:
Recent work has shown that flow estimation from a pair of images can be formulated as a supervised learning task to be resolved with convolutional neural networks (CNN). However, basic straightforward CNN methods estimate optical flow with motion and occlusion boundary blur. To tackle this problem, we propose a tiny diagnostic dataset called FlowClevr to quickly evaluate various modules that can be used to enhance standard CNN architectures. Based on experiments on the FlowClevr dataset, we find that a deformable module can improve model prediction accuracy by around 30% to 100% in most tasks and, more significantly, reduce boundary blur. Based on these results, we design modifications to various existing network architectures, improving their performance. Compared with the original model, the model with the deformable module clearly reduces boundary blur and achieves a large improvement on the MPI Sintel dataset, an omni-directional stereo (ODS) and a novel omni-directional optical flow dataset.
Styles: APA, Harvard, Vancouver, ISO, etc.
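The deformable module evaluated in the thesis has an off-the-shelf counterpart in torchvision. A minimal sketch of a deformable 3x3 convolution block (not the thesis's exact module), where a small convolution predicts per-location sampling offsets so the kernel can follow motion boundaries instead of a fixed grid:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """3x3 deformable convolution: an auxiliary conv predicts 2 offsets
    (dx, dy) per kernel tap, fed to DeformConv2d alongside the input."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.offsets = nn.Conv2d(in_ch, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.deform(x, self.offsets(x))

y = DeformableBlock(64, 64)(torch.randn(1, 64, 32, 32))  # -> (1, 64, 32, 32)
```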
5

Nett, Ryan. "Dataset and Evaluation of Self-Supervised Learning for Panoramic Depth Estimation." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2234.

Full text of the source
Abstract:
Depth detection is a very common computer vision problem. It shows up primarily in robotics, automation, or 3D visualization domains, as it is essential for converting images to point clouds. One of the poster child applications is self-driving cars. Currently, the best methods for depth detection are either very expensive, like LIDAR, or require precise calibration, like stereo cameras. These costs have given rise to attempts to detect depth from a monocular camera (a single camera). While this is possible, it is harder than LIDAR or stereo methods since depth can't be measured from monocular images; it has to be inferred. A good example is covering one eye: you still have some idea how far away things are, but it's not exact. Neural networks are a natural fit for this. Here, we build on previous neural network methods by applying a recent state-of-the-art model to panoramic images in addition to pinhole ones and performing a comparative evaluation. First, we create a simulated depth detection dataset that lends itself to panoramic comparisons and contains pre-made cylindrical and spherical panoramas. We then modify monodepth2 to support cylindrical and cubemap panoramas, incorporating current best practices for depth detection on those panorama types, and evaluate its performance for each type of image using our dataset. We also consider the resources used in training and other qualitative factors.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Andruccioli, Matteo. "Previsione del Successo di Prodotti di Moda Prima della Commercializzazione: un Nuovo Dataset e Modello di Vision-Language Transformer." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24956/.

Full text of the source
Abstract:
Unlike traditional commerce, in online commerce the customer cannot touch or try the product. The purchase decision is made on the basis of the information the seller provides through the title, descriptions, and images, and on the reviews of previous customers. It is therefore possible to predict how well a product will sell from this information. Most of the solutions currently found in the literature make predictions based on reviews, or analyze the language used in the descriptions to understand how it influences sales. Reviews, however, are not available to sellers before a product is put on the market; moreover, using only textual data neglects the influence of images. The goal of this thesis is to use machine learning models to predict the sales success of a product from the information available to the seller before commercialization. This is done by introducing a cross-modal model based on a Vision-Language Transformer capable of performing classification. A model of this kind can help sellers maximize the sales success of their products. Because the literature lacks datasets containing information on products sold online together with an indication of their sales success, this work also includes the construction of a dataset suitable for testing the developed solution. The dataset contains a list of 78,300 fashion products sold on Amazon; for each of them, the main information made available by the seller is reported, together with a measure of market success. The latter is derived from the ratings expressed by buyers and from the product's position in a ranking based on the number of units sold.
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Joubert, Deon. "Saliency grouped landmarks for use in vision-based simultaneous localisation and mapping." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/40834.

Full text of the source
Abstract:
The effective application of mobile robotics requires that robots be able to perform tasks with an extended degree of autonomy. Simultaneous localisation and mapping (SLAM) aids automation by providing a robot with the means of exploring an unknown environment while being able to position itself within this environment. Vision-based SLAM benefits from the large amounts of data produced by cameras but requires intensive processing of these data to obtain useful information. In this dissertation it is proposed that, as the saliency content of an image distils a large amount of the information present, it can be used to benefit vision-based SLAM implementations. The proposal is investigated by developing a new landmark for use in SLAM. Image keypoints are grouped together according to the saliency content of an image to form the new landmark. A SLAM system utilising this new landmark is implemented in order to demonstrate the viability of using the landmark. The landmark extraction, data filtering and data association routines necessary to make use of the landmark are discussed in detail. A Microsoft Kinect is used to obtain video images as well as 3D information of a viewed scene. The system is evaluated using computer simulations and real-world datasets from indoor structured environments. The datasets used are both newly generated and freely available benchmarking ones.
Dissertation (MEng)--University of Pretoria, 2013.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Horečný, Peter. "Metody segmentace obrazu s malými trénovacími množinami." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-412996.

Full text of the source
Abstract:
The goal of this thesis was to propose an image segmentation method capable of an effective segmentation process with small datasets. The recently published ODE neural network was used for this method, because its features should provide better generalization in the case of tasks with only small datasets available. The proposed ODE-UNet network was created by combining the UNet architecture with the ODE neural network, using the benefits of both networks. ODE-UNet reached the following results on the ISBI dataset: Rand: 0.950272 and Info: 0.978061. These results are better than those obtained from the UNet model, which was also tested in this thesis, but it has been proven that the state of the art cannot be outperformed using ODE neural networks. However, the advantages of the ODE neural network over the tested UNet architecture and other methods were confirmed, and there is still room for improvement by extending this method.
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Tagebrand, Emil, and Ek Emil Gustafsson. "Dataset Generation in a Simulated Environment Using Real Flight Data for Reliable Runway Detection Capabilities." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54974.

Full text of the source
Abstract:
Implementing object detection methods for runway detection during landing approaches is limited in the safety-critical aircraft domain. This limitation is due to the difficulty that comes with verification of the design and the ability to understand how the object detection behaves during operation. During operation, object detection needs to consider the aircraft's position, environmental factors, different runways and aircraft attitudes. Training such an object detection model requires a comprehensive dataset that defines the features mentioned above. The features' impact on the detection capabilities needs to be analysed to ensure the correct distribution of images in the dataset. Gathering real images for these scenarios would be costly yet necessary given the aviation industry's safety standards. Synthetic data can be used to limit the cost and time required to create a dataset where all features occur. By using synthesised data in the form of datasets generated in a simulated environment, these features could be applied to the dataset directly. The features could also be implemented separately in different datasets and compared to each other to analyse their impact on the object detection capabilities. By utilising this method for the features mentioned above, the following results could be determined. For object detection to consider most landing cases and different runways, the dataset needs to replicate real flight data and generate additional extreme landing cases. The dataset also needs to consider landings at different altitudes, which can differ at different airports. Environmental conditions such as clouds and time of day reduce detection capabilities far from the runway, while attitude and runway appearance reduce them at close range. Runway appearance also affected detection at long range, but only for darker runways.
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Sievert, Rolf. "Instance Segmentation of Multiclass Litter and Imbalanced Dataset Handling : A Deep Learning Model Comparison." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-175173.

Full text of the source
Abstract:
Instance segmentation has great potential for improving the current state of littering by autonomously detecting and segmenting different categories of litter. With this information, litter could, for example, be geotagged to aid litter pickers or to give precise locational information to unmanned vehicles for autonomous litter collection. Land-based litter instance segmentation is a relatively unexplored field, and this study aims to give a comparison of the instance segmentation models Mask R-CNN and DetectoRS using the multiclass litter dataset called Trash Annotations in Context (TACO) in conjunction with the Common Objects in Context precision and recall scores. TACO is an imbalanced dataset, and therefore imbalanced data-handling is addressed, exercising a second-order relation iterative stratified split, and additionally oversampling when training Mask R-CNN. Mask R-CNN without oversampling resulted in a segmentation mAP of 0.127, and with oversampling 0.163. DetectoRS achieved a segmentation mAP of 0.167 and improves the segmentation mAP of small objects most noticeably, by a factor of at least 2, which is important within the litter domain since small objects such as cigarettes are overrepresented. In contrast, oversampling with Mask R-CNN does not seem to improve the general precision of small and medium objects, but only improves the detection of large objects. It is concluded that DetectoRS improves results compared to Mask R-CNN, as does oversampling. However, using a dataset that cannot have an all-class representation for train, validation, and test splits, together with an iterative stratification that does not guarantee all-class representations, makes it hard for future works to do exact comparisons to this study. Results are therefore approximate considering using all categories, since 12 categories are missing from the test set, 4 of which were impossible to split into train, validation, and test sets. Further image collection and annotation to mitigate the imbalance would most noticeably improve results, since results depend on class-averaged values. Oversampling with DetectoRS would also help improve results. There is also the option to combine the two datasets TACO and MJU-Waste to enforce training of more categories.
Styles: APA, Harvard, Vancouver, ISO, etc.
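Oversampling of the kind used with Mask R-CNN above can be implemented generically in PyTorch by weighting the sampler inversely to class frequency. A minimal sketch for a classification-style loader (detection pipelines realize the same idea differently, e.g. via repeat-factor sampling):

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def oversampled_loader(dataset, labels, batch_size=8):
    """Draw minority-class samples more often, so every class is seen at
    roughly equal frequency during training."""
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels)
    weights = (1.0 / class_counts.float())[labels]   # one weight per sample
    sampler = WeightedRandomSampler(weights, num_samples=len(labels),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```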

Books on the topic "Dataset VISION"

1

Geiger, Andreas, Joel Janai, Fatma Güney, and Aseem Behl. Computer Vision for Autonomous Vehicles: Problems, Datasets and State of the Art. Now Publishers, 2020.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Chirimuuta, Mazviita. The Development and Application of Efficient Coding Explanation in Neuroscience. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198777946.003.0009.

Full text of the source
Abstract:
In the philosophy of neuroscience, much attention has been paid to mechanistic causal explanations, both in terms of their theoretical virtues, and their application in potential therapeutic interventions. Non-mechanistic, non-causal explanatory models, it is often assumed, would have no role to play in any practical endeavors. This assumption ignores the fact that many of the non-mechanistic explanatory models which have been successfully employed in neuroscience have their origins in engineering and applied sciences, and are central to many new neuro-technologies. This chapter examines the development of explanations of lateral inhibition in the early visual system as implementing an efficient code for converting photoreceptor input into a data-compressed output from the eye to the brain. Two applications of the efficient coding approach are considered: in streamlining the vast datasets of current neuroscience by offering unifying principles, and in building artificial systems that replicate vision and other cognitive functions.
Styles: APA, Harvard, Vancouver, ISO, etc.
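The lateral inhibition discussed in this chapter is classically modeled as a center-surround (difference-of-Gaussians) filter, which decorrelates, i.e. data-compresses, natural images. A worked sketch of that standard model (illustrative sigmas, not values from the chapter):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lateral_inhibition(image, center_sigma=1.0, surround_sigma=3.0):
    """Difference-of-Gaussians center-surround response: the narrow 'center'
    excites while the broad 'surround' inhibits, so uniform regions cancel
    out and only intensity changes (edges) are transmitted."""
    image = image.astype(float)
    center = gaussian_filter(image, center_sigma)
    surround = gaussian_filter(image, surround_sigma)
    return center - surround
```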

Book chapters on the topic "Dataset VISION"

1

Damen, Dima, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, et al. "Scaling Egocentric Vision: The Dataset." In Computer Vision – ECCV 2018, 753–71. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01225-0_44.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Jalal, Ahsan, and Usman Tariq. "The LFW-Gender Dataset." In Computer Vision – ACCV 2016 Workshops, 531–40. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54526-4_39.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Zhang, Lvmin, Yi Ji, and Chunping Liu. "DanbooRegion: An Illustration Region Dataset." In Computer Vision – ECCV 2020, 137–54. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58601-0_9.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Antequera, Manuel López, Pau Gargallo, Markus Hofinger, Samuel Rota Bulò, Yubin Kuang, and Peter Kontschieder. "Mapillary Planet-Scale Depth Dataset." In Computer Vision – ECCV 2020, 589–604. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58536-5_35.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Hu, Yang, Dong Yi, Shengcai Liao, Zhen Lei, and Stan Z. Li. "Cross Dataset Person Re-identification." In Computer Vision - ACCV 2014 Workshops, 650–64. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16634-6_47.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Khosla, Aditya, Tinghui Zhou, Tomasz Malisiewicz, Alexei A. Efros, and Antonio Torralba. "Undoing the Damage of Dataset Bias." In Computer Vision – ECCV 2012, 158–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33718-5_12.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Aliakbarian, Mohammad Sadegh, Fatemeh Sadat Saleh, Mathieu Salzmann, Basura Fernando, Lars Petersson, and Lars Andersson. "VIENA²: A Driving Anticipation Dataset." In Computer Vision – ACCV 2018, 449–66. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20887-5_28.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Neumann, Lukáš, Michelle Karg, Shanshan Zhang, Christian Scharfenberger, Eric Piegert, Sarah Mistr, Olga Prokofyeva, et al. "NightOwls: A Pedestrians at Night Dataset." In Computer Vision – ACCV 2018, 691–705. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20887-5_43.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Follmann, Patrick, Tobias Böttger, Philipp Härtinger, Rebecca König, and Markus Ulrich. "MVTec D2S: Densely Segmented Supermarket Dataset." In Computer Vision – ECCV 2018, 581–97. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01249-6_35.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Tommasi, Tatiana, and Tinne Tuytelaars. "A Testbed for Cross-Dataset Analysis." In Computer Vision - ECCV 2014 Workshops, 18–31. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16199-0_2.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Conference papers on the topic "Dataset VISION"

1

Ammirato, Phil, Alexander C. Berg, and Jana Kosecka. "Active Vision Dataset Benchmark." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2018. http://dx.doi.org/10.1109/cvprw.2018.00277.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Bama, B. Sathya, S. Mohamed Mansoor Roomi, D. Sabarinathan, M. Senthilarasi, and G. Manimala. "Idol dataset." In ICVGIP '21: Indian Conference on Computer Vision, Graphics and Image Processing. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3490035.3490295.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Ramisa, Arnau, Fei Yan, Francesc Moreno-Noguer, and Krystian Mikolajczyk. "The BreakingNews Dataset." In Proceedings of the Sixth Workshop on Vision and Language. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/w17-2005.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Tursun, Osman, and Sinan Kalkan. "METU dataset: A big dataset for benchmarking trademark retrieval." In 2015 14th IAPR International Conference on Machine Vision Applications (MVA). IEEE, 2015. http://dx.doi.org/10.1109/mva.2015.7153243.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Delgado, Kevin, Juan Manuel Origgi, Tania Hasanpoor, Hao Yu, Danielle Allessio, Ivon Arroyo, William Lee, Margrit Betke, Beverly Woolf, and Sarah Adel Bargal. "Student Engagement Dataset." In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2021. http://dx.doi.org/10.1109/iccvw54120.2021.00405.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Beigpour, Shida, MaiLan Ha, Sven Kunz, Andreas Kolb, and Volker Blanz. "Multi-view Multi-illuminant Intrinsic Dataset." In British Machine Vision Conference 2016. British Machine Vision Association, 2016. http://dx.doi.org/10.5244/c.30.10.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Shugrina, Maria, Ziheng Liang, Amlan Kar, Jiaman Li, Angad Singh, Karan Singh, and Sanja Fidler. "Creative Flow+ Dataset." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.00553.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Ghafourian, Sarvenaz, Ramin Sharifi, and Amirali Baniasadi. "Facial Emotion Recognition in Imbalanced Datasets." In 9th International Conference on Artificial Intelligence and Applications (AIAPP 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120920.

Full text of the source
Abstract:
Computer vision has seen wide usage in recent years. One of the areas of computer vision that has been studied is facial emotion recognition, which plays a crucial role in interpersonal communication. This paper tackles the problem of intraclass variances in the face images of emotion recognition datasets. We test the system on augmented datasets including CK+, EMOTIC, and KDEF dataset samples. After modifying our dataset using the SMOTETomek approach, we observe improvement over the default method.
Styles: APA, Harvard, Vancouver, ISO, etc.
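The SMOTETomek approach named above is available in the imbalanced-learn library: SMOTE synthesizes minority-class samples while Tomek-link removal cleans the class boundary. A self-contained sketch using synthetic data as a stand-in for flattened face features from CK+, EMOTIC, or KDEF:

```python
from collections import Counter
from imblearn.combine import SMOTETomek
from sklearn.datasets import make_classification

# Stand-in for flattened face descriptors with a 9:1 class imbalance.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

X_res, y_res = SMOTETomek(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))  # minority class is rebalanced
```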
9

Saint, Alexandre, Eman Ahmed, Abd El Rahman Shabayek, Kseniya Cherenkova, Gleb Gusev, Djamila Aouada, and Bjorn Ottersten. "3DBodyTex: Textured 3D Body Dataset." In 2018 International Conference on 3D Vision (3DV). IEEE, 2018. http://dx.doi.org/10.1109/3dv.2018.00063.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Tausch, Frederic, Simon Stock, Julian Fricke, and Olaf Klein. "Bumblebee Re-Identification Dataset." In 2020 IEEE Winter Applications of Computer Vision Workshops (WACVW). IEEE, 2020. http://dx.doi.org/10.1109/wacvw50321.2020.9096909.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Reports of organizations on the topic "Dataset VISION"

1

Ferrell, Regina, Deniz Aykac, Thomas Karnowski, and Nisha Srinivas. A Publicly Available, Annotated Dataset for Naturalistic Driving Study and Computer Vision Algorithm Development. Office of Scientific and Technical Information (OSTI), January 2021. http://dx.doi.org/10.2172/1760158.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Bragdon, Sophia, Vuong Truong, and Jay Clausen. Environmentally informed buried object recognition. Engineer Research and Development Center (U.S.), November 2022. http://dx.doi.org/10.21079/11681/45902.

Full text of the source
Abstract:
The ability to detect and classify buried objects using thermal infrared imaging is affected by the environmental conditions at the time of imaging, which leads to an inconsistent probability of detection. For example, periods of dense overcast or recent precipitation events result in the suppression of the soil temperature difference between the buried object and soil, thus preventing detection. This work introduces an environmentally informed framework to reduce the false alarm rate in the classification of regions of interest (ROIs) in thermal IR images containing buried objects. Using a dataset that consists of thermal images containing buried objects paired with the corresponding environmental and meteorological conditions, we employ a machine learning approach to determine which environmental conditions are the most impactful on the visibility of the buried objects. We find the key environmental conditions include incoming shortwave solar radiation, soil volumetric water content, and average air temperature. For each image, ROIs are computed using a computer vision approach and these ROIs are coupled with the most important environmental conditions to form the input for the classification algorithm. The environmentally informed classification algorithm produces a decision on whether the ROI contains a buried object by simultaneously learning on the ROIs with a classification neural network and on the environmental data using a tabular neural network. On a given set of ROIs, we have shown that the environmentally informed classification approach improves the detection of buried objects within the ROIs.
Styles: APA, Harvard, Vancouver, ISO, etc.
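The abstract describes a joint model that learns from image ROIs and tabular environmental data simultaneously. A minimal PyTorch sketch of that two-branch pattern (layer sizes and the three environmental features are our assumptions, not the report's architecture):

```python
import torch
import torch.nn as nn

class EnvInformedClassifier(nn.Module):
    """A small CNN embeds the thermal ROI while an MLP embeds environmental
    measurements (e.g. solar radiation, soil moisture, air temperature);
    the fused vector is classified as object / no object."""
    def __init__(self, n_env_features=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())            # -> (B, 32)
        self.tab = nn.Sequential(nn.Linear(n_env_features, 16), nn.ReLU())
        self.head = nn.Linear(32 + 16, 2)

    def forward(self, roi, env):
        return self.head(torch.cat([self.cnn(roi), self.tab(env)], dim=1))
```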
3

Chen, Z., S. E. Grasby, C. Deblonde, and X. Liu. AI-enabled remote sensing data interpretation for geothermal resource evaluation as applied to the Mount Meager geothermal prospective area. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/330008.

Full text of the source
Abstract:
The objective of this study is to search for features and indicators from the identified geothermal resource sweet spot in the south Mount Meager area that are applicable to other volcanic complexes in the Garibaldi Volcanic Belt. A Landsat 8 multi-spectral band dataset, comprising a total of 57 images ranging from visible through infrared to thermal infrared frequency channels and covering different years and seasons, was selected. Specific features that are indicative of high geothermal heat flux, fractured permeable zones, and groundwater circulation, the three key elements in exploring for geothermal resources, were extracted. The thermal infrared images from different seasons show the occurrence of high temperature anomalies and their association with volcanic and intrusive bodies, and reveal the variation in location and intensity of the anomalies with time over four seasons, allowing inference of specific heat transfer mechanisms. Linear features automatically extracted from various frequency bands, using AI/ML algorithms developed for computer vision, show various linear segment groups that are likely surface expressions associated with local volcanic activities, regional deformation and slope failure. In conjunction with regional structural models and field observations, the anomalies and features from remotely sensed images were interpreted to provide new insights for improving our understanding of the Mount Meager geothermal system and its characteristics. After validation, the methods developed and indicators identified in this study can be applied to other volcanic complexes in the Garibaldi or other volcanic belts for geothermal resource reconnaissance.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Huang, Haohang, Erol Tutumluer, Jiayi Luo, Kelin Ding, Issam Qamhia, and John Hart. 3D Image Analysis Using Deep Learning for Size and Shape Characterization of Stockpile Riprap Aggregates—Phase 2. Illinois Center for Transportation, September 2022. http://dx.doi.org/10.36501/0197-9191/22-017.

Full text of the source
Abstract:
Riprap rock and aggregates are extensively used in structural, transportation, geotechnical, and hydraulic engineering applications. Field determination of morphological properties of aggregates such as size and shape can greatly facilitate the quality assurance/quality control (QA/QC) process for proper aggregate material selection and engineering use. Many aggregate imaging approaches have been developed to characterize the size and morphology of individual aggregates by computer vision. However, 3D field characterization of aggregate particle morphology is challenging both during the quarry production process and at construction sites, particularly for aggregates in stockpile form. This research study presents a 3D reconstruction-segmentation-completion approach based on deep learning techniques by combining three developed research components: field 3D reconstruction procedures, 3D stockpile instance segmentation, and 3D shape completion. The approach was designed to reconstruct aggregate stockpiles from multi-view images, segment the stockpile into individual instances, and predict the unseen side of each instance (particle) based on the partial visible shapes. Based on the dataset constructed from individual aggregate models, a state-of-the-art 3D instance segmentation network and a 3D shape completion network were implemented and trained, respectively. The application of the integrated approach was demonstrated on re-engineered stockpiles and field stockpiles. The validation of results using ground-truth measurements showed satisfactory algorithm performance in capturing and predicting the unseen sides of aggregates. The algorithms are integrated into a software application with a user-friendly graphical user interface. Based on the findings of this study, this stockpile aggregate analysis approach is envisioned to provide efficient field evaluation of aggregate stockpiles by offering convenient and reliable solutions for on-site QA/QC tasks of riprap rock and aggregate stockpiles.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Тарасова, Олена Юріївна, and Ірина Сергіївна Мінтій. Web application for facial wrinkle recognition. Кривий Ріг, КДПУ, 2022. http://dx.doi.org/10.31812/123456789/7012.

Full text of the source
Abstract:
Facial recognition technology is named one of the main trends of recent years. It has a wide range of applications, such as access control, biometrics, video surveillance and many other interactive human-machine systems. Facial landmarks can be described as key characteristics of the human face. Commonly found landmarks are, for example, the eyes, the nose or the mouth corners. Analyzing these key points is useful for a variety of computer vision use cases, including biometrics, face tracking, and emotion detection. Different methods produce different facial landmarks; some methods use only basic facial landmarks, while others bring out more detail. We use the 68-point facial markup, which is a common format for many datasets. Cloud computing creates all the necessary conditions for the successful implementation of even the most complex tasks. We created a web application using the Django framework, the Python language, and the OpenCV and Dlib libraries to recognize faces in images. The purpose of our work is to create a software system for recognizing faces in photos and identifying facial wrinkles. An algorithm for determining the presence, location, and geometry of various types of wrinkles on the face has been implemented.
Styles: APA, Harvard, Vancouver, ISO, etc.
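The 68-point markup mentioned above is typically obtained with dlib's face detector plus its pretrained shape predictor. A minimal sketch (the image filename is a placeholder; the predictor weights file is dlib's standard 68-point model, downloaded separately):

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# Standard 68-point landmark model trained on the iBUG 300-W dataset.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

gray = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2GRAY)
for face in detector(gray):
    shape = predictor(gray, face)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    # Points 17-26 (brows) and 36-47 (eyes) bound the forehead and
    # periorbital regions, where wrinkles are typically searched for.
    print(points[:5])
```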
6

Hudgens, Bian, Jene Michaud, Megan Ross, Pamela Scheffler, Anne Brasher, Megan Donahue, Alan Friedlander, et al. Natural resource condition assessment: Puʻuhonua o Hōnaunau National Historical Park. National Park Service, September 2022. http://dx.doi.org/10.36967/2293943.

Full text of the source
Abstract:
Natural Resource Condition Assessments (NRCAs) evaluate current conditions of natural resources and resource indicators in national park units (parks). NRCAs are meant to complement—not replace—traditional issue- and threat-based resource assessments. NRCAs employ a multi-disciplinary, hierarchical framework within which reference conditions for natural resource indicators are developed for comparison against current conditions. NRCAs do not set management targets for study indicators, and reference conditions are not necessarily ideal or target conditions. The goal of a NRCA is to deliver science-based information that will assist park managers in their efforts to describe and quantify a park’s desired resource conditions and management targets, and inform management practices related to natural resource stewardship. The resources and indicators emphasized in a given NRCA depend on the park’s resource setting, status of resource stewardship planning and science in identifying high-priority indicators, and availability of data and expertise to assess current conditions for a variety of potential study resources and indicators. Puʻuhonua o Hōnaunau National Historical Park (hereafter Puʻuhonua o Hōnaunau NHP) encompasses 1.7 km2 (0.7 mi2) at the base of the Mauna Loa Volcano on the Kona coast of the island of Hawaiʻi. The Kona coast of Hawaiʻi Island is characterized by calm winds that increase in the late morning to evening hours, especially in the summer when there is also a high frequency of late afternoon or early evening showers. The climate is mild, with mean high temperature of 26.2° C (79.2° F) and a mean low temperature of 16.6° C (61.9° F) and receiving on average 66 cm (26 in) of rainfall per year. The Kona coast is the only region in Hawaiʻi where more precipitation falls in the summer than in the winter. There is limited surface water runoff or stream development at Puʻuhonua o Hōnaunau NHP due to the relatively recent lava flows (less than 1,500 years old) overlaying much of the park. Kiʻilae Stream is the only watercourse within the park. Kiʻilae Stream is ephemeral, with occasional flows and a poorly characterized channel within the park. A stream gauge was located uphill from the park, but no measurements have been taken since 1982. Floods in Kiʻilae Stream do occur, resulting in transport of fluvial sediment to the ocean, but there are no data documenting this phenomenon. There are a small number of naturally occurring anchialine pools occupying cracks and small depressions in the lava flows, including the Royal Fishponds; an anchialine pool modified for the purpose of holding fish. Although the park’s legal boundaries end at the high tide mark, the sense of place, story, and visitor experience would be completely different without the marine waters adjacent to the park. Six resource elements were chosen for evaluation: air and night sky, water-related processes, terrestrial vegetation, vertebrates, anchialine pools, and marine resources. Resource conditions were determined through reviewing existing literature, meta-analysis, and where appropriate, analysis of unpublished short- and long-term datasets. However, in a number of cases, data were unavailable or insufficient to either establish a quantitative reference condition or conduct a formal statistical comparison of the status of a resource within the park to a quantitative reference condition. In those cases, data gaps are noted, and comparisons were made based on qualitative descriptions. 
Overall, the condition of natural resources within Puʻuhonua o Hōnaunau NHP reflects the surrounding landscape. The coastal lands immediately surrounding Puʻuhonua o Hōnaunau NHP are zoned for conservation, while adjacent lands away from the coast are agricultural. The condition of most natural resources at Puʻuhonua o Hōnaunau NHP reflect the overall condition of ecological communities on the west Hawai‘i coast. Although little of the park’s vegetation...
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Encuesta a firmas exportadoras de América Latina y el Caribe: buscando comprender el nuevo ADN exportador: segunda edición, septiembre 2021 - Dataset. Inter-American Development Bank, September 2021. http://dx.doi.org/10.18235/0003637.

Full text of the source
Abstract:
The Institute for the Integration of Latin America and the Caribbean (INTAL) of the Integration and Trade Sector of the Inter-American Development Bank (IDB) conducted the second edition of its survey of Latin American and Caribbean (LAC) firms that export both within the region and extra-regionally. This dataset contains the inputs that made possible the analysis of 405 firms, aimed at understanding how they are navigating this second pandemic year: how their exports are evolving, which problems this particular context has caused them, what measures they have taken, which supportive public policies they have received, and what the firms' forward-looking vision is. The methodological annexes and a sample of the survey are also included.
Styles: APA, Harvard, Vancouver, ISO, etc.