A selection of scholarly literature on the topic "Dataset VISION"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Dataset VISION".
Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.
You can also download the full text of a scholarly publication in .pdf format and read its abstract online, provided that the corresponding fields are available in the metadata.
Journal articles on the topic "Dataset VISION"
Scheuerman, Morgan Klaus, Alex Hanna, and Emily Denton. "Do Datasets Have Politics? Disciplinary Values in Computer Vision Dataset Development." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–37. http://dx.doi.org/10.1145/3476058.
Geiger, A., P. Lenz, C. Stiller, and R. Urtasun. "Vision meets robotics: The KITTI dataset." International Journal of Robotics Research 32, no. 11 (August 23, 2013): 1231–37. http://dx.doi.org/10.1177/0278364913491297.
Liew, Yu Liang, and Jeng Feng Chin. "Vision-based biomechanical markerless motion classification." Machine Graphics and Vision 32, no. 1 (February 16, 2023): 3–24. http://dx.doi.org/10.22630/mgv.2023.32.1.1.
Alyami, Hashem, Abdullah Alharbi, and Irfan Uddin. "Lifelong Machine Learning for Regional-Based Image Classification in Open Datasets." Symmetry 12, no. 12 (December 16, 2020): 2094. http://dx.doi.org/10.3390/sym12122094.
Bai, Long, Liangyu Wang, Tong Chen, Yuanhao Zhao, and Hongliang Ren. "Transformer-Based Disease Identification for Small-Scale Imbalanced Capsule Endoscopy Dataset." Electronics 11, no. 17 (August 31, 2022): 2747. http://dx.doi.org/10.3390/electronics11172747.
Wang, Zhixue, Yu Zhang, Lin Luo, and Nan Wang. "AnoDFDNet: A Deep Feature Difference Network for Anomaly Detection." Journal of Sensors 2022 (August 16, 2022): 1–14. http://dx.doi.org/10.1155/2022/3538541.
Voytov, D. Y., S. B. Vasil’ev, and D. V. Kormilitsyn. "Technology development for determining tree species using computer vision." FORESTRY BULLETIN 27, no. 1 (February 2023): 60–66. http://dx.doi.org/10.18698/2542-1468-2023-1-60-66.
Ayana, Gelan, and Se-woon Choe. "BUViTNet: Breast Ultrasound Detection via Vision Transformers." Diagnostics 12, no. 11 (November 1, 2022): 2654. http://dx.doi.org/10.3390/diagnostics12112654.
Hanji, Param, Muhammad Z. Alam, Nicola Giuliani, Hu Chen, and Rafał K. Mantiuk. "HDR4CV: High Dynamic Range Dataset with Adversarial Illumination for Testing Computer Vision Methods." Journal of Imaging Science and Technology 65, no. 4 (July 1, 2021): 40404–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2021.65.4.040404.
Li, Jing, and Xueping Luo. "Malware Family Classification Based on Vision Transformer." 電腦學刊 34, no. 1 (February 2023): 87–99. http://dx.doi.org/10.53106/199115992023023401007.
Повний текст джерелаДисертації з теми "Dataset VISION"
Toll, Abigail. "Matrices of Vision : Sonic Disruption of a Dataset." Thesis, Kungl. Musikhögskolan, Institutionen för komposition, dirigering och musikteori, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kmh:diva-4152.
Berriel, Rodrigo Ferreira. "Vision-based ego-lane analysis system: dataset and algorithms." Master's thesis in Informatics, 2016. http://repositorio.ufes.br/handle/10/6775.
Повний текст джерелаApproved for entry into archive by Patricia Barros (patricia.barros@ufes.br) on 2017-04-13T14:00:19Z (GMT) No. of bitstreams: 2 license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5) dissertacao Rodrigo Ferreira Berriel.pdf: 18168750 bytes, checksum: 52805e1f943170ef4d6cc96046ea48ec (MD5)
Made available in DSpace on 2017-04-13T14:00:19Z (GMT). No. of bitstreams: 2 license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5) dissertacao Rodrigo Ferreira Berriel.pdf: 18168750 bytes, checksum: 52805e1f943170ef4d6cc96046ea48ec (MD5)
Lane detection and analysis are important and challenging tasks in advanced driver assistance systems and autonomous driving. These tasks are required in order to help autonomous and semi-autonomous vehicles to operate safely. Decreasing costs of vision sensors and advances in embedded hardware boosted lane related research – detection, estimation, tracking, etc. – in the past two decades. The interest in this topic has increased even more with the demand for advanced driver assistance systems (ADAS) and self-driving cars. Although extensively studied independently, there is still need for studies that propose a combined solution for the multiple problems related to the ego-lane, such as lane departure warning (LDW), lane change detection, lane marking type (LMT) classification, road markings detection and classification, and detection of adjacent lanes presence. This work proposes a real-time Ego-Lane Analysis System (ELAS) capable of estimating ego-lane position, classifying LMTs and road markings, performing LDW and detecting lane change events. The proposed vision-based system works on a temporal sequence of images. Lane marking features are extracted in perspective and Inverse Perspective Mapping (IPM) images that are combined to increase robustness. The final estimated lane is modeled as a spline using a combination of methods (Hough lines, Kalman filter and Particle filter). Based on the estimated lane, all other events are detected. Moreover, the proposed system was integrated for experimentation into an autonomous car that is being developed by the High Performance Computing Laboratory of the Universidade Federal do Espírito Santo. To validate the proposed algorithms and cover the lack of lane datasets in the literature, a new dataset with more than 20 different scenes (in more than 15,000 frames) and considering a variety of scenarios (urban road, highways, traffic, shadows, etc.) was created. The dataset was manually annotated and made publicly available to enable evaluation of several events that are of interest for the research community (i.e. lane estimation, change, and centering; road markings; intersections; LMTs; crosswalks and adjacent lanes). Furthermore, the system was also validated qualitatively based on the integration with the autonomous vehicle. ELAS achieved high detection rates in all real-world events and proved to be ready for real-time applications.
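The abstract above outlines the ELAS processing chain: lane-marking features are extracted in the perspective and Inverse Perspective Mapping (IPM) views, and the resulting segments feed a spline fit refined by Hough lines, a Kalman filter, and a particle filter. As a rough illustration only (not the thesis implementation), the following Python/OpenCV sketch shows what the IPM warp and Hough-line extraction step could look like; the road-region corner points and detector thresholds are hypothetical placeholders that depend on the camera setup.

import cv2
import numpy as np

def extract_lane_segments(frame_bgr):
    """Warp the road region to a bird's-eye (IPM) view and detect line segments."""
    h, w = frame_bgr.shape[:2]
    # Hypothetical trapezoid covering the road ahead; calibrate per camera.
    src = np.float32([[w * 0.45, h * 0.60], [w * 0.55, h * 0.60],
                      [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    ipm = cv2.warpPerspective(frame_bgr,
                              cv2.getPerspectiveTransform(src, dst), (w, h))
    # In the IPM view, lane markings appear as near-vertical bright edges.
    gray = cv2.cvtColor(ipm, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                               minLineLength=40, maxLineGap=20)
    # These segments would then be combined with perspective-view features
    # and passed to the spline / Kalman / particle-filter stage.
    return ipm, segments

In the system described above, the same kind of segment evidence from both views is fused before lane estimation, which is what makes the pipeline robust to shadows and worn markings.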
Ragonesi, Ruggero. "Addressing Dataset Bias in Deep Neural Networks." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1069001.
Xie, Shuang. "A Tiny Diagnostic Dataset and Diverse Modules for Learning-Based Optical Flow Estimation." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39634.
Nett, Ryan. "Dataset and Evaluation of Self-Supervised Learning for Panoramic Depth Estimation." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2234.
Andruccioli, Matteo. "Previsione del Successo di Prodotti di Moda Prima della Commercializzazione: un Nuovo Dataset e Modello di Vision-Language Transformer." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24956/.
Joubert, Deon. "Saliency grouped landmarks for use in vision-based simultaneous localisation and mapping." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/40834.
Повний текст джерелаDissertation (MEng)--University of Pretoria, 2013.
gm2014
Electrical, Electronic and Computer Engineering
unrestricted
Horečný, Peter. "Metody segmentace obrazu s malými trénovacími množinami." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-412996.
Повний текст джерелаTagebrand, Emil, and Ek Emil Gustafsson. "Dataset Generation in a Simulated Environment Using Real Flight Data for Reliable Runway Detection Capabilities." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54974.
Повний текст джерелаSievert, Rolf. "Instance Segmentation of Multiclass Litter and Imbalanced Dataset Handling : A Deep Learning Model Comparison." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-175173.
Повний текст джерелаКниги з теми "Dataset VISION"
Geiger, Andreas, Joel Janai, Fatma Güney, and Aseem Behl. Computer Vision for Autonomous Vehicles: Problems, Datasets and State of the Art. Now Publishers, 2020.
Chirimuuta, Mazviita. The Development and Application of Efficient Coding Explanation in Neuroscience. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198777946.003.0009.
Book chapters on the topic "Dataset VISION"
Damen, Dima, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, et al. "Scaling Egocentric Vision: The Dataset." In Computer Vision – ECCV 2018, 753–71. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01225-0_44.
Jalal, Ahsan, and Usman Tariq. "The LFW-Gender Dataset." In Computer Vision – ACCV 2016 Workshops, 531–40. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54526-4_39.
Zhang, Lvmin, Yi Ji, and Chunping Liu. "DanbooRegion: An Illustration Region Dataset." In Computer Vision – ECCV 2020, 137–54. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58601-0_9.
Antequera, Manuel López, Pau Gargallo, Markus Hofinger, Samuel Rota Bulò, Yubin Kuang, and Peter Kontschieder. "Mapillary Planet-Scale Depth Dataset." In Computer Vision – ECCV 2020, 589–604. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58536-5_35.
Hu, Yang, Dong Yi, Shengcai Liao, Zhen Lei, and Stan Z. Li. "Cross Dataset Person Re-identification." In Computer Vision - ACCV 2014 Workshops, 650–64. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16634-6_47.
Khosla, Aditya, Tinghui Zhou, Tomasz Malisiewicz, Alexei A. Efros, and Antonio Torralba. "Undoing the Damage of Dataset Bias." In Computer Vision – ECCV 2012, 158–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33718-5_12.
Aliakbarian, Mohammad Sadegh, Fatemeh Sadat Saleh, Mathieu Salzmann, Basura Fernando, Lars Petersson, and Lars Andersson. "VIENA²: A Driving Anticipation Dataset." In Computer Vision – ACCV 2018, 449–66. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20887-5_28.
Neumann, Lukáš, Michelle Karg, Shanshan Zhang, Christian Scharfenberger, Eric Piegert, Sarah Mistr, Olga Prokofyeva, et al. "NightOwls: A Pedestrians at Night Dataset." In Computer Vision – ACCV 2018, 691–705. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20887-5_43.
Follmann, Patrick, Tobias Böttger, Philipp Härtinger, Rebecca König, and Markus Ulrich. "MVTec D2S: Densely Segmented Supermarket Dataset." In Computer Vision – ECCV 2018, 581–97. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01249-6_35.
Tommasi, Tatiana, and Tinne Tuytelaars. "A Testbed for Cross-Dataset Analysis." In Computer Vision - ECCV 2014 Workshops, 18–31. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16199-0_2.
Conference papers on the topic "Dataset VISION"
Ammirato, Phil, Alexander C. Berg, and Jana Kosecka. "Active Vision Dataset Benchmark." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2018. http://dx.doi.org/10.1109/cvprw.2018.00277.
Bama, B. Sathya, S. Mohamed Mansoor Roomi, D. Sabarinathan, M. Senthilarasi, and G. Manimala. "Idol dataset." In ICVGIP '21: Indian Conference on Computer Vision, Graphics and Image Processing. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3490035.3490295.
Ramisa, Arnau, Fei Yan, Francesc Moreno-Noguer, and Krystian Mikolajczyk. "The BreakingNews Dataset." In Proceedings of the Sixth Workshop on Vision and Language. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/w17-2005.
Tursun, Osman, and Sinan Kalkan. "METU dataset: A big dataset for benchmarking trademark retrieval." In 2015 14th IAPR International Conference on Machine Vision Applications (MVA). IEEE, 2015. http://dx.doi.org/10.1109/mva.2015.7153243.
Delgado, Kevin, Juan Manuel Origgi, Tania Hasanpoor, Hao Yu, Danielle Allessio, Ivon Arroyo, William Lee, Margrit Betke, Beverly Woolf, and Sarah Adel Bargal. "Student Engagement Dataset." In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2021. http://dx.doi.org/10.1109/iccvw54120.2021.00405.
Beigpour, Shida, MaiLan Ha, Sven Kunz, Andreas Kolb, and Volker Blanz. "Multi-view Multi-illuminant Intrinsic Dataset." In British Machine Vision Conference 2016. British Machine Vision Association, 2016. http://dx.doi.org/10.5244/c.30.10.
Shugrina, Maria, Ziheng Liang, Amlan Kar, Jiaman Li, Angad Singh, Karan Singh, and Sanja Fidler. "Creative Flow+ Dataset." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.00553.
Ghafourian, Sarvenaz, Ramin Sharifi, and Amirali Baniasadi. "Facial Emotion Recognition in Imbalanced Datasets." In 9th International Conference on Artificial Intelligence and Applications (AIAPP 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120920.
Saint, Alexandre, Eman Ahmed, Abd El Rahman Shabayek, Kseniya Cherenkova, Gleb Gusev, Djamila Aouada, and Bjorn Ottersten. "3DBodyTex: Textured 3D Body Dataset." In 2018 International Conference on 3D Vision (3DV). IEEE, 2018. http://dx.doi.org/10.1109/3dv.2018.00063.
Tausch, Frederic, Simon Stock, Julian Fricke, and Olaf Klein. "Bumblebee Re-Identification Dataset." In 2020 IEEE Winter Applications of Computer Vision Workshops (WACVW). IEEE, 2020. http://dx.doi.org/10.1109/wacvw50321.2020.9096909.
Organization reports on the topic "Dataset VISION"
Ferrell, Regina, Deniz Aykac, Thomas Karnowski, and Nisha Srinivas. A Publicly Available, Annotated Dataset for Naturalistic Driving Study and Computer Vision Algorithm Development. Office of Scientific and Technical Information (OSTI), January 2021. http://dx.doi.org/10.2172/1760158.
Bragdon, Sophia, Vuong Truong, and Jay Clausen. Environmentally informed buried object recognition. Engineer Research and Development Center (U.S.), November 2022. http://dx.doi.org/10.21079/11681/45902.
Chen, Z., S. E. Grasby, C. Deblonde, and X. Liu. AI-enabled remote sensing data interpretation for geothermal resource evaluation as applied to the Mount Meager geothermal prospective area. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/330008.
Huang, Haohang, Erol Tutumluer, Jiayi Luo, Kelin Ding, Issam Qamhia, and John Hart. 3D Image Analysis Using Deep Learning for Size and Shape Characterization of Stockpile Riprap Aggregates—Phase 2. Illinois Center for Transportation, September 2022. http://dx.doi.org/10.36501/0197-9191/22-017.
Тарасова, Олена Юріївна, and Ірина Сергіївна Мінтій. Web application for facial wrinkle recognition. Кривий Ріг, КДПУ, 2022. http://dx.doi.org/10.31812/123456789/7012.
Hudgens, Bian, Jene Michaud, Megan Ross, Pamela Scheffler, Anne Brasher, Megan Donahue, Alan Friedlander, et al. Natural resource condition assessment: Puʻuhonua o Hōnaunau National Historical Park. National Park Service, September 2022. http://dx.doi.org/10.36967/2293943.
Повний текст джерелаEncuesta a firmas exportadoras de América Latina y el Caribe: buscando comprender el nuevo ADN exportador: segunda edición, septiembre 2021 - Dataset. Inter-American Development Bank, September 2021. http://dx.doi.org/10.18235/0003637.