A selection of scholarly literature on the topic "Training visivo"
Format a reference in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Training visivo."
Next to every entry in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a .pdf file and read its abstract online, provided the corresponding data are available in the metadata.
Journal articles on the topic "Training visivo"
Konukseven, E. Ilhan, M. Ercument Önder, Erkan Mumcuoglu, and Reha Sukru Kisnisci. "Development of a Visio-Haptic Integrated Dental Training Simulation System." Journal of Dental Education 74, no. 8 (August 2010): 880–91. http://dx.doi.org/10.1002/j.0022-0337.2010.74.8.tb04945.x.
Lou, Wei, Dewen Cheng, Luo Gu, Weihong Hou, and Yongtian Wang. "Optical design and evaluation of Alvarez-type vision-training system." Chinese Optics Letters 16, no. 7 (2018): 072201. http://dx.doi.org/10.3788/col201816.072201.
Glow, Steven D., Vincent J. Colucci, Douglas R. Allington, Curtis W. Noonan, and Earl C. Hall. "Managing Multiple-Casualty Incidents: A Rural Medical Preparedness Training Assessment." Prehospital and Disaster Medicine 28, no. 4 (April 18, 2013): 334–41. http://dx.doi.org/10.1017/s1049023x13000423.
Caicedo-Quiroz, Rosangela, and Julia Céspedez-Acuña. "HACIA UNA FORMACIÓN DEL TÉCNICO SUPERIOR EN ENFERMERÍA DESDE UNA VISION SOCIO-PEDAGOGICA." Identidad Bolivariana 1, no. 1 (January 5, 2017): 22–32. http://dx.doi.org/10.37611/ib1ol122-32.
Vailland, Guillaume, Yoren Gaffary, Louise Devigne, Valérie Gouranton, Bruno Arnaldi, and Marie Babel. "Power Wheelchair Virtual Reality Simulator with Vestibular Feedback." Modelling, Measurement and Control C 81, no. 1-4 (December 31, 2020): 35–42. http://dx.doi.org/10.18280/mmc_c.811-407.
Ríos Garit, Jesús, Yanet Pérez Surita, Aurelio Olmedilla Zafra, and Verónica Gómez-Espejo. "Psicología y lesiones deportivas: Un estudio en lanzadores de beisbol." Cuadernos de Psicología del Deporte 21, no. 1 (January 1, 2021): 102–18. http://dx.doi.org/10.6018/cpd.416351.
Przybył, Krzysztof, Piotr Boniecki, Krzysztof Koszela, Łukasz Gierz, and Mateusz Łukomski. "Computer vision and artificial neural network techniques for classification of damage in potatoes during the storage process." Czech Journal of Food Sciences 37, no. 2 (May 10, 2019): 135–40. http://dx.doi.org/10.17221/427/2017-cjfs.
Li, Jing, and Xueping Luo. "Malware Family Classification Based on Vision Transformer." 電腦學刊 34, no. 1 (February 2023): 087–99. http://dx.doi.org/10.53106/199115992023023401007.
Su, Chen, Ao Chai, Xikai Tu, Hongyu Zhou, Haiqiang Wang, Zufang Zheng, Jingyan Cao, and Jiping He. "Passive and Active Control Strategies of a Leg Rehabilitation Exoskeleton Powered by Pneumatic Artificial Muscles." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 10 (March 9, 2017): 1759021. http://dx.doi.org/10.1142/s0218001417590212.
Caillault, Emilie, and Christian Viard-Gaudin. "Mixed Discriminant Training of Hybrid ANN/HMM Systems for Online Handwritten Word Recognition." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 01 (February 2007): 117–34. http://dx.doi.org/10.1142/s0218001407005338.
Повний текст джерелаДисертації з теми "Training visivo"
Cole, Timothy R. "Investigating Augmented Reality Visio-Haptic Techniques for Medical Training." Thesis, Bangor University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.536476.
Wilkins, Luke. "Vision testing and visual training in sport." Thesis, University of Birmingham, 2015. http://etheses.bham.ac.uk//id/eprint/6313/.
Gajić, Bojana. "Training strategies for efficient deep image retrieval." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/673961.
Повний текст джерелаEn esta tesis nos centramos en la recuperación y re-identificación de imágenes. El entrenamiento de redes neuronales profundas usando funciones de pérdida basadas en ranking se ha convertido en un estándar de facto para las tareas de recuperación y re-identificación. Analizamos y aportamos propuestas de respuestas a tres cuestiones principales: 1) ¿Cuáles son las estrategias más relevantes de los métodos del estado del arte y cómo se pueden combinar para obtener un mejor rendimiento? 2) ¿Se puede realizar unmuestreo de muestras negativas restrictivo de manera eficiente (O(1)) mientras se proporciona un rendimiento mejorado respecto almuestreo aleatorio simple? 3) ¿Se pueden conseguir objetivos de reconocimiento y recuperación mediante una función de pérdida basada en el reconocimiento? En primer lugar, en el capítulo 4 analizamos la importancia de algunas estrategias del estado del arte relacionadas con la formación de un modelo de aprendizaje profundo que abarca el aumento de imágenes, la arquitectura vertebral y la minería de tripletas restrictivas. A continuación, combinamos las mejores estrategias para diseñar una arquitectura profunda sencilla, además de una metodología de entrenamiento para una identificación de personas efectiva y de alta calidad. Evaluamos ampliamente cada opción de diseño, dando lugar a una lista de buenas prácticas para la re-identificación de personas. Siguiendo estas prácticas, nuestro enfoque supera el estado del arte, incluidos métodos más complejos con componentes auxiliares, de forma amplia en cuatro conjuntos de datos de referencia. También proporcionamos un análisis cualitativo de nuestra representación entrenada que indica que, a pesar de ser compacta, es capaz de captar información de regiones focalizadas y discriminativas, de una manera similar a un mecanismo de atención implícita. En segundo lugar, el capítulo 5 abordamos el problema del muestreo demuestras negativas restrictivo cuando se entrena un modelo con funciones del tipo pérdida por tripletas. En este capítulo presentamos “Bag of Negative (BoN)”, un método de minería de muestras negativas rápido y restrictivo, que proporciona un conjunto, tripleta o pareja de muestras de entrenamiento potencialmente relevantes. BoN es un método eficiente que selecciona una bolsa de muestras negativas restringidas basado en una nueva estrategia de indexación dispersa (hashing) en línea. Mostramos la superioridad de BoN frente a losmétodos de minería demuestras negativas del estado del arte en términos de precisión y tiempo de entrenamiento en tres grandes conjuntos de datos. Finalmente, en el capítulo 6 hacemos la hipótesis de que entrenar un modelo de aprendizaje demétricas maximizando el área bajo la curva ROC (que es una medida de rendimiento típica de los sistemas de reconocimiento automático) puede inducir una clasificación implícita adecuada para tareas de recuperación. Esta hipótesis se apoya en el hecho de que üna curva es relevante en el espacio ROC si y sólo si es relevante en el espacio Precisión / Exhaustividad (PrecisionRecall)-[17]. Para probar esta hipótesis, diseñamos una relajación derivable y aproximada del área bajo la curva ROC. A pesar de su simplicidad, la función de pérdida basada en área bajo la curva (AUC), combinada con ResNet50 como arquitectura vertebral, consigue los resultados del estado del arte en dos conjuntos de datos para recuperación de muestras a gran escala disponibles públicamente. 
Además, la función de pérdida basada en AUC consigue un rendimiento comparable a métodosmás complejos, específicos de dominio, que marcan el estado del arte en el problema de la reidentificación de vehículos.
In this thesis we focus on image retrieval and re-identification. Training a deep architecture using a ranking loss has become standard for the retrieval and re-identification tasks. We analyze and propose answers on three main issues: 1) What are the most relevant strategies of state-of-the-art methods and how can they be combined in order to obtain a better performance? 2) Can hard negative sampling be performed efficiently (O(1)) while providing improved performance over naïve random sampling? 3) Can recognition and retrieval objectives be achieved by using a recognition-based loss?
First, in chapter 4 we analyze the importance of some state-of-the-art strategies related to the training of a deep model, such as image augmentation, backbone architecture and hard triplet mining. We then combine the best strategies to design a simple deep architecture plus a training methodology for effective and high quality person re-identification. We extensively evaluate each design choice, leading to a list of good practices for person re-identification. By following these practices, our approach outperforms the state of the art, including more complex methods with auxiliary components, by large margins on four benchmark datasets. We also provide a qualitative analysis of our trained representation which indicates that, while compact, it is able to capture information from localized and discriminative regions, in a manner akin to an implicit attention mechanism.
Second, in chapter 5 we address the problem of hard negative sampling when training a model with a triplet-like loss. In this chapter we present Bag of Negatives (BoN), a fast hard negative mining method that provides a set, triplet or pair of potentially relevant training samples. BoN is an efficient method that selects a bag of hard negatives based on a novel online hashing strategy. We show the superiority of BoN against state-of-the-art hard negative mining methods in terms of accuracy and training time over three large datasets.
Finally, in chapter 6 we hypothesize that training a metric learning model by maximizing the area under the ROC curve (which is a typical performance measure of recognition systems) can induce an implicit ranking suitable for retrieval problems. This hypothesis is supported by the fact that "a curve dominates in ROC space if and only if it dominates in PR space" [17]. To test this hypothesis, we design an approximated, derivable relaxation of the area under the ROC curve. Despite its simplicity, the AUC loss, combined with ResNet50 as a backbone architecture, achieves state-of-the-art results on two large-scale publicly available retrieval datasets. Additionally, the AUC loss achieves comparable performance to the more complex, domain-specific, state-of-the-art methods for vehicle re-identification.
Universitat Autònoma de Barcelona. Programa de Doctorat en Informàtica
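The abstract above describes training a retrieval model with a derivable relaxation of the area under the ROC curve. As a rough illustration of that general idea only — not the thesis's actual formulation — the sketch below replaces the non-differentiable comparison between positive-pair and negative-pair distances with a sigmoid; it assumes PyTorch, and the names soft_auc_loss, distances, labels and temperature are hypothetical.

# Minimal sketch of a sigmoid-relaxed AUC objective for metric learning.
# Illustrative only; not the formulation used in the thesis cited above.
import torch

def soft_auc_loss(distances: torch.Tensor, labels: torch.Tensor,
                  temperature: float = 10.0) -> torch.Tensor:
    """Approximate (1 - AUC) over positive/negative pair distances.

    distances: pairwise distances for query-gallery pairs, shape (N,)
    labels:    1 for positive pairs (same identity), 0 for negatives, shape (N,)
    """
    pos = distances[labels == 1]  # distances that should be small
    neg = distances[labels == 0]  # distances that should be large
    # AUC counts how often a positive pair ranks closer than a negative pair;
    # the 0/1 comparison is relaxed with a sigmoid so gradients can flow.
    margins = pos.unsqueeze(1) - neg.unsqueeze(0)        # (P, N) distance differences
    return torch.sigmoid(temperature * margins).mean()   # -> 0 for a perfect ranking

# Usage sketch (hypothetical tensors; in practice self-pairs on the diagonal
# of the distance matrix would be masked out):
# embeddings = backbone(images)                      # e.g. ResNet50 features
# distances = torch.cdist(embeddings, embeddings)    # all-pairs distances
# loss = soft_auc_loss(distances.flatten(), pair_labels.flatten())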
Matteo, Barbara Maria. "Brain Stimulation for Vision Recovery." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2018. http://hdl.handle.net/10281/199049.
The work was divided into three phases:
1. The first phase aimed to establish the state of the art in the rehabilitation of hemianopia. The review provides an overview of the available options for treating hemianopia and evaluates which approach would be the most appropriate. We analysed 56 articles describing various techniques used to promote visual field recovery, concentrating on two approaches: "border training" and "blindsight training". Although no formal meta-analysis was possible, the results of a semi-quantitative evaluation suggested that the improvement in visual skills is related to the type of training used: border rehabilitation seems to improve the detection of visual stimuli, whereas blindsight rehabilitation seems to improve their processing. Finally, the addition of transcranial direct current stimulation (tDCS) seems to enhance the effects of visual field rehabilitation.
2. The second phase aimed to test, in two hemianopic patients, the rehabilitation method that appeared most suitable for treating hemianopia. The first patient underwent blindsight treatment combined with tDCS, followed by blindsight training alone; the second patient underwent the two training rounds in reverse order. The patients showed better scores in clinical-instrumental, functional, and ecological assessments after tDCS combined with blindsight rehabilitation than after rehabilitation alone. In this two-case report, parietal-occipital tDCS modulated the effects induced by blindsight treatment on hemianopia.
3. The third phase aimed to test the most promising treatment using an appropriate study design and a large sample. In this phase, we collaborated with the Institute for Medical Psychology at Otto-von-Guericke University. The project investigated the effects of electrical current stimulation on people with hemianopia using different stimulation techniques, and the study led to several considerations about its effectiveness. The results showed that brain stimulation with electrical current can effectively re-modulate the neuronal network, enabling different ways of conveying visual information to the brain: this technique could act as a facilitator in the rehabilitation of hemianopia.
Romero, Adriana. "Assisting the training of deep neural networks with applications to computer vision." Doctoral thesis, Universitat de Barcelona, 2015. http://hdl.handle.net/10803/316577.
Epperson, Sean T. "Animation within a multimedia training system for night vision goggles." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA294095.
"March 1995." Thesis advisor(s): Kishore Sengupta, Alice Crawford. Bibliography: p. 43-45. Also available online.
Rae, Sheila M. "The effect of vision training on accommodation and myopia progression." Thesis, Anglia Ruskin University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441555.
Treleaven, Allison Jean. "Improving reading performance in peripheral vision: An adaptive training method." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1460670659.
Kamolpattana, Supara. "Science museum explainer training : exploring factors that influence visitor-explainer interactions." Thesis, University of the West of England, Bristol, 2016. http://eprints.uwe.ac.uk/28534/.
Okapuu-von Veh, Alexander. "Sound and vision : audiovisual aspects of a virtual-reality personnel-training system." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23752.
With the simulator, trainees can carry out all the switching operations necessary for their work in absolute safety, while remaining in a realistic environment. A speech-recognition system controls the training session, while audio immersion adds a dimension of realism to the virtual world. An expert system validates the trainee's operations at all times, and a steady-state power-flow simulator recalculates network parameters. The automatic conversion of single-line diagrams enables the construction of three-dimensional models of substation equipment.
The present thesis focuses on the speech command, audio, video and network aspects of the system. A survey of current VR applications and an overview of VR technology are followed by a summary of the ESOPE-VR project.
Books on the topic "Training visivo"
Shankman, Albert L. Vision enhancement training. Santa Ana, CA: Optometric Extension Program, 1988.
Kraskin, Robert A. Improve your vision. Santa Ana, CA: Optometric Extension Program, 2010.
Hatfield, Coleman. Visual training: The joy of optometry. Santa Ana, CA: Optometric Extension Program, 1990.
Forkiotis, Constantine, ed. Essays on vision. Santa Ana, CA: Optometric Extension Program Foundation, 1990.
Maharaja Sayajirao University of Baroda, Centre of Advanced Study in Education, ed. Teacher education, vision and action. Baroda: Centre of Advanced Study in Education, Faculty of Education and Psychology, M.S. University of Baroda, 2000.
Seiderman, Arthur. Overlooked: 20/20 is not enough. 3rd ed. Santa Ana, CA: Optometric Extension Program Foundation, 2012.
Beresford, Steven M., and American Vision Institute, eds. Improve your vision without glasses or contact lenses: The AVI program. New York: Simon & Schuster, 1996.
Fedorov, Aleksandr. Metody uluchsheniia zreniia: Kak izbavitʹsia ot ochkov. Sankt-Peterburg: Nevskiĭ prospekt, 2001.
Seiderman, Arthur. 20/20 is not enough: The new world of vision. New York: Fawcett Crest, 1991.
Seiderman, Arthur. 20/20 is not enough: The new world of vision. New York: Knopf, 1989.
Знайти повний текст джерелаЧастини книг з теми "Training visivo"
Sundberg, Molly. "Realizing the Development Vision 2020." In Training for Model Citizenship, 219–54. New York: Palgrave Macmillan US, 2016. http://dx.doi.org/10.1057/978-1-137-58422-9_8.
Ciuffreda, Kenneth J., and Bin Wang. "Vision Training and Sports." In Bioengineering, Mechanics, and Materials: Principles and Applications in Sports, 407–33. Boston, MA: Springer US, 2004. http://dx.doi.org/10.1007/978-1-4419-8887-4_16.
Vivek, B. S., Konda Reddy Mopuri, and R. Venkatesh Babu. "Gray-Box Adversarial Training." In Computer Vision – ECCV 2018, 213–28. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01267-0_13.
Yeakley, Celeste Labrunda, and Jeffrey D. Fiebrich. "World Vision-Training the Organization." In Collaborative Process Improvement, 51–63. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2015. http://dx.doi.org/10.1002/9781119134664.ch4.
Kataoka, Hirokatsu, Kazushige Okayasu, Asato Matsumoto, Eisuke Yamagata, Ryosuke Yamada, Nakamasa Inoue, Akio Nakamura, and Yutaka Satoh. "Pre-training Without Natural Images." In Computer Vision – ACCV 2020, 583–600. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69544-6_35.
Faubert, Jocelyn, Olga Overbury, and Gregory L. Goodrich. "A Hierarchy of Perceptual Training in Low Vision." In Low Vision, 471–89. New York, NY: Springer New York, 1987. http://dx.doi.org/10.1007/978-1-4612-4780-7_37.
Xiong, Yuanhao, and Cho-Jui Hsieh. "Improved Adversarial Training via Learned Optimizer." In Computer Vision – ECCV 2020, 85–100. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58598-3_6.
Brubaker, S. Charles, Matthew D. Mullin, and James M. Rehg. "Towards Optimal Training of Cascaded Detectors." In Computer Vision – ECCV 2006, 325–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11744023_26.
Arai, Kohei. "Sports Vision Based Tennis Player Training." In Advances in Intelligent Systems and Computing, 1193–201. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22868-2_83.
Duke, Simon. "Visits of Goodwill and Training Purposes." In US Defence Bases in the United Kingdom, 15–36. London: Palgrave Macmillan UK, 1987. http://dx.doi.org/10.1007/978-1-349-18482-8_3.
Повний текст джерелаТези доповідей конференцій з теми "Training visivo"
Shang, Junyuan, Tengfei Ma, Cao Xiao, and Jimeng Sun. "Pre-training of Graph Augmented Transformers for Medication Recommendation." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/825.
Joy, Mirasol. "Statistical human resource development: the case of Bukidnon State University, Philippines." In Next Steps in Statistics Education. IASE International Association for Statistical Education, 2009. http://dx.doi.org/10.52041/srap.09503.
Mwansa, Peter Levison, Ahmad Othman Alshaigy, Dawoud Saleh Madani Almaeeni, Khalifa Ghulam Hussain Qasem, Luiz Rego, Premachandran Nair, and Hussain Ahmed Saeed Baniyas. "Augmented Reality Delivers Differential Value in Safety Assurance on Rigs Onshore Abu Dhabi During Covid-19 Pandemic Courtesy of the Wearable Camera." In ADIPEC. SPE, 2022. http://dx.doi.org/10.2118/210870-ms.
Chuang, Hsiu-Min, Yang Liu, and Akio Namiki. "Vision-based batting training system." In 2017 IEEE International Conference on Cyborg and Bionic Systems (CBS). IEEE, 2017. http://dx.doi.org/10.1109/cbs.2017.8266105.
Kim, Jongsung, and Myunggyu Kim. "Smart vision system for soccer training." In 2015 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2015. http://dx.doi.org/10.1109/ictc.2015.7354543.
Izard, Santiago González, Juan A. Juanes Méndez, Francisco J. García-Peñalvo, Marcelo Jiménez López, Francisco Pastor Vázquez, and Pablo Ruisoto. "360° vision applications for medical training." In TEEM 2017: 5th International Conference Technological Ecosystems for Enhancing Multiculturality. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3144826.3145405.
Lee, Ju-Hee, and Je-Won Kang. "Relation Enhanced Vision Language Pre-Training." In 2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022. http://dx.doi.org/10.1109/icip46576.2022.9897623.
Kubozono, Ryusuke, Yunan He, Osamu Fukuda, Nobuhiko Yamaguchi, Hiroshi Okumura, and Anik Nur Handayani. "Vision-Based Robot Hand Using Open Source Software." In 2020 4th International Conference on Vocational Education and Training (ICOVET). IEEE, 2020. http://dx.doi.org/10.1109/icovet50258.2020.9230275.
Miyata, Ryosuke, Yunan He, Osamu Fukuda, Nobuhiko Yamaguchi, Hiroshi Okumura, and Anik Nur Handayani. "Vision-based Control for Open-source Mobile Robots." In 2020 4th International Conference on Vocational Education and Training (ICOVET). IEEE, 2020. http://dx.doi.org/10.1109/icovet50258.2020.9230321.
Al-Abdulwahed, Khalid, and Nouf Al-Ashwan. "Female Vocational Training." In SPE Middle East Oil & Gas Show and Conference. SPE, 2021. http://dx.doi.org/10.2118/204528-ms.
Повний текст джерелаЗвіти організацій з теми "Training visivo"
Waggett, Michael L. Night Vision Goggles Computer Based Training. Fort Belvoir, VA: Defense Technical Information Center, April 1999. http://dx.doi.org/10.21236/ada398875.
Beck, Richard R. Training Tomorrow's Navy: The Impact of Joint Vision 2010 on Training Naval Forces. Fort Belvoir, VA: Defense Technical Information Center, February 1997. http://dx.doi.org/10.21236/ada325148.
Joralmon, DeForest Q. Multimedia Development for Night Vision Device Aircrew Training. Fort Belvoir, VA: Defense Technical Information Center, October 1995. http://dx.doi.org/10.21236/ada303615.
Reising, Jack D., and Elizabeth L. Martin. Distance Estimation Training with Night Vision Goggles Under Low Illumination. Fort Belvoir, VA: Defense Technical Information Center, January 1995. http://dx.doi.org/10.21236/ada291338.
Trautman, Edward, William Little, and Michael Mittleman. A Survey of Fleet Opinions Regarding Unaided Vision Training Topics. Fort Belvoir, VA: Defense Technical Information Center, December 1990. http://dx.doi.org/10.21236/ada233619.
Лаврентьєва, Олена Олександрівна. Methodological Approaches To Vocational Training Organization. IASHE, 2017. http://dx.doi.org/10.31812/0564/2557.
McKnight, Katherine, Nitya Venkateswaran, Jennifer Laird, Rita Dilig, Jessica Robles, and Talia Shalev. Parent Teacher Home Visits: An Approach to Addressing Biased Mindsets and Practices to Support Student Success. RTI Press, September 2022. http://dx.doi.org/10.3768/rtipress.2022.op.0077.2209.
Anderson, Gretchen M., Craig A. Vrana, Joseph T. Riegler, and Elizabeth L. Martin. Integration of a Legacy System with Night Vision Training System (NVTS). Fort Belvoir, VA: Defense Technical Information Center, August 2002. http://dx.doi.org/10.21236/ada408580.
Niall, Keith K., Jack D. Reising, Elizabeth L. Martin, and Marcus H. Gregory. Distance Estimation with Night Vision Goggles: A Direct Feedback Training Method. Fort Belvoir, VA: Defense Technical Information Center, June 1997. http://dx.doi.org/10.21236/ada328758.
Estrada, Arthur, Patricia A. LeDuc, Larry C. Woodrum, Terri L. Rowe, Elizabeth G. Stokes, and John S. Crowley. A Comparison Study of Peripheral Vision-Restricting Devices Used for Instrument Training. Fort Belvoir, VA: Defense Technical Information Center, March 2005. http://dx.doi.org/10.21236/ada431147.