A selection of scholarly literature on the topic "Event-driven vision"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Event-driven vision".

Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scientific publication in .pdf format and read its abstract online, provided the corresponding details are available in the metadata.

Journal articles on the topic "Event-driven vision":

1. Sun, Ruolin, Dianxi Shi, Yongjun Zhang, Ruihao Li, and Ruoxiang Li. "Data-Driven Technology in Event-Based Vision." Complexity 2021 (March 27, 2021): 1–19. http://dx.doi.org/10.1155/2021/6689337.

Abstract:
Event cameras which transmit per-pixel intensity changes have emerged as a promising candidate in applications such as consumer electronics, industrial automation, and autonomous vehicles, owing to their efficiency and robustness. To maintain these inherent advantages, the trade-off between efficiency and accuracy stands as a priority in event-based algorithms. Thanks to the preponderance of deep learning techniques and the compatibility between bio-inspired spiking neural networks and event-based sensors, data-driven approaches have become a hot spot, which along with the dedicated hardware and datasets constitute an emerging field named event-based data-driven technology. Focusing on data-driven technology in event-based vision, this paper first explicates the operating principle, advantages, and intrinsic nature of event cameras, as well as background knowledge in event-based vision, presenting an overview of this research field. Then, we explain why event-based data-driven technology becomes a research focus, including reasons for the rise of event-based vision and the superiority of data-driven approaches over other event-based algorithms. Current status and future trends of event-based data-driven technology are presented successively in terms of hardware, datasets, and algorithms, providing guidance for future research. Generally, this paper reveals the great prospects of event-based data-driven technology and presents a comprehensive overview of this field, aiming at a more efficient and bio-inspired visual system to extract visual features from the external environment.
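
To make the operating principle concrete: an event camera's output is not a sequence of frames but an asynchronous stream of tuples (pixel coordinates, timestamp, polarity). The minimal Python sketch below, with our own names and layout rather than anything from the surveyed paper, shows such a record and a naive accumulation into a frame:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Event:
    x: int      # pixel column
    y: int      # pixel row
    t: float    # timestamp, e.g. in microseconds
    p: int      # polarity: +1 brightness increase, -1 decrease


def accumulate(events, width, height):
    """Naively integrate an event stream into a 2D frame by summing
    signed polarities per pixel. Real event-based pipelines usually
    preserve the asynchronous structure instead of collapsing it."""
    frame = np.zeros((height, width), dtype=np.int32)
    for e in events:
        frame[e.y, e.x] += e.p
    return frame


# Two opposite-polarity events at the same pixel cancel out.
evts = [Event(x=3, y=2, t=10.0, p=+1), Event(x=3, y=2, t=15.0, p=-1)]
print(accumulate(evts, width=8, height=8)[2, 3])  # -> 0
```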

2. Camunas-Mesa, Luis, Carlos Zamarreno-Ramos, Alejandro Linares-Barranco, Antonio J. Acosta-Jimenez, Teresa Serrano-Gotarredona, and Bernabé Linares-Barranco. "An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors." IEEE Journal of Solid-State Circuits 47, no. 2 (February 2012): 504–17. http://dx.doi.org/10.1109/jssc.2011.2167409.

3. Semeniuta, Oleksandr, and Petter Falkman. "EPypes: A Framework for Building Event-Driven Data Processing Pipelines." PeerJ Computer Science 5 (February 11, 2019): e176. http://dx.doi.org/10.7717/peerj-cs.176.

Abstract:
Many data processing systems are naturally modeled as pipelines, where data flows through a network of computational procedures. This representation is particularly suitable for computer vision algorithms, which in most cases possess complex logic and a large number of parameters to tune. In addition, online vision systems, such as those in the industrial automation context, have to communicate with other distributed nodes. When developing a vision system, one normally proceeds from ad hoc experimentation and prototyping to highly structured system integration. The early stages of this continuum are characterized by the challenges of developing a feasible algorithm, while the latter deal with composing the vision function with other components in a networked environment. In between, one strives to manage the complexity of the developed system, as well as to preserve existing knowledge. To tackle these challenges, this paper presents EPypes, an architecture and Python-based software framework for developing vision algorithms in the form of computational graphs and their integration with distributed systems based on publish-subscribe communication. EPypes facilitates flexibility of algorithm prototyping, as well as provides a structured approach to managing algorithm logic and exposing the developed pipelines as a part of online systems.
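
As a toy illustration of the computational-graph idea described in the abstract, here is a short sketch of a pipeline whose nodes fire when a message arrives from an upstream node. It is deliberately generic; the class and method names are invented and do not reflect the actual EPypes API.

```python
from typing import Callable, Dict, List


class Pipeline:
    """Toy event-driven processing graph: each node is a named
    function; edges record whose output feeds whom."""

    def __init__(self):
        self.nodes: Dict[str, Callable] = {}
        self.edges: Dict[str, List[str]] = {}

    def add_node(self, name, fn, after=()):
        self.nodes[name] = fn
        for parent in after:
            self.edges.setdefault(parent, []).append(name)

    def on_message(self, source, payload):
        """Publish-subscribe style trigger: run every node downstream
        of `source`, cascading the intermediate results."""
        for child in self.edges.get(source, []):
            result = self.nodes[child](payload)
            self.on_message(child, result)


p = Pipeline()
p.add_node("blur", lambda img: f"blur({img})", after=["camera"])
p.add_node("edges", lambda img: f"edges({img})", after=["blur"])
p.add_node("report", print, after=["edges"])
p.on_message("camera", "frame_0")  # prints: edges(blur(frame_0))
```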

4. Liu, Shih-Chii, Bodo Rueckauer, Enea Ceolini, Adrian Huber, and Tobi Delbruck. "Event-Driven Sensing for Efficient Perception: Vision and Audition Algorithms." IEEE Signal Processing Magazine 36, no. 6 (November 2019): 29–37. http://dx.doi.org/10.1109/msp.2019.2928127.

5. Tominski, Christian. "Event-Based Concepts for User-Driven Visualization." Information Visualization 10, no. 1 (December 24, 2009): 65–81. http://dx.doi.org/10.1057/ivs.2009.32.

Abstract:
Visualization has become an increasingly important tool to support exploration and analysis of the large volumes of data we are facing today. However, interests and needs of users are still not being considered sufficiently. The goal of this work is to shift the user into the focus. To that end, we apply the concept of event-based visualization that combines event-based methodology and visualization technology. Previous approaches that make use of events are mostly specific to a particular application case, and hence cannot be applied otherwise. We introduce a novel general model of event-based visualization that comprises three fundamental stages. (1) Users are enabled to specify what their interests are. (2) During visualization, matches of these interests are sought in the data. (3) It is then possible to automatically adjust visual representations according to the detected matches. This way, it is possible to generate visual representations that better reflect what users need for their task at hand. The model's generality allows its application in many visualization contexts. We substantiate the general model with specific data-driven events that focus on relational data so prevalent in today's visualization scenarios. We show how the developed methods and concepts can be implemented in an interactive event-based visualization framework, which includes event-enhanced visualizations for temporal and spatio-temporal data.
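
A hedged sketch may help fix the three-stage model (interest specification, match detection, automatic adjustment) in the reader's mind; all names below are illustrative and are not taken from the paper's framework:

```python
def interest(row):
    # Stage 1: the user declares what is interesting,
    # e.g. unusually large values in some column.
    return row["value"] > 100


def visualize(data, highlight_color="red", base_color="gray"):
    marks = []
    for row in data:
        # Stage 2: seek matches of the declared interest in the data.
        matched = interest(row)
        # Stage 3: adjust the visual representation automatically.
        color = highlight_color if matched else base_color
        marks.append({"row": row, "color": color})
    return marks


data = [{"value": 42}, {"value": 117}]
for m in visualize(data):
    print(m["row"]["value"], "->", m["color"])  # 42 gray, 117 red
```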

6. Roheda, Siddharth, Hamid Krim, Zhi-Quan Luo, and Tianfu Wu. "Event Driven Sensor Fusion." Signal Processing 188 (November 2021): 108241. http://dx.doi.org/10.1016/j.sigpro.2021.108241.

7. Berjón, Roberto, Montserrat Mateos, M. Encarnación Beato, and Ana Fermoso García. "An Event Mesh for Event Driven IoT Applications." International Journal of Interactive Multimedia and Artificial Intelligence 7, no. 6 (2022): 54. http://dx.doi.org/10.9781/ijimai.2022.09.003.

8. Matsui, Chihiro, Kazuhide Higuchi, Shunsuke Koshino, and Ken Takeuchi. "Event Data-Based Computation-in-Memory (CiM) Configuration by Co-Designing Integrated In-Sensor and CiM Computing for Extremely Energy-Efficient Edge Computing." Japanese Journal of Applied Physics 61, SC (April 7, 2022): SC1085. http://dx.doi.org/10.35848/1347-4065/ac5533.

Abstract:
This paper discusses co-designing integrated in-sensor and in-memory computing based on the analysis of event data and gives a system-level solution. By integrating an event-based vision sensor (EVS) as a sensor and event-driven computation-in-memory (CiM) as a processor, event data taken by EVS are processed in CiM. In this work, EVS is used to acquire the scenery from a driving car and the event data are analyzed. Based on the EVS data characteristics of temporally dense and spatially sparse, event-driven SRAM-CiM is proposed for extremely energy-efficient edge computing. In the event-driven SRAM-CiM, a set of 8T-SRAMs stores multiple-bit synaptic weights of spiking neural networks. Multiply-accumulate operation with the multiple-bit synaptic weights is demonstrated by pulse amplitude modulation and pulse width modulation. By considering future EVS of high image resolution and high time resolution, the configuration of event-driven CiM for EVS is discussed.
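
The multiply-accumulate scheme mentioned above encodes one operand in pulse width and the other in current amplitude. The behavioral sketch below is our own simplification for illustration, not the paper's circuit; `dt` and `i_unit` are assumed constants standing in for the time-step length and unit current.

```python
def mac_pwm(activations, weights, dt=1e-9, i_unit=1e-6):
    """Toy pulse-width-modulation MAC: each input `a` becomes a pulse
    lasting `a` time steps (PWM); the stored weight `w` scales the
    current amplitude (PAM). The integrated charge then approximates
    sum(a * w). Illustrative only -- not the circuit from the paper."""
    charge = 0.0
    for a, w in zip(activations, weights):
        for _ in range(a):             # pulse lasts `a` steps (PWM)
            charge += w * i_unit * dt  # amplitude scaled by weight (PAM)
    return charge


# 3*2 + 1*5 = 11 unit products of charge.
print(mac_pwm([3, 1], [2, 5]) / (1e-6 * 1e-9))  # -> approx. 11.0
```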

9. Lenero-Bardallo, Juan Antonio, Teresa Serrano-Gotarredona, and Bernabé Linares-Barranco. "A 3.6 μs Latency Asynchronous Frame-Free Event-Driven Dynamic-Vision-Sensor." IEEE Journal of Solid-State Circuits 46, no. 6 (June 2011): 1443–55. http://dx.doi.org/10.1109/jssc.2011.2118490.

10. Schraml, Stephan, Ahmed Nabil Belbachir, and Horst Bischof. "An Event-Driven Stereo System for Real-Time 3-D 360° Panoramic Vision." IEEE Transactions on Industrial Electronics 63, no. 1 (January 2016): 418–28. http://dx.doi.org/10.1109/tie.2015.2477265.

Dissertations on the topic "Event-driven vision":

1. Iacono, Massimiliano. "Object Detection and Recognition with Event Driven Cameras." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1005981.

Abstract:
This thesis presents the study, analysis and implementation of algorithms to perform object detection and recognition using an event-based camera. This sensor represents a novel paradigm which opens a wide range of possibilities for future developments of computer vision. In particular it allows to produce a fast, compressed, illumination invariant output, which can be exploited for robotic tasks, where fast dynamics and significant illumination changes are frequent. The experiments are carried out on the neuromorphic version of the iCub humanoid platform. The robot is equipped with a novel dual camera setup mounted directly in the robot's eyes, used to generate data with a moving camera. The motion causes the presence of background clutter in the event stream. In such a scenario the detection problem has been addressed with an attention mechanism, specifically designed to respond to the presence of objects, while discarding clutter. The proposed implementation takes advantage of the nature of the data to simplify the original proto-object saliency model which inspired this work. Successively, the recognition task was first tackled with a feasibility study to demonstrate that the event stream carries sufficient information to classify objects and then with the implementation of a spiking neural network. The feasibility study provides the proof-of-concept that events are informative enough in the context of object classification, whereas the spiking implementation improves the results by employing an architecture specifically designed to process event data. The spiking network was trained with a three-factor local learning rule which overcomes the weight transport, update locking and non-locality problems. The presented results prove that both detection and classification can be carried out in the target application using the event data.
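
As a rough illustration of the kind of feasibility check described above (do raw events carry enough information to classify objects?), one can reduce an event stream to simple spatial statistics and feed them to any off-the-shelf classifier. The sketch below is ours, not the thesis' pipeline:

```python
import numpy as np


def event_count_features(events, width, height, n_bins=4):
    """Bin (x, y, t, p) events into a coarse n_bins x n_bins spatial
    histogram and normalize by the event count, yielding a fixed-length
    feature vector that any standard classifier can consume."""
    feat = np.zeros((n_bins, n_bins))
    for x, y, t, p in events:
        feat[y * n_bins // height, x * n_bins // width] += 1
    return feat.ravel() / max(1, len(events))


# Events concentrated in the top-left corner of a 64x64 sensor.
stream = [(5, 4, 0.1, +1), (6, 5, 0.2, -1), (60, 60, 0.3, +1)]
print(event_count_features(stream, 64, 64).reshape(4, 4))
```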

2. Von Arnim, Axel. "Capteur visuel pour l'identification et la communication optique entre objets mobiles : des images aux événements." Doctoral thesis, Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4012.

Abstract:
In this doctoral thesis, we present the results of five years of research, design and production work on an active optical identification sensor comprising a near-infrared transmitter and a high-frequency receiver camera. This sensor locates and identifies a moving object in the visual scene by transmitting optical data. It is thus possible either to transmit data purely optically between moving objects in their respective fields of view, or to combine optical identification with conventional telecommunication means, with the aim of precisely locating the data transmitters, without recourse to or in the absence of GPS or other localization techniques. This technique, which we first explored in 2005, is known as Optical Camera Communication (OCC). Initially, between 2005 and 2008, we implemented the receiver with a CCD camera clocked at 595 Hz, achieving a communication rate of 250 bits per second and an average identification time of 76 ms (for a 16-bit identifier) over a maximum range of 378 m. In a second study phase in 2022-2023, we used an event-driven camera, achieving a communication rate of 2,500 bits per second with a decoding rate of 94%, i.e. an average decoding time equal to the theoretical time of 6.4 ms for 16 bits. We thus gained an order of magnitude. Our sensor differs from the state of the art in two ways. Its first version arrived very early and contributed to the emergence of the concept of optical camera communication; a French patent protected the invention for ten years. Its second version outperforms the state of the art in terms of throughput, while adding robustness for tracking moving objects. Our initial use case was the localization of road objects for inter-vehicular and vehicle-to-infrastructure communication. In our more recent work, we have chosen drone surveillance and object tracking. Our sensor has many applications, particularly where other means of communication or identification are either unavailable or undesirable. These include industrial and military sites, confidential visual communications, the precise tracking of emergency vehicles by drone, and so on. Applications of similar technologies in the field of sports prove the usefulness and economic viability of the sensor. This thesis also presents an entire research career, from research engineer to researcher, then research project manager, and finally research director in a public research institute. The areas of research application have varied widely, from driver assistance to neuromorphic AI, but have always followed the common thread of robotics in its various implementations. We hope to convince the reader of the scientific innovation brought about by our work and, more generally, of our contribution to research, its management and its direction.
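
The decoding times quoted above follow directly from the identifier length and the bit rate. A small sanity check (our own arithmetic; the overhead interpretation of the 76 ms figure is an assumption):

```python
def min_id_time_ms(bits: int, rate_bps: float) -> float:
    """Minimum time to transmit an identifier at a given bit rate,
    ignoring detection and synchronization overhead."""
    return 1000.0 * bits / rate_bps


print(min_id_time_ms(16, 2500))  # 6.4 ms, the event-camera figure above
print(min_id_time_ms(16, 250))   # 64 ms; the measured 76 ms average
                                 # presumably includes sync overhead
```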

3. Lenjani, Ali. "Developing Artificial Intelligence-Based Decision Support for Resilient Socio-Technical Systems." Thesis, 2020.

Abstract:
During 2017 and 2018, two of the costliest years on record regarding natural disasters, the U.S. experienced 30 events with total losses of $400 billion. These exorbitant costs arise primarily from the lack of adequate planning spanning the breadth from pre-event preparedness to post-event response. It is imperative to start thinking about ways to make our built environment more resilient. However, empirically-calibrated and structure-specific vulnerability models, a critical input required to formulate decision-making problems, are not currently available. Here, the research objective is to improve the resilience of the built environment through an automated vision-based system that generates actionable information in the form of probabilistic pre-event prediction and post-event assessment of damage. The central hypothesis is that pre-event images, e.g., street view images, along with the post-event image database, contain sufficient information to construct pre-event probabilistic vulnerability models for assets in the built environment. The rationale for this research stems from the fact that probabilistic damage prediction is the most critical input for formulating the decision-making problems under uncertainty targeting the mitigation, preparedness, response, and recovery efforts. The following tasks are completed towards the goal.
First, planning for one of the bottleneck processes of the post-event recovery is formulated as a decision-making problem considering the consequences imposed on the community (module 1). Second, a technique is developed to automate the process of extracting multiple street-view images of a given built asset, thereby creating a dataset that illustrates its pre-event state (module 2). Third, a system is developed that automatically characterizes the pre-event state of the built asset and quantifies the probability that it is damaged by fusing information from deep neural network (DNN) classifiers acting on pre-event and post-event images (module 3). To complete the work, a methodology is developed to enable associating each asset of the built environment with a structural probabilistic vulnerability model by correlating the pre-event structure characterization to the post-event damage state (module 4). The method is demonstrated and validated using field data collected from recent hurricanes within the US.
The vision of this research is to enable the automatic extraction of information about exposure and risk to enable smarter and more resilient communities around the world.
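
Module 3's fusion of pre-event and post-event classifier outputs into a single damage probability could, for instance, take the form of a weighted combination in log-odds space. The sketch below is a hypothetical illustration; the weights and the functional form are placeholders, not the thesis' trained fusion model.

```python
import math


def fuse_damage_probability(p_pre: float, p_post: float,
                            w_pre: float = 0.4,
                            w_post: float = 0.6) -> float:
    """Combine a pre-event vulnerability score and a post-event damage
    score via a weighted average of their log-odds, then map back to a
    probability. Weights here are illustrative placeholders."""
    def logit(p):
        return math.log(p / (1.0 - p))

    z = w_pre * logit(p_pre) + w_post * logit(p_post)
    return 1.0 / (1.0 + math.exp(-z))


# A moderately vulnerable asset with strong post-event damage evidence.
print(round(fuse_damage_probability(0.7, 0.9), 3))  # -> approx. 0.84
```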

Book chapters on the topic "Event-driven vision":

1. Lin, Songnan, Jiawei Zhang, Jinshan Pan, Zhe Jiang, Dongqing Zou, Yongtian Wang, Jing Chen, and Jimmy Ren. "Learning Event-Driven Video Deblurring and Interpolation." In Computer Vision – ECCV 2020, 695–710. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58598-3_41.

2. Yuen, Jenny, and Antonio Torralba. "A Data-Driven Approach for Event Prediction." In Computer Vision – ECCV 2010, 707–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15552-9_51.

3. Zaharescu, Andrei, and Richard Wildes. "Anomalous Behaviour Detection Using Spatiotemporal Oriented Energies, Subset Inclusion Histogram Comparison and Event-Driven Processing." In Computer Vision – ECCV 2010, 563–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15549-9_41.

4. Schmitt, R. H., R. Kiesel, D. Buschmann, S. Cramer, C. Enslin, M. Fischer, T. Gries, et al. "Improving Shop Floor-Near Production Management Through Data-Driven Insights." In Internet of Production, 1–23. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-030-98062-7_16-1.

Abstract:
In short-term production management of the Internet of Production (IoP) the vision of a Production Control Center is pursued, in which interlinked decision-support applications contribute to increasing decision-making quality and speed. The applications developed focus in particular on use cases near the shop floor with an emphasis on the key topics of production planning and control, production system configuration, and quality control loops. Within the Predictive Quality application, predictive models are used to derive insights from production data and subsequently improve the process- and product-related quality as well as enable automated Root Cause Analysis. The Parameter Prediction application uses invertible neural networks to predict process parameters that can be used to produce components with desired quality properties. The application Production Scheduling investigates the feasibility of applying reinforcement learning to common scheduling tasks in production and compares the performance of trained reinforcement learning agents to traditional methods. In the two applications Deviation Detection and Process Analyzer, the potentials of process mining in the context of production management are investigated. While the Deviation Detection application is designed to identify and mitigate performance and compliance deviations in production systems, the Process Analyzer concept enables the semi-automated detection of weaknesses in business and production processes utilizing event logs. With regard to the overall vision of the IoP, the developed applications contribute significantly to the intended interdisciplinarity of production and information technology. For example, application-specific digital shadows are drafted based on the ongoing research work, and the applications are prototypically embedded in the IoP.

5. Schmitt, Robert H., Raphael Kiesel, Daniel Buschmann, Simon Cramer, Chrismarie Enslin, Markus Fischer, Thomas Gries, et al. "Improving Shop Floor-Near Production Management Through Data-Driven Insights." In Internet of Production, 367–90. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-44497-5_16.

Abstract:
In short-term production management of the Internet of Production (IoP) the vision of a Production Control Center is pursued, in which interlinked decision-support applications contribute to increasing decision-making quality and speed. The applications developed focus in particular on use cases near the shop floor with an emphasis on the key topics of production planning and control, production system configuration, and quality control loops. Within the Predictive Quality application, predictive models are used to derive insights from production data and subsequently improve the process- and product-related quality as well as enable automated Root Cause Analysis. The Parameter Prediction application uses invertible neural networks to predict process parameters that can be used to produce components with desired quality properties. The application Production Scheduling investigates the feasibility of applying reinforcement learning to common scheduling tasks in production and compares the performance of trained reinforcement learning agents to traditional methods. In the two applications Deviation Detection and Process Analyzer, the potentials of process mining in the context of production management are investigated. While the Deviation Detection application is designed to identify and mitigate performance and compliance deviations in production systems, the Process Analyzer concept enables the semi-automated detection of weaknesses in business and production processes utilizing event logs. With regard to the overall vision of the IoP, the developed applications contribute significantly to the intended interdisciplinarity of production and information technology. For example, application-specific digital shadows are drafted based on the ongoing research work, and the applications are prototypically embedded in the IoP.

6. Vandenbroucke, Gabriel Marin, Simon Gérard, and Anthony May. "The Impact of the Rio 2016 Olympic and Paralympic Games on the Visitor Economy: A Human Rights Perspective." In Managing Events, Festivals and the Visitor Economy: Concepts, Collaborations and Cases, 145–59. Wallingford: CABI, 2021. http://dx.doi.org/10.1079/9781789242843.00011.

Abstract:
The overall findings of this research point to a mix of positive and negative human rights impacts of the Rio 2016 Olympic and Paralympic Games, and on the visitor economy of the host city. On a positive note, affirmative action included persons with disabilities and from underprivileged communities in the workforce. New sports and leisure centres were built. Freedom of expression and association was reinforced by protesters demonstrating and using the platform of the event to raise issues. Several initiatives by the Organizing Committee, government, companies, and associations constituted positive mechanisms for leverage of the human rights to education and to participate in the cultural life of the community, albeit with limited long-term impacts. These wider economic and social successes associated with the hosting of the Games can positively contribute to the quality and inclusivity of the visitor economy. On the negative side, in the course of redevelopment, the Games' land use displaced thousands of people, violating the right to housing and several other human rights through abusive practices used by the government in the eviction process. Under the pretext of creating safe spaces for visitors and safeguarding their image of the city, the government's violence towards poor and black communities was aggravated, with the militarisation of the city impacting on the right to life, protection, education, and justice. Attempting to mask the city's socio-economic problems and undesirable aspects for sponsors and visitors, freedom of expression was undermined as protesters were targeted by the police and street vendors were driven out of public spaces.

7. Ahuja, Neel. "Weather as War." In Planetary Specters, 131–60. University of North Carolina Press, 2021. http://dx.doi.org/10.5149/northcarolina/9781469664477.003.0005.

Abstract:
This chapter examines how the Syrian civil war has been configured by security experts, climate scientists, and Western journalists as a “climate war,” particularly in the wake of a 2015 study claiming that drought was the trigger event for the uprisings. Detailing how narratives of racial disability are used to buttress claims that the Syrian uprisings are climate conflicts requiring humanitarian intervention, Ahuja argues that visions of climate-driven resource conflict attempt to skirt the complex political and economic underpinnings of Syrian resistance to the Assad government. By focusing on accounts of aerial bombing and infrastructure collapse during the war, it is possible to give an alternative account of how environmental factors relate to the war and to witness other histories of environmental resistance in the conflict, including efforts to redistribute land and develop ecological collectives in Rojava. This reflects some potential alternatives to narratives of scarcity-driven climate migration, as forms of ecological thought can envision collective approaches to land, wealth, and interspecies relations.

8. Gross, Alan G. "Rachel Carson: The Ethical Sublime." In The Scientific Sublime. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190637774.003.0012.

Abstract:
Rachel Carson has become Saint Rachel, canonized time and again by the environmental movement. May 27, 2007, marked the 100th anniversary of her birth. In that year, the Cape Cod Museum of Natural History in Brewster, Massachusetts, hosted a major Rachel Carson centennial exhibition. The show was a partnership project of the museum and the US Fish and Wildlife Service, and it featured artifacts, writings, photographs, and artwork from Carson's life and career. In 2012, the 50th anniversary of the publication of Silent Spring was commemorated by a Coastal Maine Botanical Gardens event and exhibit. From September 7 through October 23, the exhibit presented artwork, photos, and interpretive panels in the visitor center. Canonization, and the posthumous fame it bestows, comes at a price: the disappearance of the Rachel Carson whose work was driven by two forces. The first was the love of nature. A perceptive review of The Sea Around Us compares Carson with great science writers who share with her a love of nature: "It is not an accident of history that Gilbert White and Charles Darwin described flora and fauna with genius, nor that the great mariners and voyagers in distant lands can re-create their experiences as part of our own. They wrote as they saw and their honest, questing eye, their care for detail is raised to the power of art by a deep-felt love of nature, and respect for all things that live and move and have their being." The second force was the love of a woman, Dorothy Freeman, a person who in Carson's view made her later life endurable and her later work possible: "All I am certain of is this: that it is quite necessary for me to know that there is someone who is deeply devoted to me as a person, and who also has the capacity and the depth of understanding to share, vicariously, the sometimes crushing burden of creative effort, recognizing the heartache, the great weariness of mind and body, the occasional black despair it may involve—someone who cherishes me and what I am trying to create, as well."

Conference papers on the topic "Event-driven vision":

1. Delbruck, Tobi, Bernabe Linares-Barranco, Eugenio Culurciello, and Christoph Posch. "Activity-Driven, Event-Based Vision Sensors." In 2010 IEEE International Symposium on Circuits and Systems - ISCAS 2010. IEEE, 2010. http://dx.doi.org/10.1109/iscas.2010.5537149.

2. Camunas-Mesa, L. A., T. Serrano-Gotarredona, B. Linares-Barranco, S. Ieng, and R. Benosman. "Event-Driven Stereo Vision with Orientation Filters." In 2014 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2014. http://dx.doi.org/10.1109/iscas.2014.6865114.

3. Belbachir, Ahmed Nabil, Stephan Schraml, and Aneta Nowakowska. "Event-Driven Stereo Vision for Fall Detection." In 2011 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops). IEEE, 2011. http://dx.doi.org/10.1109/cvprw.2011.5981819.

4. Wang, Zihao W., Weixin Jiang, Kuan He, Boxin Shi, Aggelos Katsaggelos, and Oliver Cossairt. "Event-Driven Video Frame Synthesis." In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00532.

5. Muller-Budack, Eric, Matthias Springstein, Sherzod Hakimov, Kevin Mrutzek, and Ralph Ewerth. "Ontology-Driven Event Type Classification in Images." In 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2021. http://dx.doi.org/10.1109/wacv48630.2021.00297.

6. Messikommer, Nico, Carter Fang, Mathias Gehrig, and Davide Scaramuzza. "Data-Driven Feature Tracking for Event Cameras." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.00546.

7. Czarlinska, Alexandra, and Deepa Kundur. "Event-Driven Visual Sensor Networks: Issues in Reliability." In 2008 IEEE Workshop on Applications of Computer Vision (WACV). IEEE, 2008. http://dx.doi.org/10.1109/wacv.2008.4544039.

8. Camunas-Mesa, L. A., T. Serrano-Gotarredona, and B. Linares-Barranco. "Event-Driven Sensing and Processing for High-Speed Robotic Vision." In 2014 IEEE Biomedical Circuits and Systems Conference (BioCAS). IEEE, 2014. http://dx.doi.org/10.1109/biocas.2014.6981776.

9. Schraml, Stephan, Ahmed Nabil Belbachir, and Horst Bischof. "Event-Driven Stereo Matching for Real-Time 3D Panoramic Vision." In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2015. http://dx.doi.org/10.1109/cvpr.2015.7298644.

10. Eibensteiner, Florian, Hans Georg Brachtendorf, and Josef Scharinger. "Event-Driven Stereo Vision Algorithm Based on Silicon Retina Sensors." In 2017 27th International Conference Radioelektronika (RADIOELEKTRONIKA). IEEE, 2017. http://dx.doi.org/10.1109/radioelek.2017.7937602.
