Journal articles on the topic 'Event-driven vision'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Event-driven vision.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Sun, Ruolin, Dianxi Shi, Yongjun Zhang, Ruihao Li, and Ruoxiang Li. "Data-Driven Technology in Event-Based Vision." Complexity 2021 (March 27, 2021): 1–19. http://dx.doi.org/10.1155/2021/6689337.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Event cameras, which transmit per-pixel intensity changes, have emerged as a promising candidate in applications such as consumer electronics, industrial automation, and autonomous vehicles, owing to their efficiency and robustness. To maintain these inherent advantages, the trade-off between efficiency and accuracy stands as a priority in event-based algorithms. Thanks to the preponderance of deep learning techniques and the compatibility between bio-inspired spiking neural networks and event-based sensors, data-driven approaches have become a research hot spot and, together with dedicated hardware and datasets, constitute an emerging field named event-based data-driven technology. Focusing on data-driven technology in event-based vision, this paper first explicates the operating principle, advantages, and intrinsic nature of event cameras, as well as background knowledge in event-based vision, presenting an overview of this research field. Then, we explain why event-based data-driven technology has become a research focus, including reasons for the rise of event-based vision and the superiority of data-driven approaches over other event-based algorithms. Current status and future trends of event-based data-driven technology are presented successively in terms of hardware, datasets, and algorithms, providing guidance for future research. Generally, this paper reveals the great prospects of event-based data-driven technology and presents a comprehensive overview of this field, aiming at a more efficient and bio-inspired visual system to extract visual features from the external environment.
2

Camunas-Mesa, Luis, Carlos Zamarreno-Ramos, Alejandro Linares-Barranco, Antonio J. Acosta-Jimenez, Teresa Serrano-Gotarredona, and Bernabé Linares-Barranco. "An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors." IEEE Journal of Solid-State Circuits 47, no. 2 (February 2012): 504–17. http://dx.doi.org/10.1109/jssc.2011.2167409.

3

Semeniuta, Oleksandr, and Petter Falkman. "EPypes: a framework for building event-driven data processing pipelines." PeerJ Computer Science 5 (February 11, 2019): e176. http://dx.doi.org/10.7717/peerj-cs.176.

Abstract:
Many data processing systems are naturally modeled as pipelines, where data flows through a network of computational procedures. This representation is particularly suitable for computer vision algorithms, which in most cases possess complex logic and a large number of parameters to tune. In addition, online vision systems, such as those in the industrial automation context, have to communicate with other distributed nodes. When developing a vision system, one normally proceeds from ad hoc experimentation and prototyping to highly structured system integration. The early stages of this continuum are characterized by the challenges of developing a feasible algorithm, while the latter deal with composing the vision function with other components in a networked environment. In between, one strives to manage the complexity of the developed system, as well as to preserve existing knowledge. To tackle these challenges, this paper presents EPypes, an architecture and Python-based software framework for developing vision algorithms in the form of computational graphs and their integration with distributed systems based on publish-subscribe communication. EPypes facilitates flexibility of algorithm prototyping, as well as provides a structured approach to managing algorithm logic and exposing the developed pipelines as a part of online systems.
4

Liu, Shih-Chii, Bodo Rueckauer, Enea Ceolini, Adrian Huber, and Tobi Delbruck. "Event-Driven Sensing for Efficient Perception: Vision and Audition Algorithms." IEEE Signal Processing Magazine 36, no. 6 (November 2019): 29–37. http://dx.doi.org/10.1109/msp.2019.2928127.

5

Tominski, Christian. "Event-Based Concepts for User-Driven Visualization." Information Visualization 10, no. 1 (December 24, 2009): 65–81. http://dx.doi.org/10.1057/ivs.2009.32.

Abstract:
Visualization has become an increasingly important tool to support exploration and analysis of the large volumes of data we are facing today. However, the interests and needs of users are still not being considered sufficiently. The goal of this work is to shift the user into the focus. To that end, we apply the concept of event-based visualization that combines event-based methodology and visualization technology. Previous approaches that make use of events are mostly specific to a particular application case and hence cannot be applied otherwise. We introduce a novel general model of event-based visualization that comprises three fundamental stages. (1) Users are enabled to specify what their interests are. (2) During visualization, matches of these interests are sought in the data. (3) It is then possible to automatically adjust visual representations according to the detected matches. This way, it is possible to generate visual representations that better reflect what users need for their task at hand. The model's generality allows its application in many visualization contexts. We substantiate the general model with specific data-driven events that focus on the relational data so prevalent in today's visualization scenarios. We show how the developed methods and concepts can be implemented in an interactive event-based visualization framework, which includes event-enhanced visualizations for temporal and spatio-temporal data.
6

Roheda, Siddharth, Hamid Krim, Zhi-Quan Luo, and Tianfu Wu. "Event driven sensor fusion." Signal Processing 188 (November 2021): 108241. http://dx.doi.org/10.1016/j.sigpro.2021.108241.

7

Berjón, Roberto, Montserrat Mateos, M. Encarnación Beato, and Ana Fermoso García. "An Event Mesh for Event Driven IoT Applications." International Journal of Interactive Multimedia and Artificial Intelligence 7, no. 6 (2022): 54. http://dx.doi.org/10.9781/ijimai.2022.09.003.

8

Matsui, Chihiro, Kazuhide Higuchi, Shunsuke Koshino, and Ken Takeuchi. "Event data-based computation-in-memory (CiM) configuration by co-designing integrated in-sensor and CiM computing for extremely energy-efficient edge computing." Japanese Journal of Applied Physics 61, SC (April 7, 2022): SC1085. http://dx.doi.org/10.35848/1347-4065/ac5533.

Abstract:
This paper discusses co-designing integrated in-sensor and in-memory computing based on the analysis of event data and gives a system-level solution. By integrating an event-based vision sensor (EVS) as a sensor and event-driven computation-in-memory (CiM) as a processor, event data taken by the EVS are processed in CiM. In this work, an EVS is used to acquire the scenery from a driving car, and the event data are analyzed. Based on the EVS data characteristics of being temporally dense and spatially sparse, an event-driven SRAM-CiM is proposed for extremely energy-efficient edge computing. In the event-driven SRAM-CiM, a set of 8T-SRAMs stores the multiple-bit synaptic weights of spiking neural networks. Multiply-accumulate operation with the multiple-bit synaptic weights is demonstrated by pulse amplitude modulation and pulse width modulation. Considering future EVSs with high image resolution and high time resolution, the configuration of event-driven CiM for EVS is discussed.
9

Lenero-Bardallo, Juan Antonio, Teresa Serrano-Gotarredona, and Bernabé Linares-Barranco. "A 3.6 µs Latency Asynchronous Frame-Free Event-Driven Dynamic-Vision-Sensor." IEEE Journal of Solid-State Circuits 46, no. 6 (June 2011): 1443–55. http://dx.doi.org/10.1109/jssc.2011.2118490.

10

Schraml, Stephan, Ahmed Nabil Belbachir, and Horst Bischof. "An Event-Driven Stereo System for Real-Time 3-D 360° Panoramic Vision." IEEE Transactions on Industrial Electronics 63, no. 1 (January 2016): 418–28. http://dx.doi.org/10.1109/tie.2015.2477265.

11

Liu, Zhengfa, Guang Chen, Ya Wu, Jiatong Du, Jörg Conradt, and Alois Knoll. "Mixed Event-Frame Vision System for Daytime Preceding Vehicle Taillight Signal Measurement Using Event-Based Neuromorphic Vision Sensor." Journal of Advanced Transportation 2022 (September 22, 2022): 1–20. http://dx.doi.org/10.1155/2022/2673191.

Abstract:
An important aspect of the perception system for intelligent vehicles is the detection and signal measurement of vehicle taillights. In this work, we present a novel vision-based measurement (VBM) system, using an event-based neuromorphic vision sensor, which is able to detect and measure the vehicle taillight signal robustly. To the best of our knowledge, this is the first time a neuromorphic vision sensor has been applied to vehicle taillight signal measurement. The event-based neuromorphic vision sensor is a bioinspired sensor that records pixel-level intensity changes, called events, as well as the whole picture of the scene. The events naturally respond to illumination changes (such as the ON and OFF states of taillights) in the scene with very low latency. Moreover, its higher dynamic range increases the sensor's sensitivity and performance in poor lighting conditions. In this paper, we consider an event-driven solution to measure vehicle taillight signals. In contrast to most existing work, which relies purely on standard frame-based cameras for taillight signal measurement, the presented mixed event/frame system extracts frequency-domain features from the spatial and temporal signal of each taillight region and measures the taillight signal by combining active-pixel sensor (APS) frames and dynamic vision sensor (DVS) events. A thresholding algorithm and a learned classifier are proposed to jointly achieve brake-light and turn-light signal measurement. Experiments with real traffic scenes demonstrate the performance of measuring taillight signals under different traffic conditions with a single event-based neuromorphic vision sensor. The results show the high potential of the event-based neuromorphic vision sensor for optical signal measurement applications, especially in dynamic environments.
12

She, Xueyuan, and Saibal Mukhopadhyay. "SPEED: Spiking Neural Network With Event-Driven Unsupervised Learning and Near-Real-Time Inference for Event-Based Vision." IEEE Sensors Journal 21, no. 18 (September 15, 2021): 20578–88. http://dx.doi.org/10.1109/jsen.2021.3098013.

13

van Horssen, Eelco P., Jeroen A. A. van Hooijdonk, Duarte Antunes, and W. P. M. H. Heemels. "Event- and Deadline-Driven Control of a Self-Localizing Robot With Vision-Induced Delays." IEEE Transactions on Industrial Electronics 67, no. 2 (February 2020): 1212–21. http://dx.doi.org/10.1109/tie.2019.2899553.

14

Yoon, Rina, Seokjin Oh, Seungmyeong Cho, and Kyeong-Sik Min. "Memristor–CMOS Hybrid Circuits Implementing Event-Driven Neural Networks for Dynamic Vision Sensor Camera." Micromachines 15, no. 4 (March 22, 2024): 426. http://dx.doi.org/10.3390/mi15040426.

Abstract:
For processing streaming events from a Dynamic Vision Sensor (DVS) camera, two types of neural networks can be considered. One is the spiking neural network, whose simple spike-based computation is suitable for low power consumption, but the discontinuity in spikes can make training complicated in terms of hardware. The other is the digital Complementary Metal Oxide Semiconductor (CMOS)-based neural network, which can be trained directly using the normal backpropagation algorithm. However, its hardware and energy overhead can be significantly large, because all streaming events must be accumulated and converted into histogram data, which requires a large amount of memory such as SRAM. In this paper, to combine spike-based operation with the normal backpropagation algorithm, memristor–CMOS hybrid circuits are proposed for implementing event-driven neural networks in hardware. The proposed hybrid circuits are composed of input neurons, synaptic crossbars, hidden/output neurons, and a neural network controller. First, the input neurons perform preprocessing of the DVS camera's events. The events are converted to histogram data using very simple memristor-based latches in the input neurons. After preprocessing, the converted histogram data are delivered to an ANN implemented using synaptic memristor crossbars. The memristor crossbars can perform low-power Multiply–Accumulate (MAC) calculations according to the memristor's current–voltage relationship. The hidden and output neurons convert the crossbar's column currents to output voltages according to the Rectified Linear Unit (ReLU) activation function. The neural network controller adjusts the MAC calculation frequency according to the workload of the event computation. Moreover, the controller can disable the MAC calculation clock automatically to minimize unnecessary power consumption.
The proposed hybrid circuits have been verified by circuit simulation for several event-based datasets such as POKER-DVS and MNIST-DVS. The circuit simulation results indicate that the performance of the proposed neural network is degraded by as little as 0.5% while saving as much as 79% in power consumption for POKER-DVS. For the MNIST-DVS dataset, the recognition rate of the proposed scheme is lower by 0.75% compared to the conventional one. Despite this small loss, power consumption can be reduced by as much as 75% with the proposed scheme.
15

Bergner, Florian, Emmanuel Dean-Leon, Julio Rogelio Guadarrama-Olvera, and Gordon Cheng. "Evaluation of a Large Scale Event Driven Robot Skin." IEEE Robotics and Automation Letters 4, no. 4 (October 2019): 4247–54. http://dx.doi.org/10.1109/lra.2019.2930493.

16

Xu, Zimin, Guoli Wang, and Xuemei Guo. "Event-driven daily activity recognition with enhanced emergent modeling." Pattern Recognition 135 (March 2023): 109149. http://dx.doi.org/10.1016/j.patcog.2022.109149.

17

Zhang, Shixiong, Wenmin Wang, Honglei Li, and Shenyong Zhang. "EVtracker: An Event-Driven Spatiotemporal Method for Dynamic Object Tracking." Sensors 22, no. 16 (August 15, 2022): 6090. http://dx.doi.org/10.3390/s22166090.

Abstract:
An event camera is a novel bio-inspired sensor that effectively compensates for the shortcomings of current frame cameras, which include high latency, low dynamic range, motion blur, etc. Rather than capturing images at a fixed frame rate, an event camera produces an asynchronous signal by measuring the brightness change of each pixel. Consequently, an appropriate algorithm framework that can handle the unique data types of event-based vision is required. In this paper, we propose a dynamic object tracking framework using an event camera to achieve long-term stable tracking of event objects. One of the key novel features of our approach is to adopt an adaptive strategy that adjusts the spatiotemporal domain of event data. To achieve this, we reconstruct event images from high-speed asynchronous streaming data via online learning. Additionally, we apply the Siamese network to extract features from event data. In contrast to earlier models that only extract hand-crafted features, our method provides powerful feature description and a more flexible reconstruction strategy for event data. We assess our algorithm in three challenging scenarios: 6-DoF (six degrees of freedom), translation, and rotation. Unlike fixed cameras in traditional object tracking tasks, all three tracking scenarios involve the simultaneous violent rotation and shaking of both the camera and objects. Results from extensive experiments suggest that our proposed approach achieves superior accuracy and robustness compared to other state-of-the-art methods. Without reducing time efficiency, our novel method exhibits a 30% increase in accuracy over other recent models. Furthermore, results indicate that event cameras are capable of robust object tracking, which is a task that conventional cameras cannot adequately perform, especially for super-fast motion tracking and challenging lighting situations.
18

Akolkar, Himanshu, Cedric Meyer, Xavier Clady, Olivier Marre, Chiara Bartolozzi, Stefano Panzeri, and Ryad Benosman. "What Can Neuromorphic Event-Driven Precise Timing Add to Spike-Based Pattern Recognition?" Neural Computation 27, no. 3 (March 2015): 561–93. http://dx.doi.org/10.1162/neco_a_00703.

Abstract:
This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based modeling of biological visual systems. The use of images naturally leads to generating incorrect, artificial, and redundant spike timings and, more important, also contradicts biological findings indicating that visual processing is massively parallel, asynchronous, and of high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting in data that are optimally sparse in space and time: pixel-individual and precisely timed only if new (previously unknown) information is available (event based). This letter uses the high-temporal-resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30–60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high-temporal-resolution acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision.
Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
19

Semeniuta, Oleksandr, and Petter Falkman. "Event-driven industrial robot control architecture for the Adept V+ platform." PeerJ Computer Science 5 (July 29, 2019): e207. http://dx.doi.org/10.7717/peerj-cs.207.

Abstract:
Modern industrial robotic systems are highly interconnected. They operate in a distributed environment and communicate with sensors, computer vision systems, mechatronic devices, and computational components. On the fundamental level, communication and coordination between all parties in such a distributed system are characterized by discrete event behavior. The latter is largely attributed to the specifics of communication over the network, which, in turn, facilitates asynchronous programming and explicit event handling. In addition, on the conceptual level, events are an important building block for realizing reactivity and coordination. Event-driven architecture has manifested its effectiveness for building loosely coupled systems based on publish-subscribe middleware, either general-purpose or robotics-oriented. Despite all the advances in middleware, industrial robots remain difficult to program in the context of distributed systems, to a large extent due to the limitations of the native robot platforms. This paper proposes an architecture for flexible event-based control of industrial robots based on the Adept V+ platform. The architecture is based on the robot controller providing a TCP/IP server and a collection of robot skills, and a high-level control module deployed to a dedicated computing device. The control module possesses bidirectional communication with the robot controller and publish/subscribe messaging with external systems. It is programmed in an asynchronous style using pyadept, a Python library based on Python coroutines, the AsyncIO event loop, and ZeroMQ middleware. The proposed solution facilitates the integration of Adept robots into distributed environments and the building of more flexible robotic solutions with event-based logic.
20

Krijnders, J. D., M. E. Niessen, and T. C. Andringa. "Sound event recognition through expectancy-based evaluation of signal-driven hypotheses." Pattern Recognition Letters 31, no. 12 (September 2010): 1552–59. http://dx.doi.org/10.1016/j.patrec.2009.11.004.

21

Kepski, Michal, and Bogdan Kwolek. "Event‐driven system for fall detection using body‐worn accelerometer and depth sensor." IET Computer Vision 12, no. 1 (November 27, 2017): 48–58. http://dx.doi.org/10.1049/iet-cvi.2017.0119.

22

Cottini, Nicola, Leonardo Gasparini, Marco De Nicola, Nicola Massari, and Massimo Gottardi. "A CMOS Ultra-Low Power Vision Sensor With Image Compression and Embedded Event-Driven Energy-Management." IEEE Journal on Emerging and Selected Topics in Circuits and Systems 1, no. 3 (September 2011): 299–307. http://dx.doi.org/10.1109/jetcas.2011.2167072.

23

Pardo, Fernando, Càndid Reig, José A. Boluda, and Francisco Vegara. "A 4K-Input High-Speed Winner-Take-All (WTA) Circuit with Single-Winner Selection for Change-Driven Vision Sensors." Sensors 19, no. 2 (January 21, 2019): 437. http://dx.doi.org/10.3390/s19020437.

Abstract:
Winner-Take-All (WTA) circuits play an important role in applications where a single element must be selected according to its relevance. They have been successfully applied in neural networks and vision sensors. These applications usually require a large number of inputs for the WTA circuit, especially vision applications, where thousands to millions of pixels may compete to be selected. WTA circuits usually exhibit poor response-time scaling with the number of competitors, and most current WTA implementations are designed to work with fewer than 100 inputs. Another problem related to the large number of inputs is the difficulty of selecting just one winner, since many competitors may have differences below the WTA resolution. In this paper, a WTA circuit is presented that handles more than four thousand inputs, to the best of our knowledge the largest WTA to date, with response times below a microsecond and a guarantee of single-winner selection. This performance is obtained by combining a standard analog WTA circuit with a fast digital single-winner selector at almost no size penalty. This WTA circuit has been successfully employed in the fabrication of a Selective Change-Driven Vision Sensor based on 180 nm CMOS technology. Both simulated and experimental results are presented in the paper, showing that a single-pixel event can be selected in just 560 ns, and a multipixel event can be processed in 100 µs. Similar results with a conventional approach would require a camera working at more than 1 Mfps for the single-pixel event detection, and 10 kfps for the whole multipixel event to be processed.
24

Nur Khozin, Nursaid. "ISLAMIC EDUCATION REORIENTATION IN GROWING THE FITRAH GOODNESS IN THE ERA OF GLOBALIZATION." al-Iltizam: Jurnal Pendidikan Agama Islam 4, no. 1 (May 30, 2019): 121. http://dx.doi.org/10.33477/alt.v4i1.755.

Abstract:
The aim of the study was to find out the position of Islamic education institutions in the era of globalization. Family institutions are the main foundation for the growth and development of today's generation. The role of the family is driven by both parents, the role of the school is driven by teachers, the community is driven by community leaders, and in universities it is driven by lecturers, to foster religious nature in internalizing faith and cultivating goodness. This study takes a descriptive qualitative research approach, which describes a phenomenon or event that occurred in the field. For data collection, the researchers used observation as well as Focus Group Discussions (FGD), a form of data collection through discussion in groups. Data analysis techniques use data reduction, data presentation, and conclusion drawing. The study found that there needs to be a reorientation of the vision and mission of Islamic education, with a great vision and mission that produces great output. If participants are to return to their nature, then Islamic education requires a great willingness to realize its vision and mission, so that the fitrah of goodness that exists in students can grow well. There needs to be a reorientation of the Islamic education strategy to improve the quality and equity of education in all educational institutions. To foster fitrah, it is necessary to develop the character of the vision of ulul al-bab, al-ulama, al-muzakki, ahl al-dzikr, and al-rasikhuna fi al-‘ilm, and to position educators who are authoritative and have sacredness. It is necessary to optimize learning and internalize the main tasks of teachers as a profession, attending to the humanity and social quality of educators of Islamic education from various sides in the era of globalization.
Keywords: Islamic Education, Fitrah Goodness, Era of Globalization
25

Qu, Qiang, Yiran Shen, Xiaoming Chen, Yuk Ying Chung, and Tongliang Liu. "E2HQV: High-Quality Video Generation from Event Camera via Theory-Inspired Model-Aided Deep Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4632–40. http://dx.doi.org/10.1609/aaai.v38i5.28263.

Abstract:
The bio-inspired event cameras, or dynamic vision sensors, are capable of asynchronously capturing per-pixel brightness changes (called event-streams) with high temporal resolution and high dynamic range. However, the non-structural spatial-temporal event-streams make it challenging to provide intuitive visualization with rich semantic information for human vision. This calls for events-to-video (E2V) solutions, which take event-streams as input and generate high-quality video frames for intuitive visualization. However, current solutions are predominantly data-driven, without considering prior knowledge of the underlying statistics relating event-streams and video frames. They rely heavily on the non-linearity and generalization capability of deep neural networks and thus struggle to reconstruct detailed textures when scenes are complex. In this work, we propose E2HQV, a novel E2V paradigm designed to produce high-quality video frames from events. This approach leverages a model-aided deep learning framework, underpinned by a theory-inspired E2V model, which is meticulously derived from the fundamental imaging principles of event cameras. To deal with the issue of state reset in the recurrent components of E2HQV, we also design a temporal shift embedding module to further improve the quality of the video frames. Comprehensive evaluations on real-world event camera datasets validate our approach, with E2HQV notably outperforming state-of-the-art approaches, e.g., surpassing the second best by over 40% on some evaluation metrics.
26

Sarhan, Amany M., Ahmed I. Saleh, and Ramy K. Elsadek. "A Reliable Event-Driven Strategy for Real-Time Multiple Object Tracking Using Static Cameras." Advances in Multimedia 2011 (2011): 1–20. http://dx.doi.org/10.1155/2011/976463.

Abstract:
Because of its importance in computer vision and surveillance systems, object tracking has progressed rapidly over the last two decades. Research on such systems still faces several theoretical and technical problems that badly impact not only the accuracy of position measurements but also the continuity of tracking. In this paper, a novel strategy for tracking multiple objects using static cameras is introduced, which can be used to provide a cheap, easy-to-install, and robust tracking system. The proposed tracking strategy is based on scenes captured by a number of static video cameras. Each camera is attached to a workstation that analyzes its stream. All workstations are connected directly to the tracking server, which harmonizes the system, collects the data, and creates the output spatio-temporal database. Our contribution addresses two issues. The first is to present a new methodology for transforming the image coordinates of an object to its real coordinates. The second is to offer a flexible event-based object tracking strategy. The proposed tracking strategy has been tested over a CAD of a soccer game environment. Preliminary experimental results show the robust performance of the proposed tracking strategy.
27

Perez-Carrasco, J. A., Bo Zhao, C. Serrano, B. Acha, T. Serrano-Gotarredona, Shouchun Chen, and B. Linares-Barranco. "Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate Coding and Coincidence Processing--Application to Feedforward ConvNets." IEEE Transactions on Pattern Analysis and Machine Intelligence 35, no. 11 (November 2013): 2706–19. http://dx.doi.org/10.1109/tpami.2013.71.

28

TSUZAKI, Tomoya, and Shinsuke YASUKAWA. "Construction of an Event-Driven Binocular Vision System and Proposal of a Time-Surface-Based Disparity Calculation Method." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2023 (2023): 1P1-H02. http://dx.doi.org/10.1299/jsmermd.2023.1p1-h02.

29

Wang, Hui, Youming Li, Tingcheng Chang, Shengming Chang, and Yexian Fan. "Event-Driven Sensor Deployment in an Underwater Environment Using a Distributed Hybrid Fish Swarm Optimization Algorithm." Applied Sciences 8, no. 9 (September 13, 2018): 1638. http://dx.doi.org/10.3390/app8091638.

Abstract:
In open and complex underwater environments, targets to be monitored are highly dynamic and exhibit great uncertainty. To optimize monitoring target coverage, the development of a method for adjusting sensor positions based on environments and targets is of crucial importance. In this paper, we propose a distributed hybrid fish swarm optimization algorithm (DHFSOA) based on the influence of water flow and the operation of an artificial fish swarm system to improve the coverage efficacy of the event set and to avoid blind movements of sensor nodes. First, by simulating the behavior of foraging fish, sensor nodes autonomously tend to cover events, with congestion control being used to match node distribution density to event distribution density. Second, the construction of an information pool is used to achieve information-sharing between nodes within the network connection range, to increase the nodes’ field of vision, and to enhance their global search abilities. Finally, we conduct extensive simulation experiments to evaluate network performance in different deployment environments. The results show that the proposed DHFSOA performs well in terms of coverage efficacy, energy efficiency, and convergence rate of the event set.
30

Pfeiffer, Friedrich. "The TUM walking machines." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 365, no. 1850 (November 17, 2006): 109–31. http://dx.doi.org/10.1098/rsta.2006.1922.

Abstract:
This paper presents some aspects of walking machine design, with special emphasis on the three machines MAX, MORITZ and JOHNNIE developed at the Technical University of Munich over the last 20 years. The design of such machines is discussed as an iterative process that improves the layout with every iteration. The control concepts are event-driven and follow logical rules, largely transferred from neurobiological findings. For the six-legged machine MAX, nearly perfect autonomy could be achieved, whereas for the biped JOHNNIE a certain degree of autonomy was realized by a vision system with appropriate decision algorithms. This vision system was developed by the group of Prof. G. Schmidt, TU-München. A more detailed description of the design and realization is presented for the biped JOHNNIE.
31

Huang, Xiaoqian, Rajkumar Muthusamy, Eman Hassan, Zhenwei Niu, Lakmal Seneviratne, Dongming Gan, and Yahya Zweiri. "Neuromorphic Vision Based Contact-Level Classification in Robotic Grasping Applications." Sensors 20, no. 17 (August 21, 2020): 4724. http://dx.doi.org/10.3390/s20174724.

Abstract:
In recent years, robotic sorting has become widely used in industry, driven by necessity and opportunity. In this paper, a novel neuromorphic vision-based tactile sensing approach for robotic sorting applications is proposed. This approach has lower latency and power consumption than conventional vision-based tactile sensing techniques. Two machine learning (ML) methods, namely Support Vector Machine (SVM) and Dynamic Time Warping-K Nearest Neighbor (DTW-KNN), are developed to classify material hardness, object size, and grasping force. An Event-Based Object Grasping (EBOG) experimental setup is developed to acquire datasets, with 243 experiments conducted to train the proposed classifiers. Based on the classifiers' predictions, objects can be sorted automatically. If the prediction accuracy is below a certain threshold, the gripper re-adjusts and re-grasps until reaching a proper grasp. The proposed ML method achieves good prediction accuracy, which shows the effectiveness and applicability of the approach. The experimental results show that the developed SVM model outperforms the DTW-KNN model in terms of accuracy and efficiency for real-time contact-level classification.
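The DTW-KNN classifier mentioned above combines a Dynamic Time Warping distance with nearest-neighbor voting. The paper's actual feature pipeline is not given in the abstract, so this is a minimal illustrative sketch over 1-D sequences with hypothetical hardness labels:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Allow stretching either sequence in time (insertion, deletion, match).
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def knn_dtw_classify(query, train, k=1):
    """Classify `query` by majority vote among its k DTW-nearest training sequences.

    `train` is a list of (sequence, label) pairs.
    """
    ranked = sorted(train, key=lambda item: dtw_distance(query, item[0]))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)
```

A usage example with made-up signals: `knn_dtw_classify([0, 0, 1, 1, 2], [([0, 0, 0, 1, 2], "soft"), ([0, 1, 3, 5, 5], "hard")])` returns `"soft"`, since DTW can warp away the small temporal misalignment.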
32

Zhang, Jin, Cheng Wu, and Yiming Wang. "Human Fall Detection Based on Body Posture Spatio-Temporal Evolution." Sensors 20, no. 3 (February 10, 2020): 946. http://dx.doi.org/10.3390/s20030946.

Abstract:
Abnormal falls in public places pose significant safety hazards and can easily lead to serious consequences, such as trampling. Vision-driven fall event detection has the major advantage of being non-invasive. However, in actual scenes, fall behavior is highly diverse, resulting in strong instability in detection. Based on a study of the stability of human body dynamics, the article proposes a new model of human posture representation of fall behavior, called the "five-point inverted pendulum model", and uses an improved two-branch multi-stage convolutional neural network (M-CNN) to extract and construct the inverted pendulum structure of human posture in real-world complex scenes. Furthermore, we consider the continuity of the fall event in time series, use multimedia analytics to observe the time-series changes of the human inverted pendulum structure, and construct a spatio-temporal evolution map of human posture movement. Finally, based on the integrated results of computer vision and multimedia analytics, we reveal the visual characteristics of the spatio-temporal evolution of human posture in a potentially unstable state, and explore two key features of human fall behavior: motion rotational energy and generalized force of motion. Experimental results in actual scenes show that the method has strong robustness, wide universality, and high detection accuracy.
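One of the two key features above, motion rotational energy, can be approximated from two consecutive poses of a tracked body point rotating about a pivot, using E = ½Iω² with I = mr². The paper's exact formulation over the five-point pendulum is not given in the abstract, so this is a generic sketch; the mass and keypoints are illustrative:

```python
import math

def rotational_energy(mass, pivot, point_prev, point_curr, dt):
    """Approximate rotational energy of a tracked body point about a pivot.

    Uses the angle swept between two consecutive frames dt seconds apart:
    E = 0.5 * I * omega^2, with moment of inertia I = mass * r^2.
    """
    def angle(p):
        return math.atan2(p[1] - pivot[1], p[0] - pivot[0])

    r = math.hypot(point_curr[0] - pivot[0], point_curr[1] - pivot[1])
    omega = (angle(point_curr) - angle(point_prev)) / dt  # angular velocity (rad/s)
    return 0.5 * mass * r * r * omega * omega
```

For instance, a point of mass 2 swinging a quarter turn about the origin at unit radius in one second carries energy 0.5 · 2 · 1 · (π/2)².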
33

Grabenhorst, Matthias, Laurence T. Maloney, David Poeppel, and Georgios Michalareas. "Two sources of uncertainty independently modulate temporal expectancy." Proceedings of the National Academy of Sciences 118, no. 16 (April 14, 2021): e2019342118. http://dx.doi.org/10.1073/pnas.2019342118.

Abstract:
The environment is shaped by two sources of temporal uncertainty: the discrete probability of whether an event will occur and—if it does—the continuous probability of when it will happen. These two types of uncertainty are fundamental to every form of anticipatory behavior including learning, decision-making, and motor planning. It remains unknown how the brain models the two uncertainty parameters and how they interact in anticipation. It is commonly assumed that the discrete probability of whether an event will occur has a fixed effect on event expectancy over time. In contrast, we first demonstrate that this pattern is highly dynamic and monotonically increases across time. Intriguingly, this behavior is independent of the continuous probability of when an event will occur. The effect of this continuous probability on anticipation is commonly proposed to be driven by the hazard rate (HR) of events. We next show that the HR fails to account for behavior and propose a model of event expectancy based on the probability density function of events. Our results hold for both vision and audition, suggesting independence of the representation of the two uncertainties from sensory input modality. These findings enrich the understanding of fundamental anticipatory processes and have provocative implications for many aspects of behavior and its neural underpinnings.
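The hazard rate (HR) discussed above is the probability that the event occurs now given that it has not occurred yet; in discrete time, h[t] = p[t] / Σ_{s≥t} p[s]. A minimal sketch, illustrating why even a flat distribution of event times yields a monotonically rising hazard as time elapses:

```python
def hazard_rate(pdf):
    """Discrete hazard rate: probability of the event at step t,
    conditioned on it not having occurred before t."""
    h = []
    remaining = 1.0  # survival probability P(event at t or later)
    for p in pdf:
        h.append(p / remaining if remaining > 1e-12 else 0.0)
        remaining -= p
    return h
```

For a uniform event-time distribution `[0.25, 0.25, 0.25, 0.25]`, the hazard climbs from 0.25 through 1/3 and 0.5 to 1.0 at the last possible moment.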
34

Copeland, Bruce R., Min Chen, Brad D. Wade, and Linda S. Powers. "A noise-driven strategy for background estimation and event detection in data streams." Signal Processing 86, no. 12 (December 2006): 3739–51. http://dx.doi.org/10.1016/j.sigpro.2006.03.002.

35

Hameed, Shameem, Swapnaa Jayaraman, Melissa Ballard, and Nadine Sarter. "Guiding Visual Attention by Exploiting Crossmodal Spatial Links: An Application in Air Traffic Control." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 51, no. 4 (October 2007): 220–24. http://dx.doi.org/10.1177/154193120705100416.

Abstract:
Recent research on multimodal information processing has provided evidence for the existence of crossmodal links in spatial attention between vision, audition, and touch. The present study examined whether these links can be exploited to support attention allocation in workplaces that involve competing task demands and the potential for visual data overload. In particular, the effectiveness of tactile cues for guiding visual attention to the location of a critical event was tested in the context of an air traffic control simulation. Participants monitored a display depicting the flight paths of 40 aircraft and were presented with tactile cues indicating either just the occurrence, or both the occurrence and display location, of an event requiring a participant response. Tactile cuing, especially when combined with location information, resulted in significantly higher detection rates and faster response times to these events. These findings indicate that tactile cuing is a promising means of directing visual attention in a data-driven manner.
36

Cardell-Oliver, Rachel, Mark Kranz, Keith Smettem, and Kevin Mayer. "A Reactive Soil Moisture Sensor Network: Design and Field Evaluation." International Journal of Distributed Sensor Networks 1, no. 2 (March 2005): 149–62. http://dx.doi.org/10.1080/15501320590966422.

Abstract:
Wireless sensor network technology has the potential to reveal fine-grained, dynamic changes in monitored variables of an outdoor landscape. But significant problems must be overcome to realize this vision in working systems. This paper describes the design and implementation of a reactive, event-driven network for environmental monitoring of soil moisture and evaluates its effectiveness. A novel feature of our solution is its reactivity to the environment: when rain falls and soil moisture is changing rapidly, measurements are collected frequently, whereas during dry periods between rainfall, measurements are collected less often. Field trials demonstrating the reactivity, robustness, and longevity of the network are presented and evaluated, and future improvements are proposed.
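The reactive sampling policy described above, frequent readings while soil moisture changes rapidly and sparse readings while it is stable, reduces to a simple rule per node. A hedged sketch; the intervals and change threshold are illustrative placeholders, not the paper's actual parameters:

```python
def next_sample_interval(prev_reading, curr_reading,
                         fast_s=60, slow_s=3600, change_threshold=0.5):
    """Return seconds until the next soil-moisture reading.

    Sample every `fast_s` seconds while the value is changing rapidly
    (e.g. during rainfall), every `slow_s` seconds while it is stable,
    which saves radio and battery during dry periods.
    """
    changing_fast = abs(curr_reading - prev_reading) >= change_threshold
    return fast_s if changing_fast else slow_s
```

For example, a jump from 20.0% to 25.0% volumetric moisture triggers the fast one-minute cadence, while a drift of 0.1% keeps the hourly cadence.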
37

Tătaru, Ioana Miruna, Elena Fleacă, Bogdan Fleacă, and Radu D. Stanciu. "Modelling the Implementation of a Sustainable Development Strategy through Process Mapping." Proceedings 63, no. 1 (December 10, 2020): 6. http://dx.doi.org/10.3390/proceedings2020063006.

Abstract:
Industry 4.0 implies sustainable production by providing green products created through environmentally responsible processes. This paper aims to analyze the two main business processes responsible for energy innovation in a telecommunications company: “develop property strategy and long-term vision” and “evaluate environmental impact of products, services, and operations”. The processes will be introduced using an initial set of key-performance indicators (KPIs) and American Productivity & Quality Center (APQC) Process Classification Framework activities. Through value stream analysis, the non-value-added activities will be eliminated. Ultimately, to provide an overview for the stakeholders, a new set of KPIs will be proposed and the processes will be modeled using Event-Driven Process Chain (EPC) and Suppliers-Inputs-Process-Outputs-Customers (SIPOC) methods.
38

Farmakis, Ioannis, D. Jean Hutchinson, Nicholas Vlachopoulos, Matthew Westoby, and Michael Lim. "Slope-Scale Rockfall Susceptibility Modeling as a 3D Computer Vision Problem." Remote Sensing 15, no. 11 (May 23, 2023): 2712. http://dx.doi.org/10.3390/rs15112712.

Abstract:
Rockfall constitutes a major threat to the safety and sustainability of transport corridors bordered by rocky cliffs. This research introduces a new approach to rockfall susceptibility modeling for the identification of potential rockfall source zones. This is achieved by developing a data-driven model to assess the local slope morphological attributes with respect to the rock slope evolution processes. The ability to address “where” a rockfall is more likely to occur via the analysis of historical event inventories with respect to terrain attributes and to define the probability of a given area producing a rockfall is a critical advance toward effective transport corridor management. The availability of high-quality digital volumetric change detection products permits new developments in rockfall assessment and prediction. We explore the potential of simulating the conceptualization of slope-scale rockfall susceptibility modeling using computer power and artificial intelligence (AI). We employ advanced 3D computer vision algorithms for analyzing point clouds to interpret high-resolution digital observations capturing the rock slope evolution via long-term, LiDAR-based 3D differencing. The approach has been developed and tested on data from three rock slopes: two in Canada and one in the UK. The results indicate clear potential for AI advances to develop local susceptibility indicators from local geometry and learning from recent rockfall activity. The resultant models produce slope-wide rockfall susceptibility maps in high resolution, producing up to 75% agreement with validated occurrences.
39

SHIMODA, Masayuki, Shimpei SATO, and Hiroki NAKAHARA. "Power Efficient Object Detector with an Event-Driven Camera for Moving Object Surveillance on an FPGA." IEICE Transactions on Information and Systems E102.D, no. 5 (May 1, 2019): 1020–28. http://dx.doi.org/10.1587/transinf.2018rcp0005.

40

Cheng, Rui, Jiaming Wang, and Pin-Chao Liao. "Temporal Visual Patterns of Construction Hazard Recognition Strategies." International Journal of Environmental Research and Public Health 18, no. 16 (August 20, 2021): 8779. http://dx.doi.org/10.3390/ijerph18168779.

Abstract:
Visual cognitive strategies in construction hazard recognition (CHR) hold significant value for the development of CHR computer vision techniques and safety training. Nonetheless, most studies are based on either sparse fixations or cross-sectional (accumulative) statistics, which lack consideration of temporality and yield limited visual-pattern information. This research aims to investigate the temporal visual search patterns for CHR and the cognitive strategies they imply. An experimental study was designed to simulate CHR and document participants' visual behavior. Temporal qualitative comparative analysis (TQCA) was applied to analyze the CHR visual sequences. The results were triangulated with post-event interviews and show that: (1) in potential electrical contact hazards, the intersection of the energy-releasing source and wire that reflects their interaction is the cognitively driven visual area that participants tend to prioritize; (2) in PPE-related hazards, two different visual strategies, i.e., "scene-related" and "norm-guided", can usually be generalized according to the participants' visual cognitive logic, corresponding to the bottom-up (experience-oriented) and top-down (safety-knowledge-oriented) cognitive models. This paper extends the recognition-by-components (RBC) and gestalt models, and provides a feasible practical guide for safety training as well as theoretical foundations for CHR computer vision techniques.
41

Zhang, Tao, Shuiying Xiang, Wenzhuo Liu, Yanan Han, Xingxing Guo, and Yue Hao. "Hybrid Spiking Fully Convolutional Neural Network for Semantic Segmentation." Electronics 12, no. 17 (August 23, 2023): 3565. http://dx.doi.org/10.3390/electronics12173565.

Abstract:
The spiking neural network (SNN) exhibits distinct advantages in terms of low power consumption due to its event-driven nature. However, it has been limited to simple computer vision tasks because direct training of SNNs is challenging. In this study, we propose a hybrid architecture called the spiking fully convolutional neural network (SFCNN) to expand the application of SNNs to semantic segmentation. To train the SNN, we employ the surrogate gradient method along with backpropagation. The mean intersection over union (mIoU) on the VOC2012 dataset is almost 30% higher than that of existing spiking FCNs, reaching 39.6%. Moreover, the proposed hybrid SFCNN achieved excellent segmentation performance on other datasets such as COCO2017, DRIVE, and Cityscapes. Our hybrid SFCNN is a valuable contribution to extending the functionality of SNNs, especially for power-constrained applications.
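The surrogate gradient method mentioned above keeps the hard, non-differentiable spike function in the forward pass but substitutes a smooth approximation for its derivative in the backward pass. A minimal sketch using a rectangular surrogate window; the threshold and window width are illustrative choices, not the paper's:

```python
def spike(v, threshold=1.0):
    """Forward pass: hard threshold (a Heaviside step, derivative zero or undefined)."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, width=0.5):
    """Backward pass: rectangular surrogate for d(spike)/dv.

    Nonzero only near the threshold, so backprop can flow through
    neurons whose membrane potential is close to firing.
    """
    return 1.0 / (2 * width) if abs(v - threshold) < width else 0.0
```

In a training loop, `spike` would be used to compute activations while `surrogate_grad` replaces its derivative in the chain rule, which is the essence of surrogate-gradient backpropagation.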
42

Swathi, H. Y., and G. Shivakumar. "Audio-visual multi-modality driven hybrid feature learning model for crowd analysis and classification." Mathematical Biosciences and Engineering 20, no. 7 (2023): 12529–61. http://dx.doi.org/10.3934/mbe.2023558.

Abstract:
The rapid emergence of advanced software systems, low-cost hardware, and decentralized cloud computing technologies has broadened the horizon for vision-based surveillance, monitoring, and control. However, complex and inferior feature learning over visual artefacts or video streams, especially under extreme conditions, limits the majority of existing vision-based crowd analysis and classification systems. Retrieving event-sensitive or crowd-type-sensitive spatio-temporal features for different crowd types under extreme conditions is a highly complex task, resulting in lower accuracy and hence low reliability, which restricts existing methods for real-time crowd analysis. Despite numerous efforts in vision-based approaches, the lack of acoustic cues often creates ambiguity in crowd classification. On the other hand, the strategic amalgamation of audio-visual features can enable accurate and reliable crowd analysis and classification. Motivated by this, in this research a novel audio-visual multi-modality driven hybrid feature learning model is developed for crowd analysis and classification. A hybrid feature extraction model was applied to extract deep spatio-temporal features using Gray-Level Co-occurrence Matrix (GLCM) features and an AlexNet transferable learning model. After extracting the GLCM and AlexNet deep features, horizontal concatenation was performed to fuse the feature sets. Similarly, for acoustic feature extraction, the audio samples (from the input video) were processed with static (fixed-size) sampling, pre-emphasis, block framing, and Hann windowing, followed by extraction of acoustic features such as GTCC, GTCC-Delta, GTCC-Delta-Delta, MFCC, spectral entropy, spectral flux, spectral slope, and Harmonics-to-Noise Ratio (HNR). Finally, the extracted audio-visual features were fused to yield a composite multi-modal feature set, which was classified using a random forest ensemble classifier. The multi-class classification yields a crowd-classification accuracy of 98.26%, precision of 98.89%, sensitivity of 94.82%, specificity of 95.57%, and F-measure of 98.84%. The robustness of the proposed multi-modality-based crowd analysis model confirms its suitability for real-world crowd detection and classification tasks.
43

Qiu, Xuerui, Rui-Jie Zhu, Yuhong Chou, Zhaorui Wang, Liang-Jian Deng, and Guoqi Li. "Gated Attention Coding for Training High-Performance and Efficient Spiking Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 601–10. http://dx.doi.org/10.1609/aaai.v38i1.27816.

Abstract:
Spiking neural networks (SNNs) are emerging as an energy-efficient alternative to traditional artificial neural networks (ANNs) due to their unique spike-based event-driven nature. Coding is crucial in SNNs as it converts external input stimuli into spatio-temporal feature sequences. However, most existing deep SNNs rely on direct coding that generates weak spike representations and lacks the temporal dynamics inherent in human vision. Hence, we introduce Gated Attention Coding (GAC), a plug-and-play module that leverages a multi-dimensional gated attention unit to efficiently encode inputs into powerful representations before feeding them into the SNN architecture. GAC functions as a preprocessing layer that does not disrupt the spike-driven nature of the SNN, making it amenable to efficient neuromorphic hardware implementation with minimal modifications. Through an observer-model theoretical analysis, we demonstrate that GAC's attention mechanism improves temporal dynamics and coding efficiency. Experiments on the CIFAR10/100 and ImageNet datasets demonstrate that GAC achieves state-of-the-art accuracy with remarkable efficiency. Notably, we improve top-1 accuracy by 3.10% on CIFAR100 with only 6 time steps and by 1.07% on ImageNet while reducing energy usage to 66.9% of previous works. To the best of our knowledge, this is the first exploration of an attention-based dynamic coding scheme in deep SNNs, with exceptional effectiveness and efficiency on large-scale datasets. Code is available at https://github.com/bollossom/GAC.
44

He, Yingmei, Bin Xin, Sai Lu, Qing Wang, and Yulong Ding. "Memetic Algorithm for Dynamic Joint Flexible Job Shop Scheduling with Machines and Transportation Robots." Journal of Advanced Computational Intelligence and Intelligent Informatics 26, no. 6 (November 20, 2022): 974–82. http://dx.doi.org/10.20965/jaciii.2022.p0974.

Abstract:
In this study, the dynamic joint scheduling problem for processing machines and transportation robots in a flexible job shop is investigated. The study aims to minimize the order completion time (makespan) of a job shop manufacturing system. Considering breakdowns, order insertion, and battery-charging maintenance of robots, an event-driven global rescheduling strategy is adopted. A novel memetic algorithm combining a genetic algorithm and variable neighborhood search is designed to handle dynamic events and obtain a new scheduling plan. Finally, numerical experiments are conducted to test the effect of the improved operators. For successive multiple rescheduling, the effectiveness of the proposed algorithm is verified by comparison with three other algorithms under dynamic events, and the results are confirmed by statistical analysis.
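The event-driven global rescheduling strategy above can be sketched as a loop over time-stamped disruption events, each of which triggers a full reschedule. Everything here is an illustrative skeleton; the `reschedule` callback stands in for the paper's memetic search (genetic algorithm plus variable neighborhood search), which is not reproduced:

```python
import heapq

def run_with_rescheduling(events, reschedule):
    """Event-driven rescheduling skeleton.

    `events` is an iterable of (time, kind) disruptions, e.g. breakdowns,
    order insertions, or battery-charging maintenance. Each event, processed
    in time order, triggers a global reschedule producing a new plan.
    Returns the history of (time, kind, plan_after_event) tuples.
    """
    heap = list(events)
    heapq.heapify(heap)  # process disruptions in chronological order
    plan, history = "initial-plan", []
    while heap:
        t, kind = heapq.heappop(heap)
        plan = reschedule(plan, kind)  # placeholder for the memetic GA + VNS search
        history.append((t, kind, plan))
    return history
```

A toy run with two disruptions shows the chronological dispatch: an order insertion at t=2 is handled before a breakdown at t=5, regardless of insertion order into the queue.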
45

Li, Xiang, Hiroki Imanishi, Mamoru Minami, Takayuki Matsuno, and Akira Yanou. "Dynamical Model of Walking Transition Considering Nonlinear Friction with Floor." Journal of Advanced Computational Intelligence and Intelligent Informatics 20, no. 6 (November 20, 2016): 974–82. http://dx.doi.org/10.20965/jaciii.2016.p0974.

Abstract:
Biped locomotion created by a controller based on the Zero-Moment Point (ZMP), a reliable control method, looks different from human walking in that ZMP-based walking does not include a falling state and resembles monkey walking because of its knee-bent gait profile. However, walking control that does not depend on ZMP is vulnerable to falling over. Therefore, keeping event-driven walking with dynamical motion stable is an important issue for realizing human-like natural walking. In this research, a walking model of a humanoid robot including slipping, bumping, surface contact, and line contact of the foot is discussed, and its dynamical equation is derived by the extended Newton-Euler (NE) method. In this paper we introduce the humanoid model, which includes the slipping foot, and verify the model.
46

Yao, Yaping, Bin Wan, Bo Long, Te Bu, and Yang Zhang. "In quest of China sports lottery development path to common prosperity in 2035." PLOS ONE 19, no. 1 (January 26, 2024): e0297629. http://dx.doi.org/10.1371/journal.pone.0297629.

Abstract:
Objectives The China sports lottery contributes to sports and welfare causes. This study aims to construct a macro forecasting model supporting its sustained growth aligned with Vision 2035. Methods The modeling employed distributional regression. Sales data of the China sports lottery from 2011 to 2022 were chosen as the response variable, alongside various macro- and event-level explanatory factors. Results A gamma distribution best fit the data. In the stable model spanning 2011–2019, urbanization, population dynamics, and FIFA emerged as significant contributors (chi-square p < 0.05) to the location shift parameter. These three factors retained their significance in the 2011–2022 shock model, where shock itself notably impacted sales (p < 0.001). Utilizing the shock model, we simulated the trajectory of the China sports lottery up to 2035. China's demographic changes are poised to create structural headwinds starting in 2026, leading to an anticipated decline in sales driven by population shifts from 2032 onward. However, the FIFA effect is projected to continue fortifying this sector. Conclusions Beyond offering original insights into the sales trajectory until 2035, specifically concerning new urbanization, negative population growth, and the FIFA effect, this macro forecasting framework can assist in addressing the policy priority of balancing growth with risk mitigation. We recommend policymakers connect market development with mass sports, potentially garnering a dual boost from the growing population of older consumers and the inherent benefits of a "FIFA (mass sports)" effect. A people-centered approach to the China sports lottery could significantly contribute to the long-range objectives of achieving common prosperity outlined in Vision 2035.
47

Salah, Bashir, Sajjad Khan, Muawia Ramadan, and Nikola Gjeldum. "Integrating the Concept of Industry 4.0 by Teaching Methodology in Industrial Engineering Curriculum." Processes 8, no. 9 (August 19, 2020): 1007. http://dx.doi.org/10.3390/pr8091007.

Abstract:
The movement to digitally transform Saudi Arabia in all sectors has already begun under the "Vision 2030" program. Consequently, renovating and standardizing production and manufacturing industries to compete with global challenges is essential. The fourth industrial revolution (Industry 4.0), triggered by the development of information and communications technologies (ICT), provides a baseline for smart automation using decentralized control and smart connectivity (e.g., the Internet of Things). Industrial engineering graduates need to be acquainted with this industrial digital revolution. Several industries that have embraced the spirit of Industry 4.0 and implemented these ideas have already yielded gains. In this paper, a roadmap containing an academic term course based on the concept of Industry 4.0, which our engineering graduates passed through, is presented. First, an orientation program introduced students to the Industry 4.0 concept, its main pillars, the importance of event-driven execution, and smart product manufacturing techniques. Then, the various tasks in developing a learning factory were split and assigned among student groups. Finally, the students' potential in incorporating the Industry 4.0 concept was evaluated. This methodology led to their professional skill development and promoted students' innovative ideas for the manufacturing sector.
48

Varma, Neha. "STYLESYNC: A FASHION RECOMMENDATION SYSTEM." International Journal of Scientific Research in Engineering and Management 08, no. 05 (May 27, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem34672.

Abstract:
Our project focuses on an AI-powered fashion recommendation system tailored to assist users in curating outfits for diverse occasions. Users upload clothing images to create individualized wardrobes. Employing machine learning and deep learning, our system accurately classifies clothing types and identifies colors within these images. The core of our innovation lies in a sophisticated recommendation algorithm. By analyzing users' existing wardrobes, including garment types and colors, our algorithm delivers personalized outfit suggestions aligned with users' style preferences and specific event needs. The interface prioritizes user-friendliness, enabling seamless wardrobe management and presenting users with well-matched outfit recommendations. Continuous refinement through user feedback ensures ongoing enhancement of recommendation accuracy, user experience, and machine learning model performance. In summary, our AI-driven fashion recommendation system aims to streamline the process of dressing for diverse occasions, offering personalized outfit suggestions based on individual wardrobes and empowering users with convenient and stylish clothing choices. Keywords: fashion recommendation, feature extraction, recommendation system, data analysis, transfer learning, JavaScript, Python, Bootstrap, npm, machine learning, computer vision, deep learning, ReactJS, Material-UI, sklearn, collaborative filtering, content-based filtering, natural language processing (NLP), image recognition.
49

Dupuis-Déri, Francis. "L’Affaire Salman Rushdie: symptôme d'un « Clash of Civilizations »?" Études internationales 28, no. 1 (April 12, 2005): 27–45. http://dx.doi.org/10.7202/703706ar.

Abstract:
Samuel Huntington proclaimed in an already well-known article ("Clash of Civilizations?") that deep incompatibilities between great civilizations will be the primary cause of future international conflicts. Conflicts will be cultural rather than economic or ideological. To test the validity of this claim, I analyse an international conflict that is truly cultural: the "Salman Rushdie Affair". This affair was provoked by the publication of Rushdie's novel, The Satanic Verses. By studying the motives of the actors in this event (the novelist Salman Rushdie, the imam Ruhollah Musavi Khomeini and the politician Margaret Thatcher), it seems at first sight that they were driven by political or financial interests. But a closer analysis shows that these actors were directed by cultural motivations. Does this prove that Huntington's thesis is right? No, since even if the actors tried to defend a vision of their culture, there is no such thing as monolithic civilizations; rather, there are only multicultural civilizations. Indeed, many people from the West refused to defend Rushdie, many Muslims condemned Khomeini's fatwa, and Thatcher promoted only one aspect of Western political culture. Values are transnational: an Iranian may cherish the same values as an inhabitant of New York, while, on the other hand, two Londoners living in the same flat may dream of killing each other over the abortion issue.
50

Mabu, Shingo, Lu Yu, Jin Zhou, Shinji Eto, and Kotaro Hirasawa. "A Double-Deck Elevator Systems Controller with Idle Cage Assignment Algorithm Using Genetic Network Programming." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 5 (July 20, 2010): 487–96. http://dx.doi.org/10.20965/jaciii.2010.p0487.

Abstract:
Many studies on Double-Deck Elevator Systems (DDES) have explored more efficient algorithms to improve system transportation capacity, especially in a heavy traffic mode. The main idea of these algorithms is to decrease the number of stops during a round trip by grouping passengers with the same destination as much as possible. Unlike in this mode, where all cages almost always keep moving, some cages become idle in a light traffic mode. Therefore, how to dispatch these idle cages, which is seldom considered in the heavy traffic mode, becomes important when developing a DDES controller. In this paper, we propose a DDES controller with an embedded idle cage assignment algorithm using Genetic Network Programming (GNP) for a light traffic mode, based on a timer- and event-driven hybrid model. To verify the efficiency and effectiveness of the proposed method, experiments were conducted under a special down-peak pattern in which passengers emerge mainly at the 7th floor. Simulation results show that the proposed method improves performance compared with the case where the cage assignment algorithm is not employed, and works better than six other heuristic methods in a light traffic mode.
