
Journal articles on the topic 'Outdoor scene analysis'



Consult the top 50 journal articles for your research on the topic 'Outdoor scene analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Ruan, Ling, Ling Zhang, Yi Long, and Fei Cheng. "Coordinate references for the indoor/outdoor seamless positioning." Proceedings of the ICA 1 (May 16, 2018): 1–7. http://dx.doi.org/10.5194/ica-proc-1-97-2018.

Abstract:
Indoor positioning technologies are developing rapidly, and seamless positioning that connects indoor and outdoor space is a new trend. Indoor and outdoor positioning do not use the same coordinate system, and different indoor positioning scenes use different local coordinate reference systems. A specific, unified coordinate reference frame is needed as the spatial basis and premise of seamless positioning applications. Trajectory analysis that integrates indoor and outdoor movement also requires a uniform coordinate reference. However, a coordinate reference frame for seamless positioning that can be applied to various complex scenarios has long been lacking. In this paper, we propose a universal coordinate reference frame for indoor/outdoor seamless positioning. The research focuses on analysing and classifying indoor positioning scenes, and puts forward methods for establishing the coordinate reference system and for coordinate transformation in each scene. The feasibility of the calibration method was verified through experiments.
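
The kind of local-to-global coordinate transformation this paper calls for can be illustrated with a least-squares 2D similarity (Helmert) transform estimated from paired control points. This is a minimal sketch of the standard Umeyama/Kabsch technique, not the authors' implementation; the function name and the test values are invented for illustration.

```python
import numpy as np

def fit_similarity_2d(local, global_pts):
    """Least-squares 2D similarity (Helmert) transform from paired control
    points, so that global ~= s * R @ local + t (Umeyama's method)."""
    lc, gc = local.mean(axis=0), global_pts.mean(axis=0)
    L, G = local - lc, global_pts - gc
    U, S, Vt = np.linalg.svd(G.T @ L)   # SVD of the cross-covariance
    R = U @ Vt
    if np.linalg.det(R) < 0:            # enforce a proper rotation, no mirror
        U[:, -1] *= -1
        R = U @ Vt
    s = S.sum() / (L ** 2).sum()        # optimal isotropic scale
    t = gc - s * R @ lc
    return s, R, t

# Recover a known transform from noise-free synthetic control points
rng = np.random.default_rng(1)
local = rng.random((6, 2))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
s_true, t_true = 2.5, np.array([10.0, -3.0])
global_pts = s_true * local @ R_true.T + t_true
s, R, t = fit_similarity_2d(local, global_pts)
```

With clean control points the known scale, rotation and translation are recovered exactly; with real survey points the same fit gives the least-squares estimate.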
2

Kilawati, Andi, and Rosmalah Yanti. "Hubungan Outdor Activity dengan Scene Setting dalam Skill Mengajar Pembelajaran IPA Mahasiswa PGSD Universitas Cokroaminoto Palopo." Cokroaminoto Journal of Primary Education 2, no. 1 (April 30, 2019): 6–10. http://dx.doi.org/10.30605/cjpe.122019.100.

Abstract:
This study aims to determine the relationship between outdoor activity and scene setting in elementary school science learning among PGSD students at Universitas Cokroaminoto Palopo. It is a quantitative study using a correlational model. The research hypothesis was tested using two-way analysis of variance (ANOVA). The sample consisted of PGSD students at Universitas Cokroaminoto Palopo. The results indicate that (1) there is a positive relationship between outdoor activities and scene settings, and (2) there is a positive relationship between outdoor activities and scene settings in the science learning of PGSD students at Universitas Cokroaminoto Palopo.
3

Musel, Benoit, Louise Kauffmann, Stephen Ramanoël, Coralie Giavarini, Nathalie Guyader, Alan Chauvin, and Carole Peyrin. "Coarse-to-fine Categorization of Visual Scenes in Scene-selective Cortex." Journal of Cognitive Neuroscience 26, no. 10 (October 2014): 2287–97. http://dx.doi.org/10.1162/jocn_a_00643.

Abstract:
Neurophysiological, behavioral, and computational data indicate that visual analysis may start with the parallel extraction of different elementary attributes at different spatial frequencies and follows a predominantly coarse-to-fine (CtF) processing sequence (low spatial frequencies [LSF] are extracted first, followed by high spatial frequencies [HSF]). Evidence for CtF processing within scene-selective cortical regions is, however, still lacking. In the present fMRI study, we tested whether such processing occurs in three scene-selective cortical regions: the parahippocampal place area (PPA), the retrosplenial cortex, and the occipital place area. Fourteen participants were subjected to functional scans during which they performed a categorization task of indoor versus outdoor scenes using dynamic scene stimuli. Dynamic scenes were composed of six filtered images of the same scene, from LSF to HSF or from HSF to LSF, allowing us to mimic a CtF or the reverse fine-to-coarse (FtC) sequence. Results showed that only the PPA was more activated for CtF than FtC sequences. Equivalent activations were observed for both sequences in the retrosplenial cortex and occipital place area. This study suggests for the first time that CtF sequence processing constitutes the predominant strategy for scene categorization in the PPA.
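
The dynamic coarse-to-fine stimulus construction can be approximated in a few lines: successively weaker Gaussian low-pass filtering yields a sequence running from LSF-dominated to full-bandwidth versions of one scene, and reversing that sequence gives the fine-to-coarse order. This is a simplified sketch; the study's actual stimuli used calibrated spatial-frequency filtering, and the sigma values here are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ctf_sequence(image, sigmas=(16, 8, 4, 2, 1, 0)):
    """Six versions of one scene, coarse to fine: a large sigma keeps only
    low spatial frequencies; sigma=0 keeps the unfiltered image."""
    return [gaussian_filter(image, sigma=s) if s > 0 else image.copy()
            for s in sigmas]

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
ctf = ctf_sequence(scene)
ftc = ctf[::-1]   # the reverse, fine-to-coarse, presentation order
```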
4

CHAN, K. L. "VIDEO-BASED GAIT ANALYSIS BY SILHOUETTE CHAMFER DISTANCE AND KALMAN FILTER." International Journal of Image and Graphics 08, no. 03 (July 2008): 383–418. http://dx.doi.org/10.1142/s0219467808003155.

Abstract:
A markerless human gait analysis system using uncalibrated monocular video is developed. A background model is trained to extract the subject silhouette, whether in a static or a dynamic scene, in each video frame. A generic 3D human model is manually fitted to the subject silhouette in the first video frame. We propose the silhouette chamfer, which combines the chamfer distance of the silhouette with region information, as one matching feature. This, combined dynamically with the model gradient, is used to search for the best fit between the subject silhouette and the 3D model. Finally, we use a discrete Kalman filter to predict and correct the pose of the walking subject in each video frame. We propose a quantitative measure that can be used to identify tracking faults automatically. Errors in the joint angle trajectories can then be corrected and the walking cycle interpreted. Experiments have been carried out on video captured in static indoor as well as outdoor scenes.
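
The silhouette/model matching step rests on the chamfer distance, which can be sketched with a distance transform: precompute, for every pixel, the distance to the nearest silhouette pixel, then score a candidate model pose by averaging that map at the projected model pixels. A generic illustration, not the paper's code; the names and toy data are invented.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(silhouette, model_pixels):
    """Mean distance from projected model pixels to the nearest silhouette
    pixel; lower scores mean a better silhouette/model fit."""
    # distance_transform_edt on the inverted mask gives, for every background
    # pixel, the distance to the nearest silhouette pixel (zero on the mask).
    dist = distance_transform_edt(~silhouette)
    return float(dist[model_pixels[:, 0], model_pixels[:, 1]].mean())

sil = np.zeros((32, 32), dtype=bool)
sil[8:24, 12:20] = True                     # toy subject silhouette
good_fit = np.array([[10, 13], [20, 18]])   # projected model pixels on target
bad_fit = np.array([[2, 2], [30, 30]])      # projected model pixels far away
```

Minimising this score over pose parameters is what the search for the best fit amounts to; the paper additionally blends in region and gradient information.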
5

Tylecek, Radim, and Robert Fisher. "Consistent Semantic Annotation of Outdoor Datasets via 2D/3D Label Transfer." Sensors 18, no. 7 (July 12, 2018): 2249. http://dx.doi.org/10.3390/s18072249.

Abstract:
The advance of scene understanding methods based on machine learning relies on the availability of large ground truth datasets, which are essential for their training and evaluation. Constructing such datasets from real sensor data, however, typically requires substantial human labour to manually annotate semantic regions. To speed up this process, we propose a framework for semantic annotation of scenes captured by moving camera(s), e.g., mounted on a vehicle or robot. It uses an available 3D model of the traversed scene to project segmented 3D objects into each camera frame, yielding an initial annotation of the associated 2D image that is then manually refined by the user. The refined annotation can be transferred to the next consecutive frame using optical flow estimation. We evaluated the efficiency of the proposed framework during the production of a labelled outdoor dataset. Analysis of annotation times shows that up to 43% less effort is required on average, and the consistency of the labelling is also improved.
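
The frame-to-frame transfer step can be sketched as backward warping of the label image with the estimated flow field, using nearest-neighbour lookup so class ids stay discrete. This is a generic illustration under the assumption of a dense flow field, not the paper's code; in practice the flow would come from an optical-flow estimator.

```python
import numpy as np

def transfer_labels(labels, flow):
    """Propagate a per-pixel label map to the next frame by backward
    warping with optical flow (flow[..., 0] = u, flow[..., 1] = v)."""
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # pixel (y, x) in the next frame came from (y - v, x - u) in this frame
    src_y = np.clip(np.rint(ys - flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs - flow[..., 0]).astype(int), 0, w - 1)
    return labels[src_y, src_x]

labels = np.zeros((16, 16), dtype=int)
labels[5, 5] = 3                 # one labelled pixel
flow = np.zeros((16, 16, 2))
flow[..., 0] = 1.0               # everything moves one pixel to the right
moved = transfer_labels(labels, flow)
```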
6

Bielecki, Andrzej, and Piotr Śmigielski. "Three-Dimensional Outdoor Analysis of Single Synthetic Building Structures by an Unmanned Flying Agent Using Monocular Vision." Sensors 21, no. 21 (November 1, 2021): 7270. http://dx.doi.org/10.3390/s21217270.

Abstract:
An algorithm for analysing and understanding a 3D urban-type environment by an autonomous flying agent equipped only with monocular vision is presented. The algorithm is hierarchical and is based on a structural representation of the analysed scene. First, the robot observes the scene from a high altitude to build a 2D representation of each single object and a graph representation of the 2D scene. The 3D representation of each object arises as a consequence of the robot's actions, through which it projects the object's solid onto different planes. The robot assigns the obtained representations to the corresponding vertex of the created graph. The algorithm was tested using an embodied robot operating in a real scene. The tests showed that the robot equipped with the algorithm was able not only to localize the predefined object, but also to perform safe, collision-free manoeuvres close to the structures in the scene.
7

Kuhnert, Lars, and Klaus-Dieter Kuhnert. "Sensor-Fusion Based Real-Time 3D Outdoor Scene Reconstruction and Analysis on a Moving Mobile Outdoor Robot." KI - Künstliche Intelligenz 25, no. 2 (February 9, 2011): 117–23. http://dx.doi.org/10.1007/s13218-011-0093-z.

8

Oo, Thanda, Hiroshi Kawasaki, Yutaka Ohsawa, and Katsushi Ikeuchi. "Separation of Reflection and Transparency Based on Spatiotemporal Analysis for Outdoor Scene." IPSJ Digital Courier 2 (2006): 428–40. http://dx.doi.org/10.2197/ipsjdc.2.428.

9

He, Wei. "Research on Outdoor Garden Scene Reconstruction Based on PMVS Algorithm." Scientific Programming 2021 (November 24, 2021): 1–10. http://dx.doi.org/10.1155/2021/4491382.

Abstract:
The three-dimensional reconstruction of outdoor landscapes is of great significance for the construction of digital cities. Despite the rapid development of big data and Internet of Things technology, traditional image-based 3D reconstruction methods that restore the 3D information of objects in an image produce point clouds with many redundant points and insufficient density. Based on an analysis of existing three-dimensional reconstruction technology, and considering the characteristics of outdoor garden scenes, this paper presents methods for detecting and extracting the relevant feature points, applies feature matching, and repairs the holes generated by point cloud meshing. By adopting a candidate strategy for feature points and adding a mesh subdivision processing step, an improved PMVS algorithm is proposed that addresses the problem of sparse point clouds in 3D reconstruction. Experimental results show that the proposed method not only effectively realizes the three-dimensional reconstruction of outdoor garden scenes, but also improves the execution efficiency of the algorithm while preserving the reconstruction quality.
10

Angsuwatanakul, Thanate, Jamie O’Reilly, Kajornvut Ounjai, Boonserm Kaewkamnerdpong, and Keiji Iramina. "Multiscale Entropy as a New Feature for EEG and fNIRS Analysis." Entropy 22, no. 2 (February 7, 2020): 189. http://dx.doi.org/10.3390/e22020189.

Abstract:
The present study aims to apply multiscale entropy (MSE) to analyse brain activity in terms of brain complexity levels and to use simultaneous electroencephalogram and functional near-infrared spectroscopy (EEG/fNIRS) recordings for brain functional analysis. A memory task was selected to demonstrate the potential of this multimodality approach since memory is a highly complex neurocognitive process, and the mechanisms governing selective retention of memories are not fully understood by other approaches. In this study, 15 healthy participants with normal colour vision took part in a visual memory task, which involved making the executive decision to remember or forget the visual stimuli of one's own will. In a continuous stimulus set, 250 indoor/outdoor scenes were presented at random, between periods of fixation on a black background. The participants were instructed to make a binary choice indicating whether they wished to remember or forget the image; both stimulus and response times were stored for analysis. The participants then performed a scene recognition test to confirm whether or not they remembered the images. The results revealed that participants intentionally memorising a visual scene demonstrate significantly greater brain complexity levels in the prefrontal and frontal lobe than when purposefully forgetting a scene; p < 0.05 (two-tailed). This suggests that simultaneous EEG and fNIRS can be used for brain functional analysis, and that MSE may be a useful indicator for this multimodality approach.
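
Multiscale entropy itself is simple to state: coarse-grain the signal by non-overlapping averaging at each scale, then compute the sample entropy of each coarse-grained series. The following is a compact textbook sketch (parameters m=2, r=0.2*std are the common defaults), not the study's analysis pipeline.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn: negative log of the chance that runs matching for m samples
    (within tolerance r) also match for m+1 samples."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def matches(mm):
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=-1)
        i, j = np.triu_indices(emb.shape[0], k=1)
        return (d[i, j] <= r).sum()

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 5)):
    """Coarse-grain by non-overlapping means at each scale, then take the
    sample entropy of each coarse-grained series."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        out.append(sample_entropy(x[:n * s].reshape(n, s).mean(axis=1)))
    return out

rng = np.random.default_rng(42)
mse = multiscale_entropy(rng.standard_normal(600))
```

The pairwise-distance matrix makes this O(n^2) in memory, which is fine for short physiological windows; long recordings would need a chunked implementation.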
11

Gupta, Niharika, and Priya Khobragade. "Muti-class Image Classification using Transfer Learning." International Journal for Research in Applied Science and Engineering Technology 11, no. 1 (January 31, 2023): 700–704. http://dx.doi.org/10.22214/ijraset.2023.48665.

Abstract:
Humans are very proficient at perceiving natural scenes and understanding their contents. Image content across the globe is increasing rapidly every day, and there is a need to classify these images for further research. Scene classification is a challenging task, because some natural scenes share common features, and some images may contain both indoor and outdoor scene features. In this project we classify natural scenery in images using artificial intelligence. Based on an analysis of the error backpropagation algorithm, we propose an innovative training criterion of depth neural network for maximum-interval minimum classification error. At the same time, cross entropy and M3CE are analyzed and combined to obtain better results. Finally, we tested our proposed M3CE-CEc on two standard deep learning databases, MNIST and CIFAR-10. The experimental results show that M3CE can enhance the cross-entropy criterion and is an effective supplement to it. M3CE-CEc obtained good results on both databases.
12

Wijayathunga, Liyana, Alexander Rassau, and Douglas Chai. "Challenges and Solutions for Autonomous Ground Robot Scene Understanding and Navigation in Unstructured Outdoor Environments: A Review." Applied Sciences 13, no. 17 (August 31, 2023): 9877. http://dx.doi.org/10.3390/app13179877.

Abstract:
The capabilities of autonomous mobile robotic systems have been steadily improving due to recent advancements in computer science, engineering, and related disciplines such as cognitive science. In controlled environments, robots have achieved relatively high levels of autonomy. In more unstructured environments, however, the development of fully autonomous mobile robots remains challenging due to the complexity of understanding these environments. Many autonomous mobile robots use classical, learning-based or hybrid approaches for navigation. More recent learning-based methods may replace the complete navigation pipeline or selected stages of the classical approach. For effective deployment, autonomous robots must understand their external environments at a sophisticated level according to their intended applications. Therefore, in addition to robot perception, scene analysis and higher-level scene understanding (e.g., traversable/non-traversable, rough or smooth terrain, etc.) are required for autonomous robot navigation in unstructured outdoor environments. This paper provides a comprehensive review and critical analysis of these methods in the context of their applications to the problems of robot perception and scene understanding in unstructured environments and the related problems of localisation, environment mapping and path planning. State-of-the-art sensor fusion methods and multimodal scene understanding approaches are also discussed and evaluated within this context. The paper concludes with an in-depth discussion regarding the current state of the autonomous ground robot navigation challenge in unstructured outdoor environments and the most promising future research directions to overcome these challenges.
13

Gong, Liang, Xiangyu Yu, and Jingchuan Wang. "Curve-Localizability-SVM Active Localization Research for Mobile Robots in Outdoor Environments." Applied Sciences 11, no. 10 (May 11, 2021): 4362. http://dx.doi.org/10.3390/app11104362.

Abstract:
The working environment of mobile robots has gradually expanded in recent years from indoor structured scenes to outdoor scenes such as wild areas. The expansion of application scenes, changes of sensors and the diversity of working tasks bring greater challenges and higher demands to active localization for mobile robots. The efficiency and stability of traditional localization strategies are significantly reduced in wild environments. Considering the features of the environment and the robot motion curved surface, this paper proposes a curve-localizability-SVM active localization algorithm. First, we present a curve-localizability index based on a 3D observation model; based on this index, a curve-localizability-SVM path planning strategy and an improved active localization method are then proposed. The path, obtained by setting the constraint space and objective function of the planning algorithm with curve-localizability as the main constraint, helps improve the convergence speed and stability of the active localization algorithm in complex environments. Aided by the SVM, the path is also smoother and safer for large robots. The algorithm was tested in comparative experiments on a real environment and robot platform, which verified the improved efficiency and stability of the new strategy.
14

Ning, Xiaojuan, Yishu Ma, Yuanyuan Hou, Zhiyong Lv, Haiyan Jin, Zengbo Wang, and Yinghui Wang. "Trunk-Constrained and Tree Structure Analysis Method for Individual Tree Extraction from Scanned Outdoor Scenes." Remote Sensing 15, no. 6 (March 13, 2023): 1567. http://dx.doi.org/10.3390/rs15061567.

Abstract:
The automatic extraction of individual trees from mobile laser scanning (MLS) scenes has important applications in tree growth monitoring, tree parameter calculation and tree modeling. However, trees often grow in rows with overlapping crowns of varying shapes, and occlusion causes incompleteness, which makes individual tree extraction a challenging problem. In this paper, we propose a trunk-constrained and tree structure analysis method to extract trees from scanned urban scenes. Firstly, multi-feature enhancement is performed via PointNet to segment the tree points from the raw urban scene point clouds. Next, candidate local tree trunk clusters are obtained by clustering an intercepted local tree trunk layer, and the real local tree trunks are obtained by removing noise data. Then, each trunk is located and extracted by combining circle fitting and region growing, so as to obtain the center of the tree crown. Further, the points near the tree's crown (core points) are segmented through distance differences, and the tree crown boundary (boundary points) is distinguished by analyzing the density and centroid deflection angle. The core and boundary points are then removed to obtain the remaining (intermediate) points. Finally, the core, intermediate and boundary points, as well as the tree trunks, are combined to extract individual trees. The performance of the proposed method was evaluated on the Paris-Lille-3D dataset, a benchmark for point cloud classification produced with a mobile laser system applied to two French cities (Paris and Lille). Overall, the precision, recall, and F1-score of instance segmentation were 90.00%, 98.22%, and 99.08%, respectively. The experimental results demonstrate that our method can effectively extract trees under multiple rows of occlusion and improve the accuracy of tree extraction.
15

Rodriguez, Leonardo Barriga, Hugo Jimenez Hernandez, Jorge Alberto Soto Cajiga, Luciano Nava Balanzar, Jose Joel Gonzalez Barbosa, Alfonso Gomez Espinosa, and Jesus Carlos Pedraza Ortega. "A Linear Criterion to sort Color Components in Images." Ingeniería e Investigación 37, no. 1 (January 1, 2017): 91. http://dx.doi.org/10.15446/ing.investig.v37n1.55444.

Abstract:
Colour and its representation play a basic role in image analysis. Several methods benefit from a correct representation of the wavelength variations used to capture scenes with a camera. A wide variety of colour spaces and representations is found in the specialized literature. Each is useful in particular circumstances, while others may offer redundant colour information (for instance, the RGB components are highly correlated). This work addresses the task of identifying and sorting which component, across several colour representations, offers the most information about the scene. The approach is based on analyzing linear dependences among the colour components, through a new sorting algorithm based on entropy. The proposal is tested on several outdoor/indoor scenes under different light conditions. Repeatability and stability are tested in order to guarantee its use in diverse image analysis applications. Finally, the results of this work have been used to enhance an external algorithm that compensates for random camera vibrations.
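
A stripped-down version of the idea, ranking colour components by the Shannon entropy of their histograms, is easy to sketch. Note this is only a proxy for the paper's full linear-dependence criterion; all names and the toy image are invented for illustration.

```python
import numpy as np

def channel_entropy(channel, bins=256):
    """Shannon entropy (bits) of one colour component's histogram."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def sort_components(image):
    """Rank components of an H x W x C uint8 image by information content,
    most informative first."""
    ent = [channel_entropy(image[..., c]) for c in range(image.shape[-1])]
    return sorted(range(image.shape[-1]), key=ent.__getitem__, reverse=True)

rng = np.random.default_rng(0)
img = np.zeros((32, 32, 3), dtype=np.uint8)
img[..., 0] = rng.integers(0, 256, (32, 32))      # high-information channel
img[..., 1] = 128                                  # constant: zero entropy
img[..., 2] = rng.integers(0, 2, (32, 32)) * 255   # ~1 bit of information
order = sort_components(img)
```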
16

Weinmann, M., M. S. Müller, M. Hillemann, N. Reydel, S. Hinz, and B. Jutzi. "POINT CLOUD ANALYSIS FOR UAV-BORNE LASER SCANNING WITH HORIZONTALLY AND VERTICALLY ORIENTED LINE SCANNERS – CONCEPT AND FIRST RESULTS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (August 24, 2017): 399–406. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-399-2017.

Abstract:
In this paper, we focus on UAV-borne laser scanning with the objective of densely sampling object surfaces in the local surrounding of the UAV. In this regard, using a line scanner which scans along the vertical direction and perpendicular to the flight direction results in a point cloud with low point density if the UAV moves fast. Using a line scanner which scans along the horizontal direction only delivers data corresponding to the altitude of the UAV and thus a low scene coverage. For these reasons, we present a concept and a system for UAV-borne laser scanning using multiple line scanners. Our system consists of a quadcopter equipped with horizontally and vertically oriented line scanners. We demonstrate the capabilities of our system by presenting first results obtained for a flight within an outdoor scene. Thereby, we use a downsampling of the original point cloud and different neighborhood types to extract fundamental geometric features which in turn can be used for scene interpretation with respect to linear, planar or volumetric structures.
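
The fundamental geometric features mentioned at the end, which discriminate linear, planar and volumetric structures, are conventionally derived from the eigenvalues of a local neighbourhood's 3D covariance matrix. A minimal sketch of that standard construction (not the authors' code):

```python
import numpy as np

def eigen_features(neighbourhood):
    """Linearity, planarity and sphericity from the eigenvalues
    (l1 >= l2 >= l3) of a local 3D neighbourhood's covariance matrix."""
    q = neighbourhood - neighbourhood.mean(axis=0)
    l3, l2, l1 = np.linalg.eigvalsh(q.T @ q / len(q))  # ascending order
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

# A straight linear structure and a flat planar patch
t = np.linspace(0, 1, 50)
line = np.outer(t, [1.0, 2.0, 0.5])
xx, yy = np.meshgrid(t, t)
plane = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])
lin_line, _, _ = eigen_features(line)
_, pla_plane, _ = eigen_features(plane)
```

Each feature lies in [0, 1]; a value near 1 flags the corresponding structure type for the neighbourhood.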
17

Okutani, Keita, Takami Yoshida, Keisuke Nakamura, and Kazuhiro Nakadai. "Incremental Noise Estimation in Outdoor Auditory Scene Analysis using a Quadrocopter with a Microphone Array." Journal of the Robotics Society of Japan 31, no. 7 (2013): 676–83. http://dx.doi.org/10.7210/jrsj.31.676.

18

Cheng, Yayun, Fei Hu, Hongfei Wu, Peng Fu, and Yan Hu. "Multi-polarization passive millimeter-wave imager and outdoor scene imaging analysis for remote sensing applications." Optics Express 26, no. 16 (July 25, 2018): 20145. http://dx.doi.org/10.1364/oe.26.020145.

19

Huo, Xin, Hong Chen, YuHao Ma, and Qing Wang. "Design and realization of relic augmented reality system of integration positioning and posture sensing technology." MATEC Web of Conferences 246 (2018): 03014. http://dx.doi.org/10.1051/matecconf/201824603014.

Abstract:
To present the original appearance of relics through digitalization and to overlap virtual scenes with the actual scenes at a relic site, this paper introduces positioning technology, posture sensing technology and system development technology, and puts forward a cultural relic tourism platform integrating positioning and posture sensing. We conducted detailed research and analysis on users' experience of cultural tourism and designed a relic augmented reality system that integrates positioning and posture sensing technology. The system mainly uses positioning technology to guide users to the correct location of a relic on the corresponding map, and then overlaps the virtual object with the real relic, achieving a 360-degree view of the overlapping effect and a perspective effect in which apparent size varies with distance. The system is developed in Unity and realized on mobile terminals. It is no longer limited to a fixed-point experience environment and is suitable for outdoor natural scenes. It breaks through the traditional overlapping of virtual and real scenes, realizes the precise overlap of virtual 3D scenes with actual images, and enables users to feel the vicissitudes of history as the mobile device moves through outdoor natural scenes, so as to pass on history and culture, enrich the information and add interest to the displayed scene. Compared with traditional virtual display platforms, it offers a more immersive experience.
20

Liu, Wei, Yue Yang, and Longsheng Wei. "Weather Recognition of Street Scene Based on Sparse Deep Neural Networks." Journal of Advanced Computational Intelligence and Intelligent Informatics 21, no. 3 (May 19, 2017): 403–8. http://dx.doi.org/10.20965/jaciii.2017.p0403.

Abstract:
Recognizing different weather conditions is a core component of many applications of outdoor video analysis and computer vision. The performance of street analysis tasks, including detecting street objects, detecting road lines and recognizing street signs, varies greatly with the weather, so modeling based on weather recognition is a key issue in this field. Features derived from the intrinsic properties of different weather conditions contribute to successful classification. We first propose using deep learning features from convolutional neural networks (CNN) for fine-grained recognition. In order to reduce parameter redundancy in the CNN, we use sparse decomposition to dramatically cut down the computation. Recognition results on several databases show superior performance and indicate the effectiveness of the extracted features.
21

Reina, Giulio, Mauro Bellone, Luigi Spedicato, and Nicola Ivan Giannoccaro. "3D traversability awareness for rough terrain mobile robots." Sensor Review 34, no. 2 (March 17, 2014): 220–32. http://dx.doi.org/10.1108/sr-03-2013-644.

Abstract:
Purpose – This research aims to address the issue of safe navigation for autonomous vehicles in highly challenging outdoor environments. Indeed, robust navigation of autonomous mobile robots over long distances requires advanced perception means for terrain traversability assessment. Design/methodology/approach – The use of visual systems may represent an efficient solution. This paper discusses recent findings in terrain traversability analysis from RGB-D images. In this context, the concept of a point as described only by its Cartesian coordinates is reinterpreted in terms of a local description. As a result, a novel descriptor for inferring the traversability of a terrain through its 3D representation, referred to as the unevenness point descriptor (UPD), is conceived. This descriptor features robustness and simplicity. Findings – The UPD-based algorithm shows robust terrain perception capabilities in both indoor and outdoor environments. The algorithm is able to detect obstacles and terrain irregularities, and its performance is validated in field experiments in both settings. Research limitations/implications – The UPD enhances the interpretation of the 3D scene to improve the ambient awareness of unmanned vehicles. The larger implications of this method reside in its applicability for path planning purposes. Originality/value – This paper describes a visual algorithm for traversability assessment based on normal vector analysis. The algorithm is simple and efficient, providing fast real-time implementation, since the UPD does not require any data processing or a previously generated digital elevation map to classify the scene. Moreover, it defines a local descriptor, which can be of general value for segmentation of 3D point clouds, and allows the underlying geometric pattern associated with each single 3D point to be fully captured and difficult scenarios to be correctly handled.
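
The normal-vector analysis behind such descriptors can be sketched in a few lines: estimate a unit normal per point by PCA over its nearest neighbours, then summarise a patch by how much the normals disagree. This is a simplified stand-in inspired by the UPD idea, not the published descriptor; function names and data are invented.

```python
import numpy as np

def local_normals(points, k=8):
    """Unit normal per point from PCA of its k nearest neighbours,
    oriented to point upwards (positive z)."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    nbrs = np.argsort(d, axis=1)[:, :k]
    normals = np.empty_like(points)
    for i, nb in enumerate(nbrs):
        q = points[nb] - points[nb].mean(axis=0)
        _, vecs = np.linalg.eigh(q.T @ q)   # smallest-eigenvalue eigenvector
        n = vecs[:, 0]                      # = local surface normal
        normals[i] = n if n[2] >= 0 else -n
    return normals

def unevenness(points, k=8):
    """1 - |mean unit normal|: near 0 on flat traversable ground where the
    normals agree, larger on rough terrain or obstacles."""
    return 1.0 - float(np.linalg.norm(local_normals(points, k).mean(axis=0)))

xs, ys = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 6))
flat = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
rng = np.random.default_rng(3)
rough = rng.random((36, 3))     # an unstructured, non-traversable blob
```

The brute-force neighbour search is O(n^2) and fine for a patch; a real-time system would use a k-d tree.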
22

Weinmann, M., J. Leitloff, L. Hoegner, B. Jutzi, U. Stilla, and S. Hinz. "Thermal 3D mapping for object detection in dynamic scenes." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-1 (November 7, 2014): 53–60. http://dx.doi.org/10.5194/isprsannals-ii-1-53-2014.

Abstract:
The automatic analysis of 3D point clouds has become a crucial task in photogrammetry, remote sensing and computer vision. Whereas modern range cameras simultaneously provide both range and intensity images at high frame rates, other devices can be used to obtain further information that can be quite valuable for tasks such as object detection or scene interpretation. Thermal information in particular offers many advantages, since people can easily be detected as heat sources in typical indoor or outdoor environments and, furthermore, a variety of concealed objects such as heating pipes, as well as structural properties such as defects in insulation, may be observed. In this paper, we focus on thermal 3D mapping, which makes it possible to observe the evolution of a dynamic 3D scene over time. We present a fully automatic methodology consisting of four successive steps: (i) radiometric correction, (ii) geometric calibration, (iii) a robust approach for detecting reliable feature correspondences and (iv) co-registration of 3D point cloud data and thermal information via a RANSAC-based EPnP scheme. For an indoor scene, we demonstrate that our methodology outperforms other recent approaches in terms of both accuracy and applicability. We additionally show that efficient, straightforward techniques allow a categorization according to background, people, passive scene manipulation and active scene manipulation.
23

Ning, Xiaojuan, Yishu Ma, Yuanyuan Hou, Zhiyong Lv, Haiyan Jin, and Yinghui Wang. "Semantic Segmentation Guided Coarse-to-Fine Detection of Individual Trees from MLS Point Clouds Based on Treetop Points Extraction and Radius Expansion." Remote Sensing 14, no. 19 (October 1, 2022): 4926. http://dx.doi.org/10.3390/rs14194926.

Abstract:
Urban trees are vital elements of outdoor scenes captured via mobile laser scanning (MLS). Accurate detection of individual trees from disordered, discrete and high-density MLS data is an important basis for subsequent city management and planning analysis. However, trees cannot easily be extracted because of occlusion by other objects in urban scenes. In this work, we propose a coarse-to-fine individual tree detection method for MLS point cloud data (PCD) based on treetop point extraction and radius expansion. Firstly, an improved semantic segmentation deep network based on PointNet, combining spatial and dimensional features, is applied to segment tree points from the scanned urban scene. Next, candidate treetop points are located by calculating local maxima on the filtered tree point projection plane, and a distance rule is used to eliminate pseudo treetop points, yielding the optimized treetop points. Finally, after initial clustering of treetop points and vertical layering of tree points, a top-down layer-by-layer segmentation based on radius expansion realizes the complete extraction of individual trees. The effectiveness of the proposed method is tested and evaluated on five street scenes from the Oakland outdoor MLS dataset, and compared with two existing individual tree segmentation methods. Overall, the precision, recall, and F-score of instance segmentation are 98.33%, 98.33%, and 98.33%, respectively. The results indicate that our method can extract individual trees effectively and robustly in different complex environments.
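
The treetop-candidate step, locating local maxima, is easy to illustrate in 2D raster form (a canopy-height grid) rather than the paper's 3D point-based formulation. A hedged sketch; the window size and height threshold are arbitrary illustration values.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_treetops(height, window=3, min_height=2.0):
    """Candidate treetop cells: local maxima of a rasterised canopy-height
    grid that also exceed a minimum height threshold."""
    is_local_max = maximum_filter(height, size=window) == height
    return np.argwhere(is_local_max & (height > min_height))

chm = np.zeros((20, 20))
chm[5, 5] = 10.0      # two synthetic tree crowns
chm[15, 12] = 8.0
tops = find_treetops(chm)
```

Each candidate would then seed the paper's pseudo-treetop filtering and radius-expansion segmentation.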
APA, Harvard, Vancouver, ISO, and other styles
24

Wu, Hui-Yin, Florent Robert, Théo Fafet, Brice Graulier, Barthelemy Passin-Cauneau, Lucile Sassatelli, and Marco Winckler. "Designing Guided User Tasks in VR Embodied Experiences." Proceedings of the ACM on Human-Computer Interaction 6, EICS (June 14, 2022): 1–24. http://dx.doi.org/10.1145/3532208.

Full text
Abstract:
Virtual reality (VR) offers extraordinary opportunities in user behavior research to study and observe how people interact in immersive 3D environments. A major challenge of designing these 3D experiences and user tasks, however, lies in bridging the inter-relational gaps of perception between the designer, the user, and the 3D scene. Paul Dourish identified three gaps of perception: ontology between the scene representation and the user and designer interpretation, intersubjectivity of task communication between designer and user, and intentionality between the user's intentions and designer's interpretations. We present the GUsT-3D framework for designing Guided User Tasks in embodied VR experiences, i.e., tasks that require the user to carry out a series of interactions guided by the constraints of the 3D scene. GUsT-3D is implemented as a set of tools that support a 4-step workflow to (1) annotate entities in the scene with navigation and interaction possibilities, (2) define user tasks with interactive and timing constraints, (3) manage interactions, task validation, and user logging in real-time, and (4) conduct post-scenario analysis through spatio-temporal queries using ontology definitions. To illustrate the diverse possibilities enabled by our framework, we present two case studies with an indoor scene and an outdoor scene, and conducted a formative evaluation involving six expert interviews to assess the framework and the implemented workflow. Analysis of the responses show that the GUsT-3D framework fits well into a designer's creative process, providing a necessary workflow to create, manage, and understand VR embodied experiences.
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Fei, Yan Zhuang, Hong Gu, and Huosheng Hu. "OctreeNet: A Novel Sparse 3-D Convolutional Neural Network for Real-Time 3-D Outdoor Scene Analysis." IEEE Transactions on Automation Science and Engineering 17, no. 2 (April 2020): 735–47. http://dx.doi.org/10.1109/tase.2019.2942068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Ramya, C., S. Subha Rani, and G. Kayalvizhi. "Image Inpainting Based on Fast Inpainting and Sparse Representation Method." Advanced Materials Research 984-985 (July 2014): 1350–56. http://dx.doi.org/10.4028/www.scientific.net/amr.984-985.1350.

Full text
Abstract:
There is wide interest in handling outdoor surveillance images in autonomous navigation, remote surveillance, automatic incident detection, vision-based driver assistance systems, and law enforcement services. In each case, there is an underlying object or scene that is to be captured, processed, analyzed, and interpreted. Live recording and transmission of outdoor surveillance images often serve as a forensic tool, but transmission may introduce variations in the pixels that appear as missing blocks. It is therefore important to improve the visual quality of such outdoor surveillance images for efficient analysis and recognition. In this paper, a simple and effective inpainting method is proposed by combining fast inpainting and sparse representation. The proposed method fully exploits the complementarity between the fast inpainting method and the sparse representation approach, and inpaints small missing blocks more effectively than existing inpainting methods. Experimental results on practical images show that the proposed algorithm achieves a plausible visual performance without boundary discontinuity or blurring. Keywords: Image Inpainting, Sparse Representation, Fast Inpainting
APA, Harvard, Vancouver, ISO, and other styles
27

Kim, Myeong-Hun, Su-Mi Han, and Hwan Kim. "A Phenomenological Study on the Exposing Experience and Overcoming of Repeated Fatal Accidents of Fire-fighting Officers on Outdoor Service." Fire Science and Engineering 35, no. 4 (August 31, 2021): 97–106. http://dx.doi.org/10.7731/kifse.b7253918.

Full text
Abstract:
The core purpose of this study is to provide useful information for educational and psychological programs for outdoor fire-fighters through a comprehensive analysis of their repeated exposure to fatal accidents and how they overcome it. To this end, eight outdoor fire officials in charge of rescue and emergency affairs at a fire station in Cheonan were selected as subjects, and in-depth interviews were conducted. The interview data were analyzed through Giorgi's phenomenological analysis method, and 10 essential themes were derived: sorriness and regret over inexperience at the scene of the fatal accident, painfulness and sorriness over the inevitable fatal accident, a frightening and trembling emotional experience, a blocked emotional response, an emotionless experience, the spread of anxiety, realization of the importance of confrontation, realization of the importance of time to escape stress, experience of the power of empathy and comfort from family and colleagues, and experience of the importance of professional calling consciousness.
APA, Harvard, Vancouver, ISO, and other styles
28

SZABO, Mihaela, Adelina DUMITRAS, Diana-Maria MIRCEA, Adriana F. SESTRAS, and Robert F. BRZUSZEK. "Analysis of compositional lines in natural landscapes." Nova Geodesia 2, no. 2 (June 23, 2022): 29. http://dx.doi.org/10.55779/ng2229.

Full text
Abstract:
Nature was and remains an eternal companion of man, but at the same time, it is also a constant enigma. Man’s connection with nature is ancestral, and the need for this connection is constantly proven. Landscape design is the field where art (human creation) and nature meet, creating a field of endless possibilities in the development of outdoor spaces, but while the artistic side of the field is constantly improved and developed, sometimes the exploration of the natural side is left behind. The present study aims to identify and analyse the types of compositional lines that can be found in natural landscapes. For this, different photos from natural parks or Romanian reserves were selected and subjected to visual analysis. In the examined scenes, vertical, horizontal, diagonal, straight or sinuous lines (the most dominant) were identified, being suggested by tree trunks or logs, water beds or streams, paths, the topography of the land or the contour of the crown. Their properties, such as thickness or repetition, can provide diversity, animate a scene, and generate visual movement or invite further exploration. This work can serve as a basis for a more thorough study of natural landscapes and is relevant to landscape architects who support the use of natural style in their work or are interested in integrative landscape design for a better understanding of natural features, as well as researchers who want to explore the aesthetics of natural landscapes.
APA, Harvard, Vancouver, ISO, and other styles
29

Huang, Yuanyuan, Guyue Lu, Wei Zhao, Xinyao Zhang, Jiawen Jiang, and Qiang Xing. "FlyDetector—Automated Monitoring Platform for the Visual–Motor Coordination of Honeybees in a Dynamic Obstacle Scene Using Digital Paradigm." Sensors 23, no. 16 (August 10, 2023): 7073. http://dx.doi.org/10.3390/s23167073.

Full text
Abstract:
Vision plays a crucial role in the ability of compound-eyed insects to perceive the characteristics of their surroundings. Compound-eyed insects (such as the honeybee) can change the optical flow input of the visual system by autonomously controlling their behavior, and this is referred to as visual–motor coordination (VMC). To analyze an insect’s VMC mechanism in dynamic scenes, we developed a platform for studying insects that actively shape the optic flow of visual stimuli by adapting their flight behavior. Image-processing technology was applied to detect the posture and direction of insects’ movement, and automatic control technology provided dynamic scene stimulation and automatic acquisition of perceptual insect behavior. In addition, a virtual mapping technique was used to reconstruct the visual cues of insects for VMC analysis in a dynamic obstacle scene. A simulation experiment at different target speeds of 1–12 m/s was performed to verify the applicability and accuracy of the platform. Our findings showed that the maximum detection speed was 8 m/s, and triggers were 95% accurate. The outdoor experiments showed that flight speed in the longitudinal axis of honeybees was more stable when facing dynamic barriers than static barriers after analyzing the change in geometric optic flow. Finally, several experiments showed that the platform can automatically and efficiently monitor honeybees’ perception behavior, and can be applied to study most insects and their VMC.
APA, Harvard, Vancouver, ISO, and other styles
30

Bhadoria, Ajeeta Singh, and Vandana Vikas Thakre. "Improved Single Haze Removal using Weighted Filter and Gaussian-Laplacian." International Journal of Electrical and Electronics Research 8, no. 2 (June 30, 2020): 26–31. http://dx.doi.org/10.37391/ijeer.080201.

Full text
Abstract:
Digital images are used in most computer applications and play a vital role in the analysis and interpretation of data in digital form. Images and videos of outdoor scenes are often affected by bad weather such as haze, fog, and mist, which degrades scene visibility and image quality. This paper presents a study of image defogging techniques that remove haze from foggy images captured in the real world to recover enhanced, fog-free images quickly. We propose a simple but effective pipeline. The weighted median (WM) filter, a generalization of the standard median filter in which a non-negative integer weight is assigned to each position in the filter window, is applied first. Gaussian and Laplacian pyramids are then built by applying Gaussian and Laplacian filters in cascade with different kernel sizes. The dark channel prior is a statistic of haze-free outdoor images, based on the key observation that most local patches in haze-free outdoor images contain some pixels with very low intensity in at least one color channel. Using this prior with the haze imaging model, we can directly estimate the thickness of the haze and recover a high-quality haze-free image. Results on a variety of outdoor haze images demonstrate the power of the proposed prior. Moreover, a high-quality depth map can be obtained as a by-product of haze removal. We calculate the PSNR and MSE of three sample images.
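The dark channel prior described in the abstract can be sketched as follows; the patch size, the `omega` aerosol-retention factor, and the function names are illustrative choices, not values taken from the paper.

```python
import numpy as np

def dark_channel(image, patch=3):
    """Dark channel: per-pixel minimum over all colour channels, then a
    local minimum over a patch. Haze-free outdoor regions tend to have a
    dark channel near zero, so high values indicate haze."""
    H, W, _ = image.shape
    min_rgb = image.min(axis=2)              # minimum over the colour channels
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    dark = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark

def transmission(image, atmospheric, omega=0.95, patch=3):
    """Haze transmission estimate t(x) = 1 - omega * dark(I / A), where A
    is the atmospheric light (a standard dark-channel formulation)."""
    normalized = image / atmospheric
    return 1.0 - omega * dark_channel(normalized, patch)
```

With the transmission map and the haze imaging model I = J*t + A*(1 - t), the haze-free radiance J can then be recovered pixel-wise.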
APA, Harvard, Vancouver, ISO, and other styles
31

Chebi, Hocine, Hind Tabet-Derraz, Rafik Sayah, Abdelkader Benaissa, Abdelkader Meroufel, Dalila Acheli, and Yassine Meraihi. "Intelligence and Adaptive Global Algorithm Detection of Crowd Behavior." International Journal of Computer Vision and Image Processing 10, no. 1 (January 2020): 24–41. http://dx.doi.org/10.4018/ijcvip.2020010102.

Full text
Abstract:
The recognition and prediction of people's activities from videos are major concerns in the field of computer vision. The main objective of this article is to propose an adaptive global algorithm that analyzes human behavior from video, a problem also called video content analysis (VCA). This analysis is performed in outdoor or indoor environments. Depending on the number of people present, a video scene may be characterized by the presence of only one person at a time; here we are interested in scenes containing a large number of people, so-called crowd scenes, where we address the problems of motion pattern extraction in crowd event detection. To achieve our goals, we propose an approach based on a new adaptive architecture and a hybrid motion detection technique. The first stage acquires images from camera recordings. Several successive stages are then applied, from active motion detection with the hybrid technique through to classification by fuzzy logic, the last phase of the anomaly detection process, which increases the reaction speed of safety services by carrying out precise analysis and detecting events in real time. To provide users with concrete results on the analysis of human behavior, experiments on datasets have validated our approaches, with very satisfying results compared to other state-of-the-art approaches.
APA, Harvard, Vancouver, ISO, and other styles
32

Manfredi, Giovanni, Israel D. Sáenz Hinostroza, Michel Menelle, Stéphane Saillant, Jean-Philippe Ovarlez, and Laetitia Thirion-Lefevre. "Measurements and Analysis of the Doppler Signature of a Human Moving within the Forest in UHF-Band." Remote Sensing 13, no. 3 (January 26, 2021): 423. http://dx.doi.org/10.3390/rs13030423.

Full text
Abstract:
Measurements of the Doppler signature in UHF-band of a human moving in outdoor sites are presented in this paper. A radar campaign has been carried out, observing a subject walking and running outside, near and within a forest. A bistatic radar has been employed working in continuous wave (CW) at 1 GHz and 435 MHz. The spectrograms acquired in VV polarization are shown and discussed. This study aims to prove the feasibility of detecting people moving in forested areas at low frequencies. Besides, we highlight the impact of the frequencies and the different sites on the Doppler spectrum of the human motions. The Doppler frequency signature of the moving man has been well detected at 1 GHz and 435 MHz for each motor activity and scene. The working frequency 435 MHz has proved to be more efficient for the detection and classification of the physical activities.
APA, Harvard, Vancouver, ISO, and other styles
33

Fan, Lei, and Yuanzhi Cai. "An Efficient Filtering Approach for Removing Outdoor Point Cloud Data of Manhattan-World Buildings." Remote Sensing 13, no. 19 (September 22, 2021): 3796. http://dx.doi.org/10.3390/rs13193796.

Full text
Abstract:
Laser scanning is a popular means of acquiring indoor scene data of buildings for a wide range of applications concerning the indoor environment. During data acquisition, unwanted data points beyond the indoor space of interest can also be recorded due to the presence of openings, such as windows and doors, on walls. For better visualization and further modeling, it is beneficial to filter out those data, which is often done manually in practice. To automate this process, an efficient image-based filtering approach was explored in this research. In this approach, a binary mask image is created and updated through mathematical morphology operations, hole filling, and connectivity analysis. The final mask is used to remove the data points located outside the indoor space of interest. The application of the approach to several point cloud datasets confirms its ability to effectively keep the data points in the indoor space of interest, with an average precision of 99.50%. The application cases also demonstrate the computational efficiency (at most 0.53 s) of the proposed approach.
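As a rough sketch of the connectivity-analysis step only (the morphology and hole-filling stages are omitted, and the grid representation is an assumption), keeping the largest connected region of a binary mask might look like:

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """Return a boolean mask keeping only the largest 4-connected
    component of `mask` (a minimal stand-in for the paper's mask-update
    step; points projected outside this region would be discarded)."""
    H, W = mask.shape
    labels = np.zeros((H, W), dtype=int)
    best_label, best_size, current = 0, 0, 0
    for si in range(H):
        for sj in range(W):
            if mask[si, sj] and labels[si, sj] == 0:
                current += 1          # start a new component
                size = 0
                labels[si, sj] = current
                queue = deque([(si, sj)])
                while queue:          # breadth-first flood fill
                    i, j = queue.popleft()
                    size += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and mask[ni, nj] and labels[ni, nj] == 0:
                            labels[ni, nj] = current
                            queue.append((ni, nj))
                if size > best_size:
                    best_size, best_label = size, current
    if best_size == 0:                # empty mask: keep nothing
        return np.zeros_like(mask, dtype=bool)
    return labels == best_label
```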
APA, Harvard, Vancouver, ISO, and other styles
34

Emek Soylu, Busra, Mehmet Serdar Guzel, Gazi Erkan Bostanci, Fatih Ekinci, Tunc Asuroglu, and Koray Acici. "Deep-Learning-Based Approaches for Semantic Segmentation of Natural Scene Images: A Review." Electronics 12, no. 12 (June 19, 2023): 2730. http://dx.doi.org/10.3390/electronics12122730.

Full text
Abstract:
The task of semantic segmentation holds a fundamental position in the field of computer vision. Assigning a semantic label to each pixel in an image is a challenging task. In recent times, significant advancements have been achieved in the field of semantic segmentation through the application of Convolutional Neural Networks (CNN) techniques based on deep learning. This paper presents a comprehensive and structured analysis of approximately 150 methods of semantic segmentation based on CNN within the last decade. Moreover, it examines 15 well-known datasets in the semantic segmentation field. These datasets consist of 2D and 3D image and video frames, including general, indoor, outdoor, and street scenes. Furthermore, this paper mentions several recent techniques, such as SAM, UDA, and common post-processing algorithms, such as CRF and MRF. Additionally, this paper analyzes the performance evaluation of reviewed state-of-the-art methods, pioneering methods, common backbone networks, and popular datasets. These have been compared according to the results of Mean Intersection over Union (MIoU), the most popular evaluation metric of semantic segmentation. Finally, it discusses the main challenges and possible solutions and underlines some future research directions in the semantic segmentation task. We hope that our survey article will be useful to provide a foreknowledge to the readers who will work in this field.
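The MIoU metric used for the comparisons above can be computed as follows (a standard definition, not code from any reviewed method):

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean Intersection over Union over all classes present in either
    the prediction or the ground truth."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:               # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```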
APA, Harvard, Vancouver, ISO, and other styles
35

Che, E., and M. J. Olsen. "FAST EDGE DETECTION AND SEGMENTATION OF TERRESTRIAL LASER SCANS THROUGH NORMAL VARIATION ANALYSIS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W4 (September 12, 2017): 51–57. http://dx.doi.org/10.5194/isprs-annals-iv-2-w4-51-2017.

Full text
Abstract:
Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common procedure of post-processing to group the point cloud into a number of clusters to simplify the data for the sequential modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which limits the errors in normal estimation propagating to segmentation. Both an indoor and outdoor scene are used for an experiment to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
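The region-growing stage could be sketched roughly as below, assuming grid-organised unit normals as in a TLS scan pattern; the angle threshold is an illustrative assumption, and the edge-detection mask that would exclude silhouette and intersection points beforehand is simplified away.

```python
import numpy as np
from collections import deque

def grow_regions(normals, angle_thresh_deg=10.0):
    """Group grid-organised points into smooth-surface segments: 4-connected
    neighbours whose unit normals differ by less than a threshold angle are
    merged into the same region (a minimal region-growing sketch)."""
    H, W, _ = normals.shape
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones((H, W), dtype=int)   # -1 means unlabelled
    current = 0
    for si in range(H):
        for sj in range(W):
            if labels[si, sj] != -1:
                continue
            labels[si, sj] = current       # seed a new region
            queue = deque([(si, sj)])
            while queue:
                i, j = queue.popleft()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] == -1:
                        # merge the neighbour if the normals are nearly parallel
                        if np.dot(normals[i, j], normals[ni, nj]) > cos_thresh:
                            labels[ni, nj] = current
                            queue.append((ni, nj))
            current += 1
    return labels
```

Operating on the scan grid rather than a k-d tree is what lets this kind of traversal stay close to linear in the number of points.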
APA, Harvard, Vancouver, ISO, and other styles
36

Chow, Tsz-Yeung, King-Hung Lee, and Kwok-Leung Chan. "Detection of Targets in Road Scene Images Enhanced Using Conditional GAN-Based Dehazing Model." Applied Sciences 13, no. 9 (April 24, 2023): 5326. http://dx.doi.org/10.3390/app13095326.

Full text
Abstract:
Object detection is a classic image processing problem. For instance, in autonomous driving applications, targets such as cars and pedestrians are detected in the road scene video. Many image-based object detection methods utilizing hand-crafted features have been proposed. Recently, more research has adopted a deep learning approach. Object detectors rely on useful features, such as the object’s boundary, which are extracted via analyzing the image pixels. However, the images captured, for instance, in an outdoor environment, may be degraded due to bad weather such as haze and fog. One possible remedy is to recover the image radiance through the use of a pre-processing method such as image dehazing. We propose a dehazing model for image enhancement. The framework was based on the conditional generative adversarial network (cGAN). Our proposed model was improved with two modifications. Various image dehazing datasets were employed for comparative analysis. Our proposed model outperformed other hand-crafted and deep learning-based image dehazing methods by 2dB or more in PSNR. Moreover, we utilized the dehazed images for target detection using the object detector YOLO. In the experimentations, images were degraded by two weather conditions—rain and fog. We demonstrated that the objects detected in images enhanced by our proposed dehazing model were significantly improved over those detected in the degraded images.
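The PSNR figure behind the 2 dB comparison above is the standard peak signal-to-noise ratio; a minimal implementation:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test (e.g. dehazed) image; higher means closer to the reference."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float('inf')        # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```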
APA, Harvard, Vancouver, ISO, and other styles
37

Liu, Jiping, Yong Wang, Mengmeng Liu, Shenghua Xu, Tao Jiang, and Yang Gu. "The Integrated Disaster Reduction Intelligent Service System and its Application." Proceedings of the ICA 2 (July 10, 2019): 1–7. http://dx.doi.org/10.5194/ica-proc-2-76-2019.

Full text
Abstract:
Geo-spatial information technology can provide data resources, positioning benchmarks, a basic framework, and key technologies for disaster prevention and reduction. At present, there are some problems in China's disaster reduction services: too much focus is placed on decision-making rather than early warning, the integration of technology, systems, and applications is insufficient, and there is still no unified understanding of an integrated disaster reduction intelligent service system. In order to provide technical support for China's comprehensive disaster reduction decision analysis, from the perspective of surveying, mapping, and geoinformation, this paper introduces an integrated disaster reduction intelligent service prototype system that includes key technologies such as indoor and outdoor integrated emergency location, multi-source emergency data fusion, disaster scene visualization, and disaster model analysis services. The prototype system has been applied in relevant emergency departments in Tibet and Xinjiang, realizing integrated sensing, positioning, integration, analysis, and service of emergency information.
APA, Harvard, Vancouver, ISO, and other styles
38

Nicholls, Victoria I., Benjamin Alsbury-Nealy, Alexandra Krugliak, and Alex Clarke. "Context effects on object recognition in real-world environments: A study protocol." Wellcome Open Research 7 (May 26, 2022): 165. http://dx.doi.org/10.12688/wellcomeopenres.17856.1.

Full text
Abstract:
Background: The environments that we live in impact on our ability to recognise objects, with recognition being facilitated when objects appear in expected locations (congruent) compared to unexpected locations (incongruent). However, these findings are based on experiments where the object is isolated from its environment. Moreover, it is not clear which components of the recognition process are impacted by the environment. In this experiment, we seek to examine the impact real world environments have on object recognition. Specifically, we will use mobile electroencephalography (mEEG) and augmented reality (AR) to investigate how the visual and semantic processing aspects of object recognition are changed by the environment. Methods: We will use AR to place congruent and incongruent virtual objects around indoor and outdoor environments. During the experiment a total of 34 participants will walk around the environments and find these objects while we record their eye movements and neural signals. We will perform two primary analyses. First, we will analyse the event-related potential (ERP) data using paired samples t-tests in the N300/400 time windows in an attempt to replicate congruency effects on the N300/400. Second, we will use representational similarity analysis (RSA) and computational models of vision and semantics to determine how visual and semantic processes are changed by congruency. Conclusions: Based on previous literature, we hypothesise that scene-object congruence would facilitate object recognition. For ERPs, we predict a congruency effect in the N300/N400, and for RSA we predict that higher level visual and semantic information will be represented earlier for congruent scenes than incongruent scenes. By collecting mEEG data while participants are exploring a real-world environment, we will be able to determine the impact of a natural context on object recognition, and the different processing stages of object recognition.
APA, Harvard, Vancouver, ISO, and other styles
39

Nicholls, Victoria I., Benjamin Alsbury-Nealy, Alexandra Krugliak, and Alex Clarke. "Context effects on object recognition in real-world environments: A study protocol." Wellcome Open Research 7 (November 30, 2022): 165. http://dx.doi.org/10.12688/wellcomeopenres.17856.2.

Full text
Abstract:
Background: The environments that we live in impact on our ability to recognise objects, with recognition being facilitated when objects appear in expected locations (congruent) compared to unexpected locations (incongruent). However, these findings are based on experiments where the object is isolated from its environment. Moreover, it is not clear which components of the recognition process are impacted by the environment. In this experiment, we seek to examine the impact real world environments have on object recognition. Specifically, we will use mobile electroencephalography (mEEG) and augmented reality (AR) to investigate how the visual and semantic processing aspects of object recognition are changed by the environment. Methods: We will use AR to place congruent and incongruent virtual objects around indoor and outdoor environments. During the experiment a total of 34 participants will walk around the environments and find these objects while we record their eye movements and neural signals. We will perform two primary analyses. First, we will analyse the event-related potential (ERP) data using paired samples t-tests in the N300/400 time windows in an attempt to replicate congruency effects on the N300/400. Second, we will use representational similarity analysis (RSA) and computational models of vision and semantics to determine how visual and semantic processes are changed by congruency. Conclusions: Based on previous literature, we hypothesise that scene-object congruence would facilitate object recognition. For ERPs, we predict a congruency effect in the N300/N400, and for RSA we predict that higher level visual and semantic information will be represented earlier for congruent scenes than incongruent scenes. By collecting mEEG data while participants are exploring a real-world environment, we will be able to determine the impact of a natural context on object recognition, and the different processing stages of object recognition.
APA, Harvard, Vancouver, ISO, and other styles
40

Nicholls, Victoria I., Benjamin Alsbury-Nealy, Alexandra Krugliak, and Alex Clarke. "Context effects on object recognition in real-world environments: A study protocol." Wellcome Open Research 7 (July 14, 2023): 165. http://dx.doi.org/10.12688/wellcomeopenres.17856.3.

Full text
Abstract:
Background: The environments that we live in impact on our ability to recognise objects, with recognition being facilitated when objects appear in expected locations (congruent) compared to unexpected locations (incongruent). However, these findings are based on experiments where the object is isolated from its environment. Moreover, it is not clear which components of the recognition process are impacted by the environment. In this experiment, we seek to examine the impact real world environments have on object recognition. Specifically, we will use mobile electroencephalography (mEEG) and augmented reality (AR) to investigate how the visual and semantic processing aspects of object recognition are changed by the environment. Methods: We will use AR to place congruent and incongruent virtual objects around indoor and outdoor environments. During the experiment a total of 34 participants will walk around the environments and find these objects while we record their eye movements and neural signals. We will perform two primary analyses. First, we will analyse the event-related potential (ERP) data using paired samples t-tests in the N300/400 time windows in an attempt to replicate congruency effects on the N300/400. Second, we will use representational similarity analysis (RSA) and computational models of vision and semantics to determine how visual and semantic processes are changed by congruency. Conclusions: Based on previous literature, we hypothesise that scene-object congruence would facilitate object recognition. For ERPs, we predict a congruency effect in the N300/N400, and for RSA we predict that higher level visual and semantic information will be represented earlier for congruent scenes than incongruent scenes. By collecting mEEG data while participants are exploring a real-world environment, we will be able to determine the impact of a natural context on object recognition, and the different processing stages of object recognition.
APA, Harvard, Vancouver, ISO, and other styles
41

Cao, Minghe, Jianzhong Wang, and Li Ming. "Multi-Templates Based Robust Tracking for Robot Person-Following Tasks." Applied Sciences 11, no. 18 (September 18, 2021): 8698. http://dx.doi.org/10.3390/app11188698.

Full text
Abstract:
While robotics techniques have not yet reached full automation, robot following is common and crucial in robotic applications to reduce the need for dedicated teleoperation. To achieve this task, the target must first be robustly and consistently perceived. In this paper, a robust visual tracking approach is proposed. The approach adopts a scene analysis module (SAM) to identify the real target and similar distractors, leveraging statistical characteristics of cross-correlation responses. Positive templates are collected based on the tracking confidence constructed by the SAM, and negative templates are gathered from the recognized distractors. Based on the collected templates, response fusion is performed. As a result, the responses of the target are enhanced and the false responses are suppressed, leading to robust tracking results. The proposed approach is validated on an outdoor robot-person following dataset and a collection of public person tracking datasets. The results show that our approach achieved state-of-the-art tracking performance in terms of both robustness and AUC score.
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Long, Huihui Wang, Haorui Li, Jiayi Liu, Sen Qiu, Hongyu Zhao, and Xiangyang Guo. "Ambulatory Human Gait Phase Detection Using Wearable Inertial Sensors and Hidden Markov Model." Sensors 21, no. 4 (February 14, 2021): 1347. http://dx.doi.org/10.3390/s21041347.

Full text
Abstract:
Gait analysis, a common inspection method for human gait, can provide a series of kinematic, dynamic, and other parameters through instrumented measurement. In recent years, gait analysis has been gradually applied to the diagnosis of diseases and to the evaluation of orthopedic surgery and rehabilitation progress. In particular, gait phase abnormality can serve as a clinical diagnostic indicator of Alzheimer's disease and Parkinson's disease, which usually manifest varying degrees of gait phase abnormality. This research proposes an inertial-sensor-based gait analysis method. The smoothed and filtered angular velocity signal was chosen as the input of a 15-dimensional temporal feature. A Hidden Markov Model and a parameter-adaptive model are used to segment gait phases. Experimental results show that the proposed model based on HMM and parameter adaptation achieves a good recognition rate in gait phase segmentation compared to other classification models, and the recognition results are consistent with ground truth. The proposed wearable device used for data collection can be embedded in the shoe; it can not only collect patients' gait data stably and reliably, ensuring the integrity and objectivity of gait data, but also collect data in daily scenes and ambulatory outdoor environments.
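HMM-based gait-phase segmentation of the kind described above can be illustrated with standard Viterbi decoding over discretised observations. The two phases and the transition and emission probabilities below are illustrative assumptions, not the paper's 15-dimensional feature model.

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for a discrete HMM.
    obs: list of observation symbol indices; start_p: (S,) initial
    probabilities; trans_p: (S, S) transition matrix; emit_p: (S, K)
    emission matrix. Here states would be gait phases and observations
    quantised angular-velocity features."""
    T = len(obs)
    n_states = len(start_p)
    log_trans = np.log(trans_p)
    log_emit = np.log(emit_p)
    logp = np.log(start_p) + log_emit[:, obs[0]]
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        # scores[i, j]: best log-probability of being in state j at time t
        # having come from state i at time t-1
        scores = logp[:, None] + log_trans + log_emit[:, obs[t]][None, :]
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0)
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):   # backtrack through the pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Sticky self-transitions are what make the decoded phase sequence smooth, suppressing single-sample flickers that a frame-by-frame classifier would produce.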
APA, Harvard, Vancouver, ISO, and other styles
43

Reddy, Etikala Raja Vikram, and Sushil Thale. "A Novel Efficient Dual-Gate Mixed Dilated Convolution Network for Multi-Scale Pedestrian Detection." Engineering, Technology & Applied Science Research 13, no. 6 (December 5, 2023): 11973–79. http://dx.doi.org/10.48084/etasr.6340.

Full text
Abstract:
With the increasing use of onboard high-speed computing systems, vehicle manufacturers are offering significantly advanced driver assistance features. Pedestrian detection is one of the major requirements of such systems, which commonly use cameras, radar, and ultrasonic sensors. Image recognition on captured image streams is a powerful tool for detecting pedestrians, a task that shares similarities with, but also differs from, general object detection. Although pedestrian detection has advanced significantly along with deep learning, some issues still need to be addressed. Pedestrian detection is essential for several real-world applications and is an initial step in outdoor scene analysis. Typically, in crowded situations, conventional detectors are unable to distinguish persons from each other successfully. This study presents a novel technique, based on a Dual-Gate Mixed Dilated Convolution Network, that addresses this problem by adaptively filtering spatial areas where the patterns are still complicated and require further processing. The proposed technique handles obscured patterns while offering improved multiscale pedestrian recognition accuracy.
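The gating-over-dilated-convolutions idea can be illustrated with a toy single-channel sketch: two convolution branches with different dilation rates are blended per pixel by a sigmoid gate, so regions with complex patterns can draw on wider context. The kernels, gate parameters, and NumPy implementation below are illustrative, not the network from the paper:

```python
import numpy as np

def dilated_conv2d(x, k, dilation):
    """'Same'-padded 2D convolution with an inflated (dilated) kernel."""
    kh, kw = k.shape
    eff_h = kh + (kh - 1) * (dilation - 1)   # effective receptive field
    eff_w = kw + (kw - 1) * (dilation - 1)
    xp = np.pad(x, ((eff_h // 2, eff_h // 2), (eff_w // 2, eff_w // 2)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            for a in range(kh):
                for b in range(kw):
                    out[i, j] += k[a, b] * xp[i + a * dilation, j + b * dilation]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_mixed_dilation(x, k_small, k_large, gate_w, gate_b):
    """Blend a small-dilation and a large-dilation branch per pixel with a
    learned sigmoid gate (here a toy 1x1 'conv' on the input itself)."""
    branch1 = dilated_conv2d(x, k_small, dilation=1)
    branch2 = dilated_conv2d(x, k_large, dilation=2)
    gate = sigmoid(gate_w * x + gate_b)
    return gate * branch2 + (1.0 - gate) * branch1
```

Driving the gate toward 1 everywhere recovers the wide-context branch, toward 0 the local branch; a trained gate picks the mixture per spatial location.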
APA, Harvard, Vancouver, ISO, and other styles
44

Zhuchkov, Vitaly V., Dmitry A. Peterburgsky, and Evgeny N. Boldyrev. "Analytical assessment of fire tank and fire reservoir operating range." Fire and Emergencies: prevention, elimination 3 (2023): 39–47. http://dx.doi.org/10.25257/fe.2023.3.39-47.

Full text
Abstract:
PURPOSE. Draft amendments and addenda to code 8.13130.2020 “Fire protection systems. Outdoor fire-fighting water supply. Fire safety requirements”, which propose increasing the working radius of fire tanks and fire reservoirs to 400 m, are analyzed. Regulatory documents on outdoor fire-fighting water supply from 1947 to the present are examined. The authors analyzed aspects related to determining the operating range of a fire tank and an artificial fire reservoir, depending on the capabilities of fire departments and fire-fighting equipment. Such aspects include the level of equipment of main fire trucks, the technical capabilities of pumping equipment for using water sources located up to 400 m from the fire site, and the number of combat crews arriving at the fire scene. METHODS. System and structural analysis, mathematical statistics and descriptive analysis were used. FINDINGS. The authors proposed and analyzed possible schemes for using water sources located more than 200 m from protected objects; analytical characteristics of pump-hose systems for the proposed schemes were obtained; the operating characteristics of a fire pump and of pump-hose systems were plotted in the same coordinate system. RESEARCH APPLICATION FIELD. The results can be used when making a final decision on increasing the operating range of fire tanks and fire reservoirs. CONCLUSIONS. If the working radius of fire tanks and artificial reservoirs serving buildings and structures exceeds 200 m, the deployment time of emergency rescue services and the period of free fire development will increase; the operational capabilities of fire departments to solve related tasks will be reduced; water supply for firefighting will require at least two medium- or heavy-class fire trucks; and relay water supply with sequential operation of fire truck pumps will be required.
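The analytic characteristic of a pump-hose system mentioned in the findings is conventionally the head balance H = N·S·Q² + H_nozzle + Z (hose friction loss plus nozzle head and elevation). A minimal sketch with illustrative coefficients, not values taken from the article:

```python
def hose_sections(distance_m, section_length_m=20):
    """Number of standard hose sections needed to cover a lay distance
    (ceiling division)."""
    return -(-distance_m // section_length_m)

def required_pump_head(distance_m, flow_lps, hose_resistance=0.015,
                       nozzle_head=40.0, elevation_m=0.0):
    """Head (m) a fire pump must develop: hose friction loss N*S*Q^2
    plus nozzle head and elevation. The resistance coefficient is an
    illustrative per-section value, an assumption for this sketch."""
    n = hose_sections(distance_m)
    return n * hose_resistance * flow_lps ** 2 + nozzle_head + elevation_m
```

At a 400 m lay the hose line alone is 20 standard 20-m sections, which is why the increased working radius pushes toward relay pumping with two or more trucks.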
APA, Harvard, Vancouver, ISO, and other styles
45

Cricri, Francesco, Kostadin Dabov, Mikko J. Roininen, Sujeet Mate, Igor D. D. Curcio, and Moncef Gabbouj. "Multimodal Semantics Extraction from User-Generated Videos." Advances in Multimedia 2012 (2012): 1–17. http://dx.doi.org/10.1155/2012/292064.

Full text
Abstract:
User-generated video content has grown tremendously fast, to the point of outpacing professional content creation. In this work we develop methods that analyze contextual information of multiple user-generated videos in order to obtain semantic information about the public happenings (e.g., sport and live music events) being recorded in these videos. One of the key contributions of this work is the joint utilization of different data modalities, including data captured by auxiliary sensors during the video recording performed by each user. In particular, we analyze GPS data, magnetometer data, accelerometer data, and video- and audio-content data. We use these data modalities to infer information about the event being recorded, in terms of layout (e.g., stadium), genre, indoor versus outdoor scene, and the main area of interest of the event. Furthermore, we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users who fall within the field of view of other cameras recording at the same public happening. We show that the proposed multimodal analysis methods perform well on various recordings obtained at real sport events and live music performances.
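The last step, detecting which camera users fall within another camera's field of view, can be approximated from GPS positions and magnetometer headings alone. A simple sketch (the geometric test is a plausible approach, not necessarily the paper's exact method):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def in_field_of_view(cam_lat, cam_lon, cam_heading_deg, fov_deg,
                     other_lat, other_lon):
    """True if the other user's GPS position lies inside this camera's
    horizontal field of view, given the camera's magnetometer heading."""
    b = bearing_deg(cam_lat, cam_lon, other_lat, other_lon)
    diff = (b - cam_heading_deg + 180) % 360 - 180  # signed angle in (-180, 180]
    return abs(diff) <= fov_deg / 2
```

The modular arithmetic keeps the heading difference well-behaved across the 0°/360° wraparound, e.g. a camera heading 350° still sees a subject at bearing 10°.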
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Sihuan, Maosen Shao, Sifan Wu, Zhilin He, Hui Wang, Jinxiu Zhang, and Yue You. "Design and Demonstration of a Tandem Dual-Rotor Aerial–Aquatic Vehicle." Drones 8, no. 3 (March 15, 2024): 100. http://dx.doi.org/10.3390/drones8030100.

Full text
Abstract:
Aerial–aquatic vehicles (AAVs) hold great promise for marine applications, offering adaptability to diverse environments by seamlessly transitioning between underwater and aerial operations. Nevertheless, the design of AAVs poses inherent challenges, owing to the distinct characteristics of different fluid media. This article introduces a novel solution in the form of a tandem dual-rotor aerial–aquatic vehicle, strategically engineered to overcome these challenges. The proposed vehicle boasts a slender and streamlined body, enhancing its underwater mobility while utilizing a tandem rotor for aerial maneuvers. Outdoor scene tests were conducted to assess the tandem dual-rotor AAV’s diverse capabilities, including flying, hovering, and executing repeated cross-media locomotion. Notably, its versatility was further demonstrated through swift surface swimming on water. In addition to aerial evaluations, an underwater experiment was undertaken to evaluate the AAV’s ability to traverse narrow underwater passages. This capability was successfully validated through the creation of a narrow underwater gap. The comprehensive exploration of the tandem dual-rotor AAV’s potential is presented in this article, encompassing its foundational principles, overall design, simulation analysis, and avionics system design. The preliminary research and design outlined herein offer a proof of concept for the tandem dual-rotor AAV, establishing a robust foundation for AAVs seeking optimal performance in both water and air environments. This contribution serves as a valuable reference solution for the advancement of AAV technology.
APA, Harvard, Vancouver, ISO, and other styles
47

Hao, Yu, Fan Yang, Hao Huang, Shuaihang Yuan, Sundeep Rangan, John-Ross Rizzo, Yao Wang, and Yi Fang. "A Multi-Modal Foundation Model to Assist People with Blindness and Low Vision in Environmental Interaction." Journal of Imaging 10, no. 5 (April 26, 2024): 103. http://dx.doi.org/10.3390/jimaging10050103.

Full text
Abstract:
People with blindness and low vision (pBLV) encounter substantial challenges when it comes to comprehensive scene recognition and precise object identification in unfamiliar environments. Additionally, due to vision loss, pBLV have difficulty in accessing and identifying potential tripping hazards independently. Previous assistive technologies for the visually impaired often struggle in real-world scenarios due to the need for constant training and lack of robustness, which limits their effectiveness, especially in dynamic and unfamiliar environments, where accurate and efficient perception is crucial. Therefore, we frame our research question in this paper as: How can we assist pBLV in recognizing scenes, identifying objects, and detecting potential tripping hazards in unfamiliar environments, where existing assistive technologies often falter due to their lack of robustness? We hypothesize that by leveraging large pretrained foundation models and prompt engineering, we can create a system that effectively addresses the challenges faced by pBLV in unfamiliar environments. Motivated by the prevalence of large pretrained foundation models, particularly in assistive robotics applications, due to their accurate perception and robust contextual understanding in real-world scenarios induced by extensive pretraining, we present a pioneering approach that leverages foundation models to enhance visual perception for pBLV, offering detailed and comprehensive descriptions of the surrounding environment and providing warnings about potential risks. Specifically, our method begins by leveraging a large image tagging model (i.e., the Recognize Anything Model (RAM)) to identify all common objects present in the captured images. The recognition results and user query are then integrated into a prompt, tailored specifically for pBLV, using prompt engineering.
By combining the prompt and input image, a vision-language foundation model (i.e., InstructBLIP) generates detailed and comprehensive descriptions of the environment and identifies potential risks by analyzing environmental objects and scenic landmarks relevant to the prompt. We evaluate our approach through experiments conducted on both indoor and outdoor datasets. Our results demonstrate that our method can recognize objects accurately and provide insightful descriptions and analysis of the environment for pBLV.
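The prompt-engineering step, combining recognized object tags with the user's query before the vision-language model is invoked, might look like the following sketch (the template wording is hypothetical, and the calls to RAM and InstructBLIP themselves are omitted):

```python
def build_pblv_prompt(detected_tags, user_query):
    """Assemble a prompt for a vision-language model from image tags
    (e.g., the output of a tagging model such as RAM) and the user's
    question. The template text here is illustrative, not the paper's
    exact prompt."""
    tag_list = ", ".join(sorted(set(detected_tags)))  # dedupe, stable order
    return (
        "You are assisting a blind or low-vision user.\n"
        f"Objects recognized in the current camera view: {tag_list}.\n"
        f"User question: {user_query}\n"
        "Describe the scene in detail and explicitly warn about any "
        "potential tripping hazards or risks among these objects."
    )
```

The resulting string, paired with the input image, is what a model like InstructBLIP would condition on to produce the tailored description.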
APA, Harvard, Vancouver, ISO, and other styles
48

De Cesarei, Andrea, Shari Cavicchi, Antonia Micucci, and Maurizio Codispoti. "Categorization Goals Modulate the Use of Natural Scene Statistics." Journal of Cognitive Neuroscience 31, no. 1 (January 2019): 109–25. http://dx.doi.org/10.1162/jocn_a_01333.

Full text
Abstract:
Understanding natural scenes involves the contribution of bottom–up analysis and top–down modulatory processes. However, the interaction of these processes during the categorization of natural scenes is not well understood. In the current study, we approached this issue using ERPs and behavioral and computational data. We presented pictures of natural scenes and asked participants to categorize them in response to different questions (Is it an animal/vehicle? Is it indoors/outdoors? Are there one/two foreground elements?). ERPs for target scenes requiring a “yes” response began to differ from those of nontarget scenes, beginning at 250 msec from picture onset, and this ERP difference was unmodulated by the categorization questions. Earlier ERPs showed category-specific differences (e.g., between animals and vehicles), which were associated with the processing of scene statistics. From 180 msec after scene onset, these category-specific ERP differences were modulated by the categorization question that was asked. Categorization goals do not modulate only later stages associated with target/nontarget decision but also earlier perceptual stages, which are involved in the processing of scene statistics.
APA, Harvard, Vancouver, ISO, and other styles
49

Samba, Faye, Thiaw Cheikh, LO Mamadou, Ndiaye Bou, Mbengue Malick, Sow Demba, Diome Toffène, and Sembène Mbacké. "Evaluation of the Causes of Collective Food Poisoning (CFP) in University Campuses in Senegal Relating to a Lack of Qualification of University Restaurant Staff." Journal of Advances in Microbiology 23, no. 5 (April 25, 2023): 16–25. http://dx.doi.org/10.9734/jamb/2023/v23i5722.

Full text
Abstract:
In Senegal, the public universities UGB, UCAD, UIT, UADB and UASZ are often the scene of violent student protests following the occurrence of collective food poisoning (CFP). These illnesses are caused by the consumption of unsafe meals, usually prepared by unqualified staff. This study aims to identify the shortcomings related to the lack of qualification of those involved in catering and the causes of CFP, by specifically determining the levels of staff behaviour, qualification, risk in CFP outbreaks, and bacteriological contamination of surfaces and food to which students are exposed. To do this, a questionnaire was developed and a team of investigators was formed. From 2012 to 2017, a retrospective survey was conducted. The targets were students, restaurateurs, food vendors, residence chefs and medical workers. A system was set up for the collection and analysis of food samples (processed fish, hot meals, unpasteurized juices and sandwiches) and surfaces (trays and the hands of dishwashers and waitresses) at the UCAD ESEA restaurant under aseptic conditions. Data processing was carried out using an Excel spreadsheet and XLStat software. In University Restaurants, University Residences, Fast Foods and Outdoor Restaurants we obtained, respectively: (17.47; 43.24; 22.21 and 17.08), (21.05; 33.05; 14.02 and 31.88) and (65.4; 21.16; 10.96 and 2.48) for the levels of staff behaviour and qualification and the percentage risk. The contamination levels of trays, dishwashers' hands, waitresses' hands, processed fish, meals, unpasteurized juice and sandwiches are 50%, 48%, 55%, 74%, 100%, 94% and 91.5% respectively. From these results, it can be said that some causes of CFP are related to the lack of qualification of the staff; therefore, training these actors is a priority on these university campuses. To ensure safe meals and student safety, highly qualified staff must be recruited and continuously trained in good hygiene practices and HACCP.
APA, Harvard, Vancouver, ISO, and other styles
50

Luvidi, Loredana, Fernanda Prestileo, Michela De Paoli, Cristiano Riminesi, Rachele Manganelli Del Fà, Donata Magrini, and Fabio Fratini. "Diagnostics and Monitoring to Preserve a Hypogeum Site: The Case of the Mithraeum of Marino Laziale (Rome)." Heritage 4, no. 4 (November 9, 2021): 4264–88. http://dx.doi.org/10.3390/heritage4040235.

Full text
Abstract:
Conservation of hypogea and their accessibility to visitors is a difficult issue, due to the interaction of different factors such as the intrinsic characteristics of hypogeal environments and the presence of the public. A particular case is the Mithraeum of Marino Laziale, located a few kilometers from Rome and accidentally discovered in the 1960s. The uniqueness of the discovery lies in the presence of a well-preserved painting of the Mithraic scene (II century A.D.), probably owing to the oblivion of this place of worship over the centuries as well as its isolation from the outdoor environment. Unfortunately, despite a complete restoration and recovery of the archaeological area, which ended in 2015, the area was never opened to visitors, and only two years after completion of the works it was no longer safe to use. Hence the need for new planning of interventions, starting from deep knowledge of this cultural heritage and from analysis of past incorrect actions, to arrive at the opening (without any risk for the archaeological findings or visitors) and management of this site, never exposed to the public. Therefore, a diagnostic campaign and microclimate monitoring have been under way since 2018. The data collected during two years of investigation have been fundamental to assessing the conservation state of the hypogeal environment and the potential risks to the preservation of the three paintings (the Mithraic scene and two dadophores). Long-term monitoring of indoor environmental conditions is an essential tool for planning preventive conservation strategies and for controlling the site after its opening to visitors. Furthermore, the characterization of the microclimate is non-invasive, sufficiently economical and accurate.
In this paper, the characterization of surfaces in the Mithraic gallery through optical microscopy, UV fluorescence/imaging techniques, FT-IR spectroscopy and XRD, together with the variation of microclimatic parameters in the presence or absence of visitors, is used to define the strategies for the opening and public use of the Mithraeum. The strategies for sustainable public access to this unique archaeological site have been defined through a conservation protocol approved by the Italian Ministry of Cultural Heritage, which is necessary for the site managers and curators of the Municipality of Marino Laziale to finally support its opening.
APA, Harvard, Vancouver, ISO, and other styles