A ready-made bibliography on "Outdoor scene analysis"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Select the source type:

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on "Outdoor scene analysis".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, if the relevant parameters are available in the metadata.

Journal articles on "Outdoor scene analysis"

1

Ruan, Ling, Ling Zhang, Yi Long, and Fei Cheng. "Coordinate references for the indoor/outdoor seamless positioning". Proceedings of the ICA 1 (May 16, 2018): 1–7. http://dx.doi.org/10.5194/ica-proc-1-97-2018.

Full text source
Abstract:
Indoor positioning technologies are developing rapidly, and seamless positioning that connects indoor and outdoor space is a new trend. Indoor and outdoor positioning do not use the same coordinate system, and different indoor positioning scenes use different local coordinate reference systems. A specific, unified coordinate reference frame is needed as the spatial basis and premise of seamless positioning applications. Trajectory analysis that integrates indoor and outdoor movement also requires a uniform coordinate reference. However, a seamless-positioning coordinate reference frame applicable to various complex scenarios has long been lacking in the research. In this paper, we propose a universal coordinate reference frame for indoor/outdoor seamless positioning. The research focuses on analyzing and classifying indoor positioning scenes and puts forward methods for establishing the coordinate reference system and performing coordinate transformations in each scene. The feasibility of the calibration method was verified through experiments.
APA, Harvard, Vancouver, ISO, etc. styles
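The transformation methods themselves are not spelled out in the abstract; as a rough illustration of the basic operation involved, the following sketch (all names hypothetical) maps a point from an indoor local frame into a global frame with a 2D similarity transform:

```python
import math

def local_to_global(x, y, theta, tx, ty, scale=1.0):
    """Map a point (x, y) from an indoor local frame into a global frame:
    rotate by theta, scale, then translate by (tx, ty)."""
    gx = scale * (math.cos(theta) * x - math.sin(theta) * y) + tx
    gy = scale * (math.sin(theta) * x + math.cos(theta) * y) + ty
    return gx, gy

# A point one unit along the local x-axis, with the local frame rotated
# 90 degrees and its origin at (10, 20) in the global frame:
print(local_to_global(1.0, 0.0, math.pi / 2, 10.0, 20.0))
```

A full seamless-positioning frame would add vertical datum handling and per-scene calibration of theta, tx, and ty, which is what the paper's calibration experiments address.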
2

Kilawati, Andi, and Rosmalah Yanti. "Hubungan Outdor Activity dengan Scene Setting dalam Skill Mengajar Pembelajaran IPA Mahasiswa PGSD Universitas Cokroaminoto Palopo". Cokroaminoto Journal of Primary Education 2, no. 1 (April 30, 2019): 6–10. http://dx.doi.org/10.30605/cjpe.122019.100.

Full text source
Abstract:
This study generally aims to determine the relationship between outdoor activity and scene setting in elementary school science learning among PGSD students of Universitas Cokroaminoto Palopo. This is quantitative research using a correlational model. The research hypothesis was tested using two-way analysis of variance (ANOVA). The sample consisted of PGSD students of Universitas Cokroaminoto Palopo. The results indicate that: (1) there is a positive relationship between outdoor activities and scene settings; (2) there is a positive relationship between outdoor activities and scene settings in the science learning of PGSD students at Universitas Cokroaminoto Palopo.
APA, Harvard, Vancouver, ISO, etc. styles
3

Musel, Benoit, Louise Kauffmann, Stephen Ramanoël, Coralie Giavarini, Nathalie Guyader, Alan Chauvin, and Carole Peyrin. "Coarse-to-fine Categorization of Visual Scenes in Scene-selective Cortex". Journal of Cognitive Neuroscience 26, no. 10 (October 2014): 2287–97. http://dx.doi.org/10.1162/jocn_a_00643.

Full text source
Abstract:
Neurophysiological, behavioral, and computational data indicate that visual analysis may start with the parallel extraction of different elementary attributes at different spatial frequencies and follows a predominantly coarse-to-fine (CtF) processing sequence (low spatial frequencies [LSF] are extracted first, followed by high spatial frequencies [HSF]). Evidence for CtF processing within scene-selective cortical regions is, however, still lacking. In the present fMRI study, we tested whether such processing occurs in three scene-selective cortical regions: the parahippocampal place area (PPA), the retrosplenial cortex, and the occipital place area. Fourteen participants were subjected to functional scans during which they performed a categorization task of indoor versus outdoor scenes using dynamic scene stimuli. Dynamic scenes were composed of six filtered images of the same scene, from LSF to HSF or from HSF to LSF, allowing us to mimic a CtF or the reverse fine-to-coarse (FtC) sequence. Results showed that only the PPA was more activated for CtF than FtC sequences. Equivalent activations were observed for both sequences in the retrosplenial cortex and occipital place area. This study suggests for the first time that CtF sequence processing constitutes the predominant strategy for scene categorization in the PPA.
APA, Harvard, Vancouver, ISO, etc. styles
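The coarse-to-fine stimuli described above can be imitated by filtering the same image with progressively weaker low-pass filters. A rough sketch, with a box blur standing in for proper spatial-frequency filtering (function names are my own, not the study's):

```python
def box_blur(img, radius):
    """Box-blur a 2D grayscale image (list of rows); a larger radius keeps
    only lower spatial frequencies."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [img[ii][jj]
                      for ii in range(max(0, i - radius), min(h, i + radius + 1))
                      for jj in range(max(0, j - radius), min(w, j + radius + 1))]
            out[i][j] = sum(window) / len(window)
    return out

def coarse_to_fine_sequence(img, radii=(4, 3, 2, 1, 0)):
    """LSF-to-HSF sequence: start heavily blurred, end with the original.
    Reversing `radii` yields the fine-to-coarse (FtC) control sequence."""
    return [box_blur(img, r) if r > 0 else img for r in radii]
```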
4

CHAN, K. L. "VIDEO-BASED GAIT ANALYSIS BY SILHOUETTE CHAMFER DISTANCE AND KALMAN FILTER". International Journal of Image and Graphics 08, no. 03 (July 2008): 383–418. http://dx.doi.org/10.1142/s0219467808003155.

Full text source
Abstract:
A markerless human gait analysis system using uncalibrated monocular video is developed. A background model is trained to extract the subject silhouette, whether in a static or a dynamic scene, in each video frame. A generic 3D human model is manually fitted to the subject silhouette in the first video frame. We propose the silhouette chamfer, which combines the chamfer distance of the silhouette with region information, as one matching feature. This, combined dynamically with the model gradient, is used to search for the best fit between the subject silhouette and the 3D model. Finally, we use a discrete Kalman filter to predict and correct the pose of the walking subject in each video frame. We propose a quantitative measure that can be used to identify tracking faults automatically. Errors in the joint angle trajectories can then be corrected and the walking cycle interpreted. Experiments have been carried out on video captured in static indoor as well as outdoor scenes.
APA, Harvard, Vancouver, ISO, etc. styles
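The chamfer-distance half of the matching feature can be sketched with a plain two-pass distance transform; the paper's silhouette chamfer additionally folds in region information, which is omitted here (names hypothetical):

```python
def manhattan_distance_transform(mask):
    """Two-pass Manhattan distance transform: distance of each pixel to the
    nearest foreground (True) pixel of a binary silhouette mask."""
    INF = 10 ** 9
    h, w = len(mask), len(mask[0])
    d = [[0 if mask[i][j] else INF for j in range(w)] for i in range(h)]
    for i in range(h):            # forward pass: top-left to bottom-right
        for j in range(w):
            if i > 0:
                d[i][j] = min(d[i][j], d[i - 1][j] + 1)
            if j > 0:
                d[i][j] = min(d[i][j], d[i][j - 1] + 1)
    for i in range(h - 1, -1, -1):  # backward pass: bottom-right to top-left
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i][j] = min(d[i][j], d[i + 1][j] + 1)
            if j < w - 1:
                d[i][j] = min(d[i][j], d[i][j + 1] + 1)
    return d

def chamfer_score(model_points, dist):
    """Mean distance-transform value at the projected model points:
    lower means a better silhouette fit."""
    return sum(dist[i][j] for i, j in model_points) / len(model_points)
```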
5

Tylecek, Radim, and Robert Fisher. "Consistent Semantic Annotation of Outdoor Datasets via 2D/3D Label Transfer". Sensors 18, no. 7 (July 12, 2018): 2249. http://dx.doi.org/10.3390/s18072249.

Full text source
Abstract:
The advance of scene understanding methods based on machine learning relies on the availability of large ground-truth datasets, which are essential for their training and evaluation. Constructing such datasets from real sensor imagery, however, typically requires extensive manual annotation of semantic regions in the data. To speed up this process, we propose a framework for semantic annotation of scenes captured by moving camera(s), e.g., mounted on a vehicle or robot. It uses an available 3D model of the traversed scene to project segmented 3D objects into each camera frame, yielding an initial annotation of the associated 2D image that is then manually refined by the user. The refined annotation can be transferred to the next consecutive frame using optical flow estimation. We evaluated the efficiency of the proposed framework during the production of a labelled outdoor dataset. The analysis of annotation times shows that up to 43% less effort is required on average, and the consistency of the labelling is also improved.
APA, Harvard, Vancouver, ISO, etc. styles
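The initial-annotation step, projecting labelled 3D geometry into a camera frame, reduces to a pinhole projection; occlusion handling and lens distortion, which a real pipeline needs, are omitted in this sketch (all names hypothetical):

```python
def project_point(K, R, t, X):
    """Project world point X into pixel coordinates with a pinhole camera:
    first X_cam = R @ X + t, then perspective-divide and apply intrinsics K."""
    Xc = [sum(R[i][k] * X[k] for k in range(3)) + t[i] for i in range(3)]
    if Xc[2] <= 0:
        return None  # point is behind the camera
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return (u, v)

# Identity pose, focal length 100, principal point (50, 50):
K = [[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
```

Each projected point would then carry the semantic label of its 3D object, seeding the 2D annotation that the user refines.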
6

Bielecki, Andrzej, and Piotr Śmigielski. "Three-Dimensional Outdoor Analysis of Single Synthetic Building Structures by an Unmanned Flying Agent Using Monocular Vision". Sensors 21, no. 21 (November 1, 2021): 7270. http://dx.doi.org/10.3390/s21217270.

Full text source
Abstract:
An algorithm for analyzing and understanding a 3D urban-type environment by an autonomous flying agent equipped only with monocular vision is presented. The algorithm is hierarchical and based on a structural representation of the analyzed scene. First, the robot observes the scene from a high altitude to build a 2D representation of each single object and a graph representation of the 2D scene. The 3D representation of each object arises as a consequence of the robot's actions, by which it projects the object's solid onto different planes. The robot assigns the obtained representations to the corresponding vertices of the created graph. The algorithm was tested using an embodied robot operating in a real scene. The tests showed that the robot equipped with the algorithm was able not only to localize the predefined object, but also to perform safe, collision-free maneuvers close to the structures in the scene.
APA, Harvard, Vancouver, ISO, etc. styles
7

Kuhnert, Lars, and Klaus-Dieter Kuhnert. "Sensor-Fusion Based Real-Time 3D Outdoor Scene Reconstruction and Analysis on a Moving Mobile Outdoor Robot". KI - Künstliche Intelligenz 25, no. 2 (February 9, 2011): 117–23. http://dx.doi.org/10.1007/s13218-011-0093-z.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
8

Oo, Thanda, Hiroshi Kawasaki, Yutaka Ohsawa, and Katsushi Ikeuchi. "Separation of Reflection and Transparency Based on Spatiotemporal Analysis for Outdoor Scene". IPSJ Digital Courier 2 (2006): 428–40. http://dx.doi.org/10.2197/ipsjdc.2.428.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
9

He, Wei. "Research on Outdoor Garden Scene Reconstruction Based on PMVS Algorithm". Scientific Programming 2021 (November 24, 2021): 1–10. http://dx.doi.org/10.1155/2021/4491382.

Full text source
Abstract:
The three-dimensional reconstruction of outdoor landscapes is of great significance for the construction of digital cities. With the rapid development of big data and Internet of Things technology, traditional image-based 3D reconstruction methods that restore the 3D information of objects in an image produce point clouds with many redundant points and insufficient density. Based on an analysis of existing three-dimensional reconstruction technology, and considering the characteristics of outdoor garden scenes, this paper presents methods for detecting and extracting the relevant feature points, applies feature matching, and repairs the holes generated by point-cloud meshing. By adopting a candidate strategy for feature points and adding a mesh subdivision processing step, an improved PMVS algorithm is proposed that solves the problem of sparse point clouds in 3D reconstruction. Experimental results show that the proposed method not only effectively realizes the three-dimensional reconstruction of outdoor garden scenes, but also improves the execution efficiency of the algorithm while preserving the reconstruction quality.
APA, Harvard, Vancouver, ISO, etc. styles
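PMVS-style reconstruction expands patches by photometric consistency, typically scored with normalized cross-correlation; a minimal sketch of that building block follows (the paper's candidate strategy and mesh subdivision are not reproduced here):

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equal-size grayscale patches
    (flattened lists); +1 means identical up to brightness and contrast,
    -1 means inverted."""
    n = len(patch_a)
    ma = sum(patch_a) / n
    mb = sum(patch_b) / n
    num = sum((a - ma) * (b - mb) for a, b in zip(patch_a, patch_b))
    da = math.sqrt(sum((a - ma) ** 2 for a in patch_a))
    db = math.sqrt(sum((b - mb) ** 2 for b in patch_b))
    if da == 0 or db == 0:
        return 0.0  # a flat patch carries no correlation signal
    return num / (da * db)
```

A patch projected into two views would be accepted as photometrically consistent when its NCC score exceeds some threshold (e.g. 0.7 in common PMVS configurations).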
10

Angsuwatanakul, Thanate, Jamie O’Reilly, Kajornvut Ounjai, Boonserm Kaewkamnerdpong, and Keiji Iramina. "Multiscale Entropy as a New Feature for EEG and fNIRS Analysis". Entropy 22, no. 2 (February 7, 2020): 189. http://dx.doi.org/10.3390/e22020189.

Full text source
Abstract:
The present study aims to apply multiscale entropy (MSE) to analyse brain activity in terms of brain complexity levels and to use simultaneous electroencephalogram and functional near-infrared spectroscopy (EEG/fNIRS) recordings for brain functional analysis. A memory task was selected to demonstrate the potential of this multimodal approach, since memory is a highly complex neurocognitive process and the mechanisms governing selective retention of memories are not fully understood by other approaches. In this study, 15 healthy participants with normal colour vision performed a visual memory task, which involved making the executive decision to remember or forget the visual stimuli of their own volition. In a continuous stimulus set, 250 indoor/outdoor scenes were presented at random, between periods of fixation on a black background. The participants were instructed to make a binary choice indicating whether they wished to remember or forget the image; both stimulus and response times were stored for analysis. The participants then performed a scene recognition test to confirm whether or not they remembered the images. The results revealed that participants intentionally memorising a visual scene demonstrate significantly greater brain complexity levels in the prefrontal and frontal lobes than when purposefully forgetting a scene; p < 0.05 (two-tailed). This suggests that simultaneous EEG and fNIRS can be used for brain functional analysis, and MSE may be a potential indicator for this multimodal approach.
APA, Harvard, Vancouver, ISO, etc. styles
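Multiscale entropy itself is simple to sketch: coarse-grain the signal at several scales and compute sample entropy at each. The parameter conventions below (m = 2, tolerance r) are the usual defaults, not necessarily the study's exact settings:

```python
import math

def coarse_grain(signal, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): B counts pairs of length-m templates within
    tolerance r (Chebyshev distance); A does the same for length m+1."""
    def matches(mm):
        c, n = 0, len(x)
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c
    B, A = matches(m), matches(m + 1)
    if A == 0 or B == 0:
        return float("inf")  # undefined for too-short or too-irregular input
    return -math.log(A / B)

def multiscale_entropy(signal, scales=(1, 2, 3)):
    """MSE curve: sample entropy of the coarse-grained signal at each scale."""
    return [sample_entropy(coarse_grain(signal, s)) for s in scales]
```

This quadratic implementation is fine for illustration; production EEG pipelines use vectorized or tree-based SampEn estimators.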

Doctoral dissertations on "Outdoor scene analysis"

1

Hu, Sijie. "Deep multimodal visual data fusion for outdoor scenes analysis in challenging weather conditions". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST121.

Full text source
Abstract:
Multimodal visual data can provide different information about the same scene, enhancing the accuracy and robustness of scene analysis. This thesis focuses on how to effectively utilize multimodal visual data such as color images, infrared images, and depth images, and how to fuse these data for a more comprehensive understanding of the environment. Semantic segmentation and object detection, two representative computer vision tasks, were selected for investigating and verifying different multimodal visual data fusion methods. We propose an additive-attention-based RGB-D fusion scheme that treats the depth map as an auxiliary modality providing additional geometric cues, while avoiding the high cost associated with self-attention. Considering the complexity of scene perception under low-light conditions, we designed a cross-fusion module that uses channel and spatial attention to exploit the complementary information of visible-infrared image pairs, enhancing the system's perception of the environment. Finally, we investigated the application of multimodal visual data in unsupervised domain adaptation, proposing to leverage depth cues to guide the model toward learning domain-invariant feature representations. Extensive experimental results indicate that the proposed methods outperform others on multiple publicly available multimodal datasets and can be extended to different types of models, further demonstrating the robustness and generalization capabilities of our methods in outdoor scene perception tasks.
APA, Harvard, Vancouver, ISO, etc. styles
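As a toy illustration of attention-gated fusion in the spirit described above (not the thesis's actual scheme; a scalar softmax gate over two feature vectors, all names hypothetical):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def additive_attention_fuse(rgb, depth, v_rgb, v_dep):
    """Score each modality with a learned vector (v_m . tanh(f_m)), softmax
    the two scores into mixing weights, and blend the feature vectors."""
    s_r = dot(v_rgb, [math.tanh(x) for x in rgb])
    s_d = dot(v_dep, [math.tanh(x) for x in depth])
    m = max(s_r, s_d)                     # subtract max for numerical stability
    e_r, e_d = math.exp(s_r - m), math.exp(s_d - m)
    a_r, a_d = e_r / (e_r + e_d), e_d / (e_r + e_d)
    return [a_r * r + a_d * d for r, d in zip(rgb, depth)]
```

In the thesis this gating operates on deep feature maps inside a segmentation or detection network; here the two "modalities" are just plain vectors.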

Books on "Outdoor scene analysis"

1

Dellaert, Frank, Jan-Michael Frahm, Marc Pollefeys, Laura Leal-Taixé, and Bodo Rosenhahn, eds. Outdoor and Large-Scale Real-World Scene Analysis. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34091-8.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
2

Frahm, Jan-Michael, Marc Pollefeys, Laura Leal-Taixé, Bodo Rosenhahn, and SpringerLink (Online service), eds. Outdoor and Large-Scale Real-World Scene Analysis: 15th International Workshop on Theoretical Foundations of Computer Vision, Dagstuhl Castle, Germany, June 26 - July 1, 2011. Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text source
APA, Harvard, Vancouver, ISO, etc. styles
3

Pollefeys, Marc, Frank Dellaert, Bodo Rosenhahn, Laura Leal-Taixé, and Jan-Michael Frahm. Outdoor and Large-Scale Real-World Scene Analysis: 15th International Workshop on Theoretical Foundations of Computer Vision, Dagstuhl Castle, Germany, June 26 - July 1, 2011. Revised Selected Papers. Springer Berlin / Heidelberg, 2012.

Find full text source
APA, Harvard, Vancouver, ISO, etc. styles

Book chapters on "Outdoor scene analysis"

1

Payne, Andrew, and Sameer Singh. "A Benchmark for Indoor/Outdoor Scene Classification". In Pattern Recognition and Image Analysis, 711–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11552499_78.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
2

Pillai, Ignazio, Riccardo Satta, Giorgio Fumera, and Fabio Roli. "Exploiting Depth Information for Indoor-Outdoor Scene Classification". In Image Analysis and Processing – ICIAP 2011, 130–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24088-1_14.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
3

Sato, Tomokazu, Masayuki Kanbara, and Naokazu Yokoya. "3-D Modeling of an Outdoor Scene from Multiple Image Sequences by Estimating Camera Motion Parameters". In Image Analysis, 717–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-45103-x_95.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
4

Trinh, Hoang-Hon, Dae-Nyeon Kim, Suk-Ju Kang, and Kang-Hyun Jo. "Building-Based Structural Data for Core Functions of Outdoor Scene Analysis". In Emerging Intelligent Computing Technology and Applications, 625–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04070-2_68.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
5

Ancuti, Codruta Orniana, Cosmin Ancuti, and Philippe Bekaert. "Single Image Restoration of Outdoor Scenes". In Computer Analysis of Images and Patterns, 245–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23678-5_28.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles

Conference papers on "Outdoor scene analysis"

1

Seon Joo Kim, Jan-Michael Frahm, and Marc Pollefeys. "Radiometric calibration with illumination change for outdoor scene analysis". In 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2008. http://dx.doi.org/10.1109/cvpr.2008.4587648.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
2

Lawton, Teri B. "Dynamic object-oriented scene analysis based on computational neurobiology". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.wj5.

Full text source
Abstract:
The construction of robust object-oriented depth maps is fundamental to understanding the topography and motion of objects located within a given terrain. A computational vision system that is particularly novel in its approach has been developed; it creates object maps by using algorithms based on biological models. The success of this method is demonstrated by the speed and robustness of the results when the input consists of natural outdoor scenes, where the effects of terrain, shadows, scene illumination, reference landmarks, and scene complexity can be systematically explored. The performance of a dynamic object-oriented computational vision system based on the layered neural network architecture used for primate depth perception is presented. The cortical architecture indicated by studies in neurobiology, encompassing multiple areas of the brain, and its embodiment in the computational vision system, is described. The role of visual landmarks, multiresolution texture, shape from shading, boundary completion, region content filling, motion parallax, structure from motion, occlusion information, and Hebbian learning in human and robotic vision is discussed.
APA, Harvard, Vancouver, ISO, etc. styles
3

Haga, T., K. Sumi, and Y. Yagi. "Human detection in outdoor scene using spatio-temporal motion analysis". In Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004. IEEE, 2004. http://dx.doi.org/10.1109/icpr.2004.1333770.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
4

Jinghua Ding, Sumana Das, Syed A. A. Kazmi, Navrati Saxena, and Syed Faraz Hasan. "An outdoor assessment of scene analysis for Wi-Fi based positioning". In 2014 IEEE 3rd Global Conference on Consumer Electronics (GCCE). IEEE, 2014. http://dx.doi.org/10.1109/gcce.2014.7031095.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
5

Okutani, Keita, Takami Yoshida, Keisuke Nakamura, and Kazuhiro Nakadai. "Outdoor auditory scene analysis using a moving microphone array embedded in a quadrocopter". In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012). IEEE, 2012. http://dx.doi.org/10.1109/iros.2012.6385994.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
6

Sun, Xiaotian. "The smell of the scene - Mapping the digital smell of scenes around Beijing". In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1001777.

Full text source
Abstract:
Smell is an underrated sensation in our lives. In a culture dominated by vision, it is common to hide and deny the true smell of things, yet smell may be among the strongest and most interesting sensations humans possess: it is primitive, instinctive, sensual, and uncontrollable. We are surrounded by smells carried through the air and cannot avoid perceiving them; we judge the quality of things through smell and receive thousands of messages from the small particles that reach our nostrils. Smell reaches the limbic regions of the brain before any other sensory stimulus, connecting to the most primitive part of human experience and the strongest emotions. To explore digital smell, the author previously designed a "scentgraphy" device, for which establishing the relationship between color and smell was the main task. This paper begins to establish the relationship between smell and scene so as to make digital smell more accurate; in subsequent research, this relationship is expected to be applied in scentgraphy 4.0, based on computer-vision recognition. The aim is therefore to map common scenes from everyday life or nature to their corresponding smells. Because of geographical and cultural differences, the same kind of scene may smell different, so the study examines different scenes within a single city. Taking Beijing as an example, the smells of six kinds of outdoor scenes were recorded through on-site perception and photography.
The results will be applied in a digital olfaction project involving computer-vision recognition, with this paper seeking a representative smell for each kind of scene. Drawing on Dr Kate McLean's smell-mapping method, previously used to study and design urban smellscapes, participants selected a scene, photographed it, and analysed its smell (rated on a 1-5 experience scale), finally recording the time, place, and noise level (also rated 1-5). The collected questionnaires were analysed both qualitatively and quantitatively to identify the characteristic smell of each scene type in Beijing; the results serve as the basis for a subsequent smell-computation model and smell database. The experiment, which began online, was divided into four parts: recruiting volunteers, collecting data, recovering and processing data, and analysing and summarising data.
APA, Harvard, Vancouver, ISO, etc. styles
7

Vrsnak, Donik, Ilija Domislovic, Marko Subasic, and Sven Loncaric. "Illuminant estimation error detection for outdoor scenes using transformers". In 2021 12th International Symposium on Image and Signal Processing and Analysis (ISPA). IEEE, 2021. http://dx.doi.org/10.1109/ispa52656.2021.9552045.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
8

Huang, Hun, Ge Gao, Ziyi Ke, Cheng Peng, and Ming Gu. "A Multi-Scenario Crowd Data Synthesis Based On Building Information Modeling". In The 29th EG-ICE International Workshop on Intelligent Computing in Engineering. EG-ICE, 2022. http://dx.doi.org/10.7146/aul.455.c223.

Full text source
Abstract:
Deep learning methods have recently proven effective in the field of crowd analysis. Nonetheless, the performance of deep learning models is limited by inadequate training datasets. Because of policy implications and privacy restrictions, crowd data are commonly difficult to access. To overcome the difficulty of insufficient datasets, previous work synthesized labelled crowd data in outdoor scenes and virtual games. However, these methods perform data synthesis with limited environmental information and inflexible crowd rules, usually in unrealistic environments. In this paper, a tool for synthesizing crowd data in BIM models with multiple scenes is proposed. This tool makes full use of the comprehensive information of real-world buildings and conducts crowd simulations by setting behavior rules. The synthesized dataset is used for data augmentation in crowd analysis problems, and the experimental results clearly confirm the effectiveness of the tool.
APA, Harvard, Vancouver, ISO, etc. styles
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!

To the bibliography