Journal articles on the topic 'Modèle humain virtuel'

Consult the top 50 journal articles for your research on the topic 'Modèle humain virtuel.'

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Albessard-Ball, Lou, Sophie Gallas, and Dominique Grimaud-Hervé. "Penser l’évolution du cerveau humain, de l’objet fossile au modèle virtuel." Intellectica. Revue de l'Association pour la Recherche Cognitive 73, no. 2 (2020): 27–55. http://dx.doi.org/10.3406/intel.2020.1963.

Abstract:
One of the characteristics most often invoked when attempting to describe what makes us human is the "modern" human brain and its cognitive capacities. Identifying, in fossil hominins, behaviours that we recognise as similar to our own therefore plays a major role in understanding the origins of this humanity. It is from this perspective, set within a naturalistic view of the evolutionary history of our species, that palaeoanthropologists and archaeologists study the evolution of the brain and the origins of human cognition. This field of study is as difficult as it is fascinating, owing to the fragmentary nature of the fossil material we attempt to interpret, but also to the closeness between the object under study and the scientist seeking to elucidate the evolution of their own species. Here we discuss the methods available to palaeoanthropologists for reconstructing, and constantly rethinking, the evolution of the human brain.
2

Dong, Xia, Ke Dian Wang, Jun Wei Cao, and Xue Song Mei. "Perception System Design of Virtual Human by Object-Oriented Method." Advanced Materials Research 204-210 (February 2011): 866–71. http://dx.doi.org/10.4028/www.scientific.net/amr.204-210.866.

Abstract:
This paper focuses on the perception system of a virtual human placed in dangerous circumstances. An emotion-based decision-making model is formulated mathematically, and a visual perception method is designed using a geometric algorithm that determines the visibility of surrounding objects. The perception module is designed with an object-oriented method and comprises environment-information notification and the virtual human's responder. Communication between the virtual human and nearby objects is carried out through this perception module. A platform for simulating virtual human behaviors is built with OpenGL in Visual C++ 6.0, and the simulation results show that the proposed perception system design is feasible.
3

McNutt, Kathleen. "Research Note: Do Virtual Policy Networks Matter? Tracing Network Structure Online." Canadian Journal of Political Science 39, no. 2 (June 2006): 391–405. http://dx.doi.org/10.1017/s0008423906060161.

Abstract:
The Internet, operating as a technologically embedded laboratory of human activity, provides social scientists with a new set of analytical tools by which to test and replicate models of social and political behaviour, with data extrapolated from the regularities of online activity, organization and information exchange. This research note demonstrates that virtual policy networks, arrangements of public interaction between mutually supporting actors that form around policy activities, exist on the Web. In addition, the note considers whether or not Canadian virtual policy networks are mimicking their respective national policy communities through the application of a methodological approach referred to as link structure analysis. Four sectorally based networks, including Aboriginal policy, agriculture, banking and women-centred policy, are analyzed to assess the extent of virtual policy networks' replication of real-world policy dynamics.
4

Maruyama, Tsubasa, Toshio Ueshiba, Mitsunori Tada, Haruki Toda, Yui Endo, Yukiyasu Domae, Yoshihiro Nakabo, Tatsuro Mori, and Kazutsugu Suita. "Digital Twin-Driven Human Robot Collaboration Using a Digital Human." Sensors 21, no. 24 (December 10, 2021): 8266. http://dx.doi.org/10.3390/s21248266.

Abstract:
Advances are being made in applying digital twin (DT) and human–robot collaboration (HRC) to industrial fields for safe, effective, and flexible manufacturing. Using a DT for human modeling and simulation enables ergonomic assessment during working. In this study, a DT-driven HRC system was developed that measures the motions of a worker and simulates the working progress and physical load based on digital human (DH) technology. The proposed system contains virtual robot, DH, and production management modules that are integrated seamlessly via wireless communication. The virtual robot module contains the robot operating system and enables real-time control of the robot based on simulations in a virtual environment. The DH module measures and simulates the worker’s motion, behavior, and physical load. The production management module performs dynamic scheduling based on the predicted working progress under ergonomic constraints. The proposed system was applied to a parts-picking scenario, and its effectiveness was evaluated in terms of work monitoring, progress prediction, dynamic scheduling, and ergonomic assessment. This study demonstrates a proof-of-concept for introducing DH technology into DT-driven HRC for human-centered production systems.
5

Kellam, Hugh, Clare Cook, Deborah L. Smith, and Pam Haight. "The Virtual Community of Practice Facilitation Model." International Journal of Technology and Human Interaction 19, no. 1 (August 18, 2023): 1–14. http://dx.doi.org/10.4018/ijthi.328578.

Abstract:
This study examines the instructional design, learning experiences, and outcomes of a virtual community of practice (VCoP). In 2019, the Northern Ontario School of Medicine launched a continuing professional development program consisting of an asynchronous online module followed by an optional series of facilitated case-based videoconference workshops, designed as a VCoP. This program evaluation study employed a convergent parallel mixed methods design and combined data sources from participant pre- and post-program surveys and reflections with a content analysis of semi-structured interviews. The paper reports key enablers that contributed to the following outcomes: the value of an online module as a baseline of knowledge; the impact of the shared case studies, experiences, and peer support on reflection and modifications to medical practice; and skill development and patient-centered care as a result of module and VCoP participation. A model for the effective design and delivery of VCoPs is proposed that results in the acquisition of new knowledge and skills and promotes patient-centred practice.
6

Lafargue, Bernard. "De Blade Runner à A.I. : une machine plus humaine que l’homme." Figures de l'Art. Revue d'études esthétiques 6, no. 1 (2002): 461–67. http://dx.doi.org/10.3406/fdart.2002.1327.

Abstract:
Perfectibility is to man what instinct is to the animal. Dwelling in the virtual, man is destined to endlessly become what he is. What committee of sages could set a limit to the metamorphoses of a totipotent, protean and prosthetic being? Humanity is never definitively acquired, but is always in "hominiscence", to use the apt word that Michel Serres coined on the model of "adolescence". The immense merit of science-fiction films is that they let us (fore)see some possible figures of the man of tomorrow. From Blade Runner to A.I., everything happens as if the virtual machine were becoming the model for man.
7

Coutu, Eric, Alexis Margaritis, and Geneviève Hachez. "Validation of the Anthropometric Data Acquisition Module of Safework." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 38 (July 2000): 840–43. http://dx.doi.org/10.1177/154193120004403843.

Abstract:
Existing Virtual Reality systems do not often integrate a realistic representation of the user. This lack of an accurate human model within such systems reduces the feeling of immersion for the subject. Therefore, one of the premises when using a virtual environment should be to generate a mannequin with anthropometric characteristics that accurately correspond to the user. Because the anthropometric data usually require a long acquisition process, a new sophisticated measuring technique has been created. This method is capable of automatically capturing some dimensions of a subject and re-creating the corresponding virtual human. It uses a motion capture system and requires placing sensors on specific anatomic landmarks in order to measure anthropometric variables directly. In this paper, we present the different steps and results of the validation of this new feature of SAFEWORK®.
8

LI, Fu Xing. "The Application Research on Ergonomics Based on the CATIA Software Platform." Applied Mechanics and Materials 651-653 (September 2014): 2050–54. http://dx.doi.org/10.4028/www.scientific.net/amm.651-653.2050.

Abstract:
Using the ergonomic analysis module of the CATIA software platform, a virtual human body model is introduced into the digital model of the ambulance rescue cabin, thereby establishing a virtual man-machine relationship. Ergonomic theory is then employed to guide the optimization of the rescue cabin design, in order to explore how the CATIA-based ergonomic analysis module can be applied in product development and design.
9

Han, Chang Hee, Won Il Kim, and Myoung Ho Oh. "A Functional Test Bed for Producing Virtual Human's Human-Like Movement Based on Limited Perception." Applied Mechanics and Materials 284-287 (January 2013): 3251–55. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.3251.

Abstract:
In this paper, movement behavior for a virtual human based on realistically limited perception (RLP) is proposed in order to make the virtual human's movement more human-like. As an interface between perception and movement-path generation, a mapping module is a fundamental component needed by a virtual human. Mapping based on RLP was studied by Hill et al.; however, their research was conducted using only a camera's viewpoint. In the present research, a virtual human that integrates Hill et al.'s mapping with additional variables (e.g., enemy emergence) is considered in the context of a reconnaissance mission. The loci of the movement paths generated by human subjects and by the RLP-based virtual human are compared with each other.
10

Göbel, Stefan, Ido Aharon Iurgel, Markus Rössler, Frank Hülsken, and Christian Eckes. "Design and Narrative Structure for the Virtual Human Scenarios." International Journal of Virtual Reality 6, no. 4 (January 1, 2007): 1–10. http://dx.doi.org/10.20870/ijvr.2007.6.4.2703.

Abstract:
This article describes the design of the two application scenarios of the Virtual Human project and its integration into the Virtual Human system. This includes overall concepts and considerations of the demonstrators for the two application scenarios (learning, edutainment) as well as underlying methodic-didactic aspects for knowledge transmission and narrative concepts for story structure and story control during run-time of the system. Hence, in contrast to traditional learning systems with virtual characters as virtual instructors, an exciting and suspenseful interactive information space has been created. On the one hand, the methodic-didactic methods and VH learning model guarantee learning effects, on the other hand narrative structures and an emotion module provide the ground for a playful and exciting story environment, whereby the users can interact and discuss with a set of virtual characters.
11

Zawani Ahmmad, Siti Nor, Eileen Su Lee Ming, Yeong Che Fai, Suneet Sood, Anil Gandhi, Nur Syarafina Mohamed, Hisyam Abdul Rahman, and Etienne Burdet. "Objective assessment of surgeon’s psychomotor skill using virtual reality module." Indonesian Journal of Electrical Engineering and Computer Science 14, no. 3 (June 1, 2019): 1533. http://dx.doi.org/10.11591/ijeecs.v14.i3.pp1533-1543.

Abstract:
This study aims to identify measurable parameters that could be used as objective assessment parameters to evaluate surgical dexterity using a computer-based assessment module. A virtual reality module was developed to measure dynamic and static hand movements in a bimanual experimental setting. The experiment was conducted with sixteen subjects divided into two groups: surgeons (N = 5) and non-surgeons (N = 11). Results showed that surgeons outperformed the non-surgeons in motion path accuracy, motion path precision, economy of movement, motion smoothness, end-point accuracy and end-point precision. The six objective parameters can complement existing assessment methods to better quantify a trainee's performance. These parameters could also provide information about hand movements that cannot be measured with the human eye. An assessment strategy using appropriate parameters could help trainees learn on computer-based systems, identify their mistakes and improve their skill towards competency, without relying too much on bench models and cadavers.
12

Zhu, Chun. "Design of Athlete’s Running Information Capture System in Space-Time Domain Based on Virtual Reality." Scientific Programming 2022 (February 3, 2022): 1–10. http://dx.doi.org/10.1155/2022/9415286.

Abstract:
To improve the training effect for athletes, and to address the problems of inaccurate capture results, poor real-time performance and ineffective noise suppression in traditional methods, an athlete space-time running information capture system based on virtual reality is designed. The athlete's skeleton is established and the joint points of motion are obtained, the overall architecture of the capture system is designed, and the full capture chain is realized through an RF chip, an infrared camera, a motion data acquisition module, a data transmission module and a human-computer interaction module. A virtual reality environment is built to obtain the characteristic parameters of the athlete's posture in the space-time domain, and a median filtering algorithm is applied to the raw signal to eliminate the impact of noise on motion capture. Finally, the activity of the motion region is detected and the motion information is captured using a Gaussian mixture model. The experimental results show that the proposed system has high accuracy and strong anti-interference capability and can capture motion information in real time.
13

Shcheglov, B. O., N. I. Bezulenko, S. A. Atashchikov, and S. N. Scheglova. "Virtual Atlas of Personified Human Anatomy “SkiaAtlas” and the Possibility of Its Application." Vestnik NSU. Series: Information Technologies 18, no. 1 (2020): 83–93. http://dx.doi.org/10.25205/1818-7900-2020-18-1-83-93.

Abstract:
The work describes the structure of the developed SkiaAtlas software, which is focused on working with individual anatomical models of the human body and the physiological parameters of a patient. The problems of using mock-ups and post-sectional material in teaching medical students are discussed, along with the advantages the developed information system offers over these materials. Virtual anatomical models were obtained from anonymized DICOM images from magnetic resonance imaging (MRI) and computed tomography (CT). The subsystems of the information system are described: a PACS server where all data are stored (server side) and a web application where the user works with the data (client side). The modules of the information system, implemented as separate software components, are described in detail: a data import module, an anonymization module, a DBMS module, a visualization module, and others, and their operation is illustrated schematically. The programming languages and frameworks used to implement the software are indicated, together with the advantages of these implementation choices. The process of removing personal data from DICOM files is described in detail, as is the process of obtaining the "mask" of an object in an image, which is then used to build three-dimensional models of the patient's internal organs. The user's work with the database and the search for pathologies through the system interface are clearly described. The possibilities of using this information system in education are shown: the illustration of specific clinical cases in order to trace cause-and-effect relationships in the pathogenesis of various diseases and to develop clinical thinking in students. A specific clinical case is given as an example of how the SkiaAtlas program was used to find a pathology, a volumetric mass of the left hemisphere of the brain.
14

Rudolf, A., Z. Stjepanović, and A. Cupar. "DESIGN OF GARMENTS USING ADAPTABLE DIGITAL BODY MODELS." TEXTEH Proceedings 2021 (October 22, 2021): 9–17. http://dx.doi.org/10.35530/tt.2021.09.

Abstract:
In recent years, the 3D design software has been mostly used to improve the garment design process by generating virtual 3D garment prototypes. Many researchers have been working on the development of 3D virtual garment prototypes using 3D body models and involving the 3D human body scanning in different postures. The focus of research in this field today relies on generating a kinematic 3D body model for the purposes of developing the individualized garments, the exploration of which is presented in this paper. The discussed area is also implemented in the Erasmus+ project OptimTex - Software tools for textile creatives, which is fully aligned with the new trends propelled by the digitization of the whole textile sector. The Slovenian module focuses on presenting the needs of digitization for the development of individualized garments by using different software tools: 3D Sense, PotPlayer, Meshroom, MeshLab, Blender and OptiTex. The module provides four examples: 3D human body scanning using 3D photogrammetry, 3D human body modelling and reconstruction, construction of a kinematic 3D body model and 3D virtual prototyping of individualized smart garments, and thus displays the entire process for the needs of 3D virtual prototyping of individualized garments. In the OptimTex project, the 3D software Blender was used to demonstrate and teach students how to construct the "armature" of the human body as an object for rigging or the virtual skeleton for a 3D kinematic body model, using the knee as an example.
15

Cheng, Jian, Yanguang Wan, Dexin Zuo, Cuixia Ma, Jian Gu, Ping Tan, Hongan Wang, Xiaoming Deng, and Yinda Zhang. "Efficient Virtual View Selection for 3D Hand Pose Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 419–26. http://dx.doi.org/10.1609/aaai.v36i1.19919.

Abstract:
3D hand pose estimation from single depth is a fundamental problem in computer vision, and has wide applications. However, the existing methods still can not achieve satisfactory hand pose estimation results due to view variation and occlusion of human hand. In this paper, we propose a new virtual view selection and fusion module for 3D hand pose estimation from single depth. We propose to automatically select multiple virtual viewpoints for pose estimation and fuse the results of all and find this empirically delivers accurate and robust pose estimation. In order to select most effective virtual views for pose fusion, we evaluate the virtual views based on the confidence of virtual views using a light-weight network via network distillation. Experiments on three main benchmark datasets including NYU, ICVL and Hands2019 demonstrate that our method outperforms the state-of-the-arts on NYU and ICVL, and achieves very competitive performance on Hands2019-Task1, and our proposed virtual view selection and fusion module is both effective for 3D hand pose estimation.
16

Xia, Zhizhen. "The Design of Virtual Reality Learning Module in Neurolinguistics: Focus on Aphasia." Journal of Education, Humanities and Social Sciences 22 (November 26, 2023): 237–43. http://dx.doi.org/10.54097/ehss.v22i.12427.

Abstract:
The metaverse, which is essentially a virtual environment parallel to the actual world, is the most recent stage in the evolution of visual immersion technology and is quickly emerging as a testing ground for new social innovations. Since two-dimensional network technology struggles to meet students' demands for immersive learning environments and organic human-computer interactions, meta-universes will change how online education is taught to support students' individualized learning and overall development. In order to push the boundaries of virtual reality technology utilized in various disciplines, this paper will create a virtual reality classroom for neurolinguistics students to learn about aphasia. Different scenarios are built to help students experience communicating with aphasic patients, familiarize themselves with different types of aphasic symptoms and causes, and discuss the rehabilitation program for aphasic patients with peers. Students can complete the whole process of knowledge cognition-experience-construction in a richer and more vivid learning form.
17

Kozlovsky, Evgen O., and Hennadiy M. Kravtsov. "Multimedia virtual laboratory for physics in the distance learning." CTE Workshop Proceedings 5 (March 21, 2018): 42–53. http://dx.doi.org/10.55056/cte.134.

Abstract:
Research goals: the description of the technology used to develop the software for a virtual physics laboratory in a distance learning system. Research objectives: the architecture of the client and server parts of the lab, the functionality of the system modules, user roles, and the principles of using the virtual laboratory on a personal computer. Object of research: the distance learning system "Kherson Virtual University". Subject of research: a virtual laboratory for physics in distance learning. Research methods used: analysis of statistics and publications. Results of the research: the software module "Virtual Lab" was developed in the distance learning system "Kherson Virtual University" (DLS KVU) and applied to physics problems on the topics of kinematics and dynamics. The information technology design and development, the structure of the virtual laboratory, and its place in the DLS KVU are described, as are the principal operating modes of the program module within the system and the methods for its use in the educational process. The main conclusions and recommendations: the use of this software interface allows teachers to create labs and use them in their distance courses, while students, in turn, are able to conduct research by carrying out virtual laboratory work.
18

Chakrabarti, Debkumar, Manoj Deori, Sangeeta Pandit, and T. Ravi. "Virtual Ergonomics Laboratory: Human Body Dimension Relevance." Advanced Engineering Forum 10 (December 2013): 22–27. http://dx.doi.org/10.4028/www.scientific.net/aef.10.22.

Abstract:
Ergonomics has become an integral part of the design education curriculum, where course content demands demonstration through citing and analysing appropriate design experiences. It has come to the fore through many academic forum discussions and meetings that relevant hands-on experience is necessary to internalise various ergonomics issues. To give a greater number of learners a feel for laboratory experimentation as well as application relevance, a virtual environment scenario can help. A virtual presentation of ergonomics laboratory experiments on physical anthropometry and its consequences for design dimensions has been tried out. It contains a total of eleven sections. The topic opens with an introductory session covering the subject matter and the laboratory experiment methodology in general; this is followed by ten specific topics with Flash-based self-learning modules, and data on the Indian population is provided as a ready reference. Some of these topic-specific experiment sections are also backed with video demonstrations. The whole set of virtual laboratory modules under development has already been uploaded to the web for user feedback. The experiments are self-explanatory, downloadable and easy to perform. The feedback collected so far (online and through direct demonstration surveys) confirms its usefulness to both teachers and student-learners in ergonomics specialisation and design programmes. This paper reports the salient features and content outline of the educational, free-to-use virtual anthropometric experiments manual, which is being fine-tuned at IIT Guwahati.
19

Muhammad, Mahathir, Fatchur Rohman, Mimien Henie Irawati, Bagus Priambodo, Farid Akhasani, and Sofia Ery Rahay. "Augmented Reality-assisted Electronic Module for Ecology Student." Journal of Mechanical, Civil and Industrial Engineering 2, no. 2 (November 18, 2021): 17–21. http://dx.doi.org/10.32996/jmcie.2021.2.2.3.

Abstract:
Education during the pandemic faces many obstacles that hinder the learning process. Learning in ecology courses in particular is affected because one of its activities requires students to go into the field to make observations. The environment referred to here is the Brantas River, the largest river in East Java, which has become polluted by human activities along its banks. This phenomenon is presented as learning material to help students improve their environmental attitudes. Augmented reality assistance is useful for bringing the environment that is the subject of student observation into virtual learning.
20

de la Cruz, Marcos, Gustavo Casañ, Pedro Sanz, and Raúl Marín. "Preliminary Work on a Virtual Reality Interface for the Guidance of Underwater Robots." Robotics 9, no. 4 (October 2, 2020): 81. http://dx.doi.org/10.3390/robotics9040081.

Abstract:
The need for intervention in underwater environments has increased in recent years, but there is still a long way to go before AUVs (Autonomous Underwater Vehicles) will be able to cope with really challenging missions. Nowadays, the solution adopted is mainly based on remotely operated vehicle (ROV) technology. These ROVs are controlled from support vessels by using unnecessarily complex human–robot interfaces (HRI). Therefore, it is necessary to reduce the complexity of these systems to make them easier to use and to reduce the stress on the operator. In this paper, as part of the TWIN roBOTs for the cooperative underwater intervention missions (TWINBOT) project, we present an HRI module which includes virtual reality (VR) technology. This contribution is an improvement on a preliminary study in this field also carried out by our laboratory. Having made a concerted effort to improve usability, the HRI system designed for robot control tasks presented in this paper is substantially easier to use. In summary, the reliability and feasibility of this HRI module have been demonstrated through usability tests, including a very complete pilot study, which confirm the much more friendly and intuitive properties of the final HRI module presented here.
21

Giachos, Ioannis, Evangelos C. Papakitsos, Petros Savvidis, and Nikolaos Laskaris. "Inquiring Natural Language Processing Capabilities on Robotic Systems through Virtual Assistants: A Systemic Approach." Journal of Computer Science Research 5, no. 2 (April 18, 2023): 28–36. http://dx.doi.org/10.30564/jcsr.v5i2.5537.

Abstract:
This paper attempts to approach the interface of a robot from the perspective of virtual assistants. Virtual assistants can also be characterized as the mind of a robot, since they manage communication and action with the rest of the world they exist in. Therefore, virtual assistants can also be described as the brain of a robot and they include a Natural Language Processing (NLP) module for conducting communication in their human-robot interface. This work is focused on inquiring and enhancing the capabilities of this module. The problem is that nothing much is revealed about the nature of the human-robot interface of commercial virtual assistants. Therefore, any new attempt of developing such a capability has to start from scratch. Accordingly, to include corresponding capabilities to a developing NLP system of a virtual assistant, a method of systemic semantic modelling is proposed and applied. For this purpose, the paper briefly reviews the evolution of virtual assistants from the first assistant, in the form of a game, to the latest assistant that has significantly elevated their standards. Then there is a reference to the evolution of their services and their continued offerings, as well as future expectations. The paper presents their structure and the technologies used, according to the data provided by the development companies to the public, while an attempt is made to classify virtual assistants, based on their characteristics and capabilities. Consequently, a robotic NLP interface is being developed, based on the communicative power of a proposed systemic conceptual model that may enhance the NLP capabilities of virtual assistants, being tested through a small natural language dictionary in Greek.
22

Zhou, Li Bo, Fu Lin Xu, and Zhi Xiong Shen. "Research of Virtual Assembly Simulation System." Applied Mechanics and Materials 628 (September 2014): 421–25. http://dx.doi.org/10.4028/www.scientific.net/amm.628.421.

Abstract:
Virtual assembly is one of the main components of virtual manufacturing, and assembly process simulation is an important part of product virtual assembly. The establishment and optimization of three-dimensional assembly simulation scenes are studied in order to develop a virtual machine assembly simulation system. The system achieves virtual simulation of machine assembly and disassembly, integrates and improves an existing collision detection library, and provides a graphical collision detection module capable of real-time interference checking of the motion of machine parts. The system offers good human-computer interaction, a visual interface and versatility, can be applied to the assembly simulation of any organization, and its validity is demonstrated through examples.
23

Sultana, Qudusia, Rashmi Jain, M. H. Shariff, Pranup Roshan Quadras, and Amith Ramos. "The Impact of Simulation-Based Teaching Module Involving Virtual Dissection on Anatomy Curriculum Delivery." International Journal of Anatomy and Research 10, no. 4 (December 5, 2022): 8476–81. http://dx.doi.org/10.16965/ijar.2022.219.

Abstract:
Background: Knowledge of anatomy, one of the core preclinical subjects, is very important for medical undergraduates to have a thorough understanding of various clinical conditions. The traditional method of learning anatomy involves dissection of human cadavers. The medical education system is entering an era in which traditional teaching methods are being supplemented by newer technological teaching techniques. Simulation-based teaching, such as the virtual dissection table "Anatomage", can enhance understanding and retention of the subject. The aim of the study is to determine the perception of virtual dissection among students and staff, and to compare the knowledge acquired through simulation-based teaching and the traditional teaching method. Material and Method: The study comprised 150 first-year MBBS students who attended a regular theory class on "joints of the musculoskeletal system" and answered a pre-test. The students were divided into two groups based on teaching method: one using a virtual dissection table and the other using cadaveric dissection. The students then attempted the post-test and were assessed based on their responses to the pre- and post-tests. Feedback on the overall utility of the table was taken from both students and staff. Results: The mean post-test scores were significantly higher than the mean pre-test scores, irrespective of the teaching method used (p<0.001). However, the students who were exposed to the virtual dissection table scored comparatively better in the post-test than those exposed to cadaveric dissection (p<0.001). 100% of the faculty and 93.3% of the students agreed that three-dimensional visualization improves understanding of anatomical structures. Conclusion: The findings of this study suggest that though both cadaveric dissection and virtual dissection enhance learning, students tend to perform better with virtual dissection. The incorporation of simulation-based teaching into the anatomy curriculum is essential to supplement traditional cadaveric dissection and ensure engaging as well as high-impact delivery of the curriculum. Keywords: Simulation, Virtual dissection, Musculoskeletal, Anatomage, Cadaver, Dissection, Anatomy, MBBS, Teaching Methodologies.
24

Cubas, Carlos, and Antonio Carlos Sementille. "A modular framework for performance-based facial animation." Journal on Interactive Systems 9, no. 2 (August 29, 2018): 1. http://dx.doi.org/10.5753/jis.2018.697.

Abstract:
In recent decades, interest in capturing human face movements and identifying expressions for the purpose of generating realistic facial animations has increased in both the scientific community and the entertainment industry. We present a modular framework for testing algorithms used in performance-based facial animation. The framework includes the modules used in pipelines found in the literature: a module for creating datasets of blendshapes (facial models whose vectors represent individual facial expressions), an algorithm processing module for identifying the weights and, finally, a redirection module that creates a virtual face based on the blendshapes. The framework uses an RGB-D camera, the RealSense F200 camera from Intel.
25

Zhou, Yuan, Jiejun Hou, Qi Liu, Xu Chao, Nan Wang, Yu Chen, Jianjun Guan, Qi Zhang, and Yongchang Diwu. "VR/AR Technology in Human Anatomy Teaching and Operation Training." Journal of Healthcare Engineering 2021 (June 7, 2021): 1–13. http://dx.doi.org/10.1155/2021/9998427.

Abstract:
AR/VR technology can fuse clinical imaging data and information to build an anatomical environment that combines the virtual and the real, which helps to make teaching more engaging, increases the learning initiative of medical students, and thereby improves the effect of clinical teaching. This paper studies the application and learning effect of a VR/AR system in human anatomy and surgery teaching. It first presents the learning environment and platform of the VR/AR system, then explains the system's interface and operation, and finally evaluates the teaching outcomes. Taking the VR/AR surgical simulation system of an Irish company as an example, the learning effect is evaluated for 41 students at our hospital. The research shows that introducing a feature reweighting module into the VR/AR surgery simulation system improves the accuracy of bone structure segmentation (the IoU value increases from 79.62% to 83.56%). For real human ultrasound image data, the IoU value increases from 80.21% to 82.23% after the feature reweighting module is introduced. The dense convolution module and the feature reweighting module therefore improve the network's ability to learn bone structure features in ultrasound images, in terms of both feature connectivity and importance weighting, and effectively improve bone structure segmentation performance.
26

Zhang, Zhi Wei, Lei Chen, and Bo Feng Ren. "Design of Insulation Resistance Test for Cables in Intelligent Munition." Key Engineering Materials 474-476 (April 2011): 361–64. http://dx.doi.org/10.4028/www.scientific.net/kem.474-476.361.

Abstract:
In the test system of an intelligent munition, an insulation resistance test module has been designed using the test resources of the system. The test module uses virtual instrument technology, database technology and a modular design method, which optimizes the allocation of resources and simplifies the hardware design. It provides a good human-computer conversational environment and is easy for users to operate and to record results with. The paper introduces the measuring methods for insulation resistance, describes the design of the hardware and measuring circuit, and presents the test software, which realizes test data storage and management using database technology and achieves automatic testing.
27

He, Qichang, Shiguang Qiu, Xiumin Fan, and Keyan Liu. "An interactive virtual lighting maintenance environment for human factors evaluation." Assembly Automation 36, no. 1 (February 1, 2016): 1–11. http://dx.doi.org/10.1108/aa-04-2015-029.

Abstract:
Purpose – The paper aims to establish a virtual lighting maintenance environment (VLME), and to analyze the visibility-related human factors (HFs) during maintenance operations through interactive simulations. Design/methodology/approach – First, an accurate task lighting modeling method was developed, which includes lighting information modeling and illuminant parameter calibration. Then, the real-time interaction between the task lighting and a three-dimensional virtual human was modeled. After that, the attenuation coefficient of visibility was determined, and the HFs analysis process in the VLME was described in detail. Findings – A case study of power supply module replacement on radar equipment was performed in the VLME. The HFs analysis demonstrated that the task lighting significantly affects visibility, which has an indirect impact on posture comfort and operation safety. Practical implications – Through evaluating maintenance operations in a lighting environment, engineers can better analyze and validate the maintainability design of complex equipment, and some potential ergonomics and safety issues can be found and dealt with earlier. Originality/value – A VLME was built for interactive "human-in-the-loop" maintenance operation simulation, which can support HFs evaluation in a lighting environment accurately and effectively.
28

Li, Changyang, and Lap-Fai Yu. "Generating Activity Snippets by Learning Human-Scene Interactions." ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–15. http://dx.doi.org/10.1145/3592096.

Abstract:
We present an approach to generate virtual activity snippets, which comprise sequenced keyframes of multi-character, multi-object interaction scenarios in 3D environments, by learning from recordings of human-scene interactions. The generation consists of two stages. First, we use a sequential deep graph generative model with a temporal module to iteratively generate keyframe descriptions, which represent abstract interactions using graphs, while preserving spatial-temporal relations through the activities. Second, we devise an optimization framework to instantiate the activity snippets in virtual 3D environments guided by the generated keyframe descriptions. Our approach optimizes the poses of character and object instances encoded by the graph nodes to satisfy the relations and constraints encoded by the graph edges. The instantiation process includes a coarse 2D optimization followed by a fine 3D optimization to effectively explore the complex solution space for placing and posing the instances. Through experiments and a perceptual study, we applied our approach to generate plausible activity snippets under different settings.
29

Hernández Mejía, Ricardo, Francisco Javier Ibarra Villegas, and Caín Pérez Wences. "Voice communication module for automotive instrument panel indicators based on virtual assistant open-source solution - Mycroft AI." REVISTA DE CIENCIAS TECNOLÓGICAS 6, no. 4 (October 4, 2023): e328. http://dx.doi.org/10.37636/recit.v6n4e328.

Abstract:
This work originated from the increasing interest across several industries in implementing voice-based virtual assistant solutions powered by the Natural Language Processing field of study. It focuses on Human Machine Interface related products in the automotive industry, specifically the Instrument Panel. Nowadays people constantly use virtual assistants such as Google Assistant, Alexa, Cortana or Siri on their electronic devices. Furthermore, 31% of cars have a built-in virtual assistant; for example, Ford uses Alexa, Mercedes-Benz and Hyundai use Google Assistant, BMW and Nissan use Cortana, GM uses IBM Watson, Honda uses Hana and Toyota uses YUI. Apart from the proprietary solutions described earlier, there are also contemporary open-source generic solutions available on the market, such as Mycroft AI, which stands out from other technologies because it is ready to deploy, well documented, simple to install on a Linux PC or RPI SoC, and simple to run. This paper presents a way to use Mycroft AI as an alternative for adding artificial intelligence-based voice assistance to applications in the automotive domain. The voice communication module presented here drives notifications related to three different entities: seat belt, fuel level and battery level, all of which are telltales present in any automotive Instrument Panel. Since the Mycroft AI design approach is based on Human Centered Design (HCD), the voice communication module presented here provides a design grounded in real user experience (UX). In conclusion, Mycroft AI demonstrates great potential as an alternative for adding voice assistance to automotive Human Machine Interface related products. Regarding future work, because Mycroft AI is based on Python, there are many possibilities for connecting and expanding the voice communication module by using countless Python libraries to import and process any type of information, in any format or from any source, for example information from communication technologies such as CAN, LIN, Ethernet, MOST or GPS, or from any other device or technology, in order to create comprehensive automotive solutions.
30

Mukthineni, Venkat, Rahul Mukthineni, Onkar Sharma, and Swathi Jamjala Narayanan. "Face Authenticated Hand Gesture Based Human Computer Interaction for Desktops." Cybernetics and Information Technologies 20, no. 4 (November 1, 2020): 74–89. http://dx.doi.org/10.2478/cait-2020-0048.

Abstract:
Hand gesture detection and recognition is a cutting-edge technology that is becoming progressively more applicable in several applications, including the recent trends of Virtual Reality and Augmented Reality. It is a key part of Human-Computer Interaction, providing an approach to two-way interaction between the computer and the user. Currently, this technology is limited to expensive and highly specialized equipment and gadgets such as the Kinect and the Oculus Rift. In this paper, various technologies and methodologies for implementing a gesture detection and recognition system are discussed. The paper also includes the implementation of a face recognition module using the Viola-Jones algorithm for authentication of the system, followed by hand gesture recognition using a CNN to perform basic operations on a laptop. Any type of user can use gesture control as an alternative and interesting way to control their laptop. Furthermore, this can serve as a prototype for future implementations in the fields of virtual reality and augmented reality.
31

Sharma, Sharad, Sri Teja Bodempudi, David Scribner, and Peter Grazaitis. "Active Shooter response training environment for a building evacuation in a collaborative virtual environment." Electronic Imaging 2020, no. 13 (January 26, 2020): 223–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.13.ervr-223.

Abstract:
During active shooter events or emergencies, the ability of security personnel to respond appropriately to the situation is driven by pre-existing knowledge and skills, but also depends upon their state of mind and familiarity with similar scenarios. Human behavior becomes unpredictable when it comes to making a decision in emergency situations, and the cost and risk of determining these human behavior characteristics in emergency situations are very high. This paper presents an immersive collaborative virtual reality (VR) environment for performing virtual building evacuation drills and active shooter training scenarios using Oculus Rift head-mounted displays. The collaborative immersive environment is implemented in Unity 3D and is based on run, hide, and fight modes for emergency response. The immersive collaborative VR environment also offers a unique method for training in emergencies for campus safety. The participant can enter the collaborative VR environment set up on the cloud and participate in the active shooter response training environment, which leads to considerable cost advantages over large-scale real-life exercises. A presence questionnaire in the user study was used to evaluate the effectiveness of our immersive training module. The results show that a majority of users agreed that their sense of presence was increased when using the immersive emergency response training module for a building evacuation environment.
32

Mikhaylyuk, Mikhail, Boris Kryuchkov, and Vitaly Usov. "Virtual prototyping of the lunar module landing system to improve cosmonauts’ spatial and situational awareness." Robotics and Technical Cybernetics 9, no. 3 (September 30, 2021): 225–33. http://dx.doi.org/10.31776/rtcj.9308.

Abstract:
Relevance. The transition to manned flights after the launches of the automatic stations of the «Luna-Globus» series will require studying crew safety issues in lunar missions. First of all, it will be necessary to clarify the role and capabilities of cosmonauts when landing the lunar module in complicated conditions. The subject of the study is the means of modeling and visualizing the progress of the flight operation observed by the operator. The area of study is ensuring the safety of the automatic landing of the lunar descent module, with the possibility of switching to manual control mode after a human decision to change the landing site of the lunar module. The provision of spatial and situational awareness is considered in this context as a prerequisite for the timely response of the operator to a non-standard situation. Objective. The goal of the work is to present virtual prototyping of the Moon landing stage in order to study the details of information support for cosmonauts under conditions that complicate visual control. Methodology. The analysis of ways to maintain spatial and situational awareness, allowing the operator to assess in time the suitability of the predicted site in the landing area, is a key condition for deciding whether to switch from automatic to manual mode. At the same time, the quality of preparation and decision-making within a short time frame depends significantly on the accepted methods of visualizing the landing on the surface of the Moon. Results and Discussion. The directions for applying modeling and visualization tools to the virtual prototyping of the descent lunar module landing are formulated. It is shown that human decision-making under time scarcity and possible visual interference when monitoring the external environment requires special means of information support. The organization of the visual environment in accordance with the information needs of a person is studied taking into account prototypes in the classes of manned and unmanned vertical take-off and landing vehicles, for which similar types of operator activity have been described. This made it possible to formulate basic approaches to modeling the landing of vehicles under conditions of impaired visual perception. Taking into account the increased requirements, promising approaches are considered that are based on the synthesis of 2D and 3D visual dynamic scenes, as well as precedents for the use of synthetic vision systems in the described conditions. The scope of application of the obtained results is not limited to the design of complex human-technical systems; they may also be applied to building computer simulators for cosmonaut training. This practice compares favorably with training on hardware-in-the-loop simulation models and on real helicopter-type vehicles in terms of safety and flexibility of modification. Conclusion. The use of virtual prototyping of the Moon landing makes it possible to broaden the search for ways to improve the cosmonaut's spatial and situational awareness. The general conclusion, in pursuing this goal, is the feasibility of using simulation methods to build a virtual environment that recreates the conditions in which a cosmonaut makes decisions in an emergency situation when landing a lunar module, one of the most safety-critical flight operations of lunar missions.
Keywords: Lunar exploration, lunar lander, landing simulation and visualization, virtual activity environment, unmanned and manned vertical take-off and landing vehicles, spatial and situational awareness, synthetic vision systems.
33

Ishikawa, Shudai, and Takumi Ikenaga. "Image-based virtual try-on system with clothing extraction module that adapts to any posture." Computers & Graphics 106 (August 2022): 161–73. http://dx.doi.org/10.1016/j.cag.2022.06.007.

34

Li, Zhiwei. "Task Image Setting of 3D Animation Based on Virtual Reality and Artificial Intelligence." Mobile Information Systems 2022 (September 7, 2022): 1–8. http://dx.doi.org/10.1155/2022/5233362.

Abstract:
To make the expressions and actions of animated virtual characters more realistic, this paper proposes a virtual character expression and action system based on 3D animation. The system's hardware module collects and processes image data and human bone data. The positions of human skeleton points and the mapping between joint points and moving skeleton points are then constructed, and finally the virtual character model is built. On this basis, feature points are matched through facial feature point mapping, and the face model is aligned by registering the video face with the 3D animated virtual face. The Laplace coordinate recovery model is used to reconstruct facial expressions, simulate the expressions and actions of the 3D animated virtual character, and realize the design of the expression and action system. The experimental results show that, for expression movement, the proposed system outperforms a real-time motion capture system and a 2D animation expression system, with a simulation degree of 95.40%; the fidelity of its skin texture processing is 97.60%. In conclusion, the designed system can effectively simulate facial expressions from character images and integrate them into three-dimensional animation to make virtual characters more vivid, and rendering and skin texture processing on the system's Unity3D platform further enhance the characters' realism.
35

Qiu, Shiguang, Yunfei Yang, Xiumin Fan, and Qichang He. "Human factors automatic evaluation for entire maintenance processes in virtual environment." Assembly Automation 34, no. 4 (September 9, 2014): 357–69. http://dx.doi.org/10.1108/aa-04-2014-028.

Abstract:
Purpose – The paper aims to propose a systematic approach for the automatic evaluation of human factors (HFs) for entire maintenance processes in a virtual environment. Design/methodology/approach – First, a maintenance process information model is constructed to map real maintenance processes into a computer environment. Next, based on this information model, automatic evaluation methods for visibility, operation comfort and reachability are presented. All evaluation results are weighted and added up to establish a comprehensive HFs evaluation model. Then, the methods mentioned above are realized as an HFs evaluation module, which is integrated into a virtual maintenance simulation platform, software developed by our lab. Findings – An application to HFs evaluation of repairing a hydraulic motor on a container spreader is implemented, and an on-site survey is carried out. The comparison between the result from the survey and the result obtained using the presented methods shows that our solution can support fast HFs assessment accurately and effectively. Practical implications – Through evaluating maintenance operation processes, engineers can better analyze and validate the maintainability design of complex equipment, and some potential ergonomic issues can be found and dealt with earlier. Originality/value – The paper presents a systematic approach to achieving fast and accurate HFs evaluation for entire maintenance processes, rather than for a few maintenance postures.
36

Stytz, Martin R., Elizabeth Block, and Brian Soltz. "Providing Situation Awareness Assistance to Users of Large-Scale, Dynamic, Complex Virtual Environments." Presence: Teleoperators and Virtual Environments 2, no. 4 (January 1993): 297–313. http://dx.doi.org/10.1162/pres.1993.2.4.297.

Full text
Abstract:
As virtual environments grow in complexity, size, and scope users will be increasingly challenged in assessing the situation in them. This will occur because of the difficulty in determining where to focus attention and in assimilating and assessing the information as it floods in. One technique for providing this type of assistance is to provide the user with a first-person, immersive, synthetic environment observation post, an observatory, that permits unobtrusive observation of the environment without interfering with the activity in the environment. However, for large, complex synthetic environments this type of support is not sufficient because the mere portrayal of raw, unanalyzed data about the objects in the virtual space can overwhelm the user with information. To address this problem, which exists in both real and virtual environments, we are investigating the forms of situation awareness assistance needed by users of large-scale virtual environments and the ways in which a virtual environment can be used to improve situation awareness of real-world environments. A technique that we have developed is to allow a user to place analysis modules throughout the virtual environment. Each module provides summary information concerning the importance of the activity in its portion of the virtual environment to the user. Our prototype system, called the Sentinel, is embedded within a virtual environment observatory and provides situation awareness assistance for users within a large virtual environment.
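The idea of an analysis module that summarises how important the activity in its region is can be sketched as below; the object kinds, weights, and the speed term are hypothetical, since the Sentinel's actual scoring is not given in the abstract.

```python
def region_importance(entities, type_weight=None):
    """Summarise the activity inside one analysis module's region.

    entities    : dicts with 'kind' and 'speed' for objects currently in the region
    type_weight : how much each kind of object matters to the user (illustrative)
    """
    type_weight = type_weight or {"aircraft": 3.0, "vehicle": 2.0, "infantry": 1.0}
    score = 0.0
    for e in entities:
        score += type_weight.get(e["kind"], 0.5) * (1.0 + e["speed"])
    return score   # higher score = region more worth the user's attention
```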
APA, Harvard, Vancouver, ISO, and other styles
37

Refsland, Scot Thrane, Takeo Ojika, and Robert Berry. "Enhanced Environments: Large-Scale, Real-Time Ecosystems." Presence: Teleoperators and Virtual Environments 11, no. 3 (June 2002): 221–46. http://dx.doi.org/10.1162/105474602317473196.

Full text
Abstract:
This research proposes a new method for using real-time information to support large-scale, climatic virtual environments that exhibit natural eco-behavioral conditions. The purpose of this research is to support a real-time virtual ecosystem created by live weather information and GIS terrain data, and delivered through a common multimedia PC/Internet network. For this research experiment, we customized available GIS satellite, terrain, and photography data to construct a highly accurate, large-scale, virtual environment. Next, a Web-based climatic collection system was developed to persistently collect real-time weather information for the physical area being modeled. Finally, an enhanced environment module was created and added to a popular game engine to support a “living” virtual ecosystem with real-time climatic conditions. This type of enhanced environment lays the foundation for creating emergent, dynamic environments that integrate the behavioral patterns of climate, artificial life, user interactions, and their complex interrelationships within a dynamic virtual world. In the sections that follow, the issues and problems of constructing, supporting, and maintaining a new style of virtual environment are explored, discussed, and analyzed. Finally, a conclusion is presented, including future uses and potentials of this research.
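A persistent Web-based climate collector of the kind described above could be as simple as the polling loop below; the endpoint URL, JSON schema, and polling interval are placeholders, not the system's actual service.

```python
import json
import time
import urllib.request

def poll_weather(url, interval_s=600):
    """Persistently poll a weather Web service and yield climate records.

    url is a placeholder endpoint expected to return JSON such as
    {"temperature": 18.2, "wind_speed": 4.1, "precipitation": 0.0}.
    Each yielded record would be mapped by the engine-side module onto the
    scene's sky, wind and rain parameters.
    """
    while True:
        with urllib.request.urlopen(url) as resp:
            yield json.loads(resp.read().decode("utf-8"))
        time.sleep(interval_s)
```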
APA, Harvard, Vancouver, ISO, and other styles
38

Petruse, Radu Emanuil, and Silvana Maria Vlad. "Optimizing Manual Assembly Operations: Ergonomics Analysis and Human-Robot Collaboration." Acta Universitatis Cibiniensis. Technical Series 75, no. 1 (December 1, 2023): 1–13. http://dx.doi.org/10.2478/aucts-2023-0001.

Full text
Abstract:
This paper aims to provide a comprehensive overview of the field of ergonomics, with a specific focus on the application of virtual simulation in the context of manual assembly stations assisted by collaborative robots. The theoretical part of this paper presents a concise introduction to the origins and key figures in the field of ergonomics, highlighting the importance and relevance of this discipline. It also discusses the current methods of ergonomic evaluation and the necessary steps involved in conducting such assessments. Furthermore, it delves into the existing standards, associations, and organizations related to ergonomics, as well as the software solutions available for ergonomic analysis. A case study is presented which demonstrates how to perform an ergonomic analysis using the Ergonomics Evaluation module within the 3D Experience platform. The methodology follows a systematic approach, starting with a physical environment simulation to identify key positions for virtual evaluation. These positions are then simulated using a representative 3D model of the assembly station and selected manikins. The chosen ergonomic analysis method is tailored to the specific movements involved in the assembly activity.
APA, Harvard, Vancouver, ISO, and other styles
39

You, Lingli. "Digital Empowerment Intangible Heritage -CLO 3D Virtual Fashion Design for Custom Clothing of Blue Clamp-Resist Dyeing." Learning & Education 10, no. 7 (June 7, 2022): 62. http://dx.doi.org/10.18282/l-e.v10i7.2951.

Full text
Abstract:
As an item of Wenzhou's intangible cultural heritage, blue clamp-resist dyeing has important cultural and economic value, but with continuing advances in technology and changes in public taste, the development of the craft has encountered a bottleneck. This paper combines CLO 3D virtual simulation technology with Wenzhou blue clamp-resist dyeing to digitally empower this intangible heritage, covering the development workflow of 3D human model creation, 2D pattern module combination, virtual sewing, virtual pattern design, and virtual 3D display. The resulting form of customized clothing design and display for blue clamp-resist dyeing reduces product development costs for clothing enterprises and improves design efficiency, while also offering new ideas for the inheritance, development, publicity, and display of Wenzhou blue clamp-resist dyeing.
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Jing, Shan Shan Bai, Yi Long Wang, Ping Xi, and Bing Yi Li. "Design of Virtual Machine Assembly Simulation System in Single-Channel Immersion." Key Engineering Materials 620 (August 2014): 556–62. http://dx.doi.org/10.4028/www.scientific.net/kem.620.556.

Full text
Abstract:
Virtual reality technology has been widely applied in virtual manufacturing, where virtual environments integrating visual, auditory and tactile feedback can be established. In this paper, a single-channel immersive machine assembly simulation system was built with the EON Studio virtual reality software and virtual peripherals. To build the immersive virtual environment, a three-dimensional solid model of the machine was created in CATIA, rendered and coloured in 3ds Max, and imported into EON through its file-conversion interface. Interactive virtual assembly of the machine was then investigated, and two interaction methods were developed. The first uses the keyboard, mouse and other input devices, realized through triggering sensor nodes, event driving and EON's routing mechanism. The second establishes a virtual hand driven by a data glove and a Flock of Birds tracker: the tracked position of the real hand is converted into the position of the virtual hand in virtual space, so that objects can be grasped, moved and released in the immersive environment. The virtual machine assembly is thereby completed, assembly trajectories are visualized, and a basis for assembly-path analysis is provided. Finally, the human-computer interface of the assembly simulation system was developed, the modules were integrated through the exchange of information with EON Studio, and reasonable guidance for machine assembly training was provided.
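The grasp/move/release behaviour of a glove-driven virtual hand, as described above, typically reduces to a small state update per frame; the sketch below is a generic illustration with a hypothetical distance threshold, not the EON implementation.

```python
def update_grasp(hand_pos, grip_closed, part, threshold=0.05):
    """Per-frame grasp logic for a tracked virtual hand and one assembly part.

    hand_pos    : (x, y, z) virtual-hand position from the glove/tracker
    grip_closed : True when the data glove reports a closed fist
    part        : dict with 'pos' (x, y, z) and 'held' (bool)
    threshold   : grasping distance in scene units (illustrative)
    """
    dist = sum((h - p) ** 2 for h, p in zip(hand_pos, part["pos"])) ** 0.5
    if part["held"]:
        if grip_closed:
            part["pos"] = hand_pos      # move: the part follows the hand
        else:
            part["held"] = False        # release: leave the part where it is
    elif grip_closed and dist < threshold:
        part["held"] = True             # grasp: hand is close enough and closed
    return part
```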
APA, Harvard, Vancouver, ISO, and other styles
41

Miao, Lu, Shu Yuan Shang, and Chen Xi Cai. "Research on Image Binding Mechanism Based on Kinect Skeletal Tracking in Virtual Fitting System." Applied Mechanics and Materials 376 (August 2013): 437–40. http://dx.doi.org/10.4028/www.scientific.net/amm.376.437.

Full text
Abstract:
On the basis of the Kinect skeleton-tracking data module, a skeleton-tracking binding algorithm is proposed that allows static clothing pictures to interact in real time with a moving person. Combined with WPF in Visual Studio 2010 and the corresponding hardware and software resources, an accessible virtual fitting system is programmed in C#. The coordinates of the key joint points tracked by the Kinect are converted into the corresponding parameters and bound to the clothing image, so that the image is scaled and positioned according to the body size of the person trying on the clothes, achieving the desired fitting effect.
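The joint-to-image binding boils down to scaling and placing the garment picture from a pair of tracked joints. The paper works in C#/WPF; the sketch below shows the same idea in Python, with the shoulder-based scaling and the 1.2 margin factor as assumptions.

```python
def bind_garment_to_skeleton(left_shoulder, right_shoulder, garment_size):
    """Scale and place a static garment image from two tracked shoulder joints.

    left_shoulder, right_shoulder : (x, y) joint coordinates in screen space
    garment_size                  : (width, height) of the clothing picture in pixels
    Returns the top-left corner and the scaled size for drawing the overlay.
    """
    span = abs(right_shoulder[0] - left_shoulder[0])
    scale = span / garment_size[0] * 1.2          # a small margin beyond the shoulders
    w, h = garment_size[0] * scale, garment_size[1] * scale
    centre_x = (left_shoulder[0] + right_shoulder[0]) / 2
    top_y = min(left_shoulder[1], right_shoulder[1])
    return (centre_x - w / 2, top_y), (w, h)
```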
APA, Harvard, Vancouver, ISO, and other styles
42

Anselma, Luca, and Alessandro Mazzei. "Building a Persuasive Virtual Dietitian." Informatics 7, no. 3 (July 30, 2020): 27. http://dx.doi.org/10.3390/informatics7030027.

Full text
Abstract:
This paper describes the Multimedia Application for Diet Management (MADiMan), a system that supports users in managing their diets while admitting diet transgressions. MADiMan consists of a numerical reasoner that takes into account users’ dietary constraints and automatically adapts the users’ diet, and of a natural language generation (NLG) system that automatically creates textual messages for explaining the results provided by the reasoner with the aim of persuading users to stick to a healthy diet. In the first part of the paper, we introduce the MADiMan system and, in particular, the basic mechanisms related to reasoning, data interpretation and content selection for a numeric data-to-text NLG system. We also discuss a number of factors influencing the design of the textual messages produced. In particular, we describe in detail the design of the sentence-aggregation procedure, which determines the compactness of the final message by applying two aggregation strategies. In the second part of the paper, we present the app that we developed, CheckYourMeal!, and the results of two human-based quantitative evaluations of the NLG module conducted using CheckYourMeal! in a simulation. The first evaluation, conducted with twenty users, ascertained both the perceived usefulness of graphics/text and the appeal, easiness and persuasiveness of the textual messages. The second evaluation, conducted with thirty-nine users, ascertained their persuasive power. The evaluations were based on the analysis of questionnaires and of logged data of users’ behaviour. Both evaluations showed significant results.
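Sentence aggregation of the kind discussed above can be pictured with a toy subject-sharing strategy: consecutive messages about the same item are coordinated into one sentence. This is only an illustration of the general mechanism, not one of the two strategies evaluated in the paper.

```python
def aggregate(messages):
    """Coordinate consecutive (subject, predicate) messages sharing a subject.

    Example input:
        [("Carbohydrates", "were above the weekly target"),
         ("Carbohydrates", "can be reduced at dinner"),
         ("Proteins", "are on track")]
    """
    sentences, i = [], 0
    while i < len(messages):
        subject, predicates = messages[i][0], [messages[i][1]]
        while i + 1 < len(messages) and messages[i + 1][0] == subject:
            i += 1
            predicates.append(messages[i][1])
        sentences.append(f"{subject} {' and '.join(predicates)}.")
        i += 1
    return " ".join(sentences)   # fewer, more compact sentences after aggregation
```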
APA, Harvard, Vancouver, ISO, and other styles
43

Tsai, Yu-Hsiang, Yung-Jhe Yan, Meng-Hsin Hsiao, Tzu-Yi Yu, and Mang Ou-Yang. "Real-Time Information Fusion System Implementation Based on ARM-Based FPGA." Applied Sciences 13, no. 14 (July 23, 2023): 8497. http://dx.doi.org/10.3390/app13148497.

Full text
Abstract:
In this study, an information fusion system displayed fused information on a transparent display by considering the relationships among the display, the background exhibit, and the user's gaze direction. We used an ARM-based field-programmable gate array (FPGA) to perform the virtual–real fusion of this system and evaluated the fusion execution speed. The ARM-based FPGA used Intel® RealSense™ D435i depth cameras to capture depth and color images of an observer and an exhibit. The image data were received by the ARM side and fed to the FPGA side for real-time object detection. The FPGA accelerated the computation of the convolutional neural networks that recognize observers and exhibits. In addition, a module performed by the FPGA was developed for rapid registration between the color and depth images. The module calculated the size and position of the information shown on the transparent display according to the pixel coordinates and depth values of the human eye and the exhibit. A personal computer with a GPU RTX2060 performed the information fusion in about 47 ms, whereas the ARM-based FPGA accomplished it in 25 ms; the fusion speed of the ARM-based FPGA was thus roughly 1.8 times that of the computer.
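Geometrically, placing the fused annotation amounts to intersecting the eye-to-exhibit sight line with the display plane; a minimal sketch is shown below, assuming all points are already expressed in one common frame (the registration step the abstract attributes to the FPGA module).

```python
import numpy as np

def anchor_on_display(eye, exhibit, display_origin, display_normal):
    """Intersect the eye-to-exhibit sight line with the transparent display plane.

    eye, exhibit                   : 3-D points from the registered depth/colour data
    display_origin, display_normal : a point on the display plane and its normal
    Returns the 3-D point where the fused annotation should be rendered.
    """
    eye = np.asarray(eye, dtype=float)
    direction = np.asarray(exhibit, dtype=float) - eye          # viewing direction
    n = np.asarray(display_normal, dtype=float)
    t = np.dot(np.asarray(display_origin, dtype=float) - eye, n) / np.dot(direction, n)
    return eye + t * direction
```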
APA, Harvard, Vancouver, ISO, and other styles
44

Lee, Hyunsoo, and Woo Chang Cha. "Virtual Reality-Based Ergonomic Modeling and Evaluation Framework for Nuclear Power Plant Operation and Control." Sustainability 11, no. 9 (May 7, 2019): 2630. http://dx.doi.org/10.3390/su11092630.

Full text
Abstract:
The purpose of this study is to introduce a new and efficient virtual model-based ergonomic simulation framework utilizing recent anthropometric data for a digitalized main control room in an advanced nuclear power plant. The system interface of the main control room has been undergoing digitalization via various information and control consoles. Console operators often face human–computer interactive problems due to inappropriate console design. Computational models with a process of visual perception and variables of anthropometric data are developed for designing and evaluating operator consoles with the requirements of human factor guidelines. From the 3D computational model and simulation application, console dimensions and a designing test module, which would be used for designing suitable consoles with safety concerns in a nuclear plant, are proposed. To efficiently carry out console design and evaluation feedback, an intelligent design review system comprising a virtual modeling and simulation framework is developed. The proposed automated and virtual design review system provides console design efficiency and evaluation effectiveness. This study may influence methods of employing suitable design concepts with various anthropometric data in many areas with safety concerns and may show a feasible solution to designing and evaluating the main control room.
APA, Harvard, Vancouver, ISO, and other styles
45

Adiani, Deeksha, Aaron Itzkovitz, Dayi Bian, Harrison Katz, Michael Breen, Spencer Hunt, Amy Swanson, Timothy J. Vogus, Joshua Wade, and Nilanjan Sarkar. "Career Interview Readiness in Virtual Reality (CIRVR): A Platform for Simulated Interview Training for Autistic Individuals and Their Employers." ACM Transactions on Accessible Computing 15, no. 1 (March 31, 2022): 1–28. http://dx.doi.org/10.1145/3505560.

Full text
Abstract:
Employment outcomes for autistic individuals are often poorer relative to their neurotypical (NT) peers, resulting in a greater need for other forms of financial and social support. While a great deal of work has focused on developing interventions for autistic children, relatively less attention has been paid to directly addressing the employment challenges faced by autistic adults. One key impediment to autistic individuals securing employment is the job interview. Autistic individuals often experience anxiety in interview situations, particularly with open-ended questions and unexpected interruptions. They also exhibit atypical gaze patterns that may be perceived as, but not necessarily indicative of, disinterest or inattention. In response, we developed a closed-loop adaptive virtual reality (VR)–based job interview training platform, which we have named Career Interview Readiness in VR (CIRVR). CIRVR is designed to provide an engaging, adaptive, and individualized experience to practice and refine interviewing skills in a less anxiety-inducing virtual context. CIRVR contains a real-time physiology-based stress detection module, as well as a real-time gaze detection module, to permit individualized adaptation. We also present the first prototype of the CIRVR Dashboard, which provides visualizations of data to help autistic individuals as well as potential employers and job coaches make sense of the data gathered from interview sessions. We conducted a feasibility study with 9 autistic and 8 NT individuals to assess the preliminary usability and feasibility of CIRVR. Results showed differences in perceived usability of the system between autistic and NT participants, and higher levels of stress in autistic individuals during interviews. Participants across both groups reported satisfaction with CIRVR and the structure of the interview. These findings and feedback will support future work in improving CIRVR's features in hopes for it to be a valuable tool to support autistic job candidates as well as their potential employers.
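A real-time physiology-based stress score of the sort the platform uses for adaptation could, in its simplest form, combine per-window deviations from a resting baseline; the sketch below is a toy illustration with invented weights and scaling, not the CIRVR module.

```python
def stress_index(heart_rate, skin_conductance, baseline_hr, baseline_sc,
                 sd_hr=8.0, sd_sc=0.5):
    """Toy stress score in [0, 1] for one time window of physiological data.

    heart_rate, skin_conductance : current window averages
    baseline_hr, baseline_sc     : resting-baseline averages for this person
    sd_hr, sd_sc                 : assumed baseline variability (illustrative)
    """
    z_hr = (heart_rate - baseline_hr) / sd_hr
    z_sc = (skin_conductance - baseline_sc) / sd_sc
    combined = 0.5 * z_hr + 0.5 * z_sc           # equal weighting, by assumption
    return max(0.0, min(1.0, 0.5 + combined / 4.0))
```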
APA, Harvard, Vancouver, ISO, and other styles
46

Fiderek, Paweł, Tomasz Jaworski, Robert Banasiak, Jacek Nowakowski, Jacek Kucharski, and Radosław Wajman. "Intelligent System for the Two-Phase Flows Diagnosis and Control on the Basis of Raw 3D ECT Data." Informatics Control Measurement in Economy and Environment Protection 7, no. 1 (March 30, 2017): 17–23. http://dx.doi.org/10.5604/01.3001.0010.4576.

Full text
Abstract:
In this paper a new intelligent system for two-phase flow diagnosis and control is presented. The authors developed a fuzzy inference system for two-phase flow recognition based on statistical analysis of the raw 3D ECT data and fuzzy classification, which identifies the flow structure in real time. The non-invasive three-dimensional monitoring can be conducted even in non-transparent and inaccessible parts of the pipeline. The system is also equipped with a control module for two-phase gas-liquid flow installations based on fuzzy inference, which uses feedback information from the recognition module; working in a feedback loop, the intelligent controller maintains the required flow regime. The fuzzy algorithms presented in this paper recognize two-phase processes much as a human expert would and control the process in the same intuitive way. Using artificial intelligence in industrial applications helps avoid random errors, breakdowns, and human mistakes that stem from a lack of objectivity. An additional feature of the system is a universal multi-touch monitoring and control panel, an alternative to commercial solutions that gives users the opportunity to build their own virtual model of the flow rig to monitor and control the process efficiently.
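A fuzzy classifier over simple statistics of a raw ECT frame can be sketched as below; the two features, the triangular membership breakpoints, and the three regimes are illustrative assumptions rather than the paper's rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_flow(frame):
    """Classify a two-phase flow regime from one raw ECT frame.

    frame : 1-D array of normalised inter-electrode capacitance measurements
    Returns the winning regime and all rule activations (min of memberships).
    """
    mean, std = float(np.mean(frame)), float(np.std(frame))
    rules = {
        "bubble":  min(tri(mean, 0.6, 0.8, 1.0), tri(std, 0.00, 0.05, 0.15)),
        "slug":    min(tri(mean, 0.3, 0.5, 0.7), tri(std, 0.10, 0.20, 0.35)),
        "annular": min(tri(mean, 0.0, 0.2, 0.4), tri(std, 0.05, 0.15, 0.30)),
    }
    return max(rules, key=rules.get), rules
```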
APA, Harvard, Vancouver, ISO, and other styles
47

Castro, Elisa C., and Ricardo R. Gudwin. "A Scene-Based Episodic Memory System for a Simulated Autonomous Creature." International Journal of Synthetic Emotions 4, no. 1 (January 2013): 32–64. http://dx.doi.org/10.4018/jse.2013010102.

Full text
Abstract:
In this paper the authors present the development of a scene-based episodic memory module for the cognitive architecture controlling an autonomous virtual creature in a simulated 3D environment. The scene-based episodic memory improves the creature's navigation system by evoking the objects to be considered in planning, according to episodic remembrance of earlier scenes witnessed by the creature in which these objects were present. The authors introduce the main background on human memory systems and the study of episodic memory, and present the main ideas behind the experiment.
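The evocation step can be imagined as a spatial lookup over remembered scenes near the current planning goal; the record structure and radius below are hypothetical, meant only to make the mechanism concrete.

```python
def recall_objects(episodes, goal, radius=5.0):
    """Recall objects from scenes remembered near the planning goal.

    episodes : list of dicts such as {"place": (x, y), "objects": ["apple", ...]}
               recorded when the creature witnessed a scene
    goal     : (x, y) target position of the current navigation plan
    radius   : how far from the goal a remembered scene is still relevant
    """
    def sq_dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    recalled = []
    for episode in episodes:
        if sq_dist(episode["place"], goal) <= radius ** 2:
            recalled.extend(episode["objects"])   # objects the planner should consider
    return recalled
```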
APA, Harvard, Vancouver, ISO, and other styles
48

Johnson, Leif, Brian Sullivan, Mary Hayhoe, and Dana Ballard. "Predicting human visuomotor behaviour in a driving task." Philosophical Transactions of the Royal Society B: Biological Sciences 369, no. 1636 (February 19, 2014): 20130044. http://dx.doi.org/10.1098/rstb.2013.0044.

Full text
Abstract:
The sequential deployment of gaze to regions of interest is an integral part of human visual function. Owing to its central importance, decades of research have focused on predicting gaze locations, but there has been relatively little formal attempt to predict the temporal aspects of gaze deployment in natural multi-tasking situations. We approach this problem by decomposing complex visual behaviour into individual task modules that require independent sources of visual information for control, in order to model human gaze deployment on different task-relevant objects. We introduce a softmax barrier model for gaze selection that uses two key elements: a priority parameter that represents task importance per module, and noise estimates that allow modules to represent uncertainty about the state of task-relevant visual information. Comparisons with human gaze data gathered in a virtual driving environment show that the model closely approximates human performance.
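The softmax barrier idea sketched in the abstract, in which important and poorly-known task modules win the next fixation more often, can be illustrated as below. The multiplicative combination of priority and uncertainty and the temperature parameter are assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def select_gaze_target(priorities, uncertainties, temperature=1.0, rng=None):
    """Pick which task module receives the next fixation.

    priorities    : per-module task-importance weights
    uncertainties : per-module uncertainty about task-relevant state
                    (e.g. grows with time since that object was last fixated)
    Returns the index of the chosen module and the selection probabilities.
    """
    scores = np.asarray(priorities, dtype=float) * np.asarray(uncertainties, dtype=float)
    logits = scores / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                         # softmax over modules
    rng = rng or np.random.default_rng()
    return int(rng.choice(len(probs), p=probs)), probs
```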
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Wei, Xingxing Wu, An He, and Zeqiang Chen. "Modelling and Visualizing Holographic 3D Geographical Scenes with Timely Data Based on the HoloLens." ISPRS International Journal of Geo-Information 8, no. 12 (November 28, 2019): 539. http://dx.doi.org/10.3390/ijgi8120539.

Full text
Abstract:
Commonly, a three-dimensional (3D) geographic information system (GIS) is built on a two-dimensional (2D) visualization platform, which hinders the understanding and expression of the real world in 3D space and further limits user cognition and understanding of 3D geographic information. Mixed reality (MR) adopts 3D display technology, enabling users to recognize and understand a computer-generated world through 3D glasses rather than being restricted to the perspective of a 2D screen, and it has broad application prospects. However, there remains a gap, especially for dynamic data, in modelling and visualizing holographic 3D geographical scenes with GIS data under the development framework of a mixed reality system such as the Microsoft HoloLens. This paper proposes a design architecture (HoloDym3DGeoSce) to model and visualize holographic 3D geographical scenes with timely data based on mixed reality technology and the Microsoft HoloLens. HoloDym3DGeoSce includes two modules: 3D geographic scene modelling with timely data, and HoloDym3DGeoSce interaction design. The modelling module dynamically creates 3D geographic scenes from Web services, providing materials and content for the system. The interaction module includes two methods: human–computer physical interaction and human–computer virtual–real interaction. The physical interaction method provides an interface for users to interact with virtual geographic scenes, while the virtual–real interaction method maps virtual geographic scenes onto physical space to achieve virtual–real fusion. Following the proposed architecture, OpenStreetMap data and the BingMap Server are used as experimental data to apply mixed reality technology to the modelling, rendering, and interaction of 3D geographic scenes, giving users a stronger and more realistic 3D geographic information experience and more natural human–computer GIS interactions. The experimental results demonstrate the feasibility and practicability of the scheme and its good prospects for further development.
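Building the scene from Web data requires projecting geographic coordinates (for example, OpenStreetMap nodes) into the local scene frame. The sketch below uses a simple equirectangular approximation around the scene origin; the actual projection used by HoloDym3DGeoSce is not stated in the abstract.

```python
import math

def lonlat_to_scene(lon, lat, origin_lon, origin_lat):
    """Project WGS84 longitude/latitude onto a flat local scene frame (metres).

    origin_lon, origin_lat : geographic location of the scene origin
    Adequate for scenes up to a few kilometres across; larger areas would
    need a proper map projection.
    """
    earth_radius = 6378137.0
    x = math.radians(lon - origin_lon) * earth_radius * math.cos(math.radians(origin_lat))
    z = math.radians(lat - origin_lat) * earth_radius
    return x, z   # metres east and north of the scene origin
```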
APA, Harvard, Vancouver, ISO, and other styles
50

Sun, Youxia. "Interactive Clothing Optimization Design Model Based on 3D Printing." Wireless Communications and Mobile Computing 2023 (April 21, 2023): 1–8. http://dx.doi.org/10.1155/2023/1319959.

Full text
Abstract:
To improve the match between designed clothing and the user's body shape, the author proposes an interactive clothing optimization design model based on 3D printing. The user logs in to the system through the user layer. The display layer uses 3D scanning technology to scan the head, torso, and other body regions, collects the body data, and projects it to obtain the body outline. The contour lines are denoised with an interpolation algorithm, their feature points are extracted by corner detection, and the final feature points yield the final contour line, from which a 3D human body data file is generated and sent to the interface layer. The interface layer supports clothing design through style design and the addition of colours and patterns, and a global optimization method adaptively adjusts the garment pieces to complete the design; the designed clothing is then fitted in the virtual fitting module until it meets the user's needs. Experimental results show that the system collects user measurements with an accuracy above 99.5% and that the adaptive adjustment of body dimensions converges in fewer than 30 iterations. In conclusion, the virtual clothing design system can effectively fill a gap in clothing e-commerce, with strong innovation and practicability.
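The corner-detection stage for picking body-contour feature points can be sketched with a simple turning-angle test; the neighbourhood step and angle threshold below are illustrative, not the paper's parameters.

```python
import numpy as np

def contour_feature_points(contour, angle_threshold=150.0, step=5):
    """Keep body-contour points where the silhouette turns sharply.

    contour         : (n, 2) array of ordered points on a denoised body silhouette
    angle_threshold : a point is kept when the angle (in degrees) formed with its
                      neighbours drops below this value
    step            : how many points away the neighbours are taken
    """
    keep = []
    n = len(contour)
    for i in range(n):
        a, b, c = contour[(i - step) % n], contour[i], contour[(i + step) % n]
        v1, v2 = a - b, c - b
        cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        if angle < angle_threshold:
            keep.append(i)                      # sharp turn: likely a body landmark
    return contour[keep]
```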
APA, Harvard, Vancouver, ISO, and other styles