Dissertations / Theses on the topic 'Procedural simulation'

Consult the top 50 dissertations / theses for your research on the topic 'Procedural simulation.'

1

Sen, Mahasweta. "A procedural comparison of combat tactics: a simulation approach." Thesis, Virginia Polytechnic Institute and State University, 1989. http://hdl.handle.net/10919/53245.

Abstract:
Naval operational planning is complex because it depends on uncertain future combat conditions. Effective decision making is directly related to the validity of assumptions regarding future combat scenarios and the appropriateness of the selected procedures. This thesis develops a two-phase procedure for comparing multiple combat tactics. In the first phase, simulation models are developed to replicate combat under each tactic; the models generate data for performance measures relevant to comparing a wide range of tactics. In the second phase, the data for the performance measures are analyzed using the sign test or the Wilcoxon signed-rank test. The study establishes the applicability of simulation modeling, the appropriateness of the performance measures, and the use of the sign test to compare combat tactics of any degree of complexity. An application of the procedure is illustrated using two hypothetical tactics.
Master of Science
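The second-phase analysis described above can be sketched as an exact sign test on paired outputs from the two tactic simulations. The following is a minimal illustration only: the performance measure, sample size, and distributions are hypothetical stand-ins, not the thesis's actual combat models.

```python
import math
import random

def sign_test(pairs):
    """Two-sided exact sign test on paired observations (a_i, b_i).

    Ties are discarded; under H0 the number of positive differences
    follows Binomial(n, 0.5).
    """
    diffs = [a - b for a, b in pairs if a != b]
    n = len(diffs)
    n_pos = sum(1 for d in diffs if d > 0)
    k = min(n_pos, n - n_pos)
    # Exact two-sided p-value: 2 * P(X <= k), capped at 1.
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return n_pos, n, min(1.0, 2 * tail)

random.seed(1)
# Hypothetical paired performance measures from 30 simulation replications
# of two tactics (e.g., a normalized attrition score per replication).
tactic_a = [random.gauss(0.60, 0.05) for _ in range(30)]
tactic_b = [random.gauss(0.55, 0.05) for _ in range(30)]
n_pos, n, p = sign_test(zip(tactic_a, tactic_b))
print(f"positive differences: {n_pos}/{n}, two-sided p = {p:.4f}")
```

A small p-value would indicate that one tactic consistently outperforms the other across paired replications, without assuming normality of the performance measure.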
2

Tarantilis, Georgios E. "Simulating clouds with procedural texturing techniques using the GPU." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Sep%5FTarantilis.pdf.

Abstract:
Thesis (M.S. in Modeling, Virtual Environments and Simulations (MOVES))--Naval Postgraduate School, Sept. 2004.
Thesis Advisor(s): Rudy Darken, Joe Sullivan. Includes bibliographical references (p. 53). Also available online.
3

Johannesson, Eva. "Learning manual and procedural clinical skills through simulation in health care education." Licentiate thesis, Linköpings universitet, Sjukgymnastik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-75505.

Abstract:
The general aim of this thesis was to contribute to a deeper understanding of students' perceptions of learning in simulation skills training in relation to the educational design of the skills training. Two studies were conducted to investigate learning features and what clinical skills nursing students learn through simulation, and how. Undergraduate nursing students were chosen in both studies. Study I was conducted in semester three, and study II in semester six, the last semester. In study I, twenty-two students practised intravenous catheterisation in pairs in the regular curriculum, with an additional option of using two CathSim® simulators. In study II, ten students practised urethral catheterisation in pairs, using the UrecathVision™ simulator; this session was offered outside the curriculum, one pair at a time. In study I, three questionnaires were answered: one before the skills training, one after the skills training, and a third after the skills examination but before the students' clinical practice. The questions were both closed and open, and the answers were analysed with quantitative and qualitative methods. The results showed that the simulator was valuable as a complement to arm models. The students expressed some disadvantages, namely that there was no arm model to hold and into which to insert the needle, and that they missed a holistic perspective. The most prominent learning features were motivation, variation, realism, meaningfulness, and feedback. Other important features mentioned were a safe environment, repeated practice, active and independent learning, interactive multimedia, and a simulation device that was easy to use. In study II the students were video-recorded during the skills training. Afterwards, besides open questions, the video was used for individual interviews as stimulated recall. The interview data were analysed with qualitative content analysis.
Three themes were identified: what the students learn, how the students learn, and how the simulator can contribute to the students' learning. When learning clinical skills through simulation, motivation, meaningfulness, and confidence were expressed as important factors to take into account from a student perspective. The students learned manual and procedural skills, as well as professional behaviour, by preparing, watching, practising, and reflecting. From an educational perspective, variation, realism, feedback, and reflection were seen as valuable features to be aware of when organising curricula with simulators. Providing a safe environment, giving repeated practice, ensuring active and independent learning, using interactive multimedia, and providing a simulation tool that is easy to use were factors to take into account. The learning features revealed from the students' thoughts and experiences in these studies are probably general to some extent and may be used to understand and design clinical skills training in all health care education programmes. In transferring these results it is important to take the actual educational context into account.
4

Morkel, Chantelle. "Non-interactive modeling tools and support environment for procedural geometry generation." Thesis, Rhodes University, 2006. http://eprints.ru.ac.za/242/.

5

Sowndararajan, Ajith. "Quantifying the Benefits of Immersion for Procedural Training." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/34017.

Abstract:
Training is one of the most important and widely used applications of immersive virtual reality (VR). Research has shown that immersive virtual environments (IVEs) are beneficial for training motor activities and spatial activities, but it is unclear whether immersive VEs are beneficial for purely mental activities, such as memorizing a procedure. In this thesis, we present two experiments to identify benefits of immersion for a procedural training process. The first experiment is a between-subjects experiment comparing two levels of immersion in a procedural training task. For the higher level of immersion, we used a large L-shaped projection display; for the lower level of immersion, we used a typical laptop display. We asked participants to memorize two procedures: one simple and the other complex. We found that the higher level of immersion resulted in significantly faster task performance and reduced error for the complex procedure. As a result of the first experiment, we performed a controlled second experiment in which we compared two within-subjects variables, environment and location, under treatments formed by combinations of three between-subjects variables: software field of view (SFOV), physical FOV, and field of regard (FOR). We found that SFOV is the most essential component for learning a procedure efficiently using IVEs. We hypothesize that the higher level of immersion helped users memorize the complex procedure by providing enhanced spatial cues, leading to the development of an accurate mental map that could be used as a memory aid.
Master of Science
6

Cura, Rémi. "Inverse procedural Street Modelling : from interactive to automatic reconstruction." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1034/document.

Abstract:
World urban population is growing fast, and so are cities, inducing an urgent need for city planning and management. Increasing amounts of data are required as cities become larger and "smarter", and as more applications (planning, virtual tourism, traffic simulation, etc.) require those data. City-related data thus become larger and are integrated into more complex city models. Roads and streets are an essential part of the city, being the interface between public and private space, and between urban usages. Modelling streets (or street reconstruction) is difficult because streets can be very different from each other (in layout, functions, morphology) and contain widely varying urban features (furniture, markings, traffic signs) at different scales. In this thesis, we propose an automatic and semi-automatic framework to model and reconstruct streets using the inverse procedural modelling paradigm. The main guiding principle is to generate a generic procedural model and then to adapt it to reality using observations. In our framework, a "best guess" road model is first generated from very little information (road axis network and associated attributes) that is available in most national databases. This road model is then fitted to observations by combining in-base interactive user edition (using common GIS software as a graphical interface) with semi-automated optimisation. The optimisation approach adapts the road model so that it fits observations of urban features extracted from diverse sensing data. Both street generation (StreetGen) and interactions happen in a database server, as does the management of large amounts of street Lidar data (the observations), using a Point Cloud Server. We test our methods on the entire city of Paris, whose streets are generated in a few minutes and can be edited interactively (<0.3 s) by several concurrent users. Automatic fitting (a few minutes) shows promising results: the average distance to ground truth is reduced from 2.0 m to 0.5 m. In the future, this method could be combined with others dedicated to the reconstruction of buildings, vegetation, etc., so that an affordable, precise, and up-to-date city model can be obtained quickly and semi-automatically. This would also allow such models to be used in other application areas; indeed, the possibility of having common, more generic city models is an important challenge given the cost and complexity of their construction.
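The "generate, then fit to observations" principle above can be reduced to a toy example: a procedural road parameterised by a single half-width, adjusted so that its generated kerb line matches noisy kerb observations. The one-parameter model, the names, and the data below are illustrative assumptions only, not StreetGen's actual formulation.

```python
import random

random.seed(0)

def generate_kerb(axis_y, half_width, xs):
    """Procedural model: the kerb is the road axis offset by half_width."""
    return [(x, axis_y + half_width) for x in xs]

# Initial guess from a road-database attribute, e.g. a total width of 7 m.
half_width = 3.5
axis_y = 0.0
xs = range(100)

# Observed kerb points, as if extracted from sensing data
# (hypothetical ground-truth half-width 4.2 m, plus sensor noise).
observed = [(x, axis_y + 4.2 + random.gauss(0, 0.1)) for x in xs]

# Fitting step: least squares on one parameter has a closed form, the
# mean offset between generated and observed kerb heights.
generated = generate_kerb(axis_y, half_width, xs)
residuals = [oy - gy for (_, oy), (_, gy) in zip(observed, generated)]
half_width += sum(residuals) / len(residuals)
print(f"fitted half-width: {half_width:.2f} m")
```

The real framework optimises many such parameters per street segment against Lidar-derived observations; this sketch only shows why a good procedural prior plus a cheap fit can close most of the gap to ground truth.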
7

Abdul, Karim Ahmad. "Procedural locomotion of multi-legged characters in complex dynamic environments : real-time applications." Thesis, Lyon 1, 2012. http://www.theses.fr/2012LYO10181/document.

Abstract:
Multi-legged characters like quadrupeds, arachnids, and reptiles are an essential part of any simulation, and they greatly participate in making virtual worlds more life-like. These multi-legged characters should be capable of moving freely and in a believable way in order to convey a better immersive experience for the users. But these locomotion animations are quite rich due to the complexity of the navigated environments and the variety of the animated morphologies, gaits, body sizes, proportions, etc. Another challenge when modeling such animations arises from the lack of motion data, inherent to either the difficulty of obtaining them or the impossibility of capturing them. This thesis addresses these challenges by presenting a system capable of procedurally generating locomotion animations for dozens of multi-legged characters in real time and without any motion data. Our system is quite generic thanks to the chosen procedural techniques, and it is capable of animating different multi-legged morphologies. On top of that, the simulated characters have more freedom while moving, as we adapt the generated animations to dynamic complex environments in real time. The main focus is plausible movements that are, at the same time, believable and fully controllable. This controllability is one of the strengths of our system, as it gives the user the possibility to control all aspects of the generated animation, thus producing the needed style of locomotion.
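The core of procedural legged locomotion without motion data is typically a per-leg gait cycle driven by phase offsets. The sketch below is a deliberately simplified illustration of that idea, not the thesis's system: function names, the sinusoidal swing profile, and all parameters are assumptions.

```python
import math

def foot_height(t, leg_index, n_legs=4, period=1.0, step_height=0.1, duty=0.5):
    """Procedural gait sketch: every leg runs the same cycle, phase-offset.

    A leg is in stance (foot on the ground) for `duty` of the cycle and
    in swing (airborne, sinusoidal height) for the remainder.
    """
    phase = (t / period + leg_index / n_legs) % 1.0
    if phase < duty:
        return 0.0  # stance: foot planted
    swing = (phase - duty) / (1.0 - duty)
    return step_height * math.sin(math.pi * swing)  # swing: raise then lower

# At any instant the legs sit at different phases of the cycle,
# so only some feet are airborne at once.
heights = [foot_height(0.3, i) for i in range(4)]
print([round(h, 3) for h in heights])
```

A full system layers foot-placement selection, terrain adaptation, and body-posture solving on top of such a phase generator, which is what makes the approach usable in complex dynamic environments.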
8

Nikfetrat, Nima. "Video-based Fire Analysis and Animation Using Eigenfires." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23471.

Abstract:
We introduce new approaches for modeling and synthesizing realistic-looking 2D fire animations using video-based techniques and statistical analysis. Our approaches are based on real footage of various small-scale fire samples with customized motions that we captured for this research, and the final results can be utilized as image sequences in video games, motion graphics, and cinematic visual effects. Instead of conventional physically based simulation, we utilize example-based principal component analysis (PCA) and take it to a new level by introducing "Eigenfires", a new way to represent the main features of various real fire samples. The visualization of Eigenfires helps animators design fire interactively in a more meaningful and convenient way than known procedural approaches or other video-based synthesis models. Our system enables artists to control real-life fire videos through motion transitions and loops by selecting any desired ranges of any video clips; the system then takes care of the remaining part that best represents a smooth transition. Instead of tricking the eye with basic blending between similar shapes only, our flexible fire transitions are capable of connecting various fire styles. Our techniques are also effective for data compression, deliver real-time interactive recognition for high-resolution images, are easy to implement, and require little parameter tuning.
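The PCA machinery behind an "eigenfire" basis can be sketched in a few lines: flatten each video frame to a vector, centre the data, and keep the leading principal components. The random array below stands in for real fire footage, and the frame size, component count, and variable names are illustrative assumptions rather than the thesis's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a fire video: 200 frames of 32x32 grayscale intensities,
# each frame flattened to a 1024-dimensional vector.
frames = rng.random((200, 32 * 32)).astype(np.float64)

# Centre the frames and take leading principal components via SVD;
# each right-singular vector, reshaped to 32x32, is one "eigenfire".
mean_frame = frames.mean(axis=0)
_, s, vt = np.linalg.svd(frames - mean_frame, full_matrices=False)
k = 8
eigenfires = vt[:k]                              # (k, 1024) basis images
coords = (frames - mean_frame) @ eigenfires.T    # per-frame coefficients

# Any frame is approximated from its k coefficients plus the mean frame,
# which is also why the representation compresses well.
recon = coords @ eigenfires + mean_frame
print(eigenfires.shape, coords.shape)
```

Synthesis and transitions then operate in the low-dimensional coefficient space: nearby coefficient vectors correspond to visually similar frames, so smooth paths in that space yield smooth fire motion.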
9

Elkins, Ethan B. "Simulating Destruction Effects in SideFX Houdini." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/honors/524.

Abstract:
As movies, television shows, and other forms of media have progressed over the last century, the use of destruction sequences as a form of entertainment has grown seemingly exponentially. From enormous explosions to collapsing cities, destruction sequences have drawn people's attention in ways that are quite captivating. However, as content producers continue to push the limit of what is possible, reliance on practical effects dwindles in comparison to the use of computer-generated scenes. This thesis acknowledges the trend and dissects the entire process of how a general destruction sequence is made, from the research and planning process to the actual simulation of the effects. Various methods for attempting the creation of destruction are discussed with a single project in mind. The goal is not only to complete the sequence, but to do so in an efficient manner that can rival a professional workflow.
10

Culbertson, Greg S. "Investigating methods of conditioning fresh vegetables in retail establishments and exploring procedural modifications that improve product quality and safety." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1397488227.

11

Hallros, Per, and Niklas Pålsson. "SIMULATING A SYSTEM : Using video games as tools to promote self-directed learning." Thesis, Uppsala universitet, Institutionen för speldesign, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447884.

Abstract:
In response to the need for innovative ways of using games as tools for learning, we explore the design process of creating a game system meant to promote self-directed learning. This thesis explores what design pillars a game system needs to follow when making a game that is meant to promote self-directed learning through reflection on cause-and-effect relations. We use a theoretical framework based on procedural rhetoric and self-directed learning in video games to inform our design process when creating an eco-adapted game system that provides experimentation opportunities. We adapt an ecosystem as a simulated real-life context for our game environment and identify the major design pillars that video games looking to promote self-directed learning need to embody. The pillars we found most important were: (1) activate participation, to engage players by allowing them to experiment with different perspectives and the game state; (2) avoid correlating rhetorical arguments, so as not to influence players as they set their own goals when playing in an informal setting; (3) provide observational clarity, to let players learn how the actions they perform affect the actors and events in the game system; and (4) enable trial and error, to give players time to explore multiple approaches in a safe environment where they can fail and try again without penalties. This thesis focuses primarily on the design process and documentation around the creation of a game system that adopts self-directed learning principles as a central design directive. In our design documentation we provide an open discussion of our design process around the decisions, findings, and implementations that make up our simulation.
12

Ferreira, Lucas Nascimento. "Uma abordagem evolutiva para geração procedural de níveis em jogos de quebra-cabeças baseados em física." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-08012016-093518/.

Abstract:
In the last decade several search-based algorithms have been developed for generating levels in different types of games. The search space for level generation is typically constrained, since the game mechanics define feasibility rules for the levels. In some methods, evaluating level feasibility requires a simulation with an intelligent agent that plays the game. This evaluation process usually has noise, caused by random components in the simulator or in the agent's strategy. Several works have used a simulation for content evaluation; however, none of them has deeply discussed the presence of noise in this kind of approach. Thus, this work presents a genetic algorithm capable of generating feasible levels that are evaluated by an intelligent agent in a noisy simulation. The algorithm was applied to physics-based puzzle games with the Angry Birds mechanics. A level representation in the form of individuals is introduced, which allows the genetic algorithm to evolve levels with distinct characteristics. The fitness-function noise is handled by a new approach, based on a cache system, which helps the genetic algorithm find good candidate solutions. Three sets of experiments were conducted to evaluate the algorithm. The first compares the proposed cache approach with other noise-reduction methods from the literature. The second measures the expressivity of the genetic algorithm, considering the structural characteristics of the generated levels. The last evaluates design aspects (such as difficulty, immersion, and fun) of the generated levels using questionnaires answered by human players via the Internet. Results showed that the genetic algorithm was capable of generating distinct levels that are as immersive as manually designed levels. Moreover, the cache approach handled the noise in the fitness calculations properly, allowing a correct elitist evolution.
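The cache idea for noisy fitness can be sketched as follows: instead of trusting a single simulation run, each genome's fitness estimate is a running average over every evaluation it has received, so elitism is not misled by one lucky sample. Everything below is a hypothetical stand-in (toy fitness, toy mutation), not the thesis's actual representation or simulator.

```python
import random

random.seed(0)

def noisy_fitness(genome):
    """Stand-in for an agent-played simulation: true fitness plus noise."""
    true_value = -sum((g - 0.5) ** 2 for g in genome)
    return true_value + random.gauss(0.0, 0.1)

cache = {}  # genome -> (running mean fitness, evaluation count)

def cached_fitness(genome):
    """Average repeated noisy evaluations of the same genome.

    Elite individuals get re-evaluated every generation, so their
    estimates sharpen instead of freezing on one noisy sample.
    """
    mean, count = cache.get(genome, (0.0, 0))
    sample = noisy_fitness(genome)
    mean = (mean * count + sample) / (count + 1)
    cache[genome] = (mean, count + 1)
    return mean

# Minimal elitist loop over genomes of 5 real-valued genes in [0, 1].
pop = [tuple(random.random() for _ in range(5)) for _ in range(20)]
for _ in range(30):
    pop.sort(key=cached_fitness, reverse=True)
    elite = pop[:5]
    # Each elite individual spawns 3 clamped Gaussian mutants.
    pop = elite + [
        tuple(min(1.0, max(0.0, g + random.gauss(0, 0.05))) for g in e)
        for e in elite for _ in range(3)
    ]
best = max(pop, key=cached_fitness)
print(best)
```

Because survivors accumulate evaluations across generations, the cache acts as an implicit resampling scheme: the longer a genome stays in the elite, the more reliable its fitness estimate becomes.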
13

Bouthors, Charlie. "Etude de pédagogie médicale sur la simulation procédurale en chirurgie orthopédique et traumatologique pour les étudiants en 2ème et 3ème cycle des études médicales." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASW001.

Abstract:
Procedural simulation is expanding in orthopaedic and trauma surgery (OT), but in France its implementation among residents has not yet been reported. Enhancing procedural training for medical students implies the development of new teaching methods and simulators. A national survey was conducted among academic teachers and residents in OT. The results showed that the maximal potential had not been reached; the main reasons given were lack of funding and time. Both teachers and residents nevertheless acknowledged the advantages of simulation. The traditional method of teaching a procedure implies a continuous, uninterrupted demonstration of the entire procedure to the learner, who is then expected to replicate it. Deconstructing the procedure into several key steps shown sequentially (the micro-task method) could enhance the learning of technical skills. Among a population of medical students undergoing simulation training on below-elbow casting, immediately after the training session the students trained by the micro-task method demonstrated higher performance than those trained by the traditional method, according to various grading scales. Six months after the training, performance had decreased and was equal in both groups. The only independent factor linked to better performance was a rotation in OT. To be effective, simulation training therefore requires repeated practice and bedside teaching. The development of a procedural simulator for below-elbow cast application and removal could enhance this training. A real-size upper limb was modelled and constructed through three-dimensional printing. To objectively monitor the operator's gestures, different sensors (pressure, fracture and wrist mobility, temperature, cast-saw vibrations, cast-saw skin contact) were integrated into the simulator. Participants with different levels of expertise (novices and experts) tested the simulator. Although realism was deemed satisfactory in both groups, the simulator did not mimic the human aspect perfectly, notably the soft tissues. The simulator appropriately recorded the participants' gestures and seemed able to differentiate levels of expertise. Its pedagogical interest remains to be evaluated in further work.
14

Zhu, Wenhua. "3D modeling of city building and lifecycle simulation." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2344/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Avec la construction et le développement de la ville intelligente, la façon de construire le modèle 3D réaliste des grands bâtiments de la ville rapidement et efficacement devient le hotspot de recherche. Dans cette thèse, une méthode procédurale de modélisation intelligente est proposée pour construire rapidement et efficacement un modèle de construction de ville 3D à grande échelle basé sur la modélisation de la forme de la façade et de la grammaire de forme. La technologie de l'information du bâtiment (BIM) est un moyen technique important pour améliorer l'industrie de la construction, pour la conception du bâtiment de la ville et la construction de la meilleure recherche et l'application de la technologie BIM est la clé, de gérer efficacement les informations du cycle de vie du bâtiment et de réaliser le partage et l'échange. Cette thèse a étudié l'acquisition et le traitement des données de modélisation. Google Earth et le logiciel ArcGIS sont principalement utilisés pour acquérir et traiter des données d'images-cartes et des données de cartes d'élévation de la zone cible, ces deux types de correspondance et de superposition de données, qui peuvent générer des données de terrain urbain 3D avec des informations de localisation géographique. Ensuite OpenStreetMap est utilisé pour acquérir les données routières de la zone cible, et il peut être optimisé pour le réseau routier nécessaire par le logiciel JOSM. La technologie de balayage laser 3D est utilisée pour collecter des images de texture de surface de bâtiment et pour créer le modèle de nuages de points de la modélisation d'architecture cible afin d'obtenir les dimensions de modélisation par mesure. 
Sur cette base, cette thèse a principalement étudié le principe et le processus de la règle CGA pour créer des modèles de construction, et étudié la méthode qui peut séparer les éléments architecturaux en utilisant la segmentation d'image pour générer automatiquement la règle CGA et de créer ensuite le modèle de construction. Ainsi, des modèles de construction 3D ont été établis dans le logiciel CityEngine en utilisant les règles CGA et la technologie de segmentation des façades. Cette thèse a construit le modèle d'information intégré au bâtiment urbain (CBIIM) basé sur BIM. L'information sur la construction de la ville est classée et intégrée, et le bâtiment et la composante ont été décrits avec la norme IFC, afin de gérer efficacement les informations du cycle de vie du bâtiment. Cette thèse étudie la technologie du modèle d'association d'information intégrée, qui permet de réaliser une conception standardisée des composants avec des caractéristiques associées et une conception intelligente des bâtiments avec des paramètres associés dans les règles de connaissances combinées avec l'IFC. La technologie de simulation de la construction de visualisation est étudiée. Les règles de connaissance dans le modèle d'information intégré fournissent une référence fiable pour la simulation de construction, et la scène de simulation est créée en invoquant le modèle d'information intégré, ainsi le processus de simulation est terminé. En prenant le campus Baoshan de l'Université de Shanghai comme exemple, le processus de modélisation de la scène entière est illustré, et les étapes de modélisation de toutes sortes d'objets 3D sont décrites en détail pour résoudre les problèmes spécifiques dans le processus de modélisation réelle. Ainsi, la faisabilité et la validité de la méthode de modélisation intelligente procédurale sont vérifiées. 
Prenant comme exemple le dortoir de l'Université de Shanghai, une simulation et le modèle de simulation ont été créés par les informations intégrées, combinées aux informations de construction pertinentes, la simulation de construction a été complétée par le programme. Ainsi, la faisabilité et la validité du CBIIM sont vérifiées
With the construction and development of the smart city, how to construct realistic 3D models of large-scale city buildings quickly and efficiently has become a research hotspot. In this thesis, a novel 3D modeling approach is proposed to quickly and efficiently build 3D models of large-scale city buildings based on shape grammars and facade rule modeling. Building Information Modeling (BIM) is an important technical means of enhancing the construction industry; for city building design and construction, the key is how best to research and apply BIM technology. In this thesis, a City Building Integrated Information Model (CBIIM) is specified to manage building lifecycle information effectively and to realize information sharing and exchange. The thesis studied the acquisition and processing of modeling data. Google Earth and ArcGIS software are mainly used to acquire and process image-map and elevation-map data of the target area; these two kinds of data are matched and overlaid to generate 3D city terrain data with geographic location information. OpenStreetMap is then used to acquire road data of the target area, which can be optimally processed into the necessary road network with JOSM software. 3D laser scanning technology is used to collect building surface texture images and to create a point-cloud model of the target architecture so as to obtain modeling dimensions by measurement. On this basis, the thesis mainly studied the principle and process of CGA rules for creating building models, together with a method that separates architectural elements using image segmentation in order to generate CGA rules automatically and then create building models. Thus 3D building models were established in the CityEngine software using CGA rules and facade modeling technology. The thesis specified the City Building Integrated Information Model (CBIIM) based on BIM.
City building information is classified and integrated, and buildings and components are described with the IFC standard, in order to manage building lifecycle information effectively. The thesis studies integrated information association model technology, which realizes standardized component design with associated features and intelligent building design with associated parameters in knowledge rules combined with IFC. Construction simulation technology is also studied: the knowledge rules in the integrated information model provide a reliable reference for construction simulation, the simulation scene is created by invoking the integrated information model, and the construction simulation process is completed by the program. Taking the Baoshan Campus of Shanghai University as an example, the modeling process of the whole scene is illustrated, and the modeling steps for all kinds of 3D objects are described in detail to solve specific problems in the actual modeling process, verifying the feasibility and validity of the procedural intelligent modeling approach. Taking a dormitory of Shanghai University as an example, a simulation scene and simulation model were created from the integrated information; combined with the relevant construction information, the construction simulation was completed by the program, verifying the feasibility and validity of the CBIIM.
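The facade modeling the abstract describes rests on CGA-style split rules, which recursively subdivide a facade shape into floors and window tiles. As a rough illustration of that split idea only (a sketch with invented names, not CityEngine's actual CGA syntax):

```python
def derive_facade(width, height, ground_h=4.0, floor_h=3.0, tile_w=1.5):
    """Split a facade into a ground floor plus repeated upper floors, each
    subdivided horizontally into window tiles: a simplified stand-in for the
    repeat-split operation of CGA shape grammars."""
    floors, y = [("GroundFloor", 0.0, ground_h)], ground_h
    while y + floor_h <= height:              # repeat floors while they fit
        floors.append(("UpperFloor", y, floor_h))
        y += floor_h
    n = max(1, round(width / tile_w))         # as many ~tile_w tiles as fit
    w = width / n                             # snap tile width to fill exactly
    return [(label, y0, h, [(i * w, w) for i in range(n)])
            for label, y0, h in floors]

# A 12 m wide, 10 m tall facade: one ground floor, two upper floors, 8 tiles each.
floors = derive_facade(12.0, 10.0)
```

In a real grammar these labels would be nonterminal shapes refined further by more rules (window, door, ledge); the sketch only shows the subdivision step.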
15

Fita, López Josep Lluis. "Temporal evolution of ancient buildings." Doctoral thesis, Universitat de Girona, 2019. http://hdl.handle.net/10803/668980.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Nowadays, improvements in Computer Graphics have benefited fields such as Cultural Heritage, where the main efforts have focused on the digital preservation of historic buildings and urban structures. In this thesis, we have developed a technique to procedurally model ancient stone buildings, combined with structural simulation, and we have demonstrated its viability with non-specialized tools designed for cultural-heritage users. Some historical events involving natural phenomena, such as earthquakes, determined the evolution of city urban infrastructure; we therefore present a low-cost tool that allows the reproduction of an earthquake on old stone buildings. Furthermore, we have designed a virtual-reality pipeline compatible with low-cost smartphones that allows the recreation of historical events.
Avui dia, les millores en Gràfics per Computador ha beneficiat camps com el Patrimoni Cultural, on els esforços principals s’han centrat en la preservació digital d’edificis històrics o estructures urbanes. En aquesta tesi hem desenvolupat una tècnica per modelar de manera procedural edificis antics, combinant-la amb simulació estructural, i hem demostrat la seva viabilitat basada en eines no especialitzades dissenyades per a usuaris de patrimoni cultural. D’altra banda, alguns esdeveniments històrics relacionats amb fenòmens naturals, com terratrèmols, van determinar l’evolució urbana d’una ciutat. En aquesta tesi presentem una eina de baix cost que permet la reproducció d’un terratrèmol en edificis antics. A més a més, en aquesta tesi hem dissenyat un sistema de realitat virtual adequat i compatible amb telèfons intel·ligents de baix cost que permet la recreació d’esdeveniments històrics
16

Grosbellet, Francois. "Génération de détails dans les mondes procéduraux." Thesis, Limoges, 2015. http://www.theses.fr/2015LIMO0110/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
La génération de mondes virtuels est un domaine de recherche très actif en informatique graphique : la modélisation de plantes, d’arbres, de bâtiments, de villes ou de terrains, et les simulations de vieillissement sont des domaines très explorés. Dans ce contexte, les changements d’apparences constituent également un domaine de recherche important, de part leur impact majeur dans le réalisme des scènes virtuelles produites. Ces travaux se concentrent sur la mise au point d’approches procédurales permettant de représenter les changements d’apparence sous la forme de décorations géométriques (accumulation de neige ou formation de glace, dépôt de feuilles mortes, etc.) à la fois à grande échelle et avec un très haut niveau de détail. Nous proposons d’abord un modèle d’organisation hiérarchique de scènes qui repose sur un arbre de construction dont les feuilles sont des objets environnementaux, des objets qui génèrent eux mêmes leurs décorations géométriques. Nous présentons ensuite un formalisme implicite pour définir l’environnement, qui contient l’ensemble des informations guidant la génération des décorations. Finalement, nous détaillons quatre méthodes de génération procédurale pour la création des décorations géométriques (neige, glace, herbes, feuilles) des objets environnementaux
Procedural modeling of virtual worlds is an active research field in computer science, with a large body of published work on the modeling of plants, trees, buildings, cities and terrains, and on aging and weathering simulations. In this context, changes of appearance are also a very active research topic, given their impact on the realism of the produced virtual scenes. This research focuses on procedural methods that can represent changes of appearance as geometric decorations (snow cover, ice growth, leaf deposits, etc.) on very large scenes with a high level of detail. We first propose a hierarchical scene design based on a construction tree whose leaves are environmental objects, a new kind of object that generates its own geometric decorations. We then present an implicit formalism to define the environment, which contains all the information needed to guide the generation of decorations. Finally, we detail four procedural methods for generating the geometric decorations (snow, ice, grass, leaves) of the environmental objects.
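The environmental-object idea can be caricatured in a few lines: each leaf of the construction tree queries an implicit environment field and emits its own decorations. Everything here (the snow field, the class, the threshold) is invented for the sketch, not taken from the thesis:

```python
import math

def snow_field(x, z):
    """Toy implicit environment: snow intensity in [0, 1], reduced where the
    sample point is sheltered (here, crudely, wherever z < 0)."""
    shelter = 0.5 if z < 0 else 0.0
    return max(0.0, min(1.0, 0.8 + 0.2 * math.sin(0.3 * x) - shelter))

class EnvObject:
    """A scene-tree leaf that generates its own geometric decorations."""
    def __init__(self, name, samples):
        self.name, self.samples = name, samples   # surface sample points (x, z)

    def decorate(self):
        # Emit a snow patch at every sufficiently exposed sample point.
        return [("snow_patch", p, snow_field(*p))
                for p in self.samples if snow_field(*p) > 0.5]

bench = EnvObject("bench", [(0.0, 1.0), (1.0, 1.0), (2.0, -1.0)])
decorations = bench.decorate()   # the sheltered third sample stays bare
```

The real system replaces the toy field with an implicit formalism carrying all environment information, and the emitted primitives with detailed geometry for snow, ice, grass or leaves.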
17

Jannin, Leslie. "Approche psycho-ergonomique de l'usage de la simulation en e-learning pour l'apprentissage de procédures : le cas du point de suture Atomized or delayed execution? An alternative paradigm for the study of procedural learning, in Journal of Educational Psychology 111(8), 2019." Thesis, Brest, 2020. http://www.theses.fr/2020BRES0027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
L’apprentissage de gestes chirurgicaux, est un élément majeur de la formation des professions médicales. Un impératif éthique impose désormais que l’apprentissage de ces gestes s’effectue en simulation. Le but de cette thèse était de déterminer les facteurs psychologiques et pédagogiques permettant d’optimiser l’apprentissage procédural chez des étudiants en Médecine en combinant e-learning et simulation. Pour répondre à cet objectif nous avons mis en place 5 études. La première cherchait à vérifier que les apprenants réalisaient une atomisation de l’action en début d’apprentissage. La deuxième comparait l’utilisation d’un paradigme permettant l’atomisation de l’action et d’un paradigme de réalisation différée. Les deux études suivantes s’intéressaient à l’ergonomie des instructions et plus particulièrement au point de vue de présentation, en tenant compte des aptitudes visuo-spatiales des apprenants. La dernière étude visait à vérifier la validité d’une situation d’apprentissage en blended learning, en comparant deux organisations pédagogiques. Les apports de cette thèse se situent à 3 niveaux. Au plan du déroulement de l'apprentissage procédural, les apprenants réalisent une atomisation de l’action lors de la première phase de l’apprentissage. Au plan méthodologique, il est donc essentiel que le paradigme d’étude utilisé prenne en compte ce processus, ainsi que les nombreuses répétitions nécessaires à l’apprentissage procédural. Au plan pédagogique, le point de vue égocentré serait le plus profitable, quelles que soient les aptitudes des apprenants. Enfin, l’utilisation d’une combinaison de e-learning et de simulation semble efficace pour l’apprentissage de procédures
Learning surgical gestures is an important part of training for the medical professions, and an ethical imperative now requires that these gestures be learned in simulation. The objective of this thesis was to determine the psychological and pedagogical factors that optimize procedural learning among medical students when combining e-learning and simulation. To meet this objective, we carried out five studies. The first sought to confirm that learners atomize the action at the beginning of learning. The second compared a methodological paradigm allowing action atomization with a paradigm of delayed execution. The next two studies were concerned with instruction design, and more particularly with the presentation viewpoint, taking into account the visuospatial abilities of the learners. The last study investigated the validity of a blended-learning course by comparing two pedagogical organizations. The contributions of this thesis fall into three areas. In terms of the procedural learning process, learners atomize the action during the first phase of learning. Methodologically, it is therefore essential that the study paradigm take this process into account, as well as the many repetitions procedural learning requires. Pedagogically, the egocentric viewpoint appears to be the most profitable, whatever the visuospatial abilities of the learners. Finally, combining e-learning with face-to-face simulation seems effective for procedural learning.
18

McDonald, Joseph Douglas. "A behavioral intervention for reducing post-completion errors in a safety-critical system." Thesis, Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51881.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A widespread and persistent memory error that people commit on a daily basis is the post-completion error (PCE), i.e., forgetting to complete the final step of a procedural task. PCEs occur in the railroad industry when a locomotive conductor changes the direction of a rail switch but fails to report the change; this error can contribute to unsafe conditions, as another train traveling on the same track could derail. Although training can help reduce some of the factors leading to unsafe conditions on the rail, research has demonstrated that PCEs differ from other errors of omission in that they cannot be eliminated through training, which makes them a difficult problem to address. There is therefore a need to explore new remedial actions designed to reduce PCEs. The current study investigated the effectiveness of a theoretically motivated intervention at reducing PCEs in trainyard operations, where making these errors could be life-threatening. Twenty-eight undergraduates completed trainyard tasks within a high-fidelity simulator. Each participant received the behavioral intervention in one block and no intervention in another; specifically, participants were required to perform an additional task designed to remind them of the post-completion (PC) step. The intervention significantly reduced PCE rates in the context of trainyard operations, on average by 65%. We discuss the implications of these results for reducing trainyard accidents, and how this outcome can contribute to the literature on the causes of PCEs.
19

Bernhardt, Adrien. "Modèles pour la création interactive intuitive." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00875519.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis addresses the intuitive interactive creation of 3D shapes, motivated for artists by the need for efficiency and expressiveness, and for the general public by the democratization of digital modeling for video games and 3D printing. In this context, to explore new forms of interaction, we developed several modelers and modeling techniques: a 3D free-form modeler based on a painting metaphor, a real-time vector-based landscape modeler, and a first-person sketch-based landscape modeler. The scientific contributions range from an implicit blending operator that guarantees the locality of the blend, to the use of a biharmonic formulation for terrain modeling that enables fine, intuitive, real-time editing of terrains, by way of the modeling of landscapes from first-person silhouette sketches.
20

Efremov, Semen. "Croissance paramétrée et bruit procédural pour la conception de métamatériaux mécaniques." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Avec le développement constant des technologies, les capacités de calcul et de fabrication augmentent, les méthodes de production évoluent et de nouvelles techniques apparaissent. Par conséquent, le besoin de nouveaux matériaux aux propriétés adaptées et optimisées pour différentes applications se fait sentir. Les composites périodiques avec une topologie de microstructure adaptée, appelés métamatériaux cellulaires, sont largement étudiés dans ce contexte. Ces structures sont connues pour leurs propriétés mécaniques remarquables, notamment une résistance élevée, un poids réduit et une absorption d'énergie accrue. L'utilisation de ces matériaux permet d'obtenir des propriétés physiques améliorées ou des caractéristiques fonctionnelles spécifiques et apporte un gain économique et un bénéfice écologique. Cette thèse est consacrée au développement et à l'analyse de méthodes de conception assistée par ordinateur de matériaux aux propriétés mécaniques adaptées. Les métamatériaux mécaniques ont été étudiés à travers deux approches différentes : la modélisation de structures périodiques par un modèle de croissance paramétré et des fonctions de bruit procédurales. Pour relever le défi d'obtenir des microstructures quasi régulières avec des propriétés variant progressivement, j'ai proposé et étudié un matériau cellulaire engendré par un processus de croissance. La croissance est paramétrée par un ensemble d'étoiles 3D à chaque point du réseau, définissant la géométrie qui apparaîtra autour. Des tuiles individuelles peuvent être calculées et utilisées dans un treillis périodique, ou une structure globale peut être produite par gradation spatiale, en changeant l'ensemble paramétrique en forme d'étoile à chaque emplacement du treillis. Au-delà de la gradation spatiale libre, un avantage important de cette approche est que les symétries élastiques peuvent être intrinsèquement renforcées. 
Nous montrons dans ce travail comment les symétries partagées entre le réseau et l'ensemble étoilé se traduisent directement en symétries de la réponse élastique des structures périodiques. Ainsi, l'approche permet de restreindre la symétrie des réponses élastiques - monoclinique, orthorhombique, trigonale, etc. - tout en explorant librement un large espace de géométries et de topologies possibles. Je fournis une étude complète de l'espace de symétries et de larges combinaisons de paramètres de processus de croissance. De plus, je démontre par des résultats numériques et expérimentaux les réponses attendues déclenchées par les structures obtenues. La deuxième contribution de cette thèse est une nouvelle technique de synthèse procédurale de motifs. Cette approche présente des propriétés souhaitables pour la modélisation de motifs très contrastés, qui sont bien adaptés pour produire des détails de surface et de microstructure. Cette approche définit un champ de phase lisse stochastique - un bruit de phase - qui est ensuite introduit dans une fonction périodique (par exemple une onde sinusoïdale), produisant un champ oscillant avec des fréquences principales prescrites et des oscillations de contraste préservées. Je présente dans cette thèse un modèle mathématique qui repose sur une reformulation du bruit de Gabor en termes de champ phasor qui permet une séparation claire entre l'intensité locale et la phase. En particulier, j'étudie le comportement du bruit phasor en termes de spectre de puissance. Ainsi, une étude théorique comparative du bruit en phase est réalisée afin de comprendre les liens entre ses propriétés et ses paramètres
With the constant development of technologies, computational and manufacturing capabilities increase, production methods evolve, and new techniques appear. As a result, a need arises for new materials with properties tailored and optimized for different applications. Periodic composites with tailored microstructure topology, called cellular metamaterials, are extensively studied in this context. These structures are known for their remarkable mechanical properties, including high strength, low weight, and increased energy absorption. Using such materials makes it possible to achieve improved physical properties or specific functional features, with both economic and ecological benefits. This thesis is dedicated to the development and analysis of methods for the computer-aided design of materials with tailored mechanical properties. Mechanical metamaterials were studied through two different approaches: modeling periodic structures through a parameterized growth model, and procedural noise functions. To tackle the challenge of obtaining near-regular microstructures with progressively varying properties, I proposed and studied a cellular material spawned by a growth process. The growth is parameterized by a 3D star-shaped set at each lattice point, defining the geometry that will appear around it. Individual tiles may be computed and used in a periodic lattice, or a global structure may be produced under spatial gradation, changing the parametric star-shaped set at each lattice location. Beyond free spatial gradation, an important advantage of this approach is that elastic symmetries can be intrinsically enforced. It is shown in this work how symmetries shared between the lattice and the star-shaped set translate directly into symmetries of the periodic structures' elastic response.
Thus, the approach makes it possible to restrict the symmetry of the elastic responses (monoclinic, orthorhombic, trigonal, and so on) while freely exploring a wide space of possible geometries and topologies. I provide a comprehensive study of the space of symmetries and of broad combinations of growth-process parameters, and I demonstrate through numerical and experimental results the expected responses triggered by the obtained structures. The second contribution of this thesis is a novel procedural pattern-synthesis technique. It exhibits properties desirable for modeling highly contrasted patterns, which are well suited to producing surface and microstructure details. The approach defines a stochastic smooth phase field, a phasor noise, that is then fed into a periodic function (e.g. a sine wave), producing an oscillating field with prescribed main frequencies and preserved contrast oscillations. I present a mathematical model that builds on a reformulation of Gabor noise in terms of a phasor field, affording a clear separation between local intensity and phase. In particular, I study the behavior of phasor noise in terms of its power spectrum, through a comparative theoretical study aimed at understanding the links between its properties and parameters.
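The phasor-noise construction described above can be sketched naively: sum Gaussian-windowed complex phasors (a Gabor-noise-like field), then keep only the phase of the sum and feed it to a sine. A minimal sketch, not the thesis's optimized formulation; kernel count, bandwidth, and frequency below are arbitrary:

```python
import math, random

def phasor_noise(x, y, kernels, freq):
    """Evaluate 2D phasor noise at (x, y). Because only the *phase* of the
    complex sum is fed to the sine, the output keeps full contrast even
    where the field's local intensity is low."""
    re = im = 0.0
    for cx, cy, theta in kernels:
        dx, dy = x - cx, y - cy
        g = math.exp(-4.0 * (dx * dx + dy * dy))               # Gaussian window
        ph = 2.0 * math.pi * freq * (dx * math.cos(theta) + dy * math.sin(theta))
        re += g * math.cos(ph)
        im += g * math.sin(ph)
    return math.sin(math.atan2(im, re))                        # in [-1, 1]

random.seed(0)
kernels = [(random.random(), random.random(), random.uniform(0.0, math.pi))
           for _ in range(32)]
field = [phasor_noise(i / 50.0, j / 50.0, kernels, 8.0)
         for i in range(50) for j in range(50)]
```

Aligning all kernel orientations yields stripe-like patterns; randomizing them yields labyrinthine ones, which is the kind of control the power-spectrum analysis in the thesis formalizes.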
21

Matejcik, Frank J. "Heteroscedastic multiple comparison procedures for computer simulation /." The Ohio State University, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487780393268038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Van, der Merwe Andre. "Simulation procedure for marker and camera placement." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53761.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (MScEng) -- Stellenbosch University, 2003.
INTRODUCTION: The Medical Radiation department at iThemba LABS provides proton beam therapy facilities for irradiation of intracranial, head and neck lesions. Proton radiation treatment offers a number of advantages over alternative radiation therapy modalities. The most significant advantage is the ability to localize the dose to the lesion or target volume [16]. Lesions are located by means of medical imaging processes, such as Computer Tomography (CT) or Magnetic Resonance Imaging (MRI) scans. Patient treatment commences at the existing treatment facility of iThemba LABS. The patient positioning system that is currently in use at this facility was designed for only one horizontal beam delivery system and a limited number of treatment positions. The possibility of acquiring an additional beam delivery system and improving the utilization of the system resulted in plans to expand the current proton therapy capabilities. These plans resulted in the development of a new treatment vault, complete with a new patient positioning system. The new vault will cater for two beam delivery systems and expand current treatment positions.
23

Turley, Carole. "Calibration Procedure for a Microscopic Traffic Simulation Model." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1747.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Prieto, Bernal Juan Carlos. "Multiparametric organ modeling for shape statistics and simulation procedures." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0010/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
La modélisation géométrique a été l'un des sujets les plus étudiés pour la représentation des structures anatomiques dans le domaine médical. Aujourd'hui, il n'y a toujours pas de méthode bien établie pour modéliser la forme d'un organe. Cependant, il y a plusieurs types d'approches disponibles et chaque approche a ses forces et ses faiblesses. La plupart des méthodes de pointe utilisent uniquement l'information surfacique mais un besoin croissant de modéliser l'information volumique des objets apparaît. En plus de la description géométrique, il faut pouvoir différencier les objets d'une population selon leur forme. Cela nécessite de disposer de statistiques sur la forme d'un organe dans une population donnée. Dans ce travail de thèse, on utilise une représentation capable de modéliser les caractéristiques surfaciques et internes d'un objet. La représentation choisie (s-rep) a en plus l'avantage de permettre de déterminer les statistiques de forme pour une population d'objets. En s'appuyant sur cette représentation, une procédure pour modéliser le cortex cérébral humain est proposée. Cette nouvelle modélisation offre de nouvelles possibilités pour analyser les lésions corticales et calculer des statistiques de forme sur le cortex. La deuxième partie de ce travail propose une méthodologie pour décrire de manière paramétrique l'intérieur d'un objet. La méthode est flexible et peut améliorer l'aspect visuel ou la description des propriétés physiques d'un objet. La modélisation géométrique enrichie avec des paramètres physiques volumiques est utilisée pour la simulation d'image par résonance magnétique pour produire des simulations plus réalistes. Cette approche de simulation d'images est validée en analysant le comportement et les performances des méthodes de segmentations classiquement utilisées pour traiter des images réelles du cerveau
Geometric modeling has been one of the most researched areas in the medical domain, yet there is still no well-established methodology for modeling the shape of an organ. Many approaches are available, each with different strengths and weaknesses. Most state-of-the-art methods model shape using surface information only, while there is an increasing need for techniques that support volumetric information. Beyond shape characterization, a technique to differentiate objects by shape is needed, which requires computing statistics on shape. The current challenge of research in the life sciences is to create models that represent the surface and the interior of an object and yield statistical differences based on shape. In this work, we use a shape-modeling technique (the s-rep) that can model surface and internal features and is well suited to computing shape statistics. Using this technique, a procedure to model the human cerebral cortex is proposed; this novel representation offers new possibilities for analyzing cortical lesions and computing shape statistics on the cortex. The second part of this work proposes a methodology to parameterize the interior of an object. The method is flexible and can enhance the visual aspect or the description of the physical properties of an object. The geometric modeling, enriched with physical parameters, is used to produce simulated magnetic resonance images. This image-simulation approach is validated by analyzing the behavior and performance of classic segmentation algorithms for real images.
25

Boiardi, Andrea. "Study of a Procedure for Unit Load Transport Simulation." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Accurate transport simulation is necessary to determine the probability that a unit load might be disassembled or undergo severe structural changes while travelling to its destination. With the evaluations that accurate transport simulation provides, companies might, for example, optimize the amount of plastic used when wrapping a unit load. Beyond the immense cost reduction from a lower incidence of product-ruining accidents during transport, optimizing plastic usage would also help avoid over-wrapping, with better environmental outcomes. Multi-axial simulation is increasingly being adopted by companies in the hope of representing more closely the stresses a load undergoes during transport; however, no international standard yet outlines a procedure for doing so. This work starts from the identification of the best methods for recording simulation-oriented transport data precisely and reliably. Recordings of different types of transport were conducted to highlight their different natures and to show how a sensor can be used in different contexts. The main existing methods for simulating transport are then analyzed and used to create a new procedure for selecting the most suitable one, depending mainly on the availability of technological resources and transport data. Lastly, considerations on possible future improvements to this work are presented.
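A common way to reduce such transport recordings to something a shaker table can replay is to estimate the acceleration signal's power spectral density. A minimal sketch on synthetic data (a real workflow would use calibrated accelerometer records, windowing, and Welch averaging rather than a single plain DFT):

```python
import cmath, math

def psd(samples, fs):
    """Single-sided power spectral density of an acceleration record via a
    plain DFT over one window; returns (frequency_Hz, power) pairs."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]                  # drop the DC/gravity offset
    out = []
    for k in range(1, n // 2):
        X = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        out.append((k * fs / n, 2.0 * abs(X) ** 2 / (fs * n)))
    return out

# Synthetic record: a 12 Hz road-vibration tone sampled at 200 Hz for 2 s.
fs, f0 = 200.0, 12.0
record = [0.5 * math.sin(2.0 * math.pi * f0 * t / fs) for t in range(400)]
peak_freq = max(psd(record, fs), key=lambda p: p[1])[0]   # dominant frequency
```

The resulting spectrum is exactly the kind of summary used to pick between replaying the raw record and driving a random-vibration profile on a single- or multi-axial shaker.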
26

Miller, David Michael. "Developing a procedure to identify parameters for calibration of a vissim model." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Paksarsawan, Sompong. "The development of queuing simulation procedures for traffic in Bangkok." Thesis, University of Leeds, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Matias, Teresa do Rosario Senos. "Approximate procedures for simulation and synthesis of nonideal separation systems." Thesis, University of Edinburgh, 1997. http://hdl.handle.net/1842/15283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Simulation and synthesis of nonideal separation systems is computationally intensive. The main reasons for this are the time used in the calculation of physical properties, which cannot be assumed constant throughout the calculation, and the elaborate methods required for the full rigorous simulation and design of distillation units. The present work looks at two different ways of reducing computing time in steady-state simulation and in synthesis of nonideal separation systems:
• Use of approximate models for physical property calculation.
• Use of 'shortcut' procedures, which are thermodynamically rigorous, in simulation and synthesis of nonideal distillation.
Approximate models are derived for the liquid activity coefficient and for relative volatilities within a simplified flash unit. Liquid activity coefficient models include a Margules-like equation generalised for a multicomponent mixture and other equations of the form of rational functions. They are tested with several nonideal ternary mixtures and it is shown how their behaviour changes across the ternary composition diagram. The development of simplified flash units with approximate physical properties is done in a dual level flowsheeting environment. One level is used to solve the material balance assuming given fixed relative volatilities. The other level approximates the physical property values based on rigorous bubble point data obtained from a rigorous physical property package, using an 'ideal' correction to calculate the vapour liquid equilibrium conditions. It is shown how the two levels can be used in different arrangements, by converging them simultaneously or one within the other. The performance of the dual level flowsheeting arrangements is tested using the Cavett problem structure for several mixtures and compared against the conventional method where the flash is performed directly by the rigorous physical property package.
Finally, a rigorous shortcut procedure has been developed for designing nonideal distillation processes. The procedure is based on a nonideal variation of the Fenske equation with rigorous physical properties, using an iterative method. The procedure is implemented in a package for automated synthesis incorporating heat integration. An example case is studied and the results obtained in the synthesis are compared with a full rigorous simulation of the same process. It is shown for the first time how a rigorous shortcut procedure can be used in synthesis to produce results that consider heat integration in the initial stages of design, within a reasonable amount of time.
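As background to the shortcut design described in this abstract, the classic (ideal-property) Fenske equation can be sketched in a few lines. The component data below are hypothetical, and the thesis's nonideal variation, which iterates with rigorous physical properties, is not reproduced here:

```python
import math

def fenske_min_stages(xd_lk, xd_hk, xb_lk, xb_hk, alpha_top, alpha_bottom):
    """Classic Fenske equation: minimum number of equilibrium stages
    at total reflux, using the geometric-mean relative volatility."""
    alpha_avg = math.sqrt(alpha_top * alpha_bottom)
    separation = (xd_lk / xd_hk) * (xb_hk / xb_lk)
    return math.log(separation) / math.log(alpha_avg)

# Hypothetical sharp split: 95% of the light key overhead,
# 95% of the heavy key in the bottoms.
n_min = fenske_min_stages(0.95, 0.05, 0.05, 0.95, alpha_top=2.2, alpha_bottom=2.6)
print(round(n_min, 2))  # about 6.75 minimum stages
```

The nonideal variation replaces the constant relative volatilities with values recomputed from a rigorous physical property package at each iteration.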
29

Malone, Gwendolyn Joy. "Ranking and Selection Procedures for Bernoulli and Multinomial Data." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/7603.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Ranking and Selection procedures have been designed to select the best system from a number of alternatives, where the best system is defined by the given problem. The primary focus of this thesis is on experiments where the data are from simulated systems. In simulation ranking and selection procedures, four classes of comparison problems are typically encountered. We focus on two of them: Bernoulli and multinomial selection. Therefore, we wish to select the best system from a number of simulated alternatives where the best system is defined as either the one with the largest probability of success (Bernoulli selection) or the one with the greatest probability of being the best performer (multinomial selection). We focus on procedures that are sequential and use an indifference-zone formulation wherein the user specifies the smallest practical difference he wishes to detect between the best system and other contenders. We apply fully sequential procedures due to Kim and Nelson (2004) to Bernoulli data for terminating simulations, employing common random numbers. We find that significant savings in total observations can be realized for two to five systems when we wish to detect small differences between competing systems. We also study the multinomial selection problem. We offer a Monte Carlo simulation of the Bechhofer and Kulkarni (1984) MBK multinomial procedure and provide extended tables of results. In addition, we introduce a multi-factor extension of the MBK procedure. This procedure allows for multiple independent factors of interest to be tested simultaneously from one data source (e.g., one person will answer multiple independent surveys) with significant savings in total observations compared to the factors being tested in independent experiments (each survey is run with separate focus groups and results are combined after the experiment). 
Another multi-factor multinomial procedure is also introduced, as an extension to the MBG procedure due to Bechhofer and Goldsman (1985, 1986). This procedure performs better than any other procedure to date for the multi-factor multinomial selection problem and should always be used whenever table values for the truncation point are available.
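The multinomial selection goal above can be illustrated with a generic single-stage Monte Carlo baseline: take a fixed number of multinomial trials and declare the category with the largest count the winner. This is only a sketch of the selection goal, not the Bechhofer-Kulkarni (MBK) curtailment rule itself, and the cell probabilities are hypothetical:

```python
import random

def prob_correct_selection(p, n_trials, n_reps=5000, seed=3):
    """Monte Carlo estimate of P(correct selection) for single-stage
    multinomial selection: take n_trials multinomial observations and
    declare the category with the largest count the winner (ties are
    broken at random). A generic baseline, not the MBK rule."""
    rng = random.Random(seed)
    best = max(range(len(p)), key=lambda i: p[i])
    correct = 0
    for _ in range(n_reps):
        counts = [0] * len(p)
        for _ in range(n_trials):
            u, acc = rng.random(), 0.0
            for i, pi in enumerate(p):
                acc += pi
                if u < acc:
                    counts[i] += 1
                    break
            else:
                counts[-1] += 1  # guard against floating-point round-off
        top = max(counts)
        winners = [i for i, c in enumerate(counts) if c == top]
        correct += rng.choice(winners) == best
    return correct / n_reps

# Hypothetical cell probabilities: the best system wins a trial 50% of the time.
pcs = prob_correct_selection([0.5, 0.3, 0.2], n_trials=25)
```

Curtailment-based procedures such as MBK improve on this baseline by stopping early once the leader can no longer be caught.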
30

Duong, Duc Quang. "Simulation of the error control procedures in the Xpress Transport Protocol." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq26013.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Cardoso, Marco Antônio. "Development and application of reduced-order modeling procedures for reservoir simulation /." May be available electronically:, 2009. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Kiekhaefer, Andrew Paul. "Simulation ranking and selection procedures and applications in network reliability design." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/998.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents three novel contributions to the application as well as development of ranking and selection procedures. Ranking and selection is an important topic in the discrete event simulation literature concerned with the use of statistical approaches to select the best or set of best systems from a set of simulated alternatives. Ranking and selection is comprised of three different approaches: subset selection, indifference zone selection, and multiple comparisons. The methodology addressed in this thesis focuses primarily on the first two approaches: subset selection and indifference zone selection. Our first contribution regards the application of existing ranking and selection procedures to an important body of literature known as system reliability design. If we are capable of modeling a system via a network of arcs and nodes, then the difficult problem of determining the most reliable network configuration, given a set of design constraints, is an optimization problem that we refer to as the network reliability design problem. In this thesis, we first present a novel solution approach for one type of network reliability design optimization problem where total enumeration of the solution space is feasible and desirable. This approach focuses on improving the efficiency of the evaluation of system reliabilities as well as quantifying the probability of correctly selecting the true best design based on the estimation of the expected system reliabilities through the use of ranking and selection procedures, both of which are novel ideas in the system reliability design literature. Altogether, this method eliminates the guess work that was previously associated with this design problem and maintains significant runtime improvements over the existing methodology. 
Our second contribution regards the development of a new optimization framework for the network reliability design problem that is applicable to any topological and terminal configuration as well as solution sets of any size. This framework focuses on improving the efficiency of the evaluation and comparison of system reliabilities, while providing more robust performance and a more user-friendly procedure in terms of input parameter selection. This is accomplished through the introduction of two novel statistical sampling procedures based on the concepts of ranking and selection: Sequential Selection of the Best Subset and Duplicate Generation. Altogether, this framework achieves the same convergence and solution quality as the baseline cross-entropy approach, but achieves runtime and sample size improvements on the order of 450% to 1500% over the example networks tested. Our final contribution regards the development and extension of the general ranking and selection literature with novel procedures for the problem concerned with the selection of the k-best systems, where system means and variances are unknown and potentially unequal. We present three new ranking and selection procedures: a subset selection procedure, an indifference zone selection procedure, and a combined two-stage subset selection and indifference zone selection procedure. All procedures are backed by proofs of the theoretical guarantees as well as empirical results on the probability of correct selection. We also investigate the effect of various parameters on each procedure's overall performance.
33

Sarpong, Abeam Danso. "Tolerance intervals for variance component models using a Bayesian simulation procedure." Thesis, Nelson Mandela Metropolitan University, 2013. http://hdl.handle.net/10948/d1021025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The estimation of variance components serves as an integral part of the evaluation of variation and is of interest and required in a variety of applications (Hugo, 2012). Estimation of the among-group variance components is often desired for quantifying the variability and effectively understanding these measurements (Van Der Rijst, 2006). The methodology for determining Bayesian tolerance intervals for the one-way random effects model was originally proposed by Wolfinger (1998) using both informative and non-informative prior distributions (Hugo, 2012); Wolfinger (1998) also provided relationships with frequentist methodologies. From a Bayesian point of view, it is important to investigate and compare the effect on coverage probabilities when negative variance components are either replaced by zero or completely disregarded from the simulation process. This research presents a simulation-based approach for determining Bayesian tolerance intervals in variance component models under these two treatments of negative variance components. The approach handles different kinds of tolerance intervals in a straightforward fashion. It makes use of a computer-generated sample (Monte Carlo process) from the joint posterior distribution of the mean and variance parameters to construct a sample from other relevant posterior distributions. This research uses only non-informative Jeffreys' prior distributions and three Bayesian simulation methods. Comparative results for the different tolerance intervals obtained under the two treatments of negative variance components are investigated and discussed.
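The zero-replacement versus discard rules contrasted in this abstract can be sketched as follows. The sums of squares and degrees of freedom below are hypothetical, and this is only the spirit of a Wolfinger (1998)-style posterior simulation for a balanced one-way model, not the thesis's exact algorithm:

```python
import random

def chi2_draw(rng, df):
    # Chi-square(df) as a sum of squared standard normals (fine for small df).
    return sum(rng.gauss(0, 1) ** 2 for _ in range(df))

def sigma_b_posterior_draws(ss_between, df_between, ss_error, df_error,
                            n_per_group, n_draws, rule="zero", seed=1):
    """Monte Carlo draws of the between-group variance for a balanced
    one-way random effects model. Each raw draw may be negative;
    rule="zero" replaces negatives by 0, while rule="discard" drops
    them and redraws (the two options compared in the abstract)."""
    rng = random.Random(seed)
    draws = []
    while len(draws) < n_draws:
        sigma2_e = ss_error / chi2_draw(rng, df_error)
        sigma2_b = (ss_between / chi2_draw(rng, df_between) - sigma2_e) / n_per_group
        if sigma2_b < 0:
            if rule == "zero":
                draws.append(0.0)
            # rule == "discard": skip this draw entirely
        else:
            draws.append(sigma2_b)
    return draws

# Hypothetical ANOVA summaries: 5 groups of 5 observations each.
draws = sigma_b_posterior_draws(120.0, 4, 90.0, 20, n_per_group=5, n_draws=1000)
```

Tolerance limits would then be computed from quantiles of distributions built from such draws; the two rules yield different posterior mass at zero and hence different coverage.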
34

Donovan, Marty Edwin. "AN AUTOMATED PROCEDURE FOR STOCHASTIC SIMULATION INPUT MODELING WITH BEZIER DISTRIBUTIONS." NCSU, 1998. http://www.lib.ncsu.edu/theses/available/etd-19980908-212432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

As a means of handling the problem of input modeling for stochastic simulation experiments, we build upon previous work of Wagner and Wilson using Bézier distributions. Wagner and Wilson proposed a likelihood ratio test to determine how many control points (that is, parameters) a Bézier distribution should have to adequately model sample data. In this thesis, we extend this input-modeling methodology in two directions. First, we establish the asymptotic properties of the Likelihood Ratio Test (LRT) as the sample size tends to infinity. The asymptotic analysis applies only to maximum likelihood estimation with known endpoints and not to any other parameter estimation procedure, nor to situations in which the endpoints of the target distribution are unknown. Second, we perform a comprehensive Monte Carlo evaluation of this procedure for fitting data together with other estimation procedures based on least squares and minimum L norm estimation. In the Monte Carlo performance evaluation, several different goodness-of-fit measures are formulated and used to evaluate how well the fitted cumulative distribution function (CDF) compares to the empirical CDF and to the actual CDF from which the samples came. The Monte Carlo experiments show that in addition to working well with the method of maximum likelihood when the endpoints of the target distribution are known, the LRT also works well with minimum L norm estimation and least squares estimation; moreover, the LRT works well with suitably constrained versions of these three estimation methods when the endpoints are unknown and must also be estimated.

35

Stanley, Clifford R. "Comparison of data classification procedures in applied geochemistry using Monte Carlo simulation." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/29430.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In geochemical applications, data classification commonly involves 'mapping' continuous variables into discrete descriptive categories, and is often achieved using thresholds to define specific ranges of data as separate groups, which can then be compared with other categorical variables. This study compares several classification methods used in applied geochemistry to select thresholds and discriminate between populations or to recognize anomalous observations. The comparisons were made using Monte Carlo simulation to evaluate how well different techniques perform on different data set structures. A comparison of maximum likelihood parameter estimates of a mixture of normal distributions using class interval frequencies versus raw data was undertaken to study the quality of the corresponding results. The more time-consuming raw data approach produces optimal parameter estimates, while the more rapid class interval approach is the one in common use. Results show that, provided there are more than 50 observations per distribution and (on average) 10 observations per class interval, the maximum likelihood parameter estimates from the two methods are practically indistinguishable. Univariate classification techniques evaluated in this study include the 'mean plus 2 standard deviations', the '95th percentile', the gap statistic, and probability plots. Results show that the 'mean plus 2 standard deviations' and '95th percentile' approaches are inappropriate for most geochemical data sets. The probability plot technique classifies mixtures of normal distributions better than the gap statistic; however, the gap statistic may be used as a discordancy test to reveal the presence of outliers. Multivariate classification using the background characterization approach was simulated using several different functions to describe the variation in the background distribution.
Comparisons of principal components, ordinary least squares regression and reduced major axis regression indicate that reduced major axis regression and principal components are not only consistent with assumptions about geochemical data, but are less sensitive to varying degrees of data set truncation than is ordinary least squares regression. Furthermore, correcting the descriptive statistics of a truncated data set and calculating the background functions using these statistics produces residuals and scores which are predictable and thus can be distinguished easily from residuals and scores calculated for data from another distribution.
Faculty of Science; Department of Earth, Ocean and Atmospheric Sciences; Graduate.
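The contrast drawn in this abstract between ordinary least squares and reduced major axis (RMA) regression comes down to a simple slope formula: RMA treats both variables symmetrically. A minimal sketch with toy data (not the thesis's geochemical data):

```python
import math

def _mean(v):
    return sum(v) / len(v)

def _sd(v):
    m = _mean(v)
    return math.sqrt(sum((u - m) ** 2 for u in v) / (len(v) - 1))

def _corr(x, y):
    mx, my = _mean(x), _mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def rma_fit(x, y):
    """Reduced major axis line: slope = sign(r) * sd(y) / sd(x).

    Unlike ordinary least squares, RMA treats x and y symmetrically,
    which suits variables that both carry measurement error."""
    slope = math.copysign(_sd(y) / _sd(x), _corr(x, y))
    intercept = _mean(y) - slope * _mean(x)
    return slope, intercept

slope, intercept = rma_fit([1, 2, 3, 4], [2, 4, 6, 8])  # slope ≈ 2, intercept ≈ 0
```

Residuals from such a background line are then the scores used to flag observations that depart from the background population.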
36

Nitsche, Philippe. "Safety-critical scenarios and virtual testing procedures for automated cars at road intersections." Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/34433.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis addresses the problem of road intersection safety with regard to a mixed population of automated vehicles and non-automated road users. The work derives and evaluates safety-critical scenarios at road junctions, which can pose a particular safety problem involving automated cars. A simulation and evaluation framework for car-to-car accidents is presented and demonstrated, which allows examining the safety performance of automated driving systems within those scenarios. Given the recent advancements in automated driving functions, one of the main challenges is safe and efficient operation in complex traffic situations such as road junctions. There is a need for comprehensive testing, either in virtual testing environments or on real-world test tracks. Since it is unrealistic to cover all possible combinations of traffic situations and environment conditions, the challenge is to find the key driving situations to be evaluated at junctions. Against this background, a novel method to derive critical pre-crash scenarios from historical car accident data is presented. It employs k-medoids to cluster historical junction crash data into distinct partitions and then applies the association rules algorithm to each cluster to specify the driving scenarios in more detail. The dataset used consists of 1,056 junction crashes in the UK, which were exported from the in-depth On-the-Spot database. The study resulted in thirteen crash clusters for T-junctions, and six crash clusters for crossroads. Association rules revealed common crash characteristics, which were the basis for the scenario descriptions. As a follow-up to the scenario generation, the thesis further presents a novel, modular framework to transfer the derived collision scenarios to a sub-microscopic traffic simulation environment. 
The software CarMaker is used with MATLAB/Simulink to simulate realistic models of vehicles, sensors and road environments and is combined with an advanced Monte Carlo method to obtain a representative set of parameter combinations. The analysis of different safety performance indicators computed from the simulation outputs reveals collision and near-miss probabilities for selected scenarios. The usefulness and applicability of the simulation and evaluation framework is demonstrated for a selected junction scenario, where the safety performance of different in-vehicle collision avoidance systems is studied. The results show that the number of collisions and conflicts were reduced to a tenth when adding a crossing and turning assistant to a basic forward collision avoidance system. Due to its modular architecture, the presented framework can be adapted to the individual needs of future users and may be enhanced with customised simulation models. Ultimately, the thesis leads to more efficient workflows when virtually testing automated driving at intersections, as a complement to field operational tests on public roads.
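The clustering step of the scenario-derivation method above can be sketched with a plain alternating k-medoids (a simplified cousin of PAM). The real study clustered 1,056 categorical crash records with an appropriate dissimilarity measure, whereas this toy uses one-dimensional points:

```python
import random

def k_medoids(points, k, dist, n_iter=50, seed=0):
    """Plain alternating k-medoids (a simplified PAM): assign each point
    to its nearest medoid, then move each medoid to the cluster member
    that minimises the within-cluster distance sum."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    clusters = [[] for _ in range(k)]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for i, p in enumerate(points):
            nearest = min(range(k), key=lambda m: dist(p, points[medoids[m]]))
            clusters[nearest].append(i)
        new_medoids = [
            min(c, key=lambda i: sum(dist(points[i], points[j]) for j in c))
            if c else medoids[ci]
            for ci, c in enumerate(clusters)
        ]
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return medoids, clusters

points = [1, 2, 3, 50, 51, 52]  # two obvious one-dimensional groups
medoids, clusters = k_medoids(points, 2, lambda a, b: abs(a - b))
```

Medoids (unlike k-means centroids) are always actual records, which is what makes the method usable on categorical crash data.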
37

Campana, Riccardo. "NB-IoT synchronization procedure analysis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22583/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The Third Generation Partnership Project (3GPP), recognizing the importance of IoT, has introduced a number of key features to support it since Release 13. In particular, since 2017 the so-called NB-IoT has been launched, providing progressively improved support for Low Power Wide Area Networks (LPWAN). While terrestrial technologies will play a key role in the provision of the NB-IoT service, satellite networks can play a complementary role thanks to their very wide coverage area and short service deployment time. Within this framework, the aim of this thesis is to analyze the feasibility of integrating NB-IoT technology with satellite communication (SatCom) systems, focusing in particular on the assessment of the downlink synchronization procedure in NB-IoT SatCom systems. To this end, this work investigates the issues introduced by the integration between the NB-IoT terrestrial network and Non-Terrestrial Networks (NTN). Furthermore, in order to find possible solutions to harmonize their coexistence, the state of the art of satellite channel effect mitigation techniques is analyzed. Finally, the implementation of a MATLAB simulator for the cell synchronization procedure is presented, as a first step toward understanding the complete set of NB-IoT procedures.
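The core of any downlink synchronization procedure like the one studied here is timing acquisition by correlating the received samples against a known sequence. A generic sliding-correlation toy (not the actual NB-IoT NPSS waveform or the thesis's MATLAB simulator):

```python
def cross_correlate_offset(rx, known):
    """Return the lag where the known synchronization sequence best
    matches the received samples (generic sliding correlation)."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(rx) - len(known) + 1):
        score = sum(rx[lag + i] * known[i] for i in range(len(known)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

known = [1, -1, 1, 1, -1]          # hypothetical sync sequence
rx = [0, 0, 0] + known + [0, 0]    # sequence embedded at offset 3
offset = cross_correlate_offset(rx, known)
print(offset)  # 3
```

In the satellite case, the large Doppler shifts and propagation delays of the NTN channel distort this correlation peak, which is precisely why the procedure must be reassessed.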
38

SALAS, EDGARD UBALDO GUILLEN. "A COMPUTER-BASED PROCEDURE FOR THE ANALYSIS AND SIMULATION OF BOND GRAPHS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2000. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=7478@1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Funding: Coordenação de Aperfeiçoamento do Pessoal de Ensino Superior.
In this work a procedure for the computer generation, by inspection, of the state equations and transfer functions associated with a bond graph, in both literal and numerical form, is discussed. The procedure consists of the identification of causal paths and the determination of their contributions to the equations. The mathematical background and the implementation of the procedure in a computational code are also presented. The application of the procedure is illustrated with examples; the routines for data entry and for causal path and loop identification, and the variables used, are described, as well as the form of presentation of the results.
39

Lee, Feng-ling. "A Comparison of Rank and Bootstrap Procedures for Completely Randomized Designs with Jittering." DigitalCommons@USU, 1987. https://digitalcommons.usu.edu/etd/7019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper discusses the results of a computer simulation investigating the effect of jittering, which is used to simulate measurement error. In addition, the classical F ratio, the bootstrap F, and the F for ranked data are compared. Empirical powers and p-values suggest that the bootstrap is a good and robust procedure, while the rank procedure seems too liberal compared to the classical F ratio.
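A bootstrap F-test for a completely randomized design can be sketched as follows. The data are hypothetical, and resampling the pooled, group-centred observations is one common way to impose the null hypothesis, not necessarily the exact scheme of this thesis:

```python
import random

def f_statistic(groups):
    """Classical one-way ANOVA F ratio."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ssw = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    if ssw == 0:
        return float("inf")  # degenerate resample: no within-group spread
    return (ssb / (k - 1)) / (ssw / (n - k))

def bootstrap_f_pvalue(groups, n_boot=999, seed=7):
    """Bootstrap p-value: resample the pooled, group-centred
    observations (H0 imposed) and count how often the resampled
    F ratio reaches the observed one."""
    rng = random.Random(seed)
    f_obs = f_statistic(groups)
    means = [sum(g) / len(g) for g in groups]
    pooled = [x - m for g, m in zip(groups, means) for x in g]
    count = 0
    for _ in range(n_boot):
        resampled = [[rng.choice(pooled) for _ in g] for g in groups]
        if f_statistic(resampled) >= f_obs:
            count += 1
    return (count + 1) / (n_boot + 1)

# Three clearly separated hypothetical treatment groups.
p_value = bootstrap_f_pvalue([[1, 2, 1, 2, 1], [5, 6, 5, 6, 5], [9, 8, 9, 8, 9]])
```

Jittering would enter this setup by adding small random perturbations to each observation before the test is applied.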
40

Hull, Stephen Robert. "The improvement of an automatic procedure for the digital simulation of hydraulic systems." Thesis, University of Bath, 1986. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.377780.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Ramakrishnan, Vijaya. "Use of Simulation for Tracheostomy Care, a Low Volume, High Risk Nursing Procedure." ScholarWorks, 2018. https://scholarworks.waldenu.edu/dissertations/4981.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Often, education regarding low-volume, high-risk procedures, like tracheostomy care, is neglected. Lack of experience, skills, and human resources can lead to decreased confidence levels, diminished quality of care, and potentially an adverse event. The purpose of this DNP project was to prepare simulation-based education on the tracheostomy procedure and provide hands-on education to bedside nurses. The project answered the question: To what extent will a simulation-based teaching method adequately prepare staff nurses in a post-acute surgical unit to perform this high-risk, low-volume procedure? The Johns Hopkins evidence-based model was used to assist in translating the practice change process. The International Nursing Association for Clinical Simulation and Learning standards were used to design simulation scenarios. Surgical acute care nurses (n = 35), including day and night shift nurses, new graduates, and experienced nurses, participated. Groups of five to eight nurses took part in a two-hour simulation session at the hospital simulation center. Pre- and post-surveys on confidence levels, and National League for Nursing evaluation tool data on educational practices and simulation designs, were collected from all participants. Paired t-test statistics showed a significant increase in confidence level from pre- to post-education (p < .001). By preventing complications and improving nursing staff's confidence, with a significant impact on patient care, the project may contribute to positive social change.
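The pre/post analysis reported above reduces to a paired t statistic on the per-nurse differences. A minimal sketch with hypothetical Likert-style confidence ratings (not the project's data):

```python
import math

def paired_t(pre, post):
    """Paired t statistic on the post - pre differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical 1-5 confidence ratings for eight nurses, before and after.
pre = [2, 3, 2, 3, 2, 3, 2, 3]
post = [4, 4, 5, 4, 4, 5, 4, 4]
t = paired_t(pre, post)
print(round(t, 2))  # t = 7.0
```

With n - 1 = 7 degrees of freedom, a t this large corresponds to p < .001, the kind of result the abstract reports.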
42

Wu, Jin. "CRASHWORTHINESS SIMULATION OF ROADSIDE SAFETY STRUCTURES WITH DEVELOPMENT OF MATERIAL MODEL AND 3-D FRACTURE PROCEDURE." University of Cincinnati / OhioLINK, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=ucin971273656.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lee, Eunjung. "Equating multidimensional tests under a random groups design: a comparison of various equating procedures." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/5011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The purpose of this research was to compare the equating performance of various equating procedures for multidimensional tests. To examine these procedures, simulated data sets were generated under a multidimensional item response theory (MIRT) framework. Both unidimensional and multidimensional IRT-based equating procedures were examined, in addition to traditional equating procedures. Specifically, the performance of the following six procedures under the random groups design was compared: (1) unidimensional IRT observed score equating, (2) unidimensional IRT true score equating, (3) full MIRT observed score equating, (4) unidimensionalized MIRT observed score equating, (5) unidimensionalized MIRT true score equating, and (6) equipercentile equating. Four factors (test length, sample size, form difficulty differences, and correlation between dimensions) were expected to affect equating performance, and their impacts were investigated by creating two conditions for each factor: long vs. short test, large vs. small sample size, some vs. no form differences, and high vs. low correlation between dimensions. This simulation study, with 50 replications, yielded several patterns of equating performance for the six procedures across the simulation conditions.
The following findings are notable: (1) the full MIRT procedure provided more accurate equating results (i.e., a smaller degree of error) than the other equating procedures, especially when the correlation between dimensions was low; (2) the equipercentile procedure was more likely than the IRT methods to yield a larger amount of random error and overall error across all conditions; (3) equating for multidimensional tests was more accurate when form differences were small, sample size was large, and test length was long; (4) even when multidimensional tests were used (i.e., the unidimensionality assumptions were violated), the unidimensional IRT procedures were still found to yield quite accurate equating results; and (5) whether an equating procedure is an observed or a true score procedure did not seem to yield any differences in equating results. Building upon these findings, some theoretical and practical implications are discussed, and future research directions are suggested to strengthen the generalizability of the current findings. Given that only a handful of studies have been conducted in the MIRT literature, such research is expected to examine the specific conditions under which these findings hold, thereby leading to practical guidelines that can be used in various operational testing situations.
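Of the procedures compared in this abstract, equipercentile equating is the simplest to illustrate: a score on form X maps to the form-Y score with the same percentile rank. A nearest-rank toy version with hypothetical score data (operational equating adds presmoothing and interpolation):

```python
def equipercentile_equate(x, scores_x, scores_y):
    """Toy equipercentile equating for a random groups design: find the
    percentile rank of x on form X, then return the form-Y score at the
    same rank (nearest-rank version, no smoothing)."""
    sx = sorted(scores_x)
    sy = sorted(scores_y)
    rank = sum(s <= x for s in sx) / len(sx)
    idx = min(len(sy) - 1, max(0, int(round(rank * len(sy))) - 1))
    return sy[idx]

# Hypothetical forms: form Y is uniformly 5 points easier than form X,
# so an X-score should equate to roughly itself plus 5.
form_x = list(range(1, 101))
form_y = [s + 5 for s in form_x]
equated = equipercentile_equate(50, form_x, form_y)
print(equated)  # 55
```

The IRT-based procedures in the study instead equate through estimated item parameters, which is what lets them exploit (or suffer from) the dimensionality assumptions.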
44

Van, Rensburg Johann Francois. "Developing ESCO procedures for large telecommunication facilities using novel simulation techniques / J.F. van Rensburg." Thesis, North-West University, 2006. http://hdl.handle.net/10394/1693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Zou, Feng 1977. "Real-time trajectory optimization and air traffic control simulation for noise abatement approach procedures." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/17834.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2004.
Includes bibliographical references (p. 89-90).
Aircraft noise is a major obstacle to the growth of aviation. This thesis presents an adaptive onboard real-time optimization algorithm and an Air Traffic Control simulation model that can minimize aircraft approach noise while meeting air traffic control targets and restrictions. The adaptive real-time optimization algorithm uses dynamic programming, nonlinear optimization, and receding horizon control to generate approach procedures. The resulting noise abatement trajectories compensate for environmental uncertainties, provide more flexibility to air traffic controllers and pilots, and improve airport efficiency while lowering community noise. The Air Traffic Control simulation model simulates a fleet of aircraft flying noise abatement approaches. Three different status displays are tested and compared in the simulation, and the optimal displays for controllers are explored in the thesis.
by Feng Zou.
S.M.
46

Zelený, Jan. "Realistická krajina s vegetací." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236670.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Today there is enough rendering power to draw more than simple indoor scenes, and it can produce very realistic images of landscapes with vegetation. Moreover, there are new sophisticated methods for generating such landscapes and for simulating plant ecosystems. This text explains several algorithms for generating landscape and vegetation, and methods for rendering them interactively.
47

Bosché, Kerry N. "An empirical evaluation of a factor effects screening procedure for exploring complex simulation models." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Jun%5FBosche.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S. in Applied Science (Operations Research))--Naval Postgraduate School, June 2006.
Thesis Advisor(s): Susan M. Sanchez. "June 2006." Includes bibliographical references (p. 33-34). Also available in print.
48

Bosché, Kerry N. "An empirical evaluation of a factor effects screening procedure for exploring complex simulation models." Thesis, Monterey California. Naval Postgraduate School, 2006. http://hdl.handle.net/10945/2788.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Screening experiments are procedures designed to identify the most important factors in simulation models. Previously proposed one-stage procedures such as sequential bifurcation (SB) and controlled sequential bifurcation (CSB) require factor effects to be arranged according to estimated sign or magnitude prior to screening. FF-CSB is a two-stage screening procedure for simulation experiments proposed by Sanchez et al. (2005) which uses an efficient fractional factorial experiment to estimate factor effects automatically, removing the need for pre-estimation. Empirical results show that FF-CSB classifies factor effects as well as CSB in fewer runs when factors are only grouped by their sign (positive or negative). In theory, the procedure can achieve more efficient run times when factors are also sorted by estimated effect after the first stage. This analysis tests the efficiency and performance characteristics of a sorted FF-CSB procedure under a variety of conditions and finds that the procedure classifies factors as well as unsorted FF-CSB with significant improvement in run times. Additionally, various model- and user-determined scenarios are tested in an initial attempt to parameterize run times against parameters known or controlled by the modeler. Further experimentation is also suggested.
Ensign, United States Navy
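The first-stage idea behind FF-CSB, estimating main effects from a two-level factorial experiment and then ordering factors by estimated magnitude before bifurcation, can be sketched as follows. The response function and its effect coefficients are made up for illustration; a real application would use a fractional (not full) factorial design and a stochastic simulation response:

```python
from itertools import product

def response(x):
    """Stand-in for a simulation model: a linear response in three factors."""
    true_effects = [4.0, -0.5, 2.0]   # hypothetical factor coefficients
    return sum(e * xi for e, xi in zip(true_effects, x))

def estimate_main_effects(k=3):
    """Estimate each factor's main effect from a full 2^k factorial design."""
    designs = list(product([-1, 1], repeat=k))
    ys = [response(x) for x in designs]
    effects = []
    for j in range(k):
        hi = [y for x, y in zip(designs, ys) if x[j] == 1]
        lo = [y for x, y in zip(designs, ys) if x[j] == -1]
        effects.append((sum(hi) - sum(lo)) / len(hi))  # mean(high) - mean(low)
    return effects

effects = estimate_main_effects()
# Sort factors by estimated magnitude -- the "sorted" variant studied here.
order = sorted(range(3), key=lambda j: -abs(effects[j]))
print(effects, order)  # [8.0, -1.0, 4.0] [0, 2, 1]
```

With a linear, noise-free response each estimated main effect is exactly twice the coefficient; in a stochastic simulation the estimates would carry sampling error, which is what the second (sequential bifurcation) stage is designed to handle.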
49

Vásquez, Chicata Luis Fernando Gonzalo. "Computational procedure for the estimation of pile capacity including simulation of the installation process /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004390.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Cherif, Mohamed Raouf. "Theories et procedures numeriques visant a ameliorer l'efficacite du calcul par elements finis." Paris 6, 1986. http://www.theses.fr/1986PA066627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The work we present evaluates theories and numerical procedures intended to improve finite-element analysis of problems posed by geometric singularities (numerical problems of accuracy and convergence). In the first part, we studied the possibilities of a finer modeling (relative to classical modelings) built on a second-order formulation, making it possible to overcome the problems noted above. In the second part, we developed transition elements of differing shape and refinement, based on the use of a method for transforming stiffness matrices. In the third part, we tested a procedure for modifying the assembled global stiffness matrix in order to handle a diversity of profiles on a single standard structure. In the fourth part, we carried out theoretical and numerical analyses, compared with an experimental analysis of a practical industrial model of an extrusion die with various profiles, in bending.

To the bibliography