
Dissertations / Theses on the topic 'Human Simulations'

1

Mufti, H. (Haseeb). "Human body communication performance simulations." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201606092482.

Abstract:
Human Body Communication (HBC) is a novel communication method in which devices use the human body as a transmission medium. The idea stems from wireless biomedical monitoring: on-body sensor nodes can monitor vital signs and use the body itself as the transmission channel, which is convenient for long-duration clinical monitoring while giving the user more mobility and freedom. In this thesis, the IEEE 802.15.6-2012 physical (PHY) layer for HBC was simulated. The simulation model follows the standard's requirements and processes, with the human body as the transmission medium. MATLAB was used as the simulation platform, and the constants and variables were taken from the IEEE 802.15 working group for wireless personal area networks (WPANs). The transmitter and receiver models were taken from the standard, modified so that the simulations cover the PHY layer only. The simulations account for the dielectric properties of the outer layer of the human body: the dielectric values for human skin were recorded and used in the mathematical calculations. The work presents a transmitter and receiver architecture for human body communication, with a minimum data rate of 164 kbps and a transmitter designed around a 21 MHz centre frequency, and it produced noteworthy results. The channel models used in the simulator are the HBC channel and the AWGN (additive white Gaussian noise) channel. When the signal was passed through the AWGN channel, noise was added uniformly over the signal, while in the HBC channel the signal strength is directly proportional to the transceiver ground sizes.
In conclusion, the size of the ground terminals plays a critical role in the signal quality of the HBC simulator. The results in this thesis show that path loss is roughly linear with distance. Path loss was calculated for different parts of the body, with higher loss for structures containing more bone, and vice versa. Four factors were observed to have a strong impact on the HBC channel: the transceiver-to-transceiver distances in air and on the body, and the sizes of the two transceiver grounds. The size of the transmitter ground proved especially significant in the simulation results, and the signal strength changes markedly with any of these four characteristics. The simulation results show that the HBC channel deviates by 15 to 20 dB from the AWGN channel: the Eb/N0 required for a BER of 10^-3 is 10 to 11 dB for the AWGN channel but around 27 dB for HBC, a significant difference.
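The thesis's MATLAB simulator is not reproduced in this record, but the AWGN reference case it compares against can be sketched in a few lines. The following Python snippet is a hypothetical illustration, not the thesis code (the modulation and all parameter choices are assumptions of this sketch): it estimates the bit error rate of BPSK signalling over an AWGN channel by Monte Carlo, the same Eb/N0-versus-BER relationship the abstract cites.

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_bpsk_awgn(ebn0_db, n_bits=200_000):
    """Estimate BPSK bit error rate over an AWGN channel by Monte Carlo."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0            # map {0,1} -> {-1,+1}, so Eb = 1
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * ebn0))   # noise std for real-valued BPSK
    received = symbols + sigma * rng.standard_normal(n_bits)
    return np.mean((received > 0) != (bits == 1))

for ebn0_db in (0, 5, 10):
    print(f"Eb/N0 = {ebn0_db:2d} dB -> BER ~ {ber_bpsk_awgn(ebn0_db):.2e}")
```

Sweeping `ebn0_db` traces the familiar waterfall curve for the AWGN channel; per the abstract, the HBC channel shifts the required Eb/N0 for a given BER upward by roughly 15 to 20 dB.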
2

Engmo, Vidar. "Representation of Human Behavior in Military Simulations." Thesis, Norwegian University of Science and Technology, Department of Telematics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9798.

Abstract:

The purpose of this master thesis was to investigate the psychological and computational basis for human behavior representation (HBR) in military simulations and to identify problem areas of existing software agent frameworks that provide computer generated forces (CGF) with human-like cognitive abilities. The thesis identifies psychological properties that influence human cognition in an operational environment through a theoretical study of operational and cognitive psychology. These psychological properties are then connected to artificial intelligence through a theoretical study of agents and multi-agent systems, and form the foundation for identifying general HBR properties. The HBR properties are used as evaluation markers that form the basis for evaluating relevant agent frameworks, thereby visualizing their strengths and weaknesses. The problem areas of incorporating artificial intelligence into CGF are further concretized through the development of a demonstrator that interacts with a synthetic environment. The demonstrator is an implementation of a tank platoon in the agent framework Jadex; the synthetic environment is provided by VR-Forces, a product of MÄK technologies. The thesis draws a distinction between the conceptual structure of agent frameworks and their actual implementation. According to this thesis, it is the output of the agent framework that is the most important feature, not how the output came into being. Producing the correct output requires selecting the correct tools for the job, so the choice of an agent framework should be based on an evaluation of the simulation requirements. A large portion of the development time is consumed by the development of application and communication interfaces, a problem that results from lacking standardization and from the fact that most cognitive agent frameworks are experimental in nature.
In addition, the artificial intelligence (AI) in such simulations is often divided into levels, where the synthetic environment handles the low-level AI and the agent framework the high-level AI. Tight synchronization between low- and high-level AI is important if one wishes to create sensible behavior. The purpose of an agent framework in conjunction with CGF is thereby to ensure rapid development and testing of behavior models.

3

Starling, James Kendall. "Prioritizing unaided human search in military simulations." Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5622.

Abstract:
Approved for public release; distribution is unlimited.
Search and Target Acquisition (STA) in military simulations is the process of first identifying targets in a particular setting and then determining the probability of detection. This study focuses on the search aspect of STA, particularly with unaided vision. Current algorithms in combat models use an antiquated windshield-wiper search pattern when conducting search, and the studies used to derive these patterns relied on aided vision, such as binoculars or night-vision devices. Very little research has been conducted on unaided vision, particularly in urban environments. This study uses a data set from an earlier study at Fort Benning, GA, which captured the fixation points of 27 participants in simulated urban environments. Using a series of nonparametric tests, the study found strong evidence that search is driven by salient scene information and is not random. The proposed algorithm, which uses points of interest (POIs) as the salient scene information, showed promising results for predicting the initial direction of search from the empirical data. However, the best results were realized when breaking the field of regard (FOR) into a small number of fields of view (FOVs).
4

Singh, Meghendra. "Human Behavior Modeling and Calibration in Epidemic Simulations." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/87050.

Abstract:
Human behavior plays an important role in infectious disease epidemics: the preventive actions individuals choose can completely change the epidemic outcome. Computational epidemiologists usually employ large-scale agent-based simulations of human populations to study disease outbreaks and assess intervention strategies, but such simulations rarely take into account the decision-making process behind preventive behaviors. The absence of realistic agent behavior can undermine the reliability of the insights these simulations generate and may make them ill-suited for informing public health policy. In this thesis, we address this problem by developing a methodology to create and calibrate an agent decision-making model for a large multi-agent simulation in a data-driven way. Our method optimizes a cost vector associated with the various behaviors to match the behavior distributions observed in a detailed survey of human behaviors during influenza outbreaks. Our approach thus provides a data-driven way of incorporating decision making for agents in large-scale epidemic simulations.
Master of Science
In the real world, individuals can decide to adopt behaviors that reduce their chances of contracting a disease. For example, using hand sanitizer can reduce an individual's chances of getting infected by influenza. These behavioral decisions, when taken by many individuals in the population, can completely change the course of the disease, yet such decision-making is generally not considered in in-silico simulations of infectious diseases. In this thesis, we address this problem by developing a methodology to create and calibrate, in a data-driven way, a decision-making model that can be used by agents (i.e., synthetic representations of humans in simulations). Our method also finds a cost associated with such behaviors and matches the distribution of behavior produced in simulation with that observed in a survey. Our approach is a data-driven way of incorporating decision making for agents in large-scale epidemic simulations.
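The thesis calibrates a cost vector against a full multi-agent epidemic simulation; as a minimal sketch of the underlying idea (tune a behavior cost so that simulated adoption of a preventive behavior matches a surveyed fraction), the following Python toy model, with entirely hypothetical names, distributions, and parameters, bisects on a scalar cost:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_adoption(cost, risk, n_agents=50_000):
    """Fraction of agents adopting a preventive behavior under a logit
    choice rule: adopt when perceived benefit (risk) outweighs cost."""
    utility = risk - cost + rng.logistic(0.0, 0.25, n_agents)
    return np.mean(utility > 0.0)

def calibrate_cost(target_fraction, risk=0.5, lo=-2.0, hi=2.0, iters=40):
    """Bisect on the scalar cost so that the simulated adoption fraction
    matches the fraction observed in a behavior survey."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        # adoption decreases as cost rises
        if simulated_adoption(mid, risk) > target_fraction:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

cost = calibrate_cost(target_fraction=0.30)   # e.g. 30% adoption in the survey
print(round(simulated_adoption(cost, 0.5), 2))
```

The same pattern scales to a cost vector over several behaviors with a multivariate optimizer in place of bisection.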
5

Kaphle, Manindra. "Simulations of human movements through temporal discretization and optimization." Licentiate thesis, KTH, Mechanics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4585.

Abstract:

Study of physical phenomena by means of mathematical models is common in various branches of engineering and science. In biomechanics, modelling often involves studying human motion by treating the body as a mechanical system made of interconnected rigid links. Robotics deals with similar cases as robots are often designed to imitate human behavior. Modelling human movements is a complicated task and, therefore, requires several simplifications and assumptions. Available computational resources often dictate the nature and the complexity of the models. In spite of all these factors, several meaningful results are still obtained from the simulations.

One common problem encountered in real life is movement between known initial and final states in a pre-specified time. This presents a problem of dynamic redundancy, as several different trajectories can achieve the target state. Movements are mathematically described by differential equations, so modelling a movement involves solving these equations, together with an optimization that finds a cost-effective trajectory and the forces or moments required to produce it.

In this study, an algorithm developed in Matlab is used to study dynamics of several common human movements. The main underlying idea is based upon temporal finite element discretization, together with optimization. The algorithm can deal with mechanical formulations of varying degrees of complexity and allows precise definitions of initial and target states and constraints. Optimization is carried out using different cost functions related to both kinematic and kinetic variables.

Simulations show that different optimization criteria generally give different results. To arrive at a definite conclusion about which criterion is superior to the others, it is necessary to include more detailed features in the models and to incorporate more advanced anatomical and physiological knowledge. Nevertheless, the algorithm and the simplified models provide a platform that can be built upon to study more complex and reliable models.
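The Matlab algorithm itself is not part of this record. As a much-simplified sketch of the idea of temporal discretization plus optimization, the following Python snippet (a single coordinate and a minimum-acceleration cost, both choices made only for this illustration) discretizes time on a grid, fixes the initial and target states, and solves for the trajectory minimizing the summed squared accelerations:

```python
import numpy as np

N = 50                      # number of time steps (N + 1 grid points)
x0, xN = 0.0, 1.0           # start and target positions

# Second-difference operator over the full trajectory x_0 .. x_N:
# row i encodes the discrete acceleration x_{i-1} - 2 x_i + x_{i+1}.
D = np.zeros((N - 1, N + 1))
for i in range(1, N):
    D[i - 1, i - 1:i + 2] = [1.0, -2.0, 1.0]

# Boundary conditions: fixed endpoint positions and zero endpoint
# velocities (x_1 = x_0 and x_{N-1} = x_N in discrete form).
fixed = {0: x0, 1: x0, N - 1: xN, N: xN}
free = [j for j in range(N + 1) if j not in fixed]

# Move fixed columns to the right-hand side and solve the
# least-squares problem min ||D x||^2 over the free coordinates.
b = -sum(D[:, j] * v for j, v in fixed.items())
x_free, *_ = np.linalg.lstsq(D[:, free], b, rcond=None)

x = np.zeros(N + 1)
for j, v in fixed.items():
    x[j] = v
x[free] = x_free

print(round(x[N // 2], 3))   # midpoint of the optimal trajectory
```

With both endpoint velocities clamped to zero, the minimizer approximates the classical smooth point-to-point profile; other cost functions (kinetic rather than kinematic) change the shape of the solution, which is the comparison the abstract describes.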

6

Kaphle, Manindra. "Simulations of human movements through temporal discretization and optimization." Stockholm : Department of Mechanics, Royal Institute of Technology, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4585.

7

He, Xiaoyi. "Numerical simulations of blood flow in human coronary arteries." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/16685.

8

Crawford, Kenneth. "Effect of Safety Factors on Timed Human Egress Simulations." University of Canterbury. Civil Engineering, 1999. http://hdl.handle.net/10092/8261.

Abstract:
This report covers the effect of safety factors on the time taken for humans to escape a building in which a fire has started. Monte Carlo simulation is used to determine the probability of failure to escape in a given fire scenario. The simulations indicate that the safety factor strongly influences the probability of failure to escape. The major effects upon egress, ranked in order of significance, are: the time taken for the occupant to decide to leave the building after hearing the alarm, the time until conditions become too hostile for human survival, and the time until the fire is detected. The occupant's travel speed out of the building has so little significance that it should be treated deterministically in future studies of this type. Where a safety factor of two is applied, there is still a reasonable probability of failure.
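A minimal sketch of this kind of timed-egress Monte Carlo, with entirely hypothetical input distributions (the report's actual data and model are not reproduced here), compares the required escape time, scaled by a safety factor, against the sampled time to untenable conditions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical input distributions, in seconds.
detection = rng.lognormal(mean=np.log(30.0), sigma=0.4, size=n)
decision = rng.lognormal(mean=np.log(60.0), sigma=0.8, size=n)   # most variable
travel = rng.lognormal(mean=np.log(40.0), sigma=0.2, size=n)     # near-deterministic
tenability = rng.lognormal(mean=np.log(400.0), sigma=0.3, size=n)

for safety_factor in (1.0, 2.0, 3.0):
    # failure: required escape time (with safety factor) exceeds tenability time
    escape_time = safety_factor * (detection + decision + travel)
    p_fail = np.mean(escape_time > tenability)
    print(f"safety factor {safety_factor:.0f}: P(failure) ~ {p_fail:.3f}")
```

The sketch shows the report's qualitative point: the failure probability is highly sensitive to the safety factor, and the wide decision-time distribution dominates the spread of the escape time.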
9

Jungkunz, Patrick. "Modeling human visual perception for target detection in military simulations." Monterey, Calif. : Naval Postgraduate School, 2009. http://handle.dtic.mil/100.2/ADA501666.

Abstract:
Dissertation (Ph.D. in Modeling, Virtual Environments and Simulation (MOVES))--Naval Postgraduate School, June 2009.
Dissertation Advisor(s): Darken, Christian J. "June 2009." Description based on title screen as viewed on July 10, 2009. DTIC Identifiers: human visual perception, visual attention, eye tracking, human behavior modeling, visual search, semantic relevance, relevance map. Author(s) subject terms: Human Visual Perception, Visual Attention, Eye Movements, Eye Tracking, Human Behavior Modeling, Target Detection, Visual Search, Semantic Relevance, Relevance Map. Includes bibliographical references (p. 145-149). Also available in print.
10

Rivas, Romero Daniela Paz. "Molecular Dynamics Simulations of Human Glucose Transporters and Glutamate Transporters." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25113.

Abstract:
Molecular dynamics (MD) simulations provide a very useful tool for investigating the function of membrane transporters and understanding their transport mechanisms at the atomic level. In this thesis, we use MD simulations to study human glucose and glutamate transporters. Glucose transport in humans is mostly mediated by the facilitative glucose transporters, GLUTs. The crystallisation of GLUT3 has opened the way for MD simulations of this transporter. However, most of the popular force fields have been shown to underestimate the solvation free energy of simple carbohydrates, which could cause inaccuracies in MD simulations and lead to unreliable results. We therefore optimised the GLYCAM06 parameters for glucose by boosting the oxygen charges, and showed that the new parameters perform well in MD simulations and allow an accurate calculation of the binding free energy of glucose to GLUT3. Human glutamate transporters, EAATs, clear excess glutamate from the extracellular space. The determination of the crystal structure of EAAT1 has allowed the computational study of this transporter. We performed MD simulations with the crystal structure, but the many mutations in that structure appear to have affected it, so we constructed a homology model of WT EAAT1. With this model we showed that both Na+/Na+ and K+/Na+ co-binding states are feasible, and we propose a new transport mechanism in which K+ and Na+ exchange in the binding pocket to allow the K+ to escape. We also found that the binding mode for aspartate is identical to the one found in a previous study and in agreement with experimental evidence, and we showed that protonation of Glu406 is necessary for substrate binding. Finally, we studied two mutations that cause episodic ataxia and found that, while both mutants were able to bind the first Na+ ion, both failed to maintain the co-binding state.
11

Uricchio, Lawrence Hart. "Models and forward simulations of selection, human demography, and complex traits." Thesis, University of California, San Francisco, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3681226.

Abstract:

Evolutionary forces such as recombination, demography, and selection can shape patterns of genetic diversity within populations and contribute to phenotypic variation. While theoretical models exist for each of these forces independently, mathematically modeling their joint impact on patterns of genetic diversity remains very challenging. Fortunately, it is possible to perform forward-in-time computer simulations of DNA sequences that incorporate all of these forces simultaneously. Here, I show that there are trade-offs between computational efficiency and accuracy for simulations of a widely investigated model of recurrent positive selection. I develop a theoretical model to explain this trade-off, and a simple algorithm that obtains the best possible computational performance for a given error tolerance. I then pivot to develop a framework for simulations of human DNA sequences and genetically complex phenotypes, incorporating recently inferred demographic models of human continental groups and selection on genes and non-coding elements. I use these simulations to investigate the power of rare variant association tests in the context of rampant selection and non-equilibrium demography. I show that the power of rare variant association tests is in some cases quite sensitive to underlying assumptions about the relationship between selection and effect sizes. This work highlights both the challenge and the promise of applying forward simulations in genetic studies that seek to infer the parameters of evolutionary models and detect statistical associations.
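The thesis's simulation framework is far richer (demography, complex traits, recurrent selection), but the core of a forward-in-time simulation of selection can be illustrated with a simple Wright-Fisher sketch. The following Python snippet (haploid, single locus; every parameter value is an illustrative assumption, not taken from the thesis) tracks an allele-frequency trajectory under drift and positive selection:

```python
import numpy as np

rng = np.random.default_rng(3)

def wright_fisher(n_pop=1_000, s=0.05, p0=0.05, generations=500):
    """Forward-in-time allele-frequency trajectory under genetic drift
    and positive selection (relative fitness 1 + s for the derived allele)."""
    p = p0
    traj = [p]
    for _ in range(generations):
        # selection shifts the expected frequency; drift is binomial sampling
        p_sel = p * (1.0 + s) / (1.0 + s * p)
        p = rng.binomial(n_pop, p_sel) / n_pop
        traj.append(p)
        if p in (0.0, 1.0):     # absorbed: lost or fixed
            break
    return traj

fixed = sum(wright_fisher()[-1] == 1.0 for _ in range(200))
print(f"fixation in {fixed}/200 replicates")
```

Extending the loop with mutation, non-equilibrium population sizes, and per-site effect sizes gives the flavor of the forward simulations the abstract describes, along with the computational cost trade-offs it analyzes.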

12

Yassin, Nihad Jaro. "Application of parametric and solid modelling techniques to human body simulations." Thesis, Heriot-Watt University, 1992. http://hdl.handle.net/10399/1384.

13

Boucher, Luke. "Learning the structure of artificial grammars : computer simulations and human experiments." Thesis, University of Sussex, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298103.

14

Fears, Tellis A. "Framing cultural attributes for human representation in military training and simulations." Thesis, Monterey, Calif. : Naval Postgraduate School, 2008. http://edocs.nps.edu/npspubs/scholarly/theses/2008/Sept/08Sep%5FFears.pdf.

Abstract:
Thesis (M.S. in Modeling, Virtual Environments, and Simulation (MOVES))--Naval Postgraduate School, September 2008.
Thesis Advisor(s): Gibbons, Deborah ; Blais, Curtis. "September 2008." Description based on title screen as viewed on November 4, 2008. Includes bibliographical references (p. 39-42). Also available in print.
15

Zordan, Victor B. "Motion capture-driven simulations that hit and react." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/8314.

16

Dawson, Karen Margaret. "Advanced thermal hydraulic simulations for human reliability assessment of nuclear power plants." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112392.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 101-102).
Human Reliability Assessment (HRA) in the nuclear power industry has advanced in the last two decades. However, the magnitude of the effect of thermal hydraulic (TH) uncertainties on the failure probabilities of operator actions in nuclear units is poorly understood. In this work I demonstrate that TH uncertainties affect the operating crew's probability of recognizing errors during a loss of coolant accident (LOCA) initiating event. The magnitude of the effect on the operators' ability to recognize errors depends on the size of the break, the operating state of the plant (in operation or shutting down), and the error that is committed. I coupled the uncertainty-analysis software Dakota with the advanced TH code MAAP4 to perform a Monte Carlo analysis that propagates selected TH uncertainties through a LOCA initiating event in which the safety coolant injection system fails to actuate automatically; the operator mission is to actuate it manually. Two errors the operating crew could make are 1) entering fire procedures and 2) testing for saturation of the primary system before saturation occurs. I calculate the operator failure probabilities using the MERMOS HRA methodology (used by the French electric utility Electricité de France, EdF). My results show a reduction of more than 80% in scenario failure probability, relative to the values EdF reports in its published MERMOS Catalogue, for the operators recognizing the error of entering fire procedures. For the error of testing for saturation of the primary system before saturation occurs, I calculated a scenario failure probability in Mode B of 0.0033, whereas the MERMOS Catalogue lists this scenario failure probability as negligible. These results show that TH uncertainties do affect operator failure probabilities.
This research provides a method of improving the accuracy of failure probabilities in established HRA methodologies using TH simulations.
17

Martínez, Mateu Laura. "Mapping of the electrical activity of human atria. Multiscale modelling and simulations." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/104604.

Abstract:
Atrial fibrillation is one of the most common cardiac arrhythmias seen in clinical practice. It is therefore vitally important to develop new technologies for diagnosing and terminating this kind of arrhythmia, to improve patients' quality of life and to reduce costs to national health systems. In recent years, atrial mapping techniques based on multi-electrode systems have increasingly been used to map the atrial electrical activity in humans and to localise and target atrial fibrillation drivers in the form of focal sources or rotors. However, significant concerns remain about their accuracy, and experimental approaches to analysing them are limited by their invasive character. Computer simulations are a helpful tool for overcoming these limitations, since they can faithfully reproduce experimental observations, allow the problem to be split into simpler substudies, and make it possible to perform preliminary investigations that cannot be carried out in clinical practice. This PhD thesis focuses on analysing the accuracy of multi-electrode mapping systems through computational models and simulations. For this purpose, we developed realistic multiscale models to simulate atrial electrical reentrant activity, first in a sheet of atrial tissue and then in the whole atria. We then analysed the effects of the multi-electrode geometrical configurations on the accuracy of rotor localization, using multi-electrode arrays with equidistant inter-electrode distances as well as multi-electrode basket catheters with non-equidistant inter-electrode distances. After computing the intracavitary unipolar electrograms, we produced phase maps, phase singularity detections to track rotors, and dominant frequency maps.
We found that the accuracy of multi-electrode mapping systems depends on their position inside the atrial cavity, the electrode-to-tissue distance, the inter-electrode distance, and the contribution of far-field sources. Furthermore, as a consequence of these factors, false rotors may appear and could contribute to the failure of atrial fibrillation ablation procedures.
Martínez Mateu, L. (2018). Mapping of the electrical activity of human atria. Multiscale modelling and simulations [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/104604
18

Fabbri, Alan <1985&gt. "Computational modeling of human sinoatrial node: what simulations tell us about pacemaking." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amsdottorato.unibo.it/8674/1/fabbri_alan_phD_thesis_final.pdf.

Abstract:
The sinoatrial node (SAN) is the primary cardiac pacemaker in physiological conditions. SAN tissue is characterized by auto-rhythmicity, i.e. it does not need external stimuli to initiate its electrical activity; this auto-rhythmic behavior is due to the spontaneous slow depolarization during the diastolic phase. Understanding the biophysical mechanisms underlying diastolic depolarization is crucial for modulating the heart rate (HR), and HR modulation is in turn fundamental to treating cardiac arrhythmias, so that atria and ventricles can fill and pump blood properly. The overall aim of the thesis is to investigate the mechanisms responsible for pacemaking in humans. To this end, a computational model of the human SAN action potential (AP) was developed. The topics addressed are pacemaking modulation at the single-cell level, the effects of ion channel mutations on the beating rate, and the propagation of the electrical trigger from the SAN to atrial tissue. The human single-cell SAN model was developed starting from the rabbit SAN model by Severi et al.; the parent model was updated with experimental data and automatic optimization to match the AP features reported in the literature, and a sensitivity analysis was performed to identify the most influential parameters. Pacemaking modulation was investigated by simulating current blockade and by mimicking stimulation of the autonomic nervous system. The model was validated by comparing the simulated electrophysiological effects of ion channel mutations on beating rate with clinical data from symptomatic mutation carriers. Further insight into pacemaking mechanisms was obtained by including calcium-activated potassium currents, which link changes in intracellular calcium to the membrane. Finally, the propagation of the AP from the SAN to the atrial tissue, and the source-sink interplay, were investigated using a one-dimensional strand composed of SAN and atrial models.
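The full human SAN AP model is far too large to reproduce here. As a toy illustration of auto-rhythmicity (spontaneous, stimulus-free firing), the following Python sketch integrates a FitzHugh-Nagumo oscillator in its self-exciting regime; the model and its parameters are a standard textbook choice, not the thesis's detailed ionic model:

```python
import numpy as np

# FitzHugh-Nagumo parameters in the oscillatory (self-exciting) regime;
# this toy model only illustrates auto-rhythmicity, not the detailed
# human SAN action-potential model developed in the thesis.
a, b, eps, I = 0.7, 0.8, 0.08, 0.5
dt, steps = 0.05, 40_000

v, w = -1.0, 1.0
trace = np.empty(steps)
for k in range(steps):
    dv = v - v**3 / 3.0 - w + I      # fast (voltage-like) variable
    dw = eps * (v + a - b * w)       # slow recovery variable
    v += dt * dv
    w += dt * dw
    trace[k] = v

# count upward zero crossings, i.e. spontaneous repeated "beats"
crossings = np.sum((trace[:-1] < 0.0) & (trace[1:] >= 0.0))
print(f"{crossings} spontaneous beats in the simulated window")
```

No stimulus term is applied beyond the constant bias `I`; the oscillation arises from the dynamics themselves, which is the qualitative property the abstract attributes to SAN tissue.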
APA, Harvard, Vancouver, ISO, and other styles
19

Lyu, Yeonhwan. "Simulations and Second / Foreign Language Learning: Improving communication skills through simulations." See Full Text at OhioLINK ETD Center (Requires Adobe Acrobat Reader for viewing), 2006. http://www.ohiolink.edu/etd/view.cgi?toledo1147363791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

GIORDANO, DEBORAH. "Transglutaminase, nutrition and human health." Doctoral thesis, Università degli Studi di Foggia, 2019. http://hdl.handle.net/11369/382619.

Full text
Abstract:
Background: transglutaminases (TGases) are a class of enzymes widely spread among eukaryotic and prokaryotic organisms. Enzymes of this family catalyze post-translational modifications in many proteins through acyl transfer reactions, deamidation and crosslinking (polymerisation) between intra- or inter-chain glutamine (acyl donor) and lysine (acyl acceptor) peptide residues. Owing to its ease of expression and purification, the only TGase widely used for industrial applications is the microbial TGase extracted from Streptomyces mobaraensis (MTGase). Nowadays MTGase is commercially available and widely used in the biopolymer industry, in cosmetics, in clinical applications, in wool textiles and, above all, in the food processing industry. Its ability to catalyze crosslinks on many different protein substrates is increasingly exploited not only for sausage, ham and cheese production but, very recently, also for flour detoxification, as a possible alternative therapy to the gluten-free diet. It follows that the industrial applications of MTGase have increased, covering more and more fields and producing very active scientific research on this topic, aimed at meeting specific industrial needs such as the implementation of more efficient systems for MTGase production, the search for alternative sources of microbial TGase, and safe sources of recombinant enzymes. Aims of the doctorate project: the main aim of the project is the identification of novel forms of microbial TGase that could become an alternative to the one currently in use. An in-depth screening of known sequences was performed, with the aim of obtaining a classification of microbial TGases by their similarity to known forms. To select the best candidates likely to be active under appropriate conditions, molecular modelling and molecular simulations were performed on the selected sequences. 
To test the enzymatic activity, experimental assays were performed with one novel form, and another novel form was expressed. Results: the present work first proposes an analysis, lacking so far, of the wide microbial transglutaminase landscape, developing the first classification of microbial TGases based on their sequence features and their specific predicted secondary structures. In order to classify and analyze the structural features of all the sequences annotated as having a TGase core, computational techniques involving sequence analyses, comparative studies, construction of phylogenetic trees, homology modelling and molecular dynamics simulations were used. With this approach, a preliminary classification of these sequences was obtained by dividing them into five main groups. Each group was investigated from the sequence point of view to analyze the presence of specific motifs. For three of these five groups, the secondary structures were also investigated and, from this analysis, features specific to each group were detected. Moreover, two novel forms of microbial TGase (mTGase) were investigated in detail: K. albida mTGase and the hypothetical mTGase from SaNDy (organism not disclosed for patent reasons). Molecular dynamics simulations and active-site pocket analyses were performed for the former, in comparison with MTGase. For the latter, experimental techniques were used to purify the hypothetical enzyme in order to test it on food-related substrates. Experimental assays on both proteins are still ongoing, to find the best enzymatic activity conditions and the best reaction substrates. The molecular dynamics simulations performed on K. albida mTGase suggested some explanations for the higher specificity of this enzyme compared with MTGase, experimentally demonstrated by Steffen and colleagues, and several indications for changing the activity conditions used to test it. 
Moreover, the substrate screening made it possible to find novel possible substrates on which this enzyme could be employed for allergenicity reduction. On the other hand, the enzyme extracted from SaNDy, showing a higher similarity to MTGase, could be less selective than K. albida mTGase towards specific substrates, so its application on gliadin substrates could also be possible; however, further experiments are necessary to prove this. Note: the present PhD work was mainly performed in the Bioinformatics Laboratory at the CNR of Avellino under Dr. Facchiano's supervision; however, all the MD simulations were performed at the Biochemistry Department of the University of Zurich, in the computational and structural biology laboratory under the supervision of Prof. A. Caflisch and his research group (compulsory abroad training period). Experimental activity assays on the gliadin substrate were performed by the CeSMA-ProBio mass spectrometry laboratory at the CNR of Avellino; the hypothetical mTGase from SaNDy was instead cloned, expressed and purified in collaboration with the Laboratory for Molecular Sensing at the CNR of Avellino.
APA, Harvard, Vancouver, ISO, and other styles
21

GIRAUDO, MARTINA. "Passive shielding of space radiation for human exploration missions - Simulations and Radiation Tests." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2711122.

Full text
Abstract:
Space radiation is one of the main showstoppers for human exploration of deep space. When leaving the protection provided by Earth's atmosphere and magnetic field, the astronaut crew finds itself immersed in a complex radiation field, originated by the interaction of different high-energy radiation sources with the spacecraft's walls and characterized by many particle species with a broad range of energies. The biological effects of long-term radiation exposure are largely uncertain and could give rise not only to late solid cancers and leukemia, but also to early effects on cardiac and nervous tissues, possibly undermining mission success. An available countermeasure to defend the astronauts from radiation is passive shielding, i.e. the interposition of shielding materials between the radiation sources and the exposed subjects. However, the majority of space radiation is practically impossible to stop completely: the highly energetic particles constituting the space environment are capable of penetrating several meters of material, generating a harmful component of secondary particles that further contributes to the radiation dose. The ability of a material to attenuate the incoming space radiation, and the nature of the generated secondary particles, largely depend on the traversed material itself, in particular on the ratio between its charge and mass atomic numbers, Z/A. The higher this ratio, the greater the material's capability to attenuate the incoming radiation, through both electromagnetic and nuclear interactions. While the radiobiology community is focusing on the biological effects, radiation physics is trying to reduce the uncertainties characterizing radiation interactions with materials by performing radiation measurements of various kinds. 
In this framework, I focused my PhD activity on the study of materials that could be used in space as shielding layers and multipurpose structures, which were evaluated and selected according to different criteria. At first, their ability to shield different kinds of space radiation was calculated with the aid of 1D Monte Carlo simulations, followed by an evaluation of their structural and thermal properties, cost, availability and compatibility with the space environment. The simulations, in particular, were performed both to support the material selection process and to produce design guidelines. The selected materials were then procured to be tested under different radiation beams and different set-ups, in single- and multi-layer configurations, in an attempt to reproduce space exposure conditions. At the same time, the radiation tests were reproduced by means of Monte Carlo simulations, to compare the experimental results with the simulation outputs, confirming the codes' ability to reproduce radiation measurements involving High Z-number and high Energy (HZE) particles. For some materials, suggestions were provided on which nuclear model best reproduced the data. The experimental campaign suggested that a candidate shielding material suitable for Galactic Cosmic Rays (GCR) should be tested with at least two beams with different characteristics, since the results indicated that some materials good at shielding 972 MeV/nuc 56Fe ions performed very poorly when irradiated with highly energetic alpha particles. Furthermore, among the material types included in this investigation, Lithium Hydride proved the best option to stop space radiation when only radiation shielding properties are considered. At the end of the experimental campaigns, on the basis of the test results, a 3D simulation activity was started and is still ongoing, and a modular space habitat model has been created. 
Monte Carlo simulations were carried out, reproducing different Moon exposure scenarios with the goal of calculating crew radiation exposure during a lunar surface mission. This work reports results only for a standard aluminum habitat, with only lunar soil used as shielding material. However, future simulations will include Lithium Hydride and possibly other materials as shielding layers, to evaluate their effectiveness in reducing the dose in a realistic exposure scenario. Preliminary results show that even with a heavily shielded spacecraft (the habitat considered in this work provides at least 30 g/cm2 of aluminum equivalent from every direction), radiation exposure approaches the existing annual radiation exposure limits. Part of this thesis' work was done at Thales Alenia Space, using Thales Alenia Space infrastructures and in the framework of the ROSSINI2 study. The ROSSINI2 study was supported by the European Space Agency (ESA) under contract RFP IPLPTE/LF/mo/942.2014 and with the generous support of NASA and BNL, which provided beam time at the NSRL facility.
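The "30 g/cm2 of aluminum equivalent" figure above refers to areal density, the product of a material's density and the thickness traversed; shielding materials are conventionally compared at equal areal density. A small sketch of the conversion (the densities are nominal handbook values used here as assumptions):

```python
def areal_density(density_g_cm3: float, thickness_cm: float) -> float:
    """Areal density in g/cm^2: mass traversed by radiation per unit area."""
    return density_g_cm3 * thickness_cm

def thickness_for(density_g_cm3: float, target_g_cm2: float) -> float:
    """Physical thickness needed to reach a target areal density."""
    return target_g_cm2 / density_g_cm3

# Thickness required for 30 g/cm^2 of shielding with nominal densities:
al_cm  = thickness_for(2.70, 30.0)   # aluminum (2.70 g/cm^3),        ~11.1 cm
lih_cm = thickness_for(0.78, 30.0)   # lithium hydride (0.78 g/cm^3), ~38.5 cm
```

The comparison shows why low-density hydrogenous shields, although better per gram, demand considerably more volume for the same areal density.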
APA, Harvard, Vancouver, ISO, and other styles
22

Noetscher, Gregory Michael. "The VHP-F Computational Phantom and its Applications for Electromagnetic Simulations." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-dissertations/237.

Full text
Abstract:
Modeling of the electromagnetic, structural, thermal, or acoustic response of the human body to various external and internal stimuli is limited by the availability of anatomically accurate and numerically efficient computational models. The models currently approved for use are generally of proprietary or fixed format, preventing new model construction or customization. 1. This dissertation develops a new Visible Human Project - Female (VHP-F) computational phantom, constructed via segmentation of anatomical cryosection images taken in the axial plane of the human body. Its unique property is superior resolution of the human head. In its current form, the VHP-F model contains 33 separate objects describing a variety of human tissues within the head and torso. Each object is a non-intersecting 2-manifold model composed of contiguous triangular surface elements, making the VHP-F model compatible with major commercial and academic numerical simulators employing the Finite Element Method (FEM), Boundary Element Method (BEM), Finite Volume Method (FVM), and Finite-Difference Time-Domain (FDTD) Method. 2. This dissertation develops a new workflow used to construct the VHP-F model that may be utilized to build accessible custom models from any medical image data source. The workflow is customizable and flexible, enabling the creation of standard and parametrically varying models, facilitating research on the impact of fluctuations in body characteristics (for example, skin thickness) and on dynamic processes such as fluid pulsation. 3. This dissertation identifies, enables, and quantifies three new specific computational bioelectromagnetic problems, each of which is solved with the help of the developed VHP-F model: I. Transcranial Direct Current Stimulation (tDCS) of the human brain motor cortex with extracephalic versus cephalic electrodes; II. RF channel characterization within the cerebral cortex with novel small on-body directional antennas; III. 
Body Area Network (BAN) characterization and RF localization within the human body using the FDTD method and small antenna models with coincident phase centers. Each of these problems has been (or will be) the subject of a separate dedicated MS thesis.
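The "non-intersecting 2-manifold" requirement above has a simple combinatorial test: in a closed triangulated surface every edge is shared by exactly two triangles, and for a genus-0 (sphere-like) surface the Euler characteristic V - E + F equals 2. A sketch of such a check on a tetrahedron, the simplest closed triangle mesh (the mesh data is an illustrative stand-in, not VHP-F data):

```python
from itertools import combinations

def is_closed_genus0_mesh(faces):
    """Return True if every edge is shared by exactly two triangles
    and the Euler characteristic V - E + F equals 2."""
    edge_count = {}
    verts = set()
    for tri in faces:
        verts.update(tri)
        for a, b in combinations(sorted(tri), 2):  # the 3 edges of the triangle
            edge_count[(a, b)] = edge_count.get((a, b), 0) + 1
    if any(c != 2 for c in edge_count.values()):   # boundary or non-manifold edge
        return False
    return len(verts) - len(edge_count) + len(faces) == 2

# Tetrahedron: 4 vertices, 6 edges, 4 triangular faces -> V - E + F = 2
tet = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
open_mesh = tet[:3]  # remove one face: boundary edges appear, check fails
```

Checks of this kind are what make a segmented surface safe to hand to FEM/BEM/FDTD mesh generators.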
APA, Harvard, Vancouver, ISO, and other styles
23

Kleiven, Svein. "Finite Element Modeling of the Human Head." Doctoral thesis, KTH, Farkost- och flygteknik, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3347.

Full text
Abstract:
The main objectives of the present thesis were to define the scale of head injuries in Sweden over a longer period and to present a Finite Element (FE) model of the human head which can be used for preventive strategies in the future. The annual incidence of head injuries in Sweden between 1987 and 2000 was found to be over 22,000 cases, most of which were mild head injuries. In contrast to traffic accidents, head injury due to falls was the most important etiology. Of special interest was that the number of hematoma cases has increased. A detailed and parameterized FE model of the human head was developed and used to evaluate the effects of head size, brain size and impact direction. The maximal effective stresses in the brain increased more than fourfold, from 3.6 kPa for the smallest head size to 16.3 kPa for the largest head size using the same acceleration impulse. This size dependence of the intracranial stresses associated with injury is not predicted by the Head Injury Criterion (HIC). Simulations with various brain sizes indicated that the increased risk of Subdural Hematoma (SDH) in elderly people may in part be explained by the reduced brain size, which results in a larger relative motion between the skull and the brain with distension of bridging veins. The consequences of this increased relative motion due to brain atrophy cannot be predicted by existing injury criteria. From studies of the influence of impact direction on the human head, the highest shear strain in the brain stem is found for a Superior-Inferior (SI) translational impulse, and in the corpus callosum for a lateral rotational impulse, when imposing acceleration pulses corresponding to the same impact power. 
It was concluded that HIC is unable to predict the consequences of a pure rotational impulse, while the Head Impact Power (HIP) criterion needs individual scaling coefficients for the different terms to account for differences in intracranial response due to variation in load direction. It is also suggested that a further evaluation of synergistic effects of the directional terms of the HIP is necessary to include combined terms and to improve injury prediction. Comparison of the model with experiments on localized motion of the brain shows that the magnitude and characteristics of the deformation are highly sensitive to the shear properties of the brain tissue. The results suggest that significantly lower values of these properties of the human brain than utilized in most 3D FE models today must be used to predict the localized brain response to an impact to the human head. There is a symmetry in the motion of the superior and inferior markers, for both the model and the experiments, following a sagittal and a coronal impact. This can possibly be explained by the nearly incompressible properties of brain tissue. Larger relative motion between the skull and the brain is more apparent for an occipital impact than for a frontal one, in both the experiments and the FE model. This correlates with clinical findings. Moreover, smaller relative motion between the skull and the brain is apparent for a lateral impact than for a frontal one, for both the experiments and the FE model. This is thought to be due to the supporting structure of the falx cerebri. Such a parameterized and detailed 3D model of the human head has not, to the best knowledge of the author, previously been developed. This 3D model is thought to be of significant value for investigating the effects of geometrical variations of the human head.
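For reference, the Head Injury Criterion discussed above is a functional of the resultant linear head acceleration a(t) (in g) alone, which is why it cannot reflect rotational loading, load direction, or head size:

```latex
\mathrm{HIC} = \max_{t_1, t_2} \left\{ (t_2 - t_1) \left[ \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} a(t)\,\mathrm{d}t \right]^{2.5} \right\}
```

with the window t2 - t1 conventionally capped at 36 ms (or 15 ms in the HIC15 variant).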
APA, Harvard, Vancouver, ISO, and other styles
24

Ginsburger, Kévin. "Modeling and simulation of the diffusion MRI signal from human brain white matter to decode its microstructure and produce an anatomic atlas at high fields (3T)." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS158/document.

Full text
Abstract:
Diffusion Magnetic Resonance Imaging of water in the brain has proven very useful for establishing a cartography of brain connections. It is the only in vivo modality for studying anatomical connectivity. A few years ago, it was shown that diffusion MRI is also a unique tool for performing virtual biopsy of cerebral tissues. However, most current analytical models (AxCaliber, ActiveAx, CHARMED) employed for the estimation of white matter microstructure rely upon a basic modeling of white matter, with axons represented by simple cylinders and extra-axonal diffusion assumed to be Gaussian. First, a more physically plausible analytical model of human brain white matter, accounting for the time dependence of the diffusion process in the extra-axonal space, was developed for Oscillating Gradient Spin Echo (OGSE) sequence signals. A decoding tool that solves the inverse problem of estimating the parameters of the white matter microstructure from the OGSE-weighted diffusion MRI signal was designed using a robust optimization scheme for parameter estimation. Second, a Big Data approach was designed to further improve brain microstructure decoding. All the simulation tools necessary to construct computational models of brain tissues were developed in the frame of this thesis. An algorithm creating realistic white matter tissue numerical phantoms based on a spherical meshing of cell shapes was designed, enabling the generation of a massive number of virtual voxels in a computationally efficient way thanks to a GPU-based implementation. An ultra-fast simulation tool of the diffusion process of water molecules in those virtual voxels was designed, enabling the generation of a synthetic diffusion MRI signal for each virtual voxel. A dictionary of virtual voxels containing a huge set of the geometrical configurations present in white matter was built. 
This dictionary contained virtual voxels with varying degrees of axonal beading, a swelling of the axonal membrane which occurs after strokes and other pathologies. The set of synthetic signals and the associated geometrical configurations of the corresponding voxels was used as a training data set for a machine learning algorithm designed to decode white matter microstructure from the diffusion MRI signal and estimate the degree of axonal beading. This decoder showed encouraging regression results on unknown simulated data, showing the potential of the presented approach to characterize the microstructure of healthy and injured brain tissues in vivo. The microstructure decoding tools developed during this thesis will in particular be used to characterize white matter tissue microstructural parameters (axonal density, mean axonal diameter, glial density, mean glial cell diameter, microvascular density) in short and long bundles. The simulation tools developed in the frame of this thesis will enable the construction of a probabilistic atlas of the microstructural parameters of the white matter bundles, using a mean-propagator-based diffeomorphic registration tool, also designed in the frame of this thesis, to register each individual.
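The Monte Carlo machinery behind such synthetic diffusion signals reduces, in the free (unrestricted) case, to Gaussian random walks whose mean squared displacement obeys the Einstein relation MSD = 6*D*t in 3-D; restricted geometries (axons, beads) then show up as deviations from this law. A minimal free-diffusion sketch (all parameter values are arbitrary illustrative choices, not those of the thesis simulator):

```python
import math
import random

def free_diffusion_msd(n_walkers=2000, n_steps=100, diffusivity=1.0, dt=0.01, seed=7):
    """Simulate free 3-D Brownian walkers and return the mean squared displacement.

    Each axis takes Gaussian steps of standard deviation sqrt(2*D*dt); for
    free diffusion the result should approach 6*D*t (Einstein relation).
    """
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * diffusivity * dt)
    total = 0.0
    for _ in range(n_walkers):
        x = y = z = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, sigma)
            y += rng.gauss(0.0, sigma)
            z += rng.gauss(0.0, sigma)
        total += x * x + y * y + z * z
    return total / n_walkers

msd = free_diffusion_msd()               # should be close to 6 * D * t
expected = 6.0 * 1.0 * (100 * 0.01)      # Einstein relation for these parameters
```

A realistic simulator adds membrane reflections and gradient-weighted phase accrual on top of exactly this random walk, which is what makes the synthetic voxels sensitive to microstructure.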
APA, Harvard, Vancouver, ISO, and other styles
25

Fan, Xuelong. "Kinematic analysis of traumatic brain injuries in boxing using finite element simulations." Thesis, KTH, Medicinsk teknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192526.

Full text
Abstract:
The purpose of the thesis was to analyze and evaluate head injuries due to strikes in a boxing match using LS-DYNA. A simplified arm model was built, consisting of three segments linked by two spherical joints. The stress-strain curves of the boxing glove foam and glove leather were measured in the Neuronic Lab at the School of Technology and Health, KTH. The dimensions and weight of the model were also made adjustable to fulfill varying requirements in different cases. A method was then developed to facilitate the simulation. Finally, 39 video clips from the database were processed and 13 cases were chosen to test the method and to perform the simulations. Additionally, the reliability of the model was assessed by comparing the outcome of the simulations with the results of the visual analysis from a previous study. The outcome showed that the model was able to reproduce the scenario from the videos both quantitatively and qualitatively, but it also suggests a high sensitivity of the model to data artifacts from the video analysis. Interpretations and suggestions for future work were also discussed.
APA, Harvard, Vancouver, ISO, and other styles
26

INOUE, ISAO. "On the Generalization of Non-Adjacent Dependencies : The Discrepancy between SRN Simulations and Human Behavior." 名古屋大学大学院国際言語文化研究科, 2006. http://hdl.handle.net/2237/7845.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Wojtusch, Janis [Verfasser], Oskar von [Akademischer Betreuer] Stryk, and André [Akademischer Betreuer] Seyfarth. "Uncertainty and Sensitivity in Human Motion Dynamics Simulations / Janis Wojtusch ; Oskar von Stryk, André Seyfarth." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2018. http://d-nb.info/1162275103/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Mania, Aikaterini Katerina. "Fidelity metrics for virtual environment simulations based on human judgements of spatial memory awareness states." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369799.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Toulgoat, Isabelle. "Modélisation du comportement humain dans les simulations de combat naval." Phd thesis, Université du Sud Toulon Var, 2011. http://tel.archives-ouvertes.fr/tel-00626811.

Full text
Abstract:
This thesis addresses the modeling of human behavior in naval combat simulations. Within the DCNS company, naval combat simulations are used to evaluate the operational performance of military vessels in a given scenario. Current simulations do not take into account the analysis and decisions of an operator, which can sometimes lead to unexpected reactions. The goal of this thesis is therefore to model the behavior of an operator for naval combat simulations. To represent knowledge, the most widely used non-monotonic logic was employed: default logic. A treatment of time was added to this default logic. Default logic makes it possible to compute extensions, where each extension corresponds to a possible action for the operator. A method for choosing an extension was defined. This method simulates the operator's decision and takes the operator's character into account.
APA, Harvard, Vancouver, ISO, and other styles
30

Miyawaki, Shinjiro. "Automatic construction and meshing of multiscale image-based human airway models for simulations of aerosol delivery." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/1990.

Full text
Abstract:
The author developed a computational framework for the study of the correlation between airway morphology and aerosol deposition based on a population of human subjects. The major improvement on the previous framework, which consists of a geometric airway model, a computational fluid dynamics (CFD) model, and a particle tracking algorithm, lies in automatic geometry construction and mesh generation of airways, which is essential for a population-based study. The new geometric model overcomes the shortcomings of both centerline (CL)-based cylindrical models, which are based on the skeleton and average branch diameters of airways called one-dimensional (1-D) trees, and computed tomography (CT)-based models. CL-based models are efficient in terms of pre- and post-processing, but fail to represent trifurcations and local morphology. In contrast, in spite of the accuracy of CT-based models, it is time-consuming to build these models manually, and non-trivial to match 1-D trees and three-dimensional (3-D) geometry. The new model, also known as a hybrid CL-CT-based model, is able to construct a physiologically-consistent laryngeal geometry, represent trifurcations, fit cylindrical branches to CT data, and create the optimal CFD mesh in an automatic fashion. The hybrid airway geometries constructed for 8 healthy and 16 severe asthmatic (SA) subjects agreed well with their CT-based counterparts. Furthermore, the prediction of aerosol deposition in a healthy subject by the hybrid model agreed well with that by the CT-based model. To demonstrate the potential application of the hybrid model to investigating the correlation between skeleton structure and aerosol deposition, the author applied the large eddy simulation (LES)-based CFD model that accounts for the turbulent laryngeal jet to three hybrid models of SA subjects. The correlation between diseased branch and aerosol deposition was significant in one of the three SA subjects. 
However, whether skeleton structure contributes to airway abnormality requires further investigation.
APA, Harvard, Vancouver, ISO, and other styles
31

MESCHINI, VALENTINA. "Fluid-structure interaction in the left ventricle of the human heart: numerical simulations and experimental validation." Doctoral thesis, Gran Sasso Science Institute, 2017. http://hdl.handle.net/20.500.12571/9691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Deram, Aurelien. "Environnement générique pour la validation de simulations médicales." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00793236.

Full text
Abstract:
In the context of simulations for training, planning, or intraoperative assistance in medical and surgical procedures, numerous models have been developed to describe the mechanical behaviour of soft tissues. Verification, validation, and evaluation are crucial steps towards clinical acceptance of simulation results. These tasks, often based on comparisons with experimental data or other simulations, are made difficult by the number of existing modelling techniques, the number of hypotheses to consider, and the difficulty of carrying out usable real-world experiments. We propose a comparison environment based on an analysis of the modelling process and a generic description of the constituent elements of a simulation (e.g., geometry, loads, stability criterion) as well as of the results (experimental or from a simulation). The generic description of simulations makes it possible to perform comparisons across various modelling techniques (e.g., mass-spring, finite element) implemented on various simulation platforms. Comparisons can be made with real experiments, other simulation results, or previous versions of a model thanks to the common description of results, and rely on a set of metrics to quantify accuracy and computational speed. The description of results also facilitates the exchange of validation experiments. The relevance of the method is demonstrated on several validation and model-comparison experiments. The environment is then used to study the influence of modelling hypotheses and parameters of a tissue-aspiration model used by a device for characterising constitutive laws. This study provides directions for improving the device's predictions.
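The abstract above mentions a set of metrics to quantify accuracy when comparing simulation results described in a common format. As a minimal illustrative sketch, not the thesis's actual implementation, a point-wise root-mean-square error between two displacement fields sampled at the same nodes could serve as one such metric:

```python
import numpy as np

def rmse(displ_a: np.ndarray, displ_b: np.ndarray) -> float:
    """Root-mean-square error between two displacement fields.

    Both arrays have shape (n_nodes, 3); the fields must be sampled
    at the same node locations for the comparison to be meaningful.
    """
    diff = displ_a - displ_b
    # per-node squared Euclidean distance, averaged, then square-rooted
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# Compare a reference field against a uniformly offset one.
ref = np.zeros((100, 3))
sim = ref + 0.3  # hypothetical 0.3 offset in each component
print(rmse(ref, sim))  # sqrt(3 * 0.3**2) ≈ 0.5196
```

In practice such a metric would be computed only after both results have been resampled onto a common set of measurement points, which is exactly what a shared result description enables.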
APA, Harvard, Vancouver, ISO, and other styles
33

Wong, Yuna Huh. "Ignoring the innocent non-combatants in urban operations and in military models and simulations /." Santa Monica, CA : RAND, 2006. http://www.rand.org/pubs/rgsd_issertations/RGSD201/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Daouacher, Maria. "Evaluation of occupant kinematics in crash using the PIPER model : in frontal and oblique crash simulations." Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-74250.

Full text
Abstract:
A child dies in a road traffic crash every fourth minute. In total, 186,300 children under the age of 18 were killed in vehicle accidents in 2012, and even more were severely injured [1]. The World Health Organisation (WHO) concluded that fatalities in traffic accidents are more likely to occur in low- to middle-income countries than in high-income countries [1]. Finite-element-based human body models (HBMs) have enabled an increased understanding of the kinematics and injury mechanisms of child occupants. These models offer higher biofidelity than the previously used crash test dummies. The European project PIPER [6] aimed to develop a model that, combined with a framework, would simplify positioning, and to offer a scalable child HBM. The PIPER framework software and the scalable PIPER model cover child HBMs between the ages of 1.5 and 6 years and are useful tools for the analysis of child occupants. The present study evaluates the kinematics and dummy responses of the 4- and 6-year-old PIPER model. The objective of this master's thesis is to evaluate the PIPER model with respect to its sensitivity to seat belt geometries, child restraint systems (CRS), load cases, and child anthropometrics. The aim is to gain an increased comprehension of the PIPER model and its capability to evaluate occupant kinematics relevant for safety development, with a special focus on seat belt geometry and interaction in frontal impacts. The PIPER model showed good sensitivity to different seat belt geometries regarding the abdominal part of the shoulder belt and to different CRS. The PIPER framework was perceived as hard to use and prone to errors. The kinematic response showed good accuracy compared to previous studies with other crash test dummies; however, recurring error terminations could not be neglected. The PIPER model is limited by the difficulty of positioning it in desired sitting postures within the PIPER framework. Regardless of its disadvantages, it is believed to be a suitable tool for furthering the understanding of occupant kinematics; further studies of different belt routings, child anthropometrics, and dummy responses are needed to validate the outputs the model offers and to establish its robustness in crashworthiness tests.
APA, Harvard, Vancouver, ISO, and other styles
35

NOTARANGELO, Girolama. "Asymptotic mean-square stability analysis and simulations of a stochastic model for the human immune response with memory." Doctoral thesis, Università degli studi di Ferrara, 2011. http://hdl.handle.net/11392/2388820.

Full text
Abstract:
In this thesis we extend a deterministic model of the human immune response, consisting of a non-linear system of differential equations with distributed time delay introduced by Beretta, Kirshner and Marino in 2007, by incorporating stochastic perturbations with multiplicative noise around the equilibria of the deterministic model. Our aim is to study the robustness of the equilibria of the deterministic model with respect to fluctuations that arise from treating the human body as a noisy environment. We do this by analysing the asymptotic mean-square stability of the equilibria of our stochastic model. Our work can be divided roughly into two parts. In the first part we analyse the stability of a general non-linear system of stochastic differential equations with distributed memory terms by studying the stability properties of its linearisation in the first approximation. First, using Halanay's inequalities, we state comparison results useful for investigating the exponential mean-square stability of linear stochastic delay differential systems with distributed memory terms. We then provide conditions under which asymptotic mean-square stability of a non-linear system of stochastic delay differential equations is implied by the exponential mean-square stability of the linearised stochastic delay system in the first approximation. In the second part we apply these theoretical results to investigate the stochastic stability properties of the equilibria of our stochastic model of the human immune response. The theoretical results are illustrated by numerical simulations and an uncertainty and sensitivity analysis of the stochastic model, suggesting that the deterministic model is robust with respect to the stochastic perturbations.
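Numerical illustration of mean-square stability for a delayed system with multiplicative noise typically proceeds by simulating many sample paths and tracking the second moment. As a generic sketch, not the thesis's model, an Euler-Maruyama scheme for a scalar linear stochastic delay differential equation with an assumed constant history could look like:

```python
import numpy as np

def simulate_sdde(a=-2.0, b=0.5, sigma=0.3, tau=1.0,
                  dt=0.01, t_end=10.0, n_paths=2000, seed=0):
    """Euler-Maruyama scheme for the scalar linear SDDE

        dX = (a*X(t) + b*X(t - tau)) dt + sigma*X(t) dW,

    a toy analogue of a linearised delayed system with multiplicative
    noise. Returns a Monte Carlo estimate of E[X(t_end)^2] over
    n_paths sample paths, using the constant history X = 1 on [-tau, 0].
    """
    rng = np.random.default_rng(seed)
    lag = int(round(tau / dt))
    n_steps = int(round(t_end / dt))
    # rows 0..lag hold the history; subsequent rows hold the solution
    hist = np.ones((lag + n_steps + 1, n_paths))
    for k in range(n_steps):
        i = lag + k
        x, x_del = hist[i], hist[i - lag]  # current and delayed states
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)  # Brownian increments
        hist[i + 1] = x + (a * x + b * x_del) * dt + sigma * x * dw
    return float(np.mean(hist[-1] ** 2))

# With a strongly stabilising drift (a + |b| < 0, small sigma) the
# estimated second moment decays towards zero.
print(simulate_sdde())
```

Repeating such runs while varying the noise intensity is one simple way to probe, numerically, how large the stochastic perturbation can be before mean-square stability of an equilibrium is lost.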
APA, Harvard, Vancouver, ISO, and other styles
36

Killen, Bryce A. "Muscular stabilisation of the knee and development of automated and tuned subject-specific musculoskeletal models for gait simulations." Thesis, Griffith University, 2019. http://hdl.handle.net/10072/387282.

Full text
Abstract:
Computational models of the human musculoskeletal system allow researchers to investigate human biomechanics without the need for invasive methods or expensive experiments. These models can be combined with standard motion capture technology to simulate individuals' movement patterns. With relatively little data processing, the model's joint kinematics can be calculated along with joint kinetics, thus characterising an individual's generalised joint coordinates (i.e., joint motion) and external joint loads. With or without the incorporation of electromyograms (EMG) acquired during these tasks, further methods can be employed to estimate muscle forces and subsequently joint contact loading (Lloyd and Besier, 2003; Pandy and Andriacchi, 2010; Pizzolato et al., 2015; Sartori et al., 2012a; Sasaki and Neptune, 2010; Saxby et al., 2016b; Sritharan et al., 2012; Winby et al., 2009). Indeed, substantial research has focused on developing methods for estimating the magnitude of the joint contact loading within the tibiofemoral joint (TFJ) during a range of locomotion tasks (Fregly et al., 2012; Gerus et al., 2013; Kim et al., 2009). Understanding typical TFJ contact loading is crucial, as the magnitude of joint contact loading has been associated with the development and progression of TFJ osteoarthritis (Andriacchi and Mundermann, 2006). Tibiofemoral joint contact loading is primarily caused by muscles, which act to compress the joint (Sasaki, 2010). Although net and grouped muscle contributions to TFJ contact loading have previously been investigated during walking gait (Pandy and Andriacchi, 2010; Sasaki and Neptune, 2010; Saxby et al., 2016b; Sritharan et al., 2012; Winby et al., 2009), other locomotion tasks, such as running and sidestep cutting (herein referred to as sidestepping), remain largely unexplored.
Along with the loading magnitude, other loading parameters may play a vital role in influencing joint health, such as the region of loading in combination with the distribution of loading (Chaudhari et al., 2008). Models previously used to estimate the magnitude of joint contact loading have typically been linearly scaled versions of a generic musculoskeletal model, e.g., “gait2392” (Delp et al., 2007) or TLEM 2.0 (Carbone et al., 2015). These models use generic bone geometries which may not reflect each individual’s anatomy, even after linear scaling (Kainz et al., 2017a). As such, these models may be inappropriate tools to accurately estimate TFJ contact loading magnitudes, as bone geometry influences muscle tendon unit (MTU) force estimates and contact mechanics (Demers et al., 2014; Gerus et al., 2013; Lerner et al., 2015). Furthermore, these models may be inappropriate for the estimation of the region of loading within the TFJ, as this feature is highly dependent on joint anatomy (Lerner et al., 2015). Additionally, limitations within these linearly scaled generic models, particularly the TFJ kinematic models, further hamper their utility for investigating regional loading within the TFJ (Demers et al., 2014). If regional loading is to be investigated, computational models that accurately represent subject-specific three-dimensional (3D) bone and joint geometry, 6-degree-of-freedom (DOF) joint kinematics, and feasible MTU pathways, lengths, and moment arms are required. The overarching aim of this thesis was to investigate features related to TFJ contact loading: first, to estimate individual muscle contributions to medial and lateral TFJ contact loading during walking, running, and sidestepping; second, to develop a framework that automatically creates and tunes highly detailed subject-specific computational musculoskeletal models that can be used to investigate various features of TFJ loading.
To investigate individual muscle contributions to TFJ contact loading during various dynamic locomotion tasks, 54 healthy individuals were recruited as part of an ongoing project. Each participant underwent a standard motion capture gait analysis session, wherein whole-body and segment motions were captured using 3D motion capture. Ground reaction forces were acquired via in-ground force plates and muscle activation patterns via surface EMG. Motion capture data were used within the free and open-source musculoskeletal modelling platform OpenSim (Delp et al., 2007) to estimate model joint kinematics and kinetics. Using established calibrated EMG-informed neuromusculoskeletal modelling methods (Hoang et al., 2018; Pizzolato et al., 2015; Saxby et al., 2016b), muscle forces and subsequently muscle contributions to TFJ contact loading (Winby et al., 2009) were estimated. Results for walking, running, and sidestepping showed that during weight acceptance, the vastus medialis and vastus lateralis muscles dominated the contribution to medial and lateral TFJ contact loading, respectively. During mid-stance and push-off, the contribution to medial and lateral TFJ contact loading was dominated by the medial and lateral gastrocnemii muscles, respectively, for all three tasks. Although there were similarities in which muscles dominated medial and lateral TFJ contact loading, differences were shown in the magnitude of relative muscle contributions between locomotion tasks. These differences were driven by the different kinematics (Novacheck, 1995), kinetics (Novacheck, 1995), muscle activations (Besier et al., 2003a), and stabilisation requirements present in each task. Specifically, quadriceps contributions to medial and lateral TFJ contact loading were higher during running than during walking, while gastrocnemii contributions were higher during walking than during running.
Comparing running and sidestepping, contributions of selected muscles to medial TFJ contact loading were higher during sidestepping, while selected muscle contributions to lateral TFJ contact loading were higher during running. Muscles that dominate the contribution to TFJ contact loading also provide the majority of TFJ stabilisation during these tasks. These results may provide valuable information for rehabilitation following orthopaedic surgeries to restore TFJ stability and prevent future injuries. To further address the overarching aims of this thesis, highly detailed subject-specific musculoskeletal models were required and thus developed. Subject-specific musculoskeletal models may contain joints and MTU pathways that are both physically and physiologically infeasible. First, the articulating bones of joints may interpenetrate, and similarly MTUs can penetrate bone surfaces. Second, joint kinematics can have discontinuities and may not follow the patterns of previously reported cadaveric studies. Likewise, MTU pathways, if inappropriately defined, can create MTU lengths and moment arms (MTU kinematics) that exhibit discontinuities and do not follow the patterns of previously published cadaveric data. This thesis created subject-specific musculoskeletal models that addressed these shortcomings along with those of previous modelling methods. To evaluate the framework developed to create detailed subject-specific musculoskeletal models, a set of 6 individuals from an ongoing study was used. These subjects spanned age (21–32 years), height (160.5–185 cm), and mass (45–89 kg) ranges, and comprised three females and three males. Each subject underwent a standard gait analysis session as well as a comprehensive magnetic resonance imaging (MRI) protocol enabling detailed visualisation of their bones, muscles, cartilages, and other articular structures (i.e., ligaments).
The framework developed within this thesis was built atop a pre-existing open-source framework, the Musculoskeletal Atlas Project (MAP) Client (Zhang et al., 2014), written in Python (Python Software Foundation. Python Language Reference, version 2.7. Available at http://www.python.org). The MAP Client was used in combination with manually segmented MRIs to accurately reconstruct subject-specific bone geometry using direct image segmentation and pre-developed MAP Client statistical shape models (SSMs). Bone geometries were then used to customise a generic OpenSim model (Delp et al., 2007) with subject-specific bone geometries and other personalised features (i.e., joint positions, MTU origins and insertions, and MTU via points). This generated model represents the standard model produced via the MAP Client pathway; however, a number of further developments were required. Required improvements ranged from simple inclusions, such as adding customised marker sets, patellae, and patellofemoral joints (PFJs), to more complex additions involving the definition of 6-DOF TFJ and PFJ kinematics and of MTU origins, insertions, and pathways. Six-DOF (1 independent and 5 coupled) subject-specific TFJ and PFJ mechanisms were built from segmented MRIs. These mechanisms were then automatically tuned to be physically and physiologically feasible using previously published methods (Brito da Luz et al., 2017), which were incorporated into the MAP Client framework. The MTU pathways, i.e., origins, insertions, and wrapping surfaces, were defined using the MAP Client mean SSMs and subject-specific MAP Client-generated bone reconstructions, and were automatically tuned to be physically and physiologically feasible (discussed in detail later). In the present study, MTU origins and insertions were defined using an atlas-based method (Zhang et al., 2015).
The MTU origins and insertions were positioned on the MAP Client mean SSMs in the same anatomical regions (i.e., node points) as the atlas and could be queried using node indices. Node indices were used to define subject-specific MTU origins and insertions on the subject-specific MAP Client-generated bone models. The MTU wrapping surfaces were defined in a two-step process: (i) placement and fitting of wrapping surfaces, and (ii) optimisation of the wrapping surfaces’ geometrical dimensions and positions. Initial selection and placement of wrapping surfaces was done manually based on anatomical regions and landmarks defined as bone mesh elements and node indices on the MAP Client mean SSM bones. Following this manual identification, using each individual’s subject-specific bones, wrapping surfaces were automatically positioned using bone elements and nodes identified through the MAP Client using custom-written Python software. Wrapping surface dimensions were based on analytical shapes automatically fit to anatomical regions of subject-specific MAP Client-generated bones using custom-written Python software. Once wrapping surfaces were placed, their positions, orientations, and dimensions were automatically optimised with the aim of producing MTU pathways that were physically and physiologically feasible. “Physically feasible” MTU pathways (i) do not penetrate bones, and (ii) do not produce non-sensible wrapping scenarios, such as completing a circumferential loop of a wrapping cylinder. “Physiologically feasible” MTU lengths and moment arms (i) closely follow the pattern of measurements taken from cadavers, available in the literature, and (ii) are free of discontinuities. The developed framework created and tuned subject-specific rigid-body musculoskeletal models that largely produced the desired outcomes while overcoming limitations of previous modelling methods.
With respect to the definition of physically feasible MTU pathways, the inclusion of MTU wrapping surfaces fit to each subject’s anatomy largely reduced the number of bone-MTU penetrations. However, prior to tuning, these subject-fitted wrapping surfaces were detrimental to many MTU kinematic metrics (i.e., MTU kinematic smoothness and pattern similarity to literature data). The optimisation routine designed to tune the MTU wrapping surfaces provided further improvements to MTU pathways and, more importantly, improved MTU kinematic smoothness and pattern similarity with literature data. Improvements to both MTU pathways and kinematics were present in models containing simplified (MAP Client standard) as well as subject-specific 6-DOF TFJ and PFJ kinematic mechanisms. The fact that improvements were shown in both models, regardless of the joint model, provided further confidence in the MTU wrapping surface optimisation process that was developed. Additionally, the fact that improvements appeared only after tuning provided further evidence that tuning subject-specific musculoskeletal models is a necessary step in the developed framework. The optimised subject-specific musculoskeletal models containing simplified joint models, compared to those with subject-specific joint models, performed more consistently and often produced favourable MTU kinematics and pathways. Models with simplified joint models often exhibited fewer bone-MTU penetrations, smoother MTU kinematics, and kinematics which more closely matched the pattern of previously reported cadaveric data. The less consistent results seen in models with subject-specific joint kinematics may be due to greater inter-subject variability within the estimated kinematics for both the TFJ and PFJ. Further improvements to both the developed MTU wrapping surface optimisation framework and the joint kinematic models may produce more consistent results for models with subject-specific joint kinematic models.
Although not implemented within this thesis, the models produced using the proposed framework can be used to investigate the regional loading of the TFJ during a range of locomotion and other dynamic tasks. The framework presented here represents a large advancement in the field of subject-specific computational modelling. Along with addressing a number of shortcomings of previous models, the methods presented in this thesis are predominantly automated, which reduces the time and cost burdens typically associated with building subject-specific models; these burdens are related to the collection and segmentation of full lower-limb MRI. Additionally, all developed methods use free and open-source software, facilitating the sharing and wider adoption of these subject-specific methods and models. Improvements to these developed models and methods facilitate their use in both academic research and a number of clinical and medical applications. The highly automated and tuned framework reduces both the time and knowledge burden on the user, providing further advantages to these methods.
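The abstract above repeatedly evaluates MTU lengths and moment arms against cadaveric patterns. A standard way to obtain a moment arm from any MTU length model is the tendon-excursion method, r = -dL/dθ. The sketch below is a generic illustration with a hypothetical length function, not the thesis's code:

```python
def moment_arm(mtu_length, theta, h=1e-4):
    """Moment arm via the tendon-excursion method, r = -dL/dθ,
    estimated with a central finite difference.

    `mtu_length` is any callable returning MTU length (m) for a
    joint angle theta (rad); `h` is the finite-difference step.
    """
    return -(mtu_length(theta + h) - mtu_length(theta - h)) / (2 * h)

# Hypothetical MTU wrapping a circular pulley of radius 0.04 m:
# L(θ) = L0 - 0.04*θ, so the moment arm is constant at 0.04 m.
length = lambda th: 0.35 - 0.04 * th
print(moment_arm(length, 0.5))  # ≈ 0.04
```

Evaluating such a quantity over the joint's range of motion is also how discontinuities in MTU kinematics, of the kind the tuning procedure above is designed to remove, become visible.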
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School Allied Health Sciences
Griffith Health
Full Text
APA, Harvard, Vancouver, ISO, and other styles
37

Hirschberg, Jens. "Simulations of mechanical adaptation and their relationship to stress bearing in skeletal tissue." University of Western Australia. School of Anatomy and Human Biology, 2005. http://theses.library.uwa.edu.au/adt-WU2005.0095.

Full text
Abstract:
[Truncated abstract] In this work a computer simulation program, similar to a finite element program, is used to study the relationship between skeletal tissue structure and function. Though other factors affect the shape of bone (e.g., genetics, hormones, blood supply), the skeleton adapts its shape mainly in response to the mechanical environment to which it is exposed throughout life. The specific relationship between the mechanical environment and the mechanical adaptation response of the skeleton is reviewed. Theories of mechanical adaptation are applied to the sites of tendon attachment to bone (entheses), the adaptation of generalised trabecular bone (i.e., Wolff’s Law of trabecular architecture), sesamoid bones that are often found where a tendon wraps around a bony pulley, and the internal trabecular structure of a whole bony sesamoid such as the patella. The relative importance of compression rather than tension in bone adaptation theories is still not fully understood. Some mechanical adaptation theories suggest that an overwhelming tensile stress at a skeletal location does not stimulate bone deposition, but instead leads to bone resorption. The skeletal locations studied in this work were chosen because they have been proposed to be in tension. Computer simulations involving models are an ideal method for analysing the mechanical environment of a skeletal location: they are able to determine the mechanical stresses at, and the stress patterns around, complex biological situations. This study uses a two-dimensional computer simulation program, Fast Lagrangian Analysis of Continua (Flac), to analyse the stress at these skeletal locations and to test theories of mechanical adaptation over time by simulating physiological adaptation.
The initial purpose of this study is to examine the stress in the skeletal tissue in generalised trabeculae, at anatomical sites where a tendon wraps around a bony pulley, in the trabecular networks that fill the patella, and at tendon attachments. A secondary purpose, which follows directly from the first, is to relate the results of these initial stress analyses to existing and hypothetical skeletal tissue remodelling theories, to suggest how the complex skeletal structures might be generated solely in response to their mechanical environment. The term “remodelling” is used throughout this work to refer to the mechanical adaptation of bone, usually at a bone surface, rather than the internal regeneration of osteons (Haversian systems).
APA, Harvard, Vancouver, ISO, and other styles
38

Calmet, Hadrien. "Large-scale CFD and micro-particles simulations in a large human airways under sniff condition and drug delivery application." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/670232.

Full text
Abstract:
As we inhale, the air drawn through our nose undergoes successive accelerations and decelerations as it is turned, split, and recombined before splitting again at the end of the trachea as it enters the bronchi. Fully describing the dynamic behaviour of the airflow and how it transports inhaled particles poses a severe challenge to computational simulations. The dynamics of unsteady flow in the human large airways during a rapid and short inhalation (a so-called sniff) is a perfect example of perhaps the most complex and violent human inhalation inflow. Combining the flow solution with a Lagrangian computation reveals the effects of flow behaviour and airway geometry on the deposition of inhaled microparticles. Highly detailed large-scale computational fluid dynamics allows all the spatial and temporal scales of the flow to be resolved, thanks to the use of massive computational resources. A highly parallel finite element code running on supercomputers can solve the transient incompressible Navier-Stokes equations on unstructured meshes. Given that the finest mesh contained 350 million elements, the study sets a precedent for large-scale simulations of the respiratory system, proposing an analysis strategy for mean flow, fluctuations, wall shear stresses, energy spectra, and particle deposition during a rapid and short inhalation. We then propose a drug delivery study of nasal-sprayed particles from a commercial product in a human nasal cavity under different inhalation conditions: sniffing, constant flow rate, and breath-hold. Particles were introduced into the flow field with initial spray conditions, including spray cone angle, insertion angle, and initial velocity. Since nasal spray atomizer design determines the particle conditions, fifteen particle size distributions were used, each defined by a log-normal distribution with a different volume mean diameter.
This thesis indicates the potential of large-scale simulations to further the understanding of airway physiological mechanics, which is essential to guide clinical diagnosis, and to deepen the understanding of the flow and delivery of therapeutic aerosols, which could be applied to improve diagnosis and treatment.
During an inhalation, the air passing through our nasal cavity undergoes a series of accelerations and decelerations as it turns, bifurcates and recombines, before dividing again at the level of the trachea, at the entrance to the main bronchi. Providing a precise and accurate description of the dynamic behaviour of this flow, and of the transport of the inhaled particles that enter with it, through computational simulation is a major challenge. The fluid dynamics in the airways during a rapid, short inhalation (also called a sniff) is a perfect example of what is probably the most complex and violent inhalation in humans. Combining the flow solution with a Lagrangian model reveals the behaviour of the flow and the effect of airway geometry on the deposition of inhaled microparticles. Large-scale, high-fidelity computational fluid dynamics makes it possible to resolve all spatial and temporal scales through the use of massive computational resources. A parallel finite element code running on supercomputers can solve the transient incompressible Navier-Stokes equations. Considering that the finest mesh contains 350 million elements, the present study sets a precedent for large-scale airway simulations, proposing an analysis strategy for mean flow, fluctuations, wall shear stresses, energy spectra and particle deposition in the context of a rapid, short inhalation. Building on this analysis, we propose a nasal-spray drug delivery study in a human nasal cavity under different inhalation conditions: sniff, constant flow rate and breath hold.
Particles were introduced into the flow with spray initial conditions, including the spray cone angle, the insertion angle and the initial velocity. Since the design of the nasal spray atomizer determines the particle conditions, fifteen particle size distributions were used, each defined by a log-normal distribution with a different volume mean. This thesis demonstrates the potential of large-scale simulations for a better understanding of the physiological mechanics of the airways. These tools will help improve diagnosis and the corresponding treatments by deepening our understanding of the flow through the airways and of the transport of therapeutic aerosols.
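The log-normal particle size distributions mentioned above are straightforward to sample; a minimal sketch follows, where the 50 μm median diameter and geometric standard deviation of 1.8 are illustrative placeholders, not the thesis's actual spray parameters:

```python
import math
import random
import statistics

def sample_lognormal_diameters(median_um, gsd, n, seed=0):
    """Sample particle diameters (micrometres) from a log-normal
    distribution defined by its median and geometric standard deviation."""
    rng = random.Random(seed)
    mu = math.log(median_um)   # mean of ln(d)
    sigma = math.log(gsd)      # standard deviation of ln(d)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

# One hypothetical spray size distribution: 50 um median, GSD 1.8.
diameters = sample_lognormal_diameters(50.0, 1.8, 100_000)
print(round(statistics.median(diameters), 1))  # close to 50 um
```

A real spray study would weight these samples by volume rather than number; the sketch only shows the number-based sampling step.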
APA, Harvard, Vancouver, ISO, and other styles
39

Agarwala, Vineeta. "Integrating empirical data and population genetic simulations to study the genetic architecture of type 2 diabetes." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:11120.

Full text
Abstract:
Most common diseases have substantial heritable components but are characterized by complex inheritance patterns implicating numerous genetic and environmental factors. A longstanding goal of human genetics research is to delineate the genetic architecture of these traits - the number, frequencies, and effect sizes of disease-causing alleles - to inform mapping studies, elucidate mechanisms of disease, and guide development of targeted clinical therapies and diagnostics. Although vast empirical genetic data has now been collected for common diseases, different and contradictory hypotheses have been advocated about features of genetic architecture (e.g., the contribution of rare vs. common variants). Here, we present a framework which combines multiple empirical datasets and simulation studies to enable systematic testing of hypotheses about both global and locus-specific complex trait architecture. We apply this to type 2 diabetes (T2D).
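Population genetic simulations of the kind combined with empirical data here typically build on the Wright-Fisher model; this sketch (with hypothetical population size, selection coefficient and starting frequency) draws each generation's allele count binomially under selection:

```python
import random

def wright_fisher(n_individuals, s, p0, n_gen, seed=42):
    """Simulate the frequency of a selected allele in a Wright-Fisher
    population: each generation, 2N allele copies are resampled with
    selection shifting the sampling probability."""
    rng = random.Random(seed)
    two_n = 2 * n_individuals
    p = p0
    trajectory = [p]
    for _ in range(n_gen):
        # Selection: the advantaged allele is oversampled in proportion to s.
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        count = sum(rng.random() < p_sel for _ in range(two_n))
        p = count / two_n
        trajectory.append(p)
        if p in (0.0, 1.0):   # fixation or loss
            break
    return trajectory

# Hypothetical parameters: N = 500 diploids, s = 0.05, starting at 10%.
traj = wright_fisher(500, 0.05, 0.10, 200)
```

Repeating such runs across many parameter combinations is what lets simulated architectures be compared against empirical data.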
40

Maas, Ramona [Verfasser], Sigrid [Akademischer Betreuer] Leyendecker, Oliver [Akademischer Betreuer] Röhrle, and Jorge A. C. [Akademischer Betreuer] Ambrósio. "Biomechanics and optimal control simulations of the human upper extremity / Ramona Maas. Gutachter: Sigrid Leyendecker ; Oliver Röhrle ; Jorge A.C. Ambrósio." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2014. http://d-nb.info/1075477530/34.

41

Pesce, Luca [Verfasser], Paolo [Akademischer Betreuer] Carloni, Carsten [Akademischer Betreuer] Honerkamp, and Rachel [Akademischer Betreuer] Nechushtai. "Molecular simulations studies on human NEET proteins, novel targets for pharmaceutical intervention / Luca Pesce ; Paolo Carloni, Carsten Honerkamp, Rachel Nechushtai." Aachen : Universitätsbibliothek der RWTH Aachen, 2019. http://nbn-resolving.de/urn:nbn:de:101:1-2020091006430609584041.

42

Nicolaï, Adrien. "Conformational dynamics and free-energy landscape of human and E.coli Hsp70 chaperones from all-atom and coarse-grained numerical simulations." Thesis, Dijon, 2012. http://www.theses.fr/2012DIJOS060.

Abstract:
The 70 kDa heat shock proteins [Hsp70] are regarded as very important molecular chaperones that assist the folding of nascent proteins as well as the refolding of denatured proteins under stress conditions in the intracellular environment. Hsp70 proteins are present in all organisms, such as humans, bacteria and yeast, and their amino acid sequence is highly conserved across species. [...] Knowledge at the atomic scale of the structures of the hHsp70 protein in its open and closed conformations is therefore an essential prerequisite for understanding the interactions between the protein's NBD and SBD domains and for elucidating the mechanisms of interdomain communication. However, no complete experimental structure of the hHsp70 protein exists. In this thesis, we present all-atom structures of the open and closed conformations of hHsp70, modeled by homology from the X-ray diffraction [XRD] structure of the Hsp110 protein of the yeast Saccharomyces cerevisiae [in the open conformation] and from the Nuclear Magnetic Resonance [NMR] structure of the Hsp70 protein of the bacterium E. coli [in the closed conformation]. These two structural models of human Hsp70, in the open and closed states, were then relaxed by unbiased molecular dynamics at a temperature of 300 K in explicit solvent on timescales of 2.7 and 0.5 μs, respectively.
The conformational heterogeneity of hHsp70 observed in the molecular dynamics simulations was compared with that extracted from Förster resonance energy transfer [FRET] and small-angle X-ray scattering [SAXS] experiments performed on hHsp70 homologs. [...] Once the all-atom 3D structures were resolved, the transition between the open and closed conformations [and vice versa] of Hsp70 proteins was studied using two numerical simulation techniques: a Normal Mode Analysis [NMA] of the Hsp70 protein in each of these two conformations, and a new method developed during this thesis based on the Free-Energy Landscape [FEL] concept. [...] This study also identified the subdomains and key residues that appear to play an important role in the conformational dynamics of the Hsp70 protein in the harmonic approximation. To understand how nucleotide binding in the NBD can trigger a large conformational change of the Hsp70 protein, we performed unbiased all-atom molecular dynamics simulations [on a 2 μs timescale] of the Hsp70 protein of the bacterium E. coli [called E. coli DnaK] under three different nucleotide conditions [ATP-bound, ADP-bound and nucleotide-free]. [...] Finally, by combining the normal mode and free-energy landscape analyses of the Hsp70 protein, we were able to establish a list of residues and local structures involved in the conformational dynamics and communication mechanisms of the hHsp70 protein.
Most of these residues have been identified experimentally as playing a crucial role in the communication between the NBD and SBD domains of Hsp70 homologs. Our study also allowed us to identify new key residues. These new residues could be tested experimentally by mutagenesis, and their positions could be new targets for binding inhibitors of Hsp70's biological functions, notably in the case of cancerous tumors.
The 70 kDa heat shock proteins [Hsp70s] are key molecular chaperones which assist in the correct folding of nascent proteins and refolding of proteins under stress conditions in the intracellular environment. Hsp70s are present in all organisms and are highly conserved between the different species. [...] The conformational dynamics between the two conformations is governed by ATP binding, ATP hydrolysis and nucleotide exchange through an allosteric mechanism which is not fully understood. Knowledge of the conformations of hHsp70 at the atomic level is central to understanding the interactions between its NBD and SBD. However, no complete structure of hHsp70 is known. In the present thesis, we report two conformations of hHsp70, constructed by homology modeling from the yeast Saccharomyces cerevisiae co-chaperone protein Hsp110 [open conformation] and from the bacteria Escherichia coli Hsp70 [closed conformation]. The open and closed conformations of hHsp70 built by homology were relaxed by using unbiased all-atom molecular dynamics [MD] simulations at 300 K in explicit solvent on timescales of 2.7 and 0.5 μs, respectively. The conformational heterogeneity of hHsp70 observed in MD simulations was compared with that extracted from single-molecule Förster resonance energy transfer [FRET] experiments and with small-angle X-ray scattering [SAXS] data of Hsp70 homologs. [...] In the present thesis, the transitions between the open and closed conformations of Hsp70s were studied by using two different computational methods: Normal Mode Analysis [NMA] and a new method developed in the present thesis based on the Free-Energy Landscape [FEL] concept. [...] These collective modes provide a mechanistic representation of the communication between the NBD and the SBD and allow us to identify subdomains and residues that appear to have a critical role in the conformational dynamics of Hsp70s in the harmonic approximation.
Second, in order to understand how nucleotide binding in the NBD of Hsp70 induces a conformational change of the whole protein, we performed unbiased all-atom MD simulations [2 μs] of E. coli Hsp70 [named E. coli DnaK] in three different nucleotide-binding states [ATP-bound, ADP-bound and nucleotide-free]. [...] Finally, by combining the NMA and the FEL analysis, we established a list of the local structures and of the residues relevant for the conformational dynamics and for the interdomain communication in hHsp70. Most of these residues could be related to previous experimental evidence of their role in the interdomain communication between the NBD and SBD domains of Hsp70 homologs, but others were never identified before. All the relevant residues found in MD could be tested experimentally by mutational analysis and could be crucial locations to dock small peptides and to design inhibitors for cancer therapy.
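The free-energy landscape concept referred to above is commonly computed by Boltzmann inversion of a sampled probability distribution, F(x) = -kB·T·ln P(x); the sketch below applies it to a synthetic two-state "trajectory" standing in for an MD collective variable:

```python
import math
import random
from collections import Counter

KB = 0.0019872  # Boltzmann constant in kcal/(mol*K)

def free_energy_landscape(samples, n_bins, temperature=300.0):
    """Estimate F(x) = -kB*T*ln P(x) from sampled values of a collective
    variable, shifted so the global minimum sits at zero."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0
    counts = Counter(min(int((x - lo) / width), n_bins - 1) for x in samples)
    n = len(samples)
    fel = {b: -KB * temperature * math.log(c / n) for b, c in counts.items()}
    f_min = min(fel.values())
    return {b: f - f_min for b, f in fel.items()}

# Synthetic "trajectory": Gaussian fluctuations around two conformations.
rng = random.Random(1)
samples = [rng.gauss(1.0, 0.2) for _ in range(5000)] + \
          [rng.gauss(3.0, 0.2) for _ in range(5000)]
fel = free_energy_landscape(samples, 40)
```

In a real analysis the samples would come from the MD trajectory itself, and the landscape would typically be built over two or more collective variables rather than one.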
43

Pesce, Luca [Verfasser], Paolo [Akademischer Betreuer] Carloni, Carsten [Akademischer Betreuer] Honerkamp, and Rachel [Akademischer Betreuer] Nechushtai. "Molecular simulations studies on human NEET proteins, novel targets for pharmaceutical intervention / Luca Pesce ; Paolo Carloni, Carsten Honerkamp, Rachel Nechushtai." Aachen : Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/1217415971/34.

44

Fierro, Fabrizio [Verfasser], Paolo [Akademischer Betreuer] Carloni, and Marc [Akademischer Betreuer] Spehr. "Human chemosensory G-protein coupled receptors : insight into agonist binding from bioinformatics and multiscale simulations / Fabrizio Fierro ; Paolo Carloni, Marc Spehr." Aachen : Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/1193181550/34.

45

Fierro, Fabrizio [Verfasser], Paolo [Akademischer Betreuer] Carloni, and Marc [Akademischer Betreuer] Spehr. "Human chemosensory G-protein coupled receptors : insight into agonist binding from bioinformatics and multiscale simulations / Fabrizio Fierro ; Paolo Carloni, Marc Spehr." Aachen : Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/1193181550/34.

46

Middlebrooks, Sam E. "Experimental Interrogation Of Network Simulation Models Of Human Task And Workload Performance In A U.S. Army Tactical Operations Center." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/34429.

Abstract:
This thesis research is involved with the development of new methodologies for enhancing the experimental use of computer simulations to optimize predicted human performance in a work domain. Using a computer simulation called Computer modeling Of Human Operator System Tasks (CoHOST) to test the concepts in this research, methods are developed that are used to establish confidence limits and significance thresholds by having the computer model self report its limits. These methods, along with experimental designs that are tailored to the use of computer simulation instead of human subject based research, are used in the CoHOST simulation to investigate the U.S. Army battalion level command and control work domain during combat conditions and develop recommendations about that domain based on the experimental use of CoHOST with these methodologies. Further, with the realization that analytical results showing strictly numerical data do not always satisfy the need for understanding by those who could most benefit from the analysis, the results are further interpreted in accordance with a team performance model and the CoHOST analysis results are mapped to it according to macroergonomic and team performance concepts. The CoHOST computer simulation models were developed based on Army needs stemming from the Persian Gulf war. They examined human mental and physical performance capabilities resulting from the introduction of a new command and control vehicle with modernized digital communications systems. Literature searches and background investigations were conducted, and the CoHOST model architecture was developed based on a taxonomy of human performance. A computer simulation design was implemented with these taxonomic based descriptors of human performance in the military command and control domain using the commercial programming language MicroSaint™.
The original CoHOST development project developed results that suggested that automation alone does not necessarily improve human performance. The CoHOST models were developed to answer questions about whether human operators could operate effectively in a specified work domain. From an analytical point of view this satisfied queries being made from the developers of that work domain. However, with these completed models available, the intriguing possibility now exists to allow an investigation of how to optimize that work domain to maximize predicted human performance. By developing an appropriate experimental design that allows evaluative conditions to be placed on the simulated human operators in the computer model rather than live human test subjects, a series of computer runs are made to establish test points for identified dependent variables against specified independent variables. With these test points a set of polynomial regression equations are developed that describe the performance characteristics according to these dependent variables of the human operator in the work domain simulated in the model. The resulting regression equations are capable of predicting any outcome the model can produce. The optimum values for the independent variables are then determined that produce the maximum predicted human performance according to the dependent variables. The conclusions from the CoHOST example in this thesis complement the results of the original CoHOST study with the prediction that the primary attentional focus of the battalion commander during combat operations is on establishing and maintaining an awareness and understanding of the situational picture of the battlefield he is operating upon. Being able to form and sustain an accurate mental model of this domain is the predicted predominant activity and drives his ability to make effective decisions and communicate those decisions to the other members of his team and to elements outside his team. 
The potential specific benefit of this research to the Army is twofold. First, the research demonstrates techniques and procedures that can be used without any required modifications to the existing computer simulations that allow significant predictive use to be made of the simulation beyond its original purpose and intent. Second, the use of these techniques with CoHOST is developing conclusions and recommendations from that simulation that Army force developers can use with their continuing efforts to improve and enhance the ability of commanders and other decision makers to perform as new digital communications systems and procedures are producing radical changes to the paradigm that describes the command and control work domain. The general benefits beyond the Army domain of this research fall into the two areas of methodological improvement of simulation based experimental procedures and in the actual application area of the CoHOST simulation. Tailoring the experimental controls and development of interrogation techniques for the self-reporting and analysis of simulation parameters and thresholds are topics that bode for future study. The CoHOST simulation, while used in this thesis as an example of new and tailored techniques for computer simulation based research, has nevertheless produced conclusions that deviate somewhat from prevailing thought in military command and control. Refinement of this simulation and its use in an even more thorough simulation based study could further address whether the military decision making process itself or contributing factors such as development of mental models for understanding of the situation is or should be the primary focus of team decision makers in the military command and control domain.
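The regression-and-optimization strategy described above can be sketched in one dimension; the test points and the quadratic order below are invented stand-ins, not CoHOST's actual variables or data:

```python
import numpy as np

# Hypothetical test points: one independent variable (e.g. a normalized
# automation level) against a dependent performance measure reported by
# the simulation runs. These values are invented for illustration.
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.array([0.55, 0.70, 0.78, 0.74, 0.60])

# Fit a polynomial regression equation to the simulation outputs.
coeffs = np.polyfit(x, y, 2)

# Search the fitted surface for the setting that maximizes predicted
# performance, mirroring the optimization step described above.
grid = np.linspace(0.0, 1.0, 1001)
best = grid[np.argmax(np.polyval(coeffs, grid))]
print(best)
```

Replacing the grid search with the roots of the fitted polynomial's derivative would give the optimum analytically.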
Master of Science
47

Pybus, Oliveras Marc 1985. "Detection and classification of positive selection in human populations." Doctoral thesis, Universitat Pompeu Fabra, 2015. http://hdl.handle.net/10803/384314.

Abstract:
Detecting positive selection in genomic regions is a recurrent topic in human population genetics studies. Over the years, many positive selection tests have been implemented to highlight specific genomic patterns left by a selective event when compared to neutral expectations. However, there is little consistency among the regions detected in several genome-wide scans using different tests and/or populations: population-specific demographic dynamics, local genomic features or different types of selection acting along the genome at different times and selective coefficients might explain such discrepancies. The present doctoral thesis is focused on the study of this problem and the development of an innovative solution: a machine-learning classification framework that exploits the combined ability of some selection tests to uncover the different features expected under the hard sweep model, such as sweep completeness and age of onset. The method was calibrated and applied to three reference populations from The 1000 Genomes Project to generate a genome-wide classification map of hard selective sweeps. This study improves the way a selective sweep is detected by overcoming the classical selection vs. no-selection classification strategy, and offers an explanation for the lack of consistency observed among selection tests when applied to real data.
The detection of positive selection in genomic regions has been a recurrent topic in many human population genetics studies. Consequently, many statistical methods have been published in recent years to detect the genomic signals left by a process of molecular selection. However, there is generally little consistency among the regions detected by the different methods: population-specific demographic dynamics, local properties of the analysed regions, or different types of selection acting over different time frames and at different intensities could explain these discrepancies. This doctoral thesis focuses on the study of this problem and on the development of a solution: a positive selection classification method based on machine-learning algorithms. The method combines different positive selection tests to obtain information on the type and mode of selection affecting a given genomic region. This new method shows high sensitivity to positive selection signals and can provide information on the age of the selective event as well as on its final state. This work improves the way positive selection is detected today and provides an explanation for the lack of consistency observed among positive selection detection methods when applied to real data.
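The idea of combining several selection-test scores into a classifier can be illustrated with a toy nearest-centroid model; the two "test statistics" and all training values below are synthetic, and the thesis's actual framework uses different tests and learning algorithms:

```python
import math

# Synthetic training data: (score_test_1, score_test_2) per genomic region.
sweep_regions = [(2.8, 3.1), (3.2, 2.7), (2.5, 3.4), (3.0, 3.0)]
neutral_regions = [(0.2, -0.1), (-0.3, 0.4), (0.1, 0.0), (0.5, -0.2)]

def centroid(points):
    """Mean point of a list of equal-dimension tuples."""
    return tuple(sum(c) / len(points) for c in zip(*points))

def classify(region, centroids):
    """Assign a region to the class with the nearest centroid."""
    return min(centroids, key=lambda label: math.dist(region, centroids[label]))

centroids = {"sweep": centroid(sweep_regions),
             "neutral": centroid(neutral_regions)}
print(classify((2.9, 2.8), centroids))  # prints "sweep"
```

Training such a classifier on coalescent simulations of sweeps of known age and completeness, rather than on hand-picked points, is what allows the predicted class to carry that extra information.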
48

Whiteley, Chris G., and Duu-Jong Lee. "Computer simulations of the interaction of human immunodeficiency virus (HIV) aspartic protease with spherical gold nanoparticles: implications in acquired immunodeficiency syndrome (AIDS)." IOP Publishing Ltd, 2016. http://hdl.handle.net/10962/67083.

Abstract:
The interaction of gold nanoparticles (AuNP) with human immunodeficiency virus aspartic protease (HIVPR) is modelled using a regime of molecular dynamics simulations. The simulations of the 'docking', first as a rigid-body complex, and eventually through flexible-fit analysis, create 36 different complexes from four initial orientations of the nanoparticle strategically positioned around the surface of the enzyme. The structural deviations of the enzymes from the initial X-ray crystal structure during each docking simulation are assessed by comparative analysis of secondary structural elements, root mean square deviations, B-factors, interactive bonding energies, dihedral angles, radius of gyration (Rg), circular dichroism (CD), volume occupied by Cα, electrostatic potentials, solvation energies and hydrophobicities. Normalisation of the data narrows the selection from the initial 36 to one 'final' probable structure. It is concluded that, after computer simulations on each of the 36 initial complexes incorporating the 12 different biophysical techniques, the top five complexes are the same no matter which technique is explored. The significance of the present work is an expansion of an earlier study on the molecular dynamics simulation of the interaction of HIVPR with silver nanoparticles. This work is supported by experimental evidence, since the initial 'orientation' of the AgNP with the enzyme is the same as the 'final' AuNP-HIVPR complex generated in the present study. The findings will provide insight into the forces of the binding of HIVPR to AuNP. It is anticipated that the protocol developed in this study will act as a standard process for the interaction of any nanoparticle with any biomedical target.
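Of the descriptors listed, the root mean square deviation between two conformations is the simplest to reproduce; a minimal sketch over mock coordinates (real use would first superpose the structures, e.g. with the Kabsch algorithm):

```python
import math

def rmsd(coords_a, coords_b):
    """Root mean square deviation between two equal-length lists of
    (x, y, z) coordinates, assumed already superposed."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate sets must match in length")
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Two mock two-atom structures differing by 1 unit in one coordinate.
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(rmsd(a, b))  # sqrt(1/2), about 0.707
```

Tracking this value along each docking simulation is what quantifies how far a complex drifts from the initial crystal structure.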
49

Sjöblom, Björn. "To do what we usually do : An ethnomethodological investigation of intensive care simulations." Thesis, Linköping University, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6382.

Abstract:

Simulators provide great promises of pedagogical utility in a wide array of practices. This study focuses on the use of a full-scale mannequin simulator in the training of personnel at an intensive care unit at a Swedish hospital. In medicine, simulators are a means of doing realistic training without risks for the patient. Simulators for use in intensive care medicine are built to resemble human physiology as closely as possible. In the studied sessions the simulator (a Laerdal SimMan) is set up to be an as-authentic-as-possible replication of the nurses' regular, day-to-day practice.

In examining the training-sessions, it was found that the participants often did other things than “proper” simulation, such as joking or making comments about the simulation. These “transgressional activities” were studied from a perspective of ethnomethodology, using video-recordings of the session. These were transcribed and analyzed in detail using ethnomethodologically informed interaction analysis.

Several themes were developed from the recordings and transcripts. These have in common that they demonstrate the participants’ own achievement and maintenance of the simulation as a distinct activity. The analysis provides an account of how the local order of the simulation is upheld, how it is breached and how the participants find their way back into doing “proper” simulation. It is an overview of the interactional methods that participants utilize to accomplish the simulation as a simulation.

This study concludes with a discussion of how this study can provide a more nuanced view of simulations, in particular the relation between simulated and “real” practices. Notions of realism, authenticity and fidelity in simulations can all be seen to be the participants’ own concern, which informs their activities in the simulation.

50

Ryan, Steven Francis. "Fatigue Simulation of Human Cortical Bone using Non-Homogeneous Finite Element Models to Examine the Importance of Sizing Factors on Damage Laws." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/33067.

Abstract:
Finite element modeling has become a powerful tool in orthopedic biomechanics, allowing simulations with complex geometries. Current fatigue behavior simulations are unable to accurately predict the cycles to failure, creep, and damage or modulus loss even when applied to a bending model. It is thought that the inhomogeneity of the models may be the source of the problem. It has also been suggested that the volume size of the element will affect the fatigue behavior. This is called a stressed volume effect. In this thesis non-homogeneous finite element models were used to examine the effects of “sizing factors” on damage laws in fatigue simulations.

Non-homogeneous finite element models were created from micro computed tomography (CT) images of dumbbell shaped fatigue samples. An automatic voxel meshing technique was used which converted the CT data directly into mesh geometry and material properties.

My results showed that including these sizing factors improved the accuracy of the fatigue simulations on the non-homogeneous models. Using the Nelder-Mead optimization routine, I optimized the sizing factors for a group of 5 models. When these optimized sizing factors were applied to other models, they improved the accuracy of the simulations less than for the original models, but still more than with no sizing factors at all. I found that in our fatigue simulations we could account for the effects of stressed volume and inhomogeneity by including sizing factors in the life and damage laws.
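The stressed-volume idea behind such sizing factors can be illustrated with a Basquin-type fatigue life law scaled by a Weibull weakest-link term; every constant below is a hypothetical placeholder, not one of the thesis's fitted values:

```python
import math

def cycles_to_failure(stress_mpa, volume_mm3, v_ref=1.0, m=10.0,
                      a=1e18, b=6.0):
    """Illustrative Basquin-type fatigue life law with a Weibull
    weakest-link sizing factor: larger stressed volumes raise the
    effective stress and so fail sooner. Constants are hypothetical."""
    size_factor = (volume_mm3 / v_ref) ** (1.0 / m)
    effective_stress = stress_mpa * size_factor
    return a * effective_stress ** (-b)

small = cycles_to_failure(100.0, 1.0)   # reference stressed volume
large = cycles_to_failure(100.0, 8.0)   # 8x larger stressed volume
```

Larger stressed volumes shorten the predicted life, which is the qualitative effect the sizing factors capture.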
Master of Science
