To see the other types of publications on this topic, follow the link: Time-in procedures.

Dissertations / Theses on the topic 'Time-in procedures'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 46 dissertations / theses for your research on the topic 'Time-in procedures.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Wu, Ying-keh. "Empirical Bayes procedures in time series regression models." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/76089.

Full text
Abstract:
In this dissertation empirical Bayes estimators for the coefficients in time series regression models are presented. Due to the uncontrollability of time series observations, explanatory variables in each stage do not remain unchanged. A generalization of the results of O'Bryan and Susarla is established and shown to be an extension of the results of Martz and Krutchkoff. Alternatively, as the distribution function of sample observations is hard to obtain except asymptotically, the results of Griffin and Krutchkoff on empirical linear Bayes estimation are extended and then applied to estimating the coefficients in time series regression models. Comparisons between the performance of these two approaches are also made. Finally, predictions in time series regression models using empirical Bayes estimators and empirical linear Bayes estimators are discussed.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
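The empirical Bayes idea this dissertation builds on can be illustrated with a minimal numerical sketch: per-stage estimates of a regression coefficient are shrunk toward a pooled mean, with the shrinkage weight estimated from the ensemble of estimates itself. All names, data, and the method-of-moments variance estimate below are illustrative assumptions, not the dissertation's actual procedure (and the noise variance is treated as known for simplicity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate K regression "stages" whose slope coefficients share a common
# prior: beta_k ~ N(mu, tau^2).  Toy setup only.
K, n = 20, 30
mu_true, tau_true, sigma = 2.0, 0.5, 1.0
betas = rng.normal(mu_true, tau_true, K)

ols = np.empty(K)   # per-stage OLS slope estimates
se2 = np.empty(K)   # their sampling variances
for k in range(K):
    x = rng.normal(size=n)                    # explanatory variable changes per stage
    y = betas[k] * x + rng.normal(0, sigma, n)
    sxx = np.sum((x - x.mean()) ** 2)
    ols[k] = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    se2[k] = sigma**2 / sxx                   # sigma assumed known here

# Empirical Bayes step: estimate the prior mean and variance from the
# ensemble of OLS estimates, then shrink each estimate toward the pooled mean.
mu_hat = ols.mean()
tau2_hat = max(ols.var(ddof=1) - se2.mean(), 0.0)  # method-of-moments
w = tau2_hat / (tau2_hat + se2)                    # shrinkage weights in [0, 1]
eb = w * ols + (1 - w) * mu_hat

# Shrinkage typically lowers the mean squared error relative to raw OLS.
print(np.mean((ols - betas) ** 2), np.mean((eb - betas) ** 2))
```

Each shrunken estimate is a convex combination of the stage's own OLS estimate and the pooled mean, with more shrinkage where the stage estimate is noisier.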
2

Hertel, Russel. "Time-in: a logical consequence for misbehaving children in primary school." University of Canberra. Education, 1993. http://erl.canberra.edu.au./public/adt-AUC20060207.140309.

Full text
Abstract:
Time-In, A Logical Consequence for Misbehaving Students, studied the effects of a primary school discipline program designed and implemented by a trainee school counsellor. The program delivered a series of logical consequences for students' misbehaviour and a formalized entry point for counselling intervention. The program was based on a critical incident technique that required teachers to issue infringement notices to misbehaving students who failed to respond to warnings or contravened existing rules regarding safe and responsible behaviour. Counselling and system responses (loss of privileges, parent notification, in-school suspension, exclusion) occurred within an established formula dependent on the number of infringements accruing to the student. The school counsellor assumed full responsibility for the collection of infringements, monitoring of ongoing student misbehaviour, parental contact and overall management functions of the host school's discipline program. Counselling sessions and mode of therapy were selected and employed to meet the specific needs of misbehaving students once extended misbehaviour patterns emerged. Data collected throughout the study's duration (one year) indicated a decline in the number of recurring offenders and a drop-off in the number of infringements received by those pupils who continued to transgress school policies regarding safe and responsible behaviour. Five hundred and forty-seven infringement notices were issued during the study, resulting in a total of 83 counselling sessions. Male students dominated all categories of misbehaviour and accounted for 86% of the infringements issued. Seventy-seven per cent of infringements were issued by class teachers to students in the class setting. Three questionnaires were administered at the end of the program to teachers, parents and students.
Both parent and teacher questionnaire results supported Time-In procedures but almost half of the students responded negatively to the continuation of the program. Several hypotheses were posited for this outcome.
APA, Harvard, Vancouver, ISO, and other styles
3

Grewal, Inderraj Singh. "Self-customized electronic procedures for Just In Time training of space telerobotics." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120436.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 97-99).
Astronauts on Long Duration Missions (LDMs) will face complex problems for which they are untrained. Rehearsal may be unfeasible; the task must be completed on the first attempt, and preparation is limited to a review of the electronic procedures (EPs). This motivates Just-In-Time Training (JITT): astronauts learn generic skills, and EPs recombine these skills to train the new task immediately prior to execution. EPs typically have a fixed depth of detail, which ignores individual astronaut competence and the task's hierarchical step/sub-step structure. One astronaut may need details for all sub-steps, whereas another may simply refer to the highest-level steps. By varying depth of detail, an astronaut may be able to customize the EPs to aid task performance by reducing extraneous cognitive load and focusing attention on salient features. The question is whether this approach reduces errors when a space telerobotics task is performed for the first time. To answer this, an experiment was carried out over two days on a desktop robotics simulator. On Day 1, all subjects (n=14) were trained to criterion on robotics skills, and were required to pass a screening assessment for continued participation in the experiment. On Day 2, JITT was given as a 30-minute period for procedure review before performing the task. Control group subjects were given non-alterable procedures, while the treatment group was able to customize. Customized JITT led to a lower error count (M_control = 26.3, M_treatment = 4.6, p = 0.023, mixed regression), and greater accuracy in adhering to the procedures (M_control = 82%, M_treatment = 91%, p = 0.067, Welch's two-sample t-test; SD_control = 11%, SD_treatment = 3.6%, p = 0.014, F-test). Despite attempts to balance subject proficiency between groups, the treatment group was noted to exhibit a lower error rate during Day 1 training.
So, while these results support the perspective that customization reduced extraneous cognitive load, there remains a potential confound of unbalanced groups. This experiment will help inform NASA training protocols for LDMs.
Supported by a National Space Biomedical Research Institute grant, "Customized Refresher and Just-in-Time Training for Long-Duration Spaceflight Crews" (NCC958HFP03801).
by Inderraj Singh Grewal.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
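The abstract above reports a Welch's two-sample t-test on procedure-adherence accuracy. As a minimal sketch, the Welch statistic and its Welch-Satterthwaite degrees of freedom can be computed directly from group summary statistics. The group means and SDs below are taken from the abstract, but the group sizes are assumptions for illustration (the thesis reports n=14 subjects total, not the per-group split used by the test), so the resulting numbers are not the study's.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of
    freedom, computed from group means, standard deviations and sizes."""
    v1, v2 = s1**2 / n1, s2**2 / n2            # squared standard errors
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Means/SDs from the abstract; equal group sizes of 7 are an assumption.
t, df = welch_t(m1=82.0, s1=11.0, n1=7, m2=91.0, s2=3.6, n2=7)
print(round(t, 2), round(df, 1))  # -> -2.06 7.3
```

Note how the unequal variances (11% vs 3.6%) pull the degrees of freedom well below the pooled-variance value of n1 + n2 - 2 = 12, which is exactly why Welch's version is preferred here.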
4

Goode, Matthew Emyr David. "Quality control procedures for GNSS precise point positioning in the presence of time correlated residuals." Thesis, University of Newcastle upon Tyne, 2014. http://hdl.handle.net/10443/2533.

Full text
Abstract:
Precise point positioning (PPP) is a technique for processing Global Navigation Satellite Systems (GNSS) data, often using recursive estimation methods, e.g. a Kalman filter, that can achieve centimetric accuracies using a single receiver. PPP is now the dominant real-time application in the offshore marine positioning industry. For high-precision real-time applications it is necessary to use high-rate orbit and clock corrections in addition to high-rate observations. As Kalman filters require input of process and measurement noise statistics, which are not precisely known in practice, the filter is non-optimal. Geodetic quality control procedures as developed by Baarda in the 1960s are well established and their extension to GNSS is mature. This methodology, largely unchanged since the 1990s, is now being applied to processing techniques that estimate more parameters and utilise many more observations at higher rates. "Detection, Identification and Adaptation" (DIA), developed from an optimal filter perspective and utilising Baarda's methodology, is a widely adopted GNSS quality control procedure. DIA utilises various test statistics, which require observation residuals and their variances. Correct derivation of the local test statistic requires residuals at a given epoch to be uncorrelated with those from previous epochs. It is shown that for a non-optimal filter the autocorrelations between observations at successive epochs are non-zero, which has implications for proper application of DIA. Whilst less problematic for longer data sampling periods, high-rate data used in real-time PPP result in significant time correlations between residuals over short periods. It is possible to model time correlations in the residuals as an autoregressive process. Using the autoregressive parameters, the effect of time correlation in the residuals can be removed, creating so-called whitened residuals and their variances.
Thus a whitened test statistic can be formed, that satisfies the preferred assumption of uncorrelated residuals over time. The effectiveness of this whitened test statistic and its impact on quality control is evaluated.
APA, Harvard, Vancouver, ISO, and other styles
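The whitening step described in the abstract above can be sketched for the simplest autoregressive case, AR(1): estimate the coefficient from the lag-1 autocorrelation of the residual series, then subtract the predictable carry-over from the previous epoch. This is an illustrative toy series, not GNSS filter residuals, and the thesis's actual model may use a higher autoregressive order.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy residual series with AR(1) time correlation: r_t = phi * r_{t-1} + e_t
phi_true, n = 0.8, 5000
e = rng.normal(size=n)
r = np.empty(n)
r[0] = e[0]
for t in range(1, n):
    r[t] = phi_true * r[t - 1] + e[t]

# Estimate the AR(1) coefficient by least squares on successive epochs
phi_hat = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)

# Whitened residuals: remove the part predictable from the previous epoch
w = r[1:] - phi_hat * r[:-1]

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    x = x - x.mean()
    return np.sum(x[1:] * x[:-1]) / np.sum(x ** 2)

# Raw residuals are strongly correlated in time; whitened ones are not.
print(round(phi_hat, 2), round(lag1_autocorr(r), 2), round(lag1_autocorr(w), 2))
```

After whitening, the residual series satisfies (approximately) the uncorrelated-over-time assumption that the local DIA test statistic requires.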
5

Piesciorovsky, Emilio C. "Relay in the loop test procedures for adaptive overcurrent protection." Diss., Kansas State University, 2015. http://hdl.handle.net/2097/20537.

Full text
Abstract:
Doctor of Philosophy
Electrical and Computer Engineering
Anil Pahwa
Noel N. Schulz
Microgrids with distributed generators have changed how protection and control systems are designed. Protection systems in conventional U.S. distribution systems are radial with the assumption that current flows always from the utility source to the end user. However, in a microgrid with distributed generators, currents along power lines do not always flow in one direction. Therefore, protection systems must be adapted to different circuit paths depending on distributed generator sites in the microgrid and maximum fuse ampere ratings on busses. Adaptive overcurrent protection focuses on objectives and constraints based on operation, maximum load demand, equipment, and utility service limitations. Adaptive overcurrent protection was designed to protect the power lines and bus feeders of the microgrid with distributed generators by coordinating fuses and relays in the microgrid. Adaptive overcurrent protection was based on the relay setting group and protection logic methods. Non-real-time simulator (NRTS) and real-time simulator (RTS) experiments were performed with computer-based simulators. Tests with two relays in the loop proved that primary relays tripped faster than backup relays for selectivity coordination in the adaptive overcurrent protection system. Relay test results from tripping and non-tripping tests showed that adaptive inverse time overcurrent protection achieved selectivity, speed, and reliability. The RTS and NRTS with two relays in the loop techniques were described and compared in this work. The author was the first graduate student to implement real-time simulation with two relays in the loop at the Burns & McDonnell - K-State Smart Grid Laboratory. The RTS experimental circuit and project are detailed in this work so other graduate students can apply this technique with relays in the loop in smart grid research areas such as phasor measurement units, adaptive protection, communication, and cyber security applications.
APA, Harvard, Vancouver, ISO, and other styles
6

Kokten, Selen. "Bounding Procedures On Bi-directional Labeling Algorithm Of Time Dependent Vehicle Routing Problem With Time Windows In Branch-and-cut-and-price Framework." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613790/index.pdf.

Full text
Abstract:
In this thesis we consider a Time-Dependent Vehicle Routing Problem with Time Windows (TDVRPTW) which is solved by a Branch-and-Cut-and-Price (BCP) algorithm. The decomposition of an arc-based formulation leads to a set-partitioning problem as the master problem, and a Time-Dependent Elementary Shortest Path Problem with Resource Constraints (TDESPPRC) as the pricing problem. The main contribution of this thesis is the modified fathoming and bounding procedures applied to the bi-directional Time-Dependent Labeling (TDL) algorithm which is used to solve the TDESPPRC. The aim of the proposed fathoming is to solve the TDVRPTW more efficiently by not extending the unproductive labels in the bi-directional TDL algorithm. Moreover, an arc bounding model is introduced to stop the extension of labels as an alternative to the resource bounding used in bi-directional search. In addition, independent of the work on the TDVRPTW, the thesis includes an analysis of the effects of a new customer on the Kuehne+Nagel (K+N) Netherlands Fast Moving Consumer Goods (FMCG) and returns distribution network. This study focused on analyzing the current performance of the distribution network and evaluating scenarios for K+N's future distribution network by means of a simulation study.
APA, Harvard, Vancouver, ISO, and other styles
7

Machado, Renato Bobsin 1976. "Método computacional para acompanhamento e interação remota em tempo real para videocolonoscopia." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/312303.

Full text
Abstract:
Orientadores: Wu Feng Chung, Huei Diana Lee
Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Ciências Médicas
Abstract: Computational methods and tools applied to medicine have contributed to increased efficiency in the storage, transmission and analysis of data related to patients and, consequently, to the accuracy of diagnoses. In this context, to further expand these actions, the creation and consolidation of integrated and collaborative networks to support the medical area has become essential. Another fundamental aspect which must be considered in dealing with information about patients and medical professionals is security, considering criteria such as integrity, confidentiality and authenticity of this information. In this work, we have developed an original telemedicine method for monitoring and remote interaction among medical experts, in real time, during the performance of video-colonoscopic procedures. For data protection and the secure, efficient transmission of information related to patients and their examinations, we have proposed a specific security method. Both methods were implemented in a computing system by applying Web technology and open-source tools. In order to assess the performance of this system, we have evaluated the transmission rate in frames per second (FPS) during the streaming of an uncompressed video. We performed our experiments simulating real environments in two different scenarios with distinct resolutions, one characterized only by the local network and the second considering the local network and the Internet. The analysis of the results has shown that: (1) the proposed method, implemented in the computational system, meets the requirements for data transmission, information security and real-time interaction among the users; (2) the proposed method is applicable for performing video-colonoscopic procedures via local networks and the Internet; and (3) the security method built for this system provides privacy during the transmission of data, video and images, as well as the interaction between the local and remote participants.
Doutorado
Fisiopatologia Cirúrgica
Doutor em Ciências
APA, Harvard, Vancouver, ISO, and other styles
8

Cevik, Deniz. "Determination Of The Change In Building Capacity During Earthquakes." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607085/index.pdf.

Full text
Abstract:
There is a great amount of building stock built in earthquake regions where earthquakes frequently occur. It is very probable that such buildings experience earthquakes more than once throughout their economic life. The motivation of this thesis arose from the lack of procedures to determine the change in building capacity as a result of prior earthquake damage. This study focuses on establishing a method that can be employed to determine the loss in the building capacity after experiencing an earthquake. In order to achieve this goal a number of frames were analyzed under several randomly selected earthquakes. Nonlinear time-history analyses and nonlinear static analyses were conducted to assess the prior and subsequent capacities of the frames under consideration. The structural analysis programs DRAIN-2DX and SAP2000 were employed for this purpose. The capacity curves obtained by these methods were investigated to propose a procedure by which the capacity of previously damaged structures can be determined. For time-history analyses the prior earthquake damage can be taken into account by applying the ground motion histories successively to the structure under consideration. In the case of nonlinear static analyses this was achieved by modifying the elements of the damaged structure in relation to the plastic deformation they experience. Finally a simple approximate procedure was developed using the regression analysis of the results. This procedure relies on the modification of the structure stiffness in proportion to the ductility demand the former earthquake imposes. The proposed procedures were applied to an existing 3D building to validate their applicability.
APA, Harvard, Vancouver, ISO, and other styles
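The abstract above proposes modifying the structure's stiffness in proportion to the ductility demand imposed by the prior earthquake. A minimal sketch of that idea is a secant-stiffness rule for a previously yielded element; the exact relation in the thesis comes from regression analysis of its frame results, so the simple k0/mu rule below is an illustrative stand-in, not the thesis's fitted formula.

```python
def damaged_stiffness(k0, mu):
    """Illustrative stiffness for a previously damaged element: the elastic
    stiffness k0 is reduced in proportion to the ductility demand mu (peak
    displacement / yield displacement) that the prior earthquake imposed.
    A simple secant rule k0/mu is assumed here for illustration."""
    if mu <= 1.0:      # response stayed elastic: no degradation
        return k0
    return k0 / mu     # secant stiffness at the peak displacement

# An element that yielded with ductility demand 2.5 retains 40% of its
# initial stiffness under this rule; an elastic element is unchanged.
print(damaged_stiffness(100.0, 2.5), damaged_stiffness(100.0, 0.9))
```

Applying such a rule element by element yields the modified model whose pushover curve approximates the capacity of the damaged structure, the quantity the successive time-history analyses measure directly.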
9

Kirsch, Gregory Allan. "Transfer of adults from a Catholic Church sui iuris to the Latin church either upon request or at the time of marriage the procedures and formalities involved in procuring a transfer /." Theological Research Exchange Network (TREN), 1999. http://www.tren.com.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tillman, Markus. "Procedural Rendering of Geometry-Based Grass in Real-Time." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2583.

Full text
Abstract:
Since grass is abundant on our planet, it plays an important role in the rendering of many different outdoor scenes. This study focuses on the real-time rendering of many individual grass blades with geometry. As a grass blade in real life is very thin and has a simple shape, it can be represented with only a handful of vertices. The challenge arises when a meadow of grass is to be rendered, as it can contain billions of grass blades. Two different algorithms were developed: one which uses traditional vertex buffers to store and render the grass blades, while the other makes use of textures. Quantitative data was generated from these algorithms. Among this data were images of the scene. These images were used in a questionnaire to collect qualitative information about the grass. All the generated data was then analyzed and interpreted to find advantages and disadvantages of the algorithms. The buffer-based algorithm was found to be slightly more computationally efficient than the texture-based algorithm. The quality of the visual result was perceived to be towards good, while the realism was perceived as mediocre at best. The advantage of the texture-based algorithm is that it allows more options for handling the grass blade data when rendering. Using the terrain data to generate the grass blades was concluded to be advantageous. The realism of the grass could have been improved by using a grass blade texture, as well as by introducing variety in density and grass species.
APA, Harvard, Vancouver, ISO, and other styles
11

Furstenburg, Phillip Pieter. "Purpose-orientated stocking of procedure trolleys saves time in busy Emergency Centres." Master's thesis, Faculty of Health Sciences, 2019. http://hdl.handle.net/11427/31524.

Full text
Abstract:
Background and aim: Inefficient storage and sourcing of routinely required consumables located on procedure trolleys results in time wasted when preparing for common procedures in Emergency Centres, contributing to poor efficiency and quality of care. We designed a novel purpose-orientated procedure trolley and evaluated its impact on time spent on procedure preparation and on efficiency. Methods: In an urban emergency centre, eight participants were measured each day over 24 days, once using the contemporary setup and once using the modified procedure setup. During each simulation, efficiency markers were assessed (time spent on procedure preparation, steps taken, stops made, and the number of times participants had to open a drawer to locate required items). Results: The mean time required to collect the required items for IV cannulation and blood sampling from the purpose-orientated trolley was 22.7 seconds (SD = 3.66) compared to 49.2 seconds (SD = 15.45) using the contemporary trolley, a significant difference in mean collection time between the two trolleys (p < 0.0005). There was also a significant difference (p < 0.0005) in all the other categories: steps taken, stops made, and drawer openings. Conclusion: In our setting, stocking procedure trolleys in a purpose-orientated manner has the potential to improve efficiency by reducing time spent on procedure preparation.
APA, Harvard, Vancouver, ISO, and other styles
12

Grelsson, David. "Tile Based Procedural Terrain Generation in Real-Time : A Study in Performance." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5409.

Full text
Abstract:
Context. Procedural Terrain Generation refers to the algorithmic creation of terrains with limited or no user input. Terrains are an important piece of content in many video games and other forms of simulations. Objectives. In this study a tile-based approach to creating endless terrains is investigated. The aim is to find out whether real-time performance is possible using the proposed method, and what performance increases are possible from utilization of the GPU. Methods. An application that allows the user to walk around on a seemingly endless terrain is created in two versions, one that exclusively utilizes the CPU and one that utilizes both CPU and GPU. An experiment is then conducted that measures the performance of both versions of the application. Results. Results showed that real-time performance is indeed possible for smaller tile sizes on the CPU. They also showed that the application benefits significantly from utilizing the GPU. Conclusions. It is concluded that the tile-based approach works well and creates a functional terrain. However, performance is too poor for the technique to be utilized in e.g. a video game.
APA, Harvard, Vancouver, ISO, and other styles
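The key property a tile-based endless terrain needs is that adjacent tiles agree exactly along their shared edges, which falls out naturally when heights are a deterministic function of world coordinates. A minimal sketch (hash-based value noise; the thesis's actual generation algorithm is not specified here, so this is an illustrative stand-in):

```python
import math

def hash01(ix, iz, seed=1337):
    """Deterministic pseudo-random value in [0, 1) for an integer grid point."""
    h = (ix * 374761393 + iz * 668265263 + seed * 2654435761) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def smooth(t):
    return t * t * (3 - 2 * t)   # smoothstep interpolation weight

def height(x, z):
    """Value noise: smooth bilinear interpolation of hashed corner values.
    Depends only on world coordinates, so any tile sampled from it matches
    its neighbours at shared edges."""
    ix, iz = math.floor(x), math.floor(z)
    fx, fz = smooth(x - ix), smooth(z - iz)
    h00, h10 = hash01(ix, iz), hash01(ix + 1, iz)
    h01, h11 = hash01(ix, iz + 1), hash01(ix + 1, iz + 1)
    return (h00 * (1 - fx) + h10 * fx) * (1 - fz) + \
           (h01 * (1 - fx) + h11 * fx) * fz

def make_tile(tx, tz, size=16):
    """Heightmap for tile (tx, tz); its edge rows/columns overlap neighbours."""
    return [[height(tx * size + i, tz * size + j) for i in range(size + 1)]
            for j in range(size + 1)]

# The shared edge between tile (0,0) and tile (1,0) is identical -> no seams.
a, b = make_tile(0, 0), make_tile(1, 0)
print(all(a[j][16] == b[j][0] for j in range(17)))
```

Because each tile can be generated independently from its coordinates alone, tiles around the camera can be created and discarded on the fly, which is what makes the "seemingly endless" terrain in the study possible.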
13

Þorsteinsson, Jóhannes. "Real Time Procedural Wind Soundscape : The effect of procedural wind soundscape on navigation in virtual 3D space." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11452.

Full text
Abstract:
Sound design with the help of procedurally generated sound in video games has seen a rise in the last few years, given how that method gives us greater freedom in how sound reacts in real time to the game and the players. This research looks into whether there is any difference in how procedural sound, in this case procedurally generated wind, affects the navigation of players in a three-dimensional world, as opposed to static sample-based sound design.
APA, Harvard, Vancouver, ISO, and other styles
14

Venter, Johannes Petrus. "Developing a procedure to optimise cycle time in a manufacturing plant / Venter J.P." Thesis, North-West University, 2011. http://hdl.handle.net/10394/7271.

Full text
Abstract:
Productivity advances generated from ‘lean manufacturing’ are self-evident. Plants that adopt ‘lean’ are more capable of achieving shorter lead times, less waste in the system and higher quality levels. The goal of this study was to ascertain which ‘lean’ tools and techniques are available for use. A matrix was constructed with a summation of the authors who agree that specific ‘lean’ tools will reduce cycle time. It was found that reduced set-up time and waste elimination are most affected by the implementation of ‘lean’ tools and techniques. An empirical study was conducted to confirm the results of the literature study. The respondents’ knowledge of the ‘lean’ tools was also tested. It was found that respondents have a sound understanding of set-up time; they agree that it must be reduced in the plant. Pre-scientific evidence and the responses from the empirical study confirm that there is a substantial amount of waste in the factory. A current-state value-stream map was drawn for a single welded part, Product X. The value stream was analysed to reduce the cycle time in the process, with the focus on set-up time reduction and waste elimination. The future-state value-stream map was drawn, displaying astonishing results. A continuous improvement (kaizen) programme will help reduce the cycle time even further by making use of the other ‘lean’ tools discussed in this study. This programme forms part of the procedure to optimise cycle time.
Thesis (M.B.A.)--North-West University, Potchefstroom Campus, 2012.
APA, Harvard, Vancouver, ISO, and other styles
15

Streyl, Dominik. "Establishment of a standard operating procedure for predicting the time of calving in cattle." Diss., lmu, 2011. http://nbn-resolving.de/urn:nbn:de:bvb:19-133289.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Abdul, Karim Ahmad. "Procedural locomotion of multi-legged characters in complex dynamic environments : real-time applications." Thesis, Lyon 1, 2012. http://www.theses.fr/2012LYO10181/document.

Full text
Abstract:
Multi-legged characters like quadrupeds, arachnids, reptiles, etc. are an essential part of any simulation and they greatly participate in making virtual worlds more life-like. These multi-legged characters should be capable of moving freely and in a believable way in order to convey a better immersive experience for the users. But these locomotion animations are quite rich due to the complexity of the navigated environments and the variety of the animated morphologies, gaits, body sizes and proportions, etc. Another challenge when modeling such animations arises from the lack of motion data, inherent to either the difficulty to obtain them or the impossibility to capture them. This thesis addresses these challenges by presenting a system capable of procedurally generating locomotion animations for dozens of multi-legged characters in real-time and without any motion data. Our system is quite generic thanks to the chosen procedural-based techniques and it is capable of animating different multi-legged morphologies. On top of that, the simulated characters have more freedom while moving, as we adapt the generated animations to the dynamic complex environments in real-time. The main focus is plausible movements that are, at the same time, believable and fully controllable. This controllability is one of the strengths of our system as it gives the user the possibility to control all aspects of the generated animation, thus producing the needed style of locomotion.
APA, Harvard, Vancouver, ISO, and other styles
17

Smith, Robert L. M. Eng Massachusetts Institute of Technology. "Afterimage Toon Blur : procedural generation of cartoon blur for 3D models in real time." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106376.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 48).
One of the notable distinctions of traditional animation techniques is the emphasis placed on motion. Objects in motion often make use of visual stylistic effects to visually enhance the motion, such as speed lines or afterimages. Unfortunately, at present, 2D animation makes much more use of these techniques than 3D animation, which is especially clear in the stylistic differences between 2D and 3D videogames. For 3D videogame designers fond of the look and feel of traditional animation, it would be beneficial if 3D models could emulate that 2D style. In that regard, I propose two techniques that use the location history of 3D models to, in real time, construct non-photorealistic motion blur effects in the vein of 2D traditional animation. With these procedural techniques, designers can maximize the convenience of 3D models while still retaining an aesthetic normally constrained to 2D animation.
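The afterimage idea summarized in this abstract, replaying a model's recent location history as fading ghost copies, can be illustrated with a minimal sketch. The class name, parameter names and fade rule below are invented for illustration and are not taken from the thesis:

```python
from collections import deque

class AfterimageTrail:
    """Keeps a short history of a model's transforms and derives faded
    'ghost' poses from it, in the spirit of 2D-style afterimage motion
    blur applied to 3D models."""

    def __init__(self, max_ghosts=4):
        # Oldest ghost drops off automatically once the buffer is full.
        self.history = deque(maxlen=max_ghosts)

    def record(self, transform):
        """Call once per frame with the model's current transform."""
        self.history.append(transform)

    def ghosts(self):
        """Yield (transform, alpha) pairs, oldest ghost first and most faded."""
        n = len(self.history)
        for i, t in enumerate(self.history):
            alpha = (i + 1) / (n + 1)  # older ghosts are more transparent
            yield t, alpha

trail = AfterimageTrail(max_ghosts=3)
for frame_x in [0.0, 1.0, 2.0, 3.0]:  # stand-in for per-frame positions
    trail.record(frame_x)
print(list(trail.ghosts()))           # three ghosts with rising alpha
```

A renderer would draw each ghost at its recorded transform with the given opacity before drawing the model itself; the thesis's actual techniques operate on full 3D models in real time rather than on scalar positions.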
by Robert L. Smith.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
18

Andersson, Johan, and Katrin Andersson. "Automated Software Testing in an Embedded Real-Time System." Thesis, Linköping University, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9772.

Full text
Abstract:

Today, automated software testing has been implemented successfully in many systems; however, there still exist relatively unexplored areas, such as how automated testing can be implemented in a real-time embedded system. This problem has been the foundation for the work in this master thesis: to investigate the possibility of implementing an automated software testing process for the testing of an embedded real-time system at IVU Traffic Technologies AG in Aachen, Germany.

The system that has been the test object is the on board system i.box.

This report contains the result of a literature study that presents the foundation behind the solution to the problem of the thesis. Questions answered in the study are: when to automate, how to automate, and which pitfalls one should avoid when implementing an automated software testing process in an embedded system.

The process of automating the manual process has contained steps such as constructing test cases for automated testing and analysing whether an existing tool should be used or a unique test system needs to be developed. The analysis, based on the requirements on the test system, the literature study and an investigation of available test tools, led to the development of a new test tool. Due to limited development time and the characteristics of the i.box, the new tool was built based on post-execution evaluation. The tool was therefore divided into two parts: one part that executed the tests and one part that evaluated the results. By implementing an automated test tool it has been proved that it is possible to automate the test process at system-test level in the i.box.

APA, Harvard, Vancouver, ISO, and other styles
19

Alsteris, Leigh, and n/a. "Short-Time Phase Spectrum in Human and Automatic Speech Recognition." Griffith University. School of Microelectronic Engineering, 2006. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20060727.090845.

Full text
Abstract:
Incorporating information from the short-time phase spectrum into a feature set for automatic speech recognition (ASR) may possibly serve to improve recognition accuracy. Currently, however, it is common practice to discard this information in favour of features that are derived purely from the short-time magnitude spectrum. There are two reasons for this: 1) the results of some well-known human listening experiments have indicated that the short-time phase spectrum conveys a negligible amount of intelligibility at the small window durations of 20-40 ms used for ASR spectral analysis, and 2) using the short-time phase spectrum directly for ASR has proven difficult from a signal processing viewpoint, due to phase-wrapping and other problems. In this thesis, we explore the possibility of using short-time phase spectrum information for ASR by considering the two points mentioned above. To address the first point, we conduct our own set of human listening experiments. Contrary to previous studies, our results indicate that the short-time phase spectrum can indeed contribute significantly to speech intelligibility over small window durations of 20-40 ms. Also, the results of these listening experiments, in addition to some ASR experiments, indicate that at least part of this intelligibility may be supplementary to that provided by the short-time magnitude spectrum. To address the second point (i.e., the signal processing difficulties), it may be necessary to transform the short-time phase spectrum into a more physically meaningful representation from which useful features could possibly be extracted. Specifically, we investigate the frequency-derivative (or group delay function, GDF) and the time-derivative (or instantaneous frequency distribution, IFD) as potential candidates for this intermediate representation. We have performed various experiments which show that the GDF and IFD may be useful for ASR.
We conduct several ASR experiments to test a feature set derived from the GDF. We find that, in most cases, these features perform worse than the standard MFCC features. Therefore, we suggest that a short-time phase spectrum feature set may ultimately be derived from a concatenation of information from both the GDF and IFD representations. For best performance, the feature set may also need to be concatenated with short-time magnitude spectrum information. Further to addressing the two aforementioned points, we also discuss a number of other speech applications in which the short-time phase spectrum has proven to be very useful. We believe that an appreciation for how the short-time phase spectrum has been used for other tasks, in addition to the results of our research, will provoke fellow researchers to also investigate its potential for use in ASR.
APA, Harvard, Vancouver, ISO, and other styles
20

Roach, Jeffrey Wayne. "Predicting Realistic Standing Postures in a Real-Time Environment." NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/291.

Full text
Abstract:
Procedural human motion generation is still an open area of research. Most research into procedural human motion focuses on two problem areas: the realism of the generated motion and the computation time required to generate the motion. Realism is a problem because humans are very adept at spotting the subtle nuances of human motion, and so the computer-generated motion tends to look mechanical. Computation time is a problem because the complexity of the motion generation algorithms results in lengthy processing times for greater levels of realism. The balancing human problem poses the question of how to procedurally generate, in real time, realistic standing poses of an articulated human body. This report presents the balancing human algorithm that addresses both concerns: realism and computation time. Realism was addressed by integrating two existing algorithms: one algorithm addressed the physics of the human motion and the second addressed the prediction of the next pose in the animation sequence. Computation time was addressed by identifying techniques to simplify or constrain the algorithms so that the real-time goal can be met. The research methodology involved three tasks: developing and implementing the balancing human algorithm, devising a real-time simulation graphics engine, and then evaluating the algorithm with the engine. An object-oriented approach was used to model the balancing human as an articulated body consisting of systems of rigid bodies connected together with joints. The attributes and operations of the object-oriented model were derived from existing published algorithms.
APA, Harvard, Vancouver, ISO, and other styles
21

Haden, Lonnie A. "A numerical procedure for computing errors in the measurement of pulse time-of-arrival and pulse-width." Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/9849.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Ross, Luke Michael Guskiewicz Kevin M. "Procedural reaction time and balance performance during a dual or single task in healthy collegiate students." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2008. http://dc.lib.unc.edu/u?/etd,1693.

Full text
Abstract:
Thesis (M.A.)--University of North Carolina at Chapel Hill, 2008.
Title from electronic title page (viewed Sep. 16, 2008). "... in partial fulfillment of the requirements for the degree of Master of Arts in the Department of Exercise and Sport Science Athletic Training." Discipline: Exercise and Sports Science; Department/School: Exercise and Sport Science.
APA, Harvard, Vancouver, ISO, and other styles
23

Chung, Sunjung. "Effect of Poor Sanitation Procedures on Cross-Contamination of Animal Species in Ground Meat Products." Chapman University Digital Commons, 2019. https://digitalcommons.chapman.edu/food_science_theses/3.

Full text
Abstract:
While the presence of ≥1% of an undeclared species in ground meat is generally used as an indicator of intentional mislabeling as opposed to cross-contamination, the actual percent of undeclared species resulting from cross-contamination has not been experimentally determined. The objective of this study was to quantify the effect of sanitation procedures on the cross-contamination of animal species in ground meat products, using undeclared pork in ground beef. Pork (13.6 kg) was processed using a commercial grinder, then one of three sanitation treatments was completed ("no cleaning", "partial cleaning", or "complete cleaning"). Next, beef (13.6 kg) was ground using the same equipment. For "no cleaning," beef was ground immediately after pork without any cleaning step; for "partial cleaning," the hopper tray was wiped and excess meat was taken out from the auger; for "complete cleaning," all parts of the grinder were disassembled and thoroughly cleaned with water and soap. A 100-g sample was collected for each 0.91 kg (2 lb) of beef processed with the grinder, and each sanitation treatment was tested twice. Real-time polymerase chain reaction (PCR) was used to quantify pork in ground beef. For "no cleaning," the first 100-g sample of ground beef run through the grinder contained 24.42 ± 10.41% pork, while subsequent samples contained
APA, Harvard, Vancouver, ISO, and other styles
24

Olsson, Viktor. "A search-based approach for procedurally generating player adapted enemies in real-time." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20847.

Full text
Abstract:
An Evolutionary Algorithm was run in real time for the procedural generation of enemies in a third-person, wave-based hack and slash and shoot 'em up game. The algorithm evaluates enemies as individuals based on their effectiveness at battling the player character. Every generation is presented as a new wave of enemies whose properties have been adjusted according to the fitness of the last wave. By constantly making new enemies more adept at the task of defeating the current player, I attempt to automatically and naturally raise the difficulty as the game progresses. The goal is also to improve player satisfaction as a result. By analyzing the response from players and observing the changes of the generated enemies, I determine whether or not this is an appropriate implementation of Evolutionary Algorithms. Results showed that the success of the algorithm varied substantially between tests, giving a number of both failed and successful tests. I go through some of the individual data and draw conclusions on what specific conditions make the algorithm perform desirably.
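The wave-to-wave loop the abstract describes, ranking the last wave by fitness and breeding a mutated next wave from the fittest, could look roughly like the sketch below. The attribute names, fitness function and mutation scheme are invented for illustration and are not taken from the thesis:

```python
import random

def fitness(enemy):
    # Illustrative stand-in: reward damage dealt to the player,
    # penalise enemies that died quickly.
    return enemy["damage_dealt"] - 0.5 * enemy["time_alive_deficit"]

def next_wave(previous_wave, mutation_rate=0.1):
    """Breed the next wave of enemies from the fittest of the last one."""
    ranked = sorted(previous_wave, key=fitness, reverse=True)
    parents = ranked[: max(2, len(ranked) // 2)]   # keep the top half
    wave = []
    for _ in range(len(previous_wave)):
        p = dict(random.choice(parents))           # clone a fit parent
        # Mutate the tunable combat attributes slightly.
        for key in ("speed", "health", "attack"):
            p[key] *= 1.0 + random.uniform(-mutation_rate, mutation_rate)
        wave.append(p)
    return wave
```

Calling `next_wave` each time the player clears a wave yields enemies whose combat attributes drift toward whatever worked against that particular player, which is the difficulty-scaling mechanism the thesis evaluates.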
APA, Harvard, Vancouver, ISO, and other styles
25

Streyl, Dominik [Verfasser], and Holm [Akademischer Betreuer] Zerbe. "Establishment of a standard operating procedure for predicting the time of calving in cattle / Dominik Streyl. Betreuer: Holm Zerbe." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2011. http://d-nb.info/101517034X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Hodkinson, Peter William. "A cross sectional study of procedural sedation in adults in emergency departments with full time clinicians in the Cape Town metropole." Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/2861.

Full text
Abstract:
Includes bibliographical references (leaves 74-77).
The aims of this study were to describe procedural sedation (PS) practice in emergency departments (EDs), with specific emphasis on facilities for PS, characteristics of clinicians performing PS, monitoring equipment and personnel, drug regimes, complications, and clinician satisfaction with present PS practice. A second aim was to propose evidence-based protocols for the use of PS in those EDs where current practices are found to be outdated and not evidence-based.
APA, Harvard, Vancouver, ISO, and other styles
27

Shen, Ningyan 1961. "Determinants of waiting time from initial diagnostic procedure to surgery among women with localized breast cancer in Quebec, 1992-1997." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=38279.

Full text
Abstract:
Background. The early diagnosis and treatment of breast cancer has become an important health care concern. A recent study reported that the median waiting time for breast cancer surgery in Quebec was 34 days, with 14% of women waiting in excess of 90 days. Understanding the determinants of long waits is essential to develop optimum interventions to reduce delay. Objective. The purpose of this study was to identify the determinants of waiting time to surgery among women with primary breast cancer in Quebec between 1992 and 1997. Methods. The target population was all women 20 years and older diagnosed with primary breast cancer in Quebec between 1992 and 1997. The data were compiled from physician fee-for-service claims maintained by the Régie de l'assurance maladie du Québec (RAMQ), the Quebec hospital discharge database (MedEcho), and the 1991 Canadian census. Waiting time was defined as the number of days from the initial breast diagnostic procedure to the first definitive surgical treatment. Three-level hierarchical linear models were used for statistical analysis. Findings. Overall, 13,383 women with primary breast cancer treated by 614 surgeons in 107 hospitals were identified. No statistically significant variation in waiting time was found among hospitals. Longer waiting times for breast cancer surgery were observed for women 50 to 64 years of age, without comorbidity, with a history of benign breast disease, living in lower-education areas, having surgery in a day-surgery setting, having surgery in more recent years, or having surgery performed by a younger surgeon (20 to 49 years old). Women who had surgery performed in a teaching hospital had longer waiting times, and this effect was larger when mastectomy was performed. These results could be used to identify women and care delivery practices at higher risk for delays, which could be the focus of interventions.
APA, Harvard, Vancouver, ISO, and other styles
28

Peuthert, Benjamin M. [Verfasser]. "Mutual relationships in taxation procedure : a survey of family firms' tax compliance, tax auditors' negotiation strategy and time consumption / Benjamin M. Peuthert." Hannover : Technische Informationsbibliothek (TIB), 2017. http://d-nb.info/1152966510/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Charlton, Cade T. "Effects of a Self-Management Procedure Using Student Feedback on Staff Members' use of Praise in an Out-Of-School Time Program." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/4703.

Full text
Abstract:
Out-of-school time (OST) programs are under increasing pressure to improve student achievement. However, OST program administrators face a number of challenges to improving program effectiveness including inconsistent student participation, inexperienced staff members, and a lack of high-quality professional development. OST program administrators can address these challenges by implementing staff development practices that encourage the use of effective instructional strategies. Specific praise is a simple and effective instructional strategy that has been linked to improved student engagement, enhanced academic achievement, and stronger student-teacher relationships. Unfortunately, there have been very few studies examining the effects of interventions designed to increase OST staff members’ use of specific praise. One staff development strategy that could be both feasible and effective is the use of self-management. Although there are a variety of approaches to self-management designed for teachers, not all self-management strategies are effective. One strategy that might increase the feasibility and effectiveness of self-management programs is the use of student feedback. The process of comparing self-evaluations against a third-party standard such as student feedback is called matching in the self-management literature. Students can be a useful source of feedback because they observe their teachers frequently and can report the use of instructional strategies like specific praise. The purpose of this study was to examine the effects of a self-management procedure using student feedback on OST staff members’ use of specific praise. A multiple-baseline design across participants was used to examine the effects of the intervention on specific praise rates. All participants increased their use of specific praise after implementing the self-management procedures. General praise rates became more variable throughout the study. 
These findings provide evidence for a functional effect on specific praise but not for general praise. Teachers reported high levels of satisfaction with the feasibility and effectiveness of the intervention. A statistically significant correlation was found between specific praise rates and student reports of specific praise.
APA, Harvard, Vancouver, ISO, and other styles
30

Fonsêca, Sílvia Ferraz Sobreira. "A concessão da tutela antecipada em face de sua postulação implícita." Universidade Católica de Pernambuco, 2007. http://www.unicap.br/tede//tde_busca/arquivo.php?codArquivo=158.

Full text
Abstract:
The purpose of legal process is to serve as an effective instrument for satisfying the interest of the party who claims, in court, the realization of a substantive right. The great concern of modern procedural science, however, is the effectiveness of judicial protection, given that delay is a major obstacle to the effectiveness of proceedings. In this context, Law No. 8,952/94 introduced into Brazilian law the institute of anticipated relief (tutela antecipada), a provisional and satisfactive measure generally applicable to situations subject to cognizance proceedings. It consists of granting the plaintiff, in whole or in part, what he seeks to obtain at the end of the proceedings, in the hypotheses where there is a well-founded fear of irreparable harm or harm that is difficult to repair, or where abuse of the right of defense or a manifestly dilatory purpose of the defendant is established, in order to ensure the usefulness of the final outcome of the proceedings. This work questions, however, the possibility of granting generic anticipated relief on the basis of an implicit request. The various positions that attempt to resolve this question were analyzed. Some scholars hold that it is not possible, given the express requirement in the statute of a request by the interested party. They further argue that it would offend the traditional principles of procedure, such as the principle of demand (party initiative), the judge's restriction to the relief requested, the dispositive principle, and the principle of judicial impartiality. Other scholars accept the possibility of granting anticipatory relief based on an implicit request in the hypotheses where the judge finds that the requirements demanded by the law are present. 
They argue that, given the public character of the proceedings, these must above all serve the principles of broad access to justice and due process of law, considering the importance of the effectiveness of judicial protection. These authors contend that there would be no violation of constitutional principles, insofar as the request for anticipated relief would be contained, even if implicitly, in the party's initial pleading. After the research undertaken, we conclude that it should be possible to grant anticipated relief even where the request is not express in the initial pleading: weighing the values at stake in the concrete case, the judge must be able to give greater effectiveness to the proceedings, in compliance with constitutional commands.
APA, Harvard, Vancouver, ISO, and other styles
31

Roßbach, André Christian. "Evaluation of Software Architectures in the Automotive Domain for Multicore Targets in regard to Architectural Estimation Decisions at Design Time." Master's thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-163372.

Full text
Abstract:
In this decade the emerging multicore technology will hit the automotive industry. The increasing complexity of multicore systems will make a manual verification of the safety and real-time constraints impossible. For this reason, dedicated methods and tools are absolutely necessary in order to deal with the upcoming multicore issues. A lot of research projects for new hardware platforms and software frameworks for the automotive industry are running nowadays, because the paradigms of the "High-Performance Computing" and "Server/Desktop" domains cannot easily be adapted for embedded systems. One of the difficulties is the early estimation of a hardware platform's suitability for a software architecture design, but hardly any research work tackles that. This thesis presents a procedure to evaluate the plausibility of software architecture estimations and decisions at the design stage. This includes an analysis technique for multicore systems, an underlying graph model to represent the multicore system, and a simulation tool evaluation. This can guide the software architect to design a multicore system in full consideration of all relevant parameters and issues.
APA, Harvard, Vancouver, ISO, and other styles
32

Rempe, Inga [Verfasser], Sven [Akademischer Betreuer] Dänicke, Annette [Akademischer Betreuer] Zeyner, and Winfried [Akademischer Betreuer] Drochner. "Investigations of time-dependent effects of dietary deoxynivalenol and zearalenone exposure on female piglets and in vivo evaluation of a feed decontamination procedure / Inga Rempe. Betreuer: Sven Dänicke ; Annette Zeyner ; Winfried Drochner." Halle, Saale : Universitäts- und Landesbibliothek Sachsen-Anhalt, 2014. http://d-nb.info/1053959362/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Núñez, Antonio. "Sources of Errors and Biases in Traffic Forecasts for Toll Road Concessions." Phd thesis, Université Lumière - Lyon II, 2007. http://tel.archives-ouvertes.fr/tel-00331794.

Full text
Abstract:
The objective of this thesis is to study the sources of discrepancy between the actual traffic in motorways under concession schemes and the traffic forecast ex-ante. The demand forecast for a specific project is the main variable influencing its realization. From a public sector perspective, socio-economic evaluations are driven by demand forecasts, which gives the basis for choose and hierarchy public projects in order to maximise social welfare. From a private sector perspective, traffic forecasts are the base of financial evaluation and toll setting.Despite its importance and the numerous and important developments in the field, the differences of forecast and ex-post traffic are usually very high. Some recent studies show that differences as big as 20% are much more the rule than the exception.A huge amount of uncertainty is associated with the forecasting exercise. First because transport is a derived demand and depends on many exogenous variables, also uncertain; because modelling is and simplification exercise, implies many assumptions and rely on field data, many times incomplete or of low quality; moreover, modelling human (in this case users) behaviour is always a dangerous enterprise.Although these arguments could explain at least the larger part of errors associated with forecasts, one can wonder whether the agents implicated in the forecast would or could use this uncertainty strategically in their favour. In a competition for the field scheme (bids), the bidder may overestimate the demand in order to reduce the toll included in the bid. This strategic behaviour can introduce a high bias in forecasts. Also, overoptimistic (or overpessimistic) forecasters may introduce a bias in the forecast.We propose to focus in turn on the three main groups of agents involved in the demand forecast process. The forecasters, the project promoters and the users. Study all the issues related to them would be a too ambitious (or more concretely impossible) task. 
We then focus on some particular issues related to the modelling of the actors' behaviour in the context of the demand forecast for toll roads.Regarding the forecaster behaviour, we present the results of the first large sample survey on forecasters' perceptions and opinions about forecasting demand for transport projects, based on an on-line survey. We first describe the main characteristics of forecasters. We then describe the last forecast forecasters prepared. We turn to the models forecasters apply, the errors they declare on past forecasts and the main sources of errors according to them. We then describe the forecast environment in terms of pressure forecasters receive. These unique results provide a picture of the world of forecasters and forecasts, allowing for a better understanding of them. We turn then to the study of the optimism and overconfidence in transport forecasts. Optimism and overconfidence in general are recognized human traits. We analyze the overoptimistic bias by comparing the distribution of stated errors with actual errors found in literature; we also compare the own skilful of subjects in doing forecasts with studies showing self-evaluations of a common skill - driving. We finally propose a regression of the competence, quality and errors on the main forecasters' and projects' specific variables.Results show that the distribution of errors transport forecasters state has a smaller average magnitude and a smaller variance than those found in literature. Comparing forecasters perception of their own competence with the results found in literature about drivers skill self-evaluation, however, we could not find a significant difference, meaning that the forecasters' overconfidence is in line with what could be viewed as a normal human overconfidence level.The pressure for results forecasters receive and the strategic manipulation they affirm exist merit a special attention. 
They imply that while forecasters' behavioural biases may exist and should be taken into account when evaluating forecasts, the project promoter may influence forecasts by pressuring forecasters to produce results that better fit his expectations. We then study bidders' strategic behaviour in auctions for road concessions. We address three questions in turn. First, we investigate the overall effects of the winner's curse on bidding behaviour in such auctions. Second, we examine the effects of the winner's curse on contract auctions with differing levels of common-value components. Third, we investigate how the winner's curse affects bidding behaviour in such auctions when we account for the possibility for bidders to renegotiate. Using a unique, self-constructed dataset of 49 worldwide road concessions, we show that the winner's curse effect is particularly strong in toll road concession contract auctions: bidders bid less aggressively when they expect more competition. We observe that this winner's curse effect is even larger for projects where the common uncertainty is greater. Moreover, we show that the winner's curse effect is weaker when the likelihood of renegotiation is higher. While the traditional implication would be that more competition is not always desirable when the winner's curse is particularly strong, we show that, in toll road concession contract auctions, more competition may always be desirable. Modelling aggregate user behaviour, we study long-term traffic maturity. We argue that traffic maturity results from the decreasing marginal utility of transport: the elasticity of individual mobility with respect to income decreases after a certain level of mobility is reached. In order to find evidence of decreasing elasticity we analyse a cross-section time-series sample of 40 French motorway sections.
This analysis shows that decreasing elasticity can be observed in the long term. We then propose a decreasing function for the elasticity of traffic with respect to economic growth, which depends on the traffic level on the road. Although "unconditional" decreasing elasticities have already been proposed in the literature, this is, as far as we know, the first work putting this idea in evidence and giving it a functional form. This model provides a better interpretation of the coupling between traffic and economic growth, and a better long-term forecast. From the disaggregate perspective, we study the main individual modal choice variable, the value of time. The value of travel time savings is a fundamental concept in transport economics, and its size strongly affects the socio-economic evaluation of transport schemes. Financial assessment of tolled roads relies upon the value of time as the main (or even the only) willingness-to-pay measure. Value of time estimates, which primarily represent behavioural values, have then increasingly been used as measures of out-of-pocket money. In this setting, one of the main issues regarding the value of time is its distribution over the population. We apply the Logit, Mixed Logit and Bayesian Mixed Logit models to estimate the value of time in freight transport in France. Estimation with the Mixed Logit faced many difficulties, as expected; these difficulties could be avoided using Bayesian procedures, which also provide the opportunity of properly integrating a priori beliefs. Results show that (1) using a single constant value of time, representative of an average, can lead to demand overestimation; (2) the estimated average value of time for freight transport in France is about 45 euros, depending on the load/empty and hire/own-account variables; which implies that (3) the standard value recommended in France should be revised upwards.
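In the logit framework mentioned above, the value of time is the ratio of the time and cost coefficients, VOT = β_time / β_cost. As a hedged illustration (a plain binary logit fitted by Newton-Raphson on simulated route-choice data, not the thesis's Mixed Logit or Bayesian estimation of French freight data; the coefficient values are assumptions), the sketch below recovers an assumed VOT of 30 EUR/h:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated binary route choice: tolled (faster, costlier) vs free alternative.
n = 5000
dt = rng.uniform(-1.0, 0.0, n)  # travel-time difference in hours (tolled minus free)
dc = rng.uniform(0.0, 10.0, n)  # cost difference in euros (the toll)
b_time, b_cost = -6.0, -0.2     # assumed "true" coefficients -> VOT = 6 / 0.2 = 30 EUR/h
p = 1.0 / (1.0 + np.exp(-(b_time * dt + b_cost * dc)))
y = (rng.uniform(size=n) < p).astype(float)  # 1 = tolled route chosen

# Maximum-likelihood logit fit via Newton-Raphson.
X = np.column_stack([dt, dc])
beta = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))  # choice probabilities
    grad = X.T @ (y - mu)                                   # score vector
    hess = -(X * (mu * (1.0 - mu))[:, None]).T @ X          # Hessian of log-likelihood
    beta -= np.linalg.solve(hess, grad)

vot = beta[0] / beta[1]  # estimated value of time, EUR per hour
```

A single point estimate like this is exactly the "constant value of time" the abstract warns about; the thesis's Mixed Logit approach instead lets β_time vary over the population.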
APA, Harvard, Vancouver, ISO, and other styles
34

Otrusina, Ondřej. "Stavebně technologický projekt rekonstrukce silničního mostu v obci Vlkoš." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2019. http://www.nusl.cz/ntk/nusl-392248.

Full text
Abstract:
This master's thesis deals with the preparation and realization of a road bridge reconstruction in the village of Vlkoš. The introduction of the thesis contains basic information about the structures of the construction. Execution of the main technological stages is addressed in a construction technology study. The thesis also includes a coordination site layout of the construction, a design of the site equipment, a design of the main construction machines and mechanisms, a time and financial schedule, a construction workers deployment plan, and a plan of machine usage. In my thesis I deal mainly with the supporting structure of the bridge; for these stages I prepared technological directives together with an inspection and test plan. Safety and health protection during work is also addressed. I also worked out the technical report of the construction site facilities, including the required drawings.
APA, Harvard, Vancouver, ISO, and other styles
35

Pérez, Forero Fernando José. "Essays in structural macroeconometrics." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/119323.

Full text
Abstract:
This thesis is concerned with the structural estimation of macroeconomic models via Bayesian methods and the economic implications derived from their empirical output. The first chapter provides a general method for estimating structural VAR models. The second chapter applies the method previously developed and provides a measure of the monetary stance of the Federal Reserve for the last forty years, using a pool of instruments and taking into account recent practices known as unconventional monetary policies. It then shows how the monetary transmission mechanism has changed over time, focusing on the period after the Great Recession. The third chapter develops a model of exchange rate determination with dispersed information and regime switches, with the purpose of fitting the observed disagreement in survey data from Japan. The model does a good job in terms of fitting the observed data.
Esta tesis trata sobre la estimación estructural de modelos macroeconómicos a través de métodos Bayesianos y las implicancias económicas derivadas de sus resultados. El primer capítulo proporciona un método general para la estimación de modelos VAR estructurales. El segundo capítulo aplica dicho método y proporciona una medida de la posición de política monetaria de la Reserva Federal para los últimos cuarenta años. Se utiliza una variedad de instrumentos y se tienen en cuenta las prácticas recientes denominadas políticas no convencionales. Se muestra cómo el mecanismo de transmisión de la política monetaria ha cambiado a través del tiempo, centrando la atención en el período posterior a la gran recesión. El tercer capítulo desarrolla un modelo de determinación del tipo de cambio con información dispersa y cambios de régimen, y tiene el propósito de capturar la dispersión observada en datos de encuestas de expectativas de Japón. El modelo realiza un buen trabajo en términos de ajuste de los datos.
APA, Harvard, Vancouver, ISO, and other styles
36

Kardimis, Théofanis. "La chambre criminelle de la Cour de cassation face à l’article 6 de la Convention européenne des droits de l’homme : étude juridictionnelle comparée (France-Grèce)." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE3004.

Full text
Abstract:
La première partie de l’étude est consacrée à l’invocation, intra et extra muros, du droit à un procès équitable. Sont analysés ainsi, dans un premier temps, l’applicabilité directe de l’article 6 et la subsidiarité de la Convention par rapport au droit national et de la Cour Européenne des Droits de l’Homme par rapport aux juridictions nationales. Le droit à un procès équitable étant un droit jurisprudentiel, l’étude se focalise, dans un second temps, sur l’invocabilité des arrêts de la Cour Européenne et plus précisément sur l’invocabilité directe de l’arrêt qui constate une violation du droit à un procès équitable dans une affaire mettant en cause l’Etat et l’invocabilité de l’interprétation conforme à l’arrêt qui interprète l’article 6 dans une affaire mettant en cause un Etat tiers. L’introduction dans l’ordre juridique français et hellénique de la possibilité de réexamen de la décision pénale définitive rendue en violation de la Convention a fait naitre un nouveau droit d’accès à la Cour de cassation lequel trouve son terrain de prédilection aux violations de l’article 6 et constitue peut-être le pas le plus important pour le respect du droit à un procès équitable après l’acceptation (par la France et la Grèce) du droit de recours individuel. 
Quant au faible fondement de l’autorité de la chose interprétée par la Cour Européenne, qui est d’ailleurs un concept d’origine communautaire, cela explique pourquoi un dialogue indirect entre la Cour Européenne et la Cour de cassation est possible sans pour autant changer en rien l’invocabilité de l’interprétation conforme et le fait que l’existence d’un précédent oblige la Cour de cassation à motiver l’interprétation divergente qu’elle a adoptée.La seconde partie de l’étude, qui est plus volumineuse, est consacrée aux garanties de bonne administration de la justice (article 6§1), à la présomption d’innocence (article 6§2), aux droits qui trouvent leur fondement conventionnel dans l’article 6§1 mais leur fondement logique dans la présomption d’innocence et aux droits de la défense (article 6§3). Sont ainsi analysés le droit à un tribunal indépendant, impartial et établi par la loi, le délai raisonnable, le principe de l’égalité des armes, le droit à une procédure contradictoire, le droit de la défense d’avoir la parole en dernier, la publicité de l’audience et du prononcé des jugements et arrêts, l’obligation de motivation des décisions, la présomption d’innocence, dans sa dimension procédurale et personnelle, le « droit au mensonge », le droit de l’accusé de se taire et de ne pas contribuer à son auto-incrimination, son droit d’être informé de la nature et de la cause de l’accusation et de la requalification envisagée des faits, son droit au temps et aux facilités nécessaires à la préparation de la défense, y compris notamment la confidentialité de ses communications avec son avocat et le droit d’accès au dossier, son droit de comparaître en personne au procès, le droit de la défense avec ou sans l’assistance d’un avocat, le droit de l’accusé d’être représenté en son absence par son avocat, le droit à l’assistance gratuite d’un avocat lorsque la situation économique de l’accusé ne permet pas le recours à l’assistance d’un avocat mais les intérêts de la justice 
l’exigent, le droit d’interroger ou faire interroger les témoins à charge et d’obtenir la convocation et l’interrogation des témoins à décharge dans les mêmes conditions que les témoins à charge et le droit à l’interprétation et à la traduction des pièces essentielles du dossier. L’analyse est basée sur la jurisprudence strasbourgeoise et centrée sur la position qu’adoptent la Cour de cassation française et l’Aréopage
The first part of the study is dedicated to the invocation of the right to a fair trial intra and extra muros and, on this basis, it focuses on the direct applicability of Article 6 and the subsidiarity of the Convention and of the European Court of Human Rights. Because the right to a fair trial is "judge-made law", the study also focuses on the invocability of the judgments of the European Court, and more precisely on the direct invocability of a European Court judgment finding a violation of the Convention and on the request for an interpretation in accordance with the European Court's decisions. The possibility of reviewing a criminal judgment made in violation of the Convention has generated a new right of access to the Court of cassation, which particularly concerns violations of the right to a fair trial and is probably the most important step for the respect of the right to a fair trial after the acceptance of the right of individual petition. As for the weak conventional basis of the authority of res interpretata ("autorité de la chose interprétée"), this fact explains why an indirect dialogue between the ECHR and the Court of cassation is possible, but it does not affect the applicant's right to request an interpretation in accordance with the Court's decisions or the duty of the Court of cassation to explain why it has decided to depart from the (non-binding) precedent. The second part of the study is larger than the first and is dedicated to the guarantees of the proper administration of justice (Article 6§1), the presumption of innocence (Article 6§2), the rights which find their conventional basis in Article 6§1 but their logical explanation in the presumption of innocence, and the rights of the defence (Article 6§3).
More precisely, the second part of the study analyzes the right to an independent and impartial tribunal established by law, the right to a hearing within a reasonable time, the principle of equality of arms, the right to adversarial proceedings, the right of the defence to the last word, the right to a public hearing and a public pronouncement of the judgement, the judge's duty to state the reasons for his decision, the presumption of innocence, in both its procedural and personal dimensions, the accused's right to lie, his right to remain silent, his right against self-incrimination, his right to be informed of the nature and the cause of the accusation and the potential re-characterisation of the facts, his right to have adequate time and facilities for the preparation of the defence, including in particular the access to the case-file and the free and confidential communication with his lawyer, his right to appear in person at the trial, his right to defend himself in person or through legal assistance, his right to be represented by his counsel, his right to free legal aid if he hasn't sufficient means to pay for legal assistance but the interests of justice so require, his right to examine or have examined witnesses against him and to obtain the attendance and examination of witnesses on his behalf under the same conditions as witnesses against him, and his right to the free assistance of an interpreter and to the translation of the key documents. The analysis is based on the decisions of the European Court of Human Rights and focuses on the position taken by the French and the Greek Court of Cassation (Areopagus) on each of the above-mentioned rights.
APA, Harvard, Vancouver, ISO, and other styles
37

Staudhammer, Christina. "Statistical procedures for development of real-time statistical process control (SPC) in lumber manufacturing." Thesis, 2004. http://hdl.handle.net/2429/17233.

Full text
Abstract:
High raw material costs and reduced allowable forest harvest levels have created challenges for the Canadian lumber industry. Sawlogs typically comprise 75% of all the costs in a sawmill and insufficient log availability is a widespread problem. Thus, maximum product value and yield from every log processed is an urgent priority. Effective statistical process control (SPC) procedures can greatly enhance product value and yield, ensuring accuracy and minimum waste. However, present procedures are manual in nature. The time and effort required means that only small data samples are collected at infrequent intervals, seriously limiting quality control effectiveness. Attempts to implement automated SPC with non-contact laser range sensors (LRS) have thus far had only limited success. Such systems have given frequent false alarms, prompting tolerances to be set excessively wide. Thus, real problems are often missed for extended periods. The objective of this research was to establish a system for collecting and processing real-time LRS size control data for automated lumber manufacturing. An SPC system was developed that incorporated multi-sensor data filtering procedures, a model with complex structure, and new control charting procedures. The LRS data were first filtered for measurement errors using techniques from image processing. Non-sawing defects were then removed from the data using a sheet-of-light profiling system and defect recognition algorithm. Defect-free filtered data were modeled in a multi-stage process, which explicitly considered multiple sources of variation and a complex correlative structure. New SPC charts were developed that went beyond traditional size control methods, simultaneously monitoring multiple surfaces and specifically targeting common sawing defects. Nineteen candidate control charts were evaluated. For some sawing defects (e.g., machine positioning errors and wedge), traditional X-bar and range charts are suggested. 
These charts were explicitly developed to take into account the components of variance in the model. For other sawing defects (e.g., taper, snipe, flare, and snake), control charts are suggested that are non-traditional. The charts that target these defects were based on the decomposition of LRS measurements into trend, waviness, and roughness. Applying these methods will lead to process improvements in sawmills, so that machines producing defective material can be identified, allowing prompt repairs to be made.
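The traditional X-bar and range charts recommended above for machine positioning errors can be sketched generically. This is a textbook Shewhart chart with standard constants for subgroups of five, applied to simulated board-thickness data with an injected shift; it is not the thesis's variance-component-adjusted charting procedure, and the target, sigma, and shift values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated board-thickness data (mm): 30 subgroups of 5 boards each,
# with a sustained shift injected from subgroup 25 onward.
target, sigma = 25.0, 0.15
data = rng.normal(target, sigma, size=(30, 5))
data[25:] += 0.3  # e.g. a saw positioning error

# Textbook Shewhart constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

xbar = data.mean(axis=1)                     # subgroup means
rng_w = data.max(axis=1) - data.min(axis=1)  # subgroup ranges
xbar_bar, r_bar = xbar.mean(), rng_w.mean()

# Control limits for the X-bar chart and the R chart.
ucl_x, lcl_x = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar

# Subgroups whose mean falls outside the X-bar limits signal a positioning error.
out_of_control = np.where((xbar > ucl_x) | (xbar < lcl_x))[0]
```

The thesis's contribution is precisely that for defects such as taper, snipe, flare, and snake this classical chart is insufficient, and monitoring of decomposed trend, waviness, and roughness components is needed instead.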
Forestry, Faculty of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
38

Shu-Jung, Liao, and 廖淑戎. "Comparison of Time Delay Procedures in Teaching Chained Tasks to Students with Moderate Mental Retardation." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/53499213731688920658.

Full text
Abstract:
Master's
National Taiwan Normal University
Department of Special Education, In-service Master's Program
92
The purpose of this study was to compare the effectiveness (numbers of correct anticipations) and efficiency (sessions, types of errors, minutes of instructional time, and nonwait versus wait errors to criterion) of two time-delay procedures in teaching chained tasks. The procedures were evaluated using an alternating treatment design. Three secondary-age students with moderate mental retardation at the National Lin-Kou Special Education School for the Mentally Retarded were taught two chained tasks before lunch. A different chained task was taught during each of two daily sessions, one task with constant time delay (CTD) and the other with progressive time delay (PTD). Maintenance of the chained tasks was assessed, and a detailed error analysis was conducted. The results indicated that (a) both strategies were effective and produced criterion-level responding in the instructional setting; (b) the CTD procedure was more efficient than PTD; (c) both strategies produced criterion-level responding that was maintained in 1-week follow-up probes; (d) the PTD procedure produced more topographical, sequence, and duration errors than CTD; (e) error data indicated that sequence errors occurred most frequently; (f) in terms of nonwait versus wait errors to criterion, the two strategies showed no significant difference; and (g) the percentage of nonwait errors was higher than the percentage of wait errors.
APA, Harvard, Vancouver, ISO, and other styles
39

Zhu, Lei. "Procedures for identifying and modeling time-to-event data in the presence of non--proportionality." Thesis, 2013. https://hdl.handle.net/2144/14132.

Full text
Abstract:
For both randomized clinical trials and prospective cohort studies, the Cox regression model is a powerful tool for evaluating the effect of a treatment or an explanatory variable on time-to-event outcome. This method assumes proportional hazards over time. Systematic approaches to efficiently evaluate non-proportionality and to model data in the presence of non-proportionality are investigated. Six graphical methods are assessed to verify the proportional hazards assumption based on characteristics of the survival function, cumulative hazard, or the feature of residuals. Their performances are empirically evaluated with simulations by checking their ability to be consistent and sensitive in detecting proportionality or non-proportionality. Two-sample data are generated in three scenarios of proportional hazards and five types of alternatives (that is, non-proportionality). The usefulness of these graphical assessment methods depends on the event rate and type of non-proportionality. Three numerical (statistical testing) methods are compared via simulation studies to investigate the proportional hazards assumption. In evaluating data for proportionality versus non-proportionality, the goal is to test a non-zero slope in a regression of the variable or its residuals on a specific function of time, or a Kolmogorov-type supremum test. Our simulation results show that statistical test performance is affected by the number of events, event rate, and degree of divergence of non-proportionality for a given hazards scenario. Determining which test will be used in practice depends on the specific situation under investigation. Both graphical and numerical approaches have benefits and costs, but they are complementary to each other. Several approaches to model and summarize non-proportionality data are presented, including non-parametric measurements and testing, semi-parametric models, and a parametric approach. 
Some illustrative examples using simulated data and real data are also presented. In summary, we present a systematic approach using both graphical and numerical methods to identify non-proportionality, and provide numerous modeling strategies for when proportionality is violated in time-to-event data.
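One of the graphical checks of the proportional hazards assumption can be illustrated with a minimal sketch: under proportional hazards, the log cumulative-hazard curves of two groups differ by a constant, the log hazard ratio. The example below uses simulated exponential data and a Nelson-Aalen estimator; it is a generic illustration under assumed parameters, not one of the thesis's simulation scenarios:

```python
import numpy as np

rng = np.random.default_rng(2)

def nelson_aalen(times):
    """Nelson-Aalen cumulative hazard at each event time (no censoring here)."""
    t = np.sort(times)
    n_at_risk = np.arange(len(t), 0, -1)  # risk set shrinks by one at each event
    return t, np.cumsum(1.0 / n_at_risk)

# Two groups with genuinely proportional (exponential) hazards, ratio = 2.
n = 4000
t0, H0 = nelson_aalen(rng.exponential(1.0, n))  # baseline hazard 1
t1, H1 = nelson_aalen(rng.exponential(0.5, n))  # hazard 2

# Under proportional hazards, log H1(t) - log H0(t) is constant = log(2).
grid = np.linspace(0.2, 1.0, 9)
diff = np.log(np.interp(grid, t1, H1)) - np.log(np.interp(grid, t0, H0))
```

A plot of `diff` against `grid` that drifts systematically, rather than hovering around a constant, is the graphical signature of non-proportionality that the numerical slope tests formalize.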
APA, Harvard, Vancouver, ISO, and other styles
40

"Design, Development and Evaluation of Collaborative Team Training Method in Virtual Worlds for Time-critical Medical Procedures." Doctoral diss., 2014. http://hdl.handle.net/2286/R.I.24760.

Full text
Abstract:
abstract: Medical students acquire and enhance their clinical skills using various available techniques and resources. As the health care profession has moved towards team-based practice, students and trainees need to practice team-based procedures that involve timely management of clinical tasks and adequate communication with other members of the team. Such team-based procedures include surgical and clinical procedures, some of which are protocol-driven. The cost and time required for individual team-based training sessions, along with other factors, make the training complex and challenging. A great deal of research has been done on medically focused collaborative virtual reality (VR)-based training for protocol-driven procedures as a cost-effective and time-efficient solution. Most VR-based simulators focus on training individual personnel; those that focus on team training provide an interactive simulation for only a few scenarios in a collaborative virtual environment (CVE). These simulators are suited to didactic training for cognitive skills development, but the training sessions require the physical presence of mentors. The problem with this kind of system is that the mentor must be present at the training location (either physically or virtually) to evaluate the performance of the team (or an individual). Another issue is that no efficient methodology exists to provide feedback to trainees during the training session itself (formative feedback). Furthermore, these simulators lack the ability to provide training in the acquisition or improvement of psychomotor skills for tasks that require force or touch feedback, such as cardiopulmonary resuscitation (CPR). To find a potential solution to some of these concerns, a novel training system was designed and developed that integrates sensors into a CVE for time-critical medical procedures.
The system allows the participants to simultaneously access the CVE and receive training from geographically diverse locations. The system is also able to provide real-time feedback and is also able to store important data during each training/testing session. Finally, this study also presents a generalizable collaborative team-training system that can be used across various team-based procedures in medical as well as non-medical domains.
Dissertation/Thesis
Ph.D. Biomedical Informatics 2014
APA, Harvard, Vancouver, ISO, and other styles
41

Fong-Lin and 陳豐霖. "The Role of Real-Time Three-Dimensional Echocardiography in Congenital Septal Defects: Assessing and Guiding the Treatment Procedures for Atrial Septal Defect and Ventricular Septal Defect." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/42657016292355927292.

Full text
Abstract:
Doctoral
Chung Shan Medical University
Institute of Medicine
95
Background: Two-dimensional echocardiography (2DE), even when enhanced by color Doppler technology, has significant limitations in providing precise quantitative information, requiring geometric assumptions to calculate chamber volume, mass, and ejection fraction. Reconstructed three-dimensional echocardiographic (3DE) systems (built from multiple cross-sectional echocardiographic scans) are still cumbersome and time-consuming. Real-time 3DE (RT3DE), with shorter imaging time than 3D reconstruction techniques, can obtain qualitative and quantitative information on heart disorders. Our purpose was to evaluate patients with atrial septal defect (ASD) and ventricular septal defect (VSD). Transcatheter Amplatzer septal occluder (ASO) device closure of atrial septal defects (ASDs) has traditionally been guided by two-dimensional transesophageal echocardiography (2D-TEE) and intracardiac echocardiography (ICE). Real-time three-dimensional transthoracic echocardiography (RT3D-TTE) provides rotating images that define the ASD and adjacent structures, with potential as an alternative to 2D-TEE or ICE for guiding device closure of ASD. This study had two aims. The first was to assess the feasibility and effectiveness of RT3D-TTE in parasternal four-chamber views for guiding ASO device closure of ASD. The second was to investigate the feasibility and potential value of RT3DE as a means of accurately and quantitatively estimating the size of a VSD in correlation with surgical findings. Materials and Methods: From February 2004 to August 2005, 97 patients with septal defects were enrolled. Among these patients, 59 underwent transcatheter ASO device closure of ASD: the first 30 under 2D-TEE guidance with general anesthesia and the remaining 29 under RT3D-TTE guidance with local anesthesia. In addition, 38 patients with VSD were examined with RT3DE.
The 3D image database was post-processed using a TomTec echo 3D workstation, and the results were compared with 2DE measurements and surgical findings. Results: For the atrial septal defect patients, all interventions were successfully completed without complications. The clinical characteristics and transcatheter closure variables of RT3D-TTE and 2D-TEE were compared; echocardiographic visualization of ASD and ASO deployment was adequate with either method. Catheterization laboratory time (39.1±5.4 vs 78.8±14.1 minutes, P < 0.001) and interventional procedure length (7.6±4.2 vs 15.3±2.9 minutes, P < 0.001) were shortened by using RT3D-TTE as compared with 2D-TEE. There was no difference in the rate of closure between the methods, assessed after a 6-month follow-up. The maximal diameter measured by RT3D-TTE and 2D-TEE correlated well with the balloon-stretched ASD size (y = 0.985x + 0.628, r = 0.924 vs y = 0.93x + 2.08, r = 0.885, respectively). For the second aim, RT3DE produced novel views of VSD and improved quantification of the size of the defect. The sizes obtained from 3DE correlated with surgical findings as well as the diameters measured by 2DE did (r = 0.89 vs r = 0.90). Good agreement between blinded observers was achieved, with little interobserver variability. Conclusion: RT3D-TTE may be a feasible, safe, and effective alternative to the standard practice of using 2D-TEE to guide ASO deployment. In addition, RT3DE offers intraoperative visualization of VSD, generating a "virtual sense of depth" without extending examination time. From an LV en face projection, the positions, sizes, and shapes of VSDs can be accurately determined, permitting quantitative recording of VSD dynamics. It is a potentially valuable clinical tool providing precise imaging for surgical and catheter-based closure of difficult perimembranous and single or multiple muscular VSDs.
APA, Harvard, Vancouver, ISO, and other styles
42

Xin, Jianhong. "Court mediation in China : time for reform." Thesis, 2000. http://hdl.handle.net/2429/11020.

Full text
Abstract:
This thesis focuses on the current court mediation institution in China against the worldwide movement of alternative dispute resolution in searching for more consensual and more efficient ways of resolving disputes. When the West is seeking more informality-oriented forms of dispute resolution, China, on the other side of the world, is making great efforts to improve its formal justice system rather than conventional means of dispute resolution like mediation. This thesis attempts to identify the role court mediation has played in Chinese legal history, to explore its current functions, to examine the rationale underlying the system, and to suggest its future reform. The economic analysis of law, particularly Posner's economic analysis of civil procedure and the Coase Theorem, and the ideas of Rawls' theory of justice provide theoretical underpinnings for this study. A review of these classical theories is conducted from the perspectives of efficiency and fairness. Although it is generally understood that both efficiency and fairness cannot be equally achieved by a legal policy, a good one should be concerned with both efficiency and fairness. The article concludes that the balance between efficiency and fairness should be presented in an optimal court mediation form. China's court mediation has remained an important means of dispute resolution, but left much to be improved. The author argues that the current court mediation is not as successful as it declares; it is, in fact, neither efficient nor just. The existing law governing court mediation does not provide a clear function and purpose for court mediation, nor does it consider the efficiency and fairness of court mediation. In practice, although it remains the dominant position in resolving disputes, it is merely a substitute for adjudication rather than a substantive alternative dispute resolution. 
By analyzing the current allocation of cases for different dispute resolutions, the author suggests that considering the overloaded court caseloads and the lack of a variety of alternative dispute resolutions in today's China, court mediation should be preserved, but thoroughly reformed, as a more acceptable and efficient means of resolving disputes. Upon its reform, this conventional means of dispute resolution with Chinese characteristics will play a positive role in the future.
APA, Harvard, Vancouver, ISO, and other styles
43

Chen, Liu-Ming, and 陳劉明. "The Handoff Procedure for Real Time Voice Communication in ZigBee Environment." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/77746323088157426492.

Full text
Abstract:
Master's
National University of Kaohsiung
Master's Program, Department of Electrical Engineering
96
ZigBee is a wireless communication technology for short distances with low data rates. Its advantages include good network scalability and easy deployment; typical applications include precaution sensors, safety monitoring, and automation control. ZigBee applications usually involve low-bitrate, non-real-time transmission, although some research teams have provided real-time voice multi-hop mechanisms that use bandwidth efficiently. End devices can only work with their own Base Station (coordinator/router); when an end device enters the synchronization-loss state, it loses the link to its Base Station, terminates its data transmission, and loses messages. This thesis therefore proposes a handoff procedure for this problem. The mechanism lets the end device detect and prevent link breakage while moving and prevents the ping-pong effect in handoff decision-making. In addition, among the suitable handoff Base Stations it selects the one with the minimum ZigBee depth and the best transmission efficiency. On the Base Station side, the end device is guided to hand off smoothly so that the ongoing real-time voice transmission is maintained. Finally, the procedure was simulated and analyzed with NS-2 (Network Simulator).
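The handoff decision described (avoid the ping-pong effect, prefer the minimum ZigBee depth among suitable Base Stations) can be sketched as a simple signal-strength rule with hysteresis. This is an illustrative model, not the thesis's actual procedure; the margin value, the `Station` fields, and the `choose_station` helper are all assumptions:

```python
from dataclasses import dataclass

HANDOFF_MARGIN_DB = 5.0  # assumed hysteresis margin to suppress ping-pong handoffs

@dataclass
class Station:
    name: str
    rssi_dbm: float  # signal strength measured at the end device
    depth: int       # ZigBee tree depth of the coordinator/router

def choose_station(current: Station, candidates: list[Station]) -> Station:
    """Hand off only when a candidate beats the current Base Station's RSSI by
    the hysteresis margin; among eligible candidates prefer the minimum ZigBee
    depth, breaking ties by the strongest signal."""
    eligible = [s for s in candidates
                if s.rssi_dbm > current.rssi_dbm + HANDOFF_MARGIN_DB]
    if not eligible:
        return current  # stay put: small fluctuations do not trigger a handoff
    return min(eligible, key=lambda s: (s.depth, -s.rssi_dbm))
```

The hysteresis margin is what prevents an end device sitting between two stations of similar strength from bouncing back and forth, while the depth preference favours shorter routes to the coordinator.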
APA, Harvard, Vancouver, ISO, and other styles
44

"Procedural reaction time and balance performance during a dual or single task in healthy collegiate students." THE UNIVERSITY OF NORTH CAROLINA AT CHAPEL HILL, 2008. http://pqdtopen.proquest.com/#viewpdf?dispub=1453002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

"Perturbations in The Arrow of Time: Computational and Procedural Dissociations of Timing and Non-Timing Processes." Doctoral diss., 2018. http://hdl.handle.net/2286/R.I.51606.

Full text
Abstract:
abstract: Timing performance is sensitive to fluctuations in both time and motivation; thus, interval timing and motivation are either inseparable or conflated processes. A behavioral systems model (e.g., Timberlake, 2000) of timing performance (Chapter 1) suggests that timing performance in externally-initiated (EI) procedures conflates behavioral modes differentially sensitive to motivation, but that response-initiated (RI) procedures can potentially dissociate these behavioral modes; that is, timing performance in RI procedures is expected not to conflate them. According to the discriminative RI hypothesis, as initiating-responses become progressively more discriminable from target responses, they increasingly dissociate interval timing and motivation. Rats were trained in timing procedures in which a switch from a Short to a Long interval indexes timing performance (a latency-to-switch, LTS), and were then challenged with pre-feeding and extinction probes. In Experiments 1 (Chapter 2) and 2 (Chapter 3), the discriminability of initiating-responses was varied as a function of time, location, and form for rats trained in a switch-timing procedure. In Experiment 3 (Chapter 4), the generalizability of the discriminative RI hypothesis was evaluated in rats trained in a temporal bisection procedure. In Experiment 3, but not in Experiments 1 and 2, RI enhanced temporal control of LTSs relative to EI. In Experiments 1 and 2, the robustness of LTS medians to pre-feeding, but not to extinction, increased with the discriminability of initiating-responses from target responses. In Experiment 3, the mean LTS was robust to pre-feeding in both EI and RI. In all three experiments, pre-feeding increased LTS variability in EI and RI. These results provide moderate support for the discriminative RI hypothesis, indicating that initiating-responses selectively and partially dissociate interval timing and motivation processes. Implications for the study of cognition and motivation processes are discussed (Chapter 5).
Dissertation/Thesis
Doctoral Dissertation Psychology 2018
APA, Harvard, Vancouver, ISO, and other styles
46

Roßbach, André Christian. "Evaluation of Software Architectures in the Automotive Domain for Multicore Targets in regard to Architectural Estimation Decisions at Design Time." Master's thesis, 2014. https://monarch.qucosa.de/id/qucosa%3A20224.

Full text
Abstract:
In this decade, the emerging multicore technology will reach the automotive industry. The increasing complexity of multicore systems will make manual verification of safety and real-time constraints impossible. For this reason, dedicated methods and tools are necessary to deal with the upcoming multicore issues. Many research projects on new hardware platforms and software frameworks for the automotive industry are under way today, because the paradigms of high-performance computing and the server/desktop domain cannot easily be adapted to embedded systems. One of the difficulties is estimating early whether a hardware platform is suitable for a software architecture design, yet hardly any research addresses this. This thesis presents a procedure for evaluating the plausibility of software-architecture estimations and decisions at the design stage. It comprises an analysis technique for multicore systems, an underlying graph model representing the multicore system, and an evaluation of simulation tools. This can guide the software architect in designing a multicore system with full consideration of all relevant parameters and issues.:Contents List of Figures vii List of Tables viii List of Abbreviations ix 1. Introduction 1 1.1. Motivation 1 1.2. Scope 2 1.3. Goal and Tasks 2 1.4. Structure of the Thesis 3 I. Multicore Technology 4 2. Fundamentals 5 2.1. Automotive Domains 5 2.2. Embedded System 7 2.2.1. Realtime 7 2.2.2. Runtime Predictions 8 2.2.3. Multicore Processor Architectures 8 2.3. Development of Automotive Embedded Systems 9 2.3.1. Applied V-Model 9 2.3.2. System Description and System Implementation 10 2.4. Software Architecture 11 2.5. Model Description of Software Structures 13 2.5.1. Design Domains of Multicore Systems 13 2.5.2. Software Structure Components 13 3.
Multicore Technology for the Automotive Industry 19 3.2.1. High-Performance Computing versus Embedded Systems 19 3.2.2. The Trend for the Automotive Industry 20 3.2.3. Examples of Multicore Hardware Platforms 23 3.3. Approaches for Upcoming Multicore Problems 24 3.3.1. Migration from Single-Core to Multicore 24 3.3.2. Correctness-by-Construction 25 3.3.3. AUTOSAR Multicore System 26 3.4. Software Architecture Simulators 28 3.4.1. Justification for Simulation Tools 28 3.4.2. System Model Simulation Software 29 3.5. Current Software Architecture Research Projects 31 3.6. Portrait of the current Situation 32 3.7. Summary of the Multicore Trend 32 II. Identification of Multicore System Parameters 34 4. Project Analysis to Identify Crucial Parameters 35 4.1. Analysis Procedure 35 4.1.1. Question Catalogue 36 4.1.2. Three Domains of Investigation 37 4.2. Analysed Projects 41 4.2.1. Project 1: Online Camera Calibration 41 4.2.2. Project 2: Power Management 45 4.2.3. Project 3: Battery Management 46 4.3. Results of Project Analysis 51 4.3.1. Ratio of Parameter Influence 51 4.3.2. General Influences of Parameters 53 5. Abstract System Model 54 5.1. Requirements for the System-Model 54 5.2. Simulation Tool Model Evaluation 55 5.2.1. System Model of PRECISION PRO 55 5.2.2. System Model of INCHRON 57 5.2.3. System Model of SymTA/S 58 5.2.4. System Model of Timing Architects 59 5.2.5. System Model of AMALTHEA 60 5.3. Concept of Abstract System Model 62 5.3.1. Components of the System Model 63 5.3.2. Software Function-Graph 63 5.3.3. Hardware Architecture-Graph 64 5.3.4. Specification-Graph for Mapping 65 6. Testcase Implementation 67 6.1. Example Test-System 68 6.1.1. Simulated Test-System 70 6.1.2. Testcases 73 6.2. Result of Tests 74 6.2.1. Processor Core Runtime Execution 74 6.2.2. Communication 75 6.2.3. Memory Access 76 6.3. Summary of Multicore System Parameters Identification 78 III. Evaluation of Software Architectures 80 7. Estimation-Procedure 81 7.1. 
Estimation-Procedure in a Nutshell 81 7.2. Steps of Estimation-Procedure 82 7.2.1. Project Analysis 82 7.2.2. Timing and Memory Requirements 83 7.2.3. System Modelling 84 7.2.4. Software Architecture Simulation 85 7.2.5. Results of a Validated Software Architecture 86 7.2.6. Feedback of Partly Implemented System 88 8. Implementation and Simulation 89 8.1. Example Project Analysis – Online Camera Calibration 89 8.1.1. Example Project Choice 90 8.1.2. OCC Timing Requirements Analysis 90 8.2. OCC Modelling 94 8.2.1. OCC Software Function-Graph 95 8.2.2. OCC Hardware Architecture 96 8.2.3. OCC Mapping – Specification-Graph 101 8.3. Simulation of the OCC Model with Tool Support 102 8.3.1. Tasks for Tool Setup 103 8.3.2. PRECISION PRO 105 8.3.3. INCHRON 107 8.3.4. SymTA/S 108 8.3.5. Timing Architects 112 8.3.6. AMALTHEA 115 8.4. System Optimisation Possibilities 116 8.5. OCC Implementation Results 117 9. Results of the Estimation-Procedure Evaluation 119 9.1. Tool-Evaluation Results 119 9.2. Findings of Estimation, Simulation and ECU-Behavior. 123 9.2.1. System-Specific Issues 123 9.2.2. Communication Issues 123 9.2.3. Memory Issues 124 9.2.4. Timing Issues 124 9.3. Summary of the Software Architecture Evaluation 125 10.Summary and Outlook 127 10.1. Summary 127 10.2. Usability of the Estimation-Procedure 128 10.3. Outlook and Future Work 129 11. Bibliography xii IV. Appendices xxi A. Appendices xxii A.1. Embedded Multicore Technology Research Projects xxii A.1.1. Simulation Software xxii A.1.2. Multicore Software Research Projects xxiii A.2. Testcase Implementation Results xxvi A.2.1. Function Block Processor Core Executions xxvi A.2.2. Memory Access Mechanism xxvii A.2.3. Memory Access Timings of Different Datatypes xxviii A.2.4. Inter-Processor Communication xxix A.3. Further OCC System Description xxxii A.3.1. OCC Timing Requirements of the FB xxxii A.3.2. INCHRON Validation Results xxxiv A.4. Detailed System Optimisation xxxv A.4.1. 
Optimisation through Hardware Alternation xxxv A.4.2. Optimisation through Mapping Alternation xxxv A.4.3. Optimisation of Execution Timings xxxvii B. Estimation-Procedure Engineering Paper xl B.1. Components and Scope of Software Architecture xl B.2. Estimation-Procedure in a Nutshell xlii B.3. Project Analysis xliii B.3.1. System level analysis xliv B.3.2. Communication Domain xlv B.3.3. Processor Core Domain xlvi B.3.4. Memory Domain xlvii B.3.5. Timing and Memory Requirements xlviii B.4. System Modelling xlix B.4.1. Function Model xlix B.4.2. Function-Graph l B.4.3. Possible ECU Target l B.4.4. Architecture-Graph l B.4.5. Software Architecture Mapping li B.4.6. Domain Specific Decision Guide lii B.5. Software Architecture Simulation liii B.6. Results of a Simulated Software Architecture lv B.7. Feedback of Partly Implemented System for Software Architecture Improvement lvi B.8. Benefits of the Estimation-Procedure lvii
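The graph model outlined in the abstract, a software function-graph, a hardware architecture-graph, and a specification-graph mapping one onto the other, can be illustrated with a minimal sketch. The names, fields, and the utilization check below are assumptions for illustration, not the thesis's actual model:

```python
from dataclasses import dataclass

@dataclass
class Function:            # node of the software function-graph
    name: str
    wcet_ms: float         # worst-case execution time per activation
    period_ms: float       # activation period

def core_utilization(mapping: dict[str, str],
                     functions: list[Function]) -> dict[str, float]:
    """A specification-graph in miniature: function name -> core name.

    Returns the CPU utilization each core would see under this mapping,
    a first plausibility check for a design-time architecture decision
    (a core with utilization above 1.0 cannot be schedulable).
    """
    util: dict[str, float] = {}
    for f in functions:
        core = mapping[f.name]
        util[core] = util.get(core, 0.0) + f.wcet_ms / f.period_ms
    return util
```

Trying alternative mappings and comparing the resulting utilizations mirrors, in a toy form, what the simulation tools evaluated in the thesis do at much finer granularity (communication, memory access, timing chains).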
APA, Harvard, Vancouver, ISO, and other styles