Dissertations / Theses on the topic 'Effective cone'




Consult the top 50 dissertations / theses for your research on the topic 'Effective cone.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Okazaki, Ryotaro. "On an effective determination of Shintani's decomposition of the cone R+n." 京都大学 (Kyoto University), 1992. http://hdl.handle.net/2433/86217.

2

Koyama, Shuji, Takahiko Aoyama, Nobuhiro Oda, and Chiyo Yamauchi-Kawaura. "Radiation dose evaluation in tomosynthesis and C-arm cone-beam CT examinations with an anthropomorphic phantom." American Institute of Physics, 2009. http://hdl.handle.net/2237/14184.

3

Han, Sangmok. "Improved source code editing for effective ad-hoc code reuse." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/67583.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 111-113).
Code reuse is essential for productivity and software quality. Code reuse based on abstraction mechanisms in programming languages is a standard approach, but programmers also reuse code by taking an ad-hoc approach, in which the text of code is reused without abstraction. This thesis focuses on improving two common ad-hoc code reuse approaches, code template reuse and code phrase reuse, because they are not only frequent but also, more importantly, pose a risk to quality and productivity in software development, the original aims of code reuse.

The first ad-hoc approach, code template reuse, refers to programmers reusing an existing code fragment as a structural template for similar code fragments. Programmers use this approach because abstraction mechanisms require extra code and preplanning. When similar code fragments, differing only in several code tokens, are reused just a couple of times, it makes sense to reuse the text of one of the fragments as a template for the others. Unfortunately, code template reuse poses a risk to software quality because it requires repetitive and tedious editing steps. Should a programmer forget to perform any of the editing steps, he may introduce program bugs, which are difficult to detect by visual inspection, code compilers, or other existing bug detection methods.

The second ad-hoc approach, code phrase reuse, refers to programmers reusing common code phrases by retyping them, often regularly, using code completion. Programmers use this approach because no abstraction mechanism is available for reusing short yet common code phrases. Unfortunately, code phrase reuse limits productivity because retyping the same code phrases is time-consuming even when a code completion system is used. Existing code completion systems complete only one word at a time. As a result, programmers have to repeatedly invoke code completion, review the completion candidates, and select the correct candidate as many times as there are words in a code phrase.

This thesis presents new models, algorithms, and user interfaces for effective ad-hoc code reuse. First, to address the risk posed by code template reuse, it develops a method for detecting program bugs in similar code fragments by analyzing sequential patterns of code tokens. To proactively reduce program bugs introduced during code template reuse, it also proposes an error-preventive code editing method that reduces the number of code editing steps based on cell-based text editing. Second, to address the productivity limitation posed by code phrase reuse, this thesis develops an efficient code phrase completion method, which accelerates reuse of common code phrases by taking non-predefined abbreviated input and expanding it into a full code phrase. The method uses a statistical model, a hidden Markov model, trained on a corpus of code and abbreviation examples.

Finally, the new methods for bug detection and code phrase completion are evaluated through corpus and user studies. In 7 well-maintained open source projects, the bug detection method found 87 previously unknown program bugs. The ratio of actual bugs to bug warnings (precision) was 47% on average, eight times higher than previous similar methods. The code phrase completion method is evaluated on the basis of accuracy and time savings. It achieved 99.3% accuracy in a corpus study, and 30.4% time savings and 40.8% keystroke savings in a user study, when compared to a conventional code completion method. At a higher level, this work demonstrates the power of a simple sequence-based model of source code: analyzing vertical sequences of code tokens across similar code fragments proves useful for accurate bug detection, while learning to infer horizontal sequences of code tokens proves useful for efficient code completion. Ultimately, this work may aid the development of other sequence-based models of source code, as well as different analysis and inference techniques, which can solve previously difficult software engineering problems.
by Sangmok Han.
Ph.D.
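As a concrete illustration of the completion technique described above, here is a minimal sketch, not the author's implementation, of HMM-based abbreviation expansion: hidden states are full code tokens, the typed fragments are observations, and Viterbi decoding recovers the most likely code phrase. Every token and probability below is a toy value standing in for statistics a real system would learn from a corpus.

```python
from math import log

# Toy parameters a real system would estimate from a corpus of code and
# abbreviation examples; every token and probability here is made up.
STATES = ["for", "int", "i", "in", "range"]
START = {"for": 0.6, "int": 0.4}                 # P(first token)
TRANS = {"for": {"int": 0.7, "i": 0.3},
         "int": {"i": 1.0},
         "i":   {"in": 1.0},
         "in":  {"range": 1.0}}                  # P(next token | token)
EMIT = {"for":   {"f": 0.8, "fo": 0.2},
        "int":   {"i": 0.5, "in": 0.5},
        "i":     {"i": 1.0},
        "in":    {"in": 0.9, "i": 0.1},
        "range": {"r": 0.7, "rng": 0.3}}         # P(fragment | token)
TINY = 1e-9                                      # floor for unseen events

def expand(fragments):
    """Viterbi decoding: most likely token sequence for the fragments."""
    trellis = [{s: (log(START.get(s, TINY) * EMIT[s].get(fragments[0], TINY)),
                    None) for s in STATES}]
    for frag in fragments[1:]:
        prev_col = trellis[-1]
        col = {}
        for s in STATES:
            emit = EMIT[s].get(frag, TINY)
            col[s] = max((lp + log(TRANS.get(p, {}).get(s, TINY) * emit), p)
                         for p, (lp, _) in prev_col.items())
        trellis.append(col)
    # Trace the best path backwards through the stored backpointers.
    state = max(trellis[-1], key=lambda s: trellis[-1][s][0])
    path = [state]
    for col in reversed(trellis[1:]):
        state = col[state][1]
        path.append(state)
    return path[::-1]

print(expand(["f", "i", "i", "in", "rng"]))
# -> ['for', 'int', 'i', 'in', 'range']
```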
4

Dunsmore, Alastair Peter. "Investigating effective inspection of object-oriented code." Thesis, University of Strathclyde, 2002. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=9349.

5

Soares, Maria Rosangela. "Avaliação dosimétrica de protocolos de exame de tomografia computadorizada de feixe cônico." Universidade Federal de Sergipe, 2016. https://ri.ufs.br/handle/riufs/5242.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
This PhD thesis addresses the evaluation of cone beam computed tomography (CBCT) scanning protocols. CBCT was introduced in dental radiology at the end of the 1990s and quickly became a fundamental examination for various procedures. Its main characteristic, which distinguishes it from medical CT, is the beam shape. This study aimed to calculate the absorbed dose in eight tissues/organs of the head and neck and to estimate the effective dose for 13 protocols and two techniques (stitched FOV and single FOV) on five cone beam CT units from different manufacturers. For that purpose, a female anthropomorphic phantom representing a reference woman was used, in which thermoluminescent dosimeters were inserted at several points representing organs/tissues with the weighting factors given in ICRP Publication 103. The results were evaluated by comparing doses according to the purpose of the tomographic image. Among the results, the effective dose differed by up to 325% between protocols with the same imaging goal. Regarding the image acquisition technique, the stitched FOV technique resulted in an effective dose up to 5.3 times greater than the single FOV technique for protocols with the same imaging goal. In terms of individual contributions, the salivary glands are responsible for 31% of the effective dose in CBCT examinations, and the remainder tissues also make a significant contribution, 36%. The results draw attention to the need to estimate the effective dose for the different units and protocols on the market, and to know the radiation parameters and the manufacturing engineering of the units used to obtain the image.
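For context, the effective dose reported above is the ICRP Publication 103 quantity, a tissue-weighted sum of equivalent doses; the organ doses measured with the thermoluminescent dosimeters enter this sum directly.

```latex
% ICRP 103 effective dose: w_T are tissue weighting factors (summing to 1),
% w_R radiation weighting factors (w_R = 1 for the photons used in CBCT),
% and D_{T,R} the mean absorbed dose in tissue T from radiation R.
\[
  E = \sum_T w_T\, H_T , \qquad H_T = \sum_R w_R\, D_{T,R} .
\]
```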
6

Daqing, Huang, and Xie Qiu-Cheng. "THE TIME-ASSISTING CODE TECHNIQUE THAT IS AN EFFECTIVE COUNTERMEASURE TO REPEAT JAMMING." International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613486.

Abstract:
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada
In this paper, the time-assisting code technique, capable of defeating repeat jamming, is presented. The construction and anti-jamming performance of this technique are described and analyzed. The technique is not only robust to repeat jamming in remote control/telemetering and communication systems, but can also be used in multi-address remote control/telemetering, multi-address communication, and radar systems.
7

Mitic, Ljiljana. "Enviropreneurial management : an effective approach to cope with the ecological challenge." Thesis, University of Plymouth, 2000. http://hdl.handle.net/10026.1/2521.

Abstract:
Humankind is the major force influencing our planet Earth. Irreversible environmental degradation is a widespread problem, and atmospheric change, a worsening climate, ozone depletion, and similar phenomena accompany our daily life. From the ecological perspective, the future of the 21st century is endangered. Consumption patterns, material thinking, and lifestyles have to change fundamentally; it may even be necessary to break with 'business-as-usual'. In an age of continuously and rapidly changing competitive environments, companies are increasingly forced to be highly flexible and responsive to changes that have an impact on their competitiveness or even affect the firm's viability. "Entrepreneurship" is an emerging practice which involves the application of an entrepreneurial spirit to established businesses; this management style is seen to embody the appropriate characteristics for surviving, or even growing, in a constantly changing environment.

A major objective of this research is to determine whether an entrepreneurial management style has an impact on the ecological approach a firm may adopt. For this purpose, a mail survey of 500 German firms across all industries was undertaken in the first phase. A further aim is to determine whether firms adopting a proactive ecological approach meet the ecological challenge in a strategic manner; to achieve this objective, a case study approach was chosen in the second phase, based on ten interviews conducted in the food & allied industry. The survey examined the management style, organisational structure, and business environment of 212 firms to determine the nature and style of their strategic response to their business environment. Moreover, the firms' ecological orientation and ecological environment were measured to determine the degree to which firms are proactively oriented. On this basis, the relationship between the management style adopted by firms and their ecological approach was analysed. The results of the survey suggest that firms' response to the ecological issue is strongly influenced by the way in which they respond to business challenges or changes in the business environment.

Furthermore, the case study aimed at identifying the degree to which firms integrate the ecological issue into their strategic behaviour, and at analysing whether the relationship between management style and ecological approach could be confirmed further, thus supporting the results of the first phase. The results indicate that a proactive ecological approach demands a comprehensive mode of realisation: the ecological issue should be an integral part of the firm's strategic management process and be approached in a strategic manner. Thus, the research project strongly suggests that an entrepreneurial style, supported by organic organisational structures, is the appropriate approach for following the path towards an ecologically sustainable future. An entrepreneurial approach will enable firms to be innovative and thus to induce fundamental changes with regard to ecological matters. Far-reaching environmental improvements are needed to take a large step towards a sustainable society. An entrepreneurial environmental approach enables firms to anticipate, and give fresh impetus to, ecological development. However, it has to be kept in mind that all forces upsetting the equilibrium of the global system have to be handled sustainably.
8

Flanders, Melanie. "Characteristics of effective mid-level leaders in higher education." Diss., Columbia, Mo. : University of Missouri--Columbia, 2008. http://hdl.handle.net/10355/7106.

Abstract:
Title from PDF of title page (University of Missouri--Columbia, viewed on Feb. 22, 2010). The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Dr. Glenn E. Good, Dissertation Supervisor. Vita. Includes bibliographical references.
9

Malatji, Tsholofelo M. "The development of an effective jam code against the conical-scan seeker." Diss., University of Pretoria, 2020. http://hdl.handle.net/2263/73210.

Abstract:
There remains a wide proliferation of second-generation frequency-modulated conical-scan seekers in the hands of irregular forces, while the understanding of what makes a jam signal effective remains unclear. It is generally known that the jam-to-signal (J/S) ratio, the jam signal frequency, and the duty cycle are the parameters that need consideration when developing an effective jam code, but the effect of using different jammer waveforms is not generally known. The general consensus in the literature seems to indicate that the effective jam signal parameters should be close to those of the target signal. It is known that a jam signal that matches the target signal will only beacon the target and not provide protection, so the jam signal should not perfectly match the target signal for effective jamming. However, it is not clear which parameters should be close to those of the target signal and which should differ. The literature also generally uses the low-frequency type of jam signal, and the effect of other types of waveforms is not known. Due to the sensitive nature of this topic, neither a simulation model nor a hardware model of the conical-scan seeker was available to the author; as a result, a representative simulation model was designed for conducting the experiments. The simulation model was extensively tested and validated to ensure representative behaviour.

This study investigated the effect of the critical jam signal parameters for different jammer waveforms, namely the fixed-carrier, low-frequency, amplitude-modulation (AM), frequency-modulation (FM), and AM-FM jam codes. The study tested the effect of the critical parameters across the different jam waveforms, and a comparison of the tested waveforms was conducted. The parameters used to compare the jam signals were the maximum achieved seeker error, the minimum J/S ratio required to achieve a significant effect, the range of effective frequencies or modulation indices, and the lowest effective duty cycle.

The AM jam signal achieved the greatest seeker error of all the jam waveforms, with a maximum error of 1.1°; it achieves this error, however, at a J/S ratio of 50. The AM-FM jam signal achieved an error of 0.97° at a J/S ratio of 20, less than half the J/S ratio required by the AM jam signal. The AM-FM hybrid jam signal was found to be the most robust over a wide range of modulation indices and the least sensitive to changes in the modulation index. It was also found to be less power-intensive than the other waveforms, since a significant jamming effect was achieved at low J/S ratios. The best parameter combination for this jam signal was a J/S ratio of 20, a modulation index of 2.5, a modulation frequency of 100 Hz, and a duty cycle of 50%; the maximum seeker error induced by this combination is 0.97°. With the stated advantages, the AM-FM hybrid jam signal was found to be the most effective jam signal against the conical-scan seeker.

Contrary to the general guidance in the literature, the most effective jam signal does not have parameters similar to the target-induced parameters. The conclusion of this work was therefore that the most effective jam signal does not necessarily have to be similar to the target signal to be effective against the conical-scan seeker. This unique result is attributed to the wide range of jam signal waveforms that were tested. The results show that the effects of the critical parameters (J/S ratio, frequency, and duty cycle) vary with the jam waveform.
Dissertation (MEng)--University of Pretoria, 2020.
Electrical, Electronic and Computer Engineering
MEng
Unrestricted
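As a rough illustration of the reported best parameter combination, the sketch below synthesizes one plausible AM-FM hybrid jam waveform. The carrier frequency, sample rate, AM depth, and the exact waveform definition are assumptions; the thesis' seeker model and waveform equations are not reproduced in the abstract.

```python
# Illustrative only: one way an AM-FM hybrid jam waveform with the quoted
# parameters (J/S = 20, modulation index 2.5, modulation frequency 100 Hz,
# duty cycle 50%) might be generated. Carrier and depth values are assumed.
import numpy as np

fs = 100_000                                  # sample rate [Hz] (assumption)
t = np.arange(0.0, 0.1, 1.0 / fs)

f_c = 1_000                                   # stand-in carrier [Hz] (assumption)
f_m = 100                                     # modulation frequency from the study
beta = 2.5                                    # modulation index from the study
js = 20                                       # J/S ratio, treated here as an amplitude factor

target = np.sin(2 * np.pi * f_c * t)          # stand-in target signal

phase = 2 * np.pi * f_c * t + beta * np.sin(2 * np.pi * f_m * t)  # FM term
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * f_m * t)                # AM term
gate = (np.sin(2 * np.pi * f_m * t) > 0).astype(float)            # 50% duty cycle

jam = js * envelope * np.sin(phase) * gate    # AM-FM hybrid jam waveform
received = target + jam                       # input seen by the seeker
```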
10

Jowah, Enoch Larry. "Critical core competencies for effective strategic leadership in project management." Thesis, Nelson Mandela Metropolitan University, 2013. http://hdl.handle.net/10948/d1017230.

Abstract:
Project management is undeniably the fastest growing discipline as organizations move into the euphoria of projectification of their operations. Though projects have been a part of human life since time immemorial, there is a sudden realisation of the effectiveness of the methods used in project management, and the enrolment of students studying project management at tertiary institutions has increased tremendously. Yet the project execution process is marred by high failure rates and an absence of clarity about the skills required for effective project execution. The authority gap in project management presents political and operational conflicts, and new, innovative ways of reducing the authority gap need to be identified and taught in training programs. At the same time, both academics and practitioners have come to realise that there is a difference between managers and leaders. Extensive studies on leadership have not produced a one-stop leadership style for leadership of any form, let alone project leadership. In fact, there is no standard definition of leadership, as the concept is heavily contextualized, which rules out a universal definition. No cast-in-stone leadership styles are known, leaving research on leadership to concentrate on the critical competencies required for effective leadership of projects.

This study seeks to establish the core competencies needed by project leaders and other practitioners to reduce the failure rate and maximise the benefits currently sought by organisations. Studies have shown that the matrix structure within which embedded projects work is a contributing factor to the failure of projects. Because projects are executed by people, it is the proper utilisation of people's talents and competencies that can be expected to yield favourable results. Thus, while the matrix structure creates the authority gap that hampers effective project execution, management-by-projects still remains the best known way to add economic value to performance and productivity. The study therefore focuses on those characteristics of project leaders that are most likely to make a difference in the way people perform in the workplace.

The research findings emphasise the importance of empowering project managers and developing the interpersonal skills of the project leader, with special emphasis on extroversion, genuineness of senior management, and the responsiveness of the project leaders as important requirements for effective authority-gap reduction. These critical competencies will facilitate the project execution process and enhance the empowered project leader's ability to reduce the high project failure rate and high cost overruns. Because these competencies apply specifically to the human element, as it relates to the role of the project leader and the interaction with team members, this new knowledge needs to be introduced into training programs and to project practitioners.
11

Malevris, N. "An effective approach for testing program branches and linear code sequences and jumps." Thesis, University of Liverpool, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233799.

12

Tarasca, Nicola. "Geometric cycles on moduli spaces of curves." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2012. http://dx.doi.org/10.18452/16518.

Abstract:
The aim of this thesis is the explicit computation of certain geometric cycles in moduli spaces of curves. In recent years, divisors on $\Mbar_{g,n}$ have been extensively studied. Computing classes in codimension one has yielded important results on the birational geometry of the spaces $\Mbar_{g,n}$; we give an overview of the subject in Chapter 1. By contrast, classes in codimension two are basically unexplored. In Chapter 2 we consider the locus in the moduli space of curves of genus 2k defined by curves with a pencil of degree k. Since the Brill-Noether number is equal to -2, such a locus has codimension two. Using the method of test surfaces, we compute the class of its closure in the moduli space of stable curves. The aim of Chapter 3 is to compute the class of the closure of the effective divisor in $\M_{6,1}$ given by pointed curves [C,p] with a sextic plane model mapping p to a double point. Such a divisor generates an extremal ray in the pseudoeffective cone of $\Mbar_{6,1}$, as shown by Jensen. A general result on certain families of linear series with adjusted Brill-Noether number 0 or -1 is introduced to complete the computation.
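For reference, the Brill-Noether number invoked above is the expected dimension of the space of linear series $g^r_d$ on a genus-$g$ curve; for a pencil ($r = 1$) of degree $k$ on a curve of genus $2k$ it indeed equals $-2$:

```latex
\[
  \rho(g, r, d) = g - (r+1)(g - d + r), \qquad
  \rho(2k,\, 1,\, k) = 2k - 2\,(2k - k + 1) = -2 .
\]
```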
13

Preston, Angela I., Charles L. Wood, and Sara Beth Hitt. "Using Effective Strategies to Enhance Core Math Instruction." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etsu-works/4063.

14

Gibson, Maryika Ivanova. "Effective Strategies for Recognition and Treatment of In-Hospital Strokes." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/6756.

Abstract:
In-hospital onset strokes represent 4% to 20% of all reported strokes in the United States. The variability of treatment protocols and workflows, as well as the complex etiology and multiple comorbidities of the in-hospital stroke subpopulation, often result in unfavorable outcomes and higher mortality rates compared to those who experience strokes outside of the hospital setting. The purpose of this project was to conduct a systematic review to identify and summarize effective strategies and practices for prompt recognition and treatment of in-hospital strokes. The results of the literature review were correlated with leading-edge guidelines for stroke care to formulate recommendations at an organizational level for improving care delivery and workflow. Peer-reviewed publications and literature not controlled by publishers were analyzed; an appraisal of 24 articles was conducted using the guide for classification of level of evidence by Fineout-Overholt, Melnyk, Stillwell, and Williamson. The results of this systematic review revealed that the most effective strategies and practices for prompt recognition and treatment of in-hospital strokes included staff education, creating a dedicated responder team, analyzing and improving internal processes to shorten the time from discovery to diagnosis, and offering appropriate evidence-based treatments according to acute stroke guidelines. Creating organizational protocols and quality metrics to promote timely and evidence-based care for in-hospital strokes may produce positive social change by eliminating the existing care disparities between community and in-hospital strokes and improving the health outcomes of this subpopulation.
15

Peltonen, Joanna. "Effective Spatial Mapping for Coupled Code Analysis of Thermal–Hydraulics/Neutron–Kinetics of Boiling Water Reactors." Doctoral thesis, KTH, Kärnkraftsäkerhet, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-122088.

Abstract:
Analyses of nuclear reactor safety increasingly require coupling of full three-dimensional neutron kinetics (NK) core models with system transient thermal-hydraulics (TH) codes. In order to produce results within a reasonable computing time, the coupled codes use two different spatial descriptions of the reactor core. The TH code uses a few TH channels, typically 5 to 20, to represent the core, while the NK code explicitly uses one node for each fuel assembly. A spatial mapping between the coarse-grid TH domain and the fine-grid NK domain is therefore necessary. However, improper mappings may result in the loss of valuable information and thus in inaccurate prediction of safety parameters. The purpose of this thesis is to study the effectiveness of spatial coupling (channel refinement and spatial mapping) and to develop recommendations for NK/TH mapping in the simulation of safety transients. Additionally, the sensitivity of stability (measured by decay ratio and frequency) to the different types of mapping schemes is analyzed against the OECD/NEA Ringhals-1 Stability Benchmark data.

The research methodology consists of a spatial coupling convergence study in which the number of TH channels is increased and the mapping approaches are varied, up to and including the reference case, which consists of one-to-one mapping: one TH channel per fuel assembly. The results are compared for both steady-state and transient conditions. In this thesis a definition of mapping (spatial coupling) is formulated, and all existing mapping approaches are gathered, analyzed, and presented. Additionally, to increase the efficiency and applicability of spatial mapping convergence studies, a new mapping methodology is proposed, based on hierarchical clustering, a method of unsupervised learning adopted by researchers in many different scientific fields thanks to its flexibility and robustness. The proposed mapping method turns out to be very successful for the spatial coupling problem and can be fully automated, allowing a significant time reduction in mapping convergence studies.

The steady-state results obtained from three different plant models for all the investigated cases are presented. All models achieved well-converged steady states; local parameters were compared, and it was concluded that a solid basis for further transient analysis had been established. Regarding mapping performance, the best predictions for steady-state conditions come from mappings that include the power peaking factor feature, alone or in combination with other features; it is also of value to preserve the core symmetry (symmetry feature).

A large part of this research is devoted to transient analysis. The transients were selected to cover a wide range, so that the knowledge gathered may be applied to other types of transients. As a representative local perturbation, a Control Rod Drop Accident was chosen; a specially prepared Feedwater Transient was investigated as a regional perturbation; and a Turbine Trip serves as an example of a global one. In the case of the local perturbation, it was found that the number of TH channels is less important than the type of mapping, so a high number of TH channels does not guarantee improved results. To avoid unnecessary averaging and to obtain the best prediction, the hot channel and the core zone where the accident happens should always be separated from the rest. The best performance is achieved with mapping according to power peaking factors, which is therefore recommended for this type of perturbation. The regional perturbation was found to be more challenging than the others: this kind of perturbation is strongly dependent on the mapping type, which affects the power increase rate, SCRAM time, onset of instability, development of the limit cycle, etc. It was also concluded that a special effort is needed in the preparation of the input model. In contrast, the global perturbation was found to be the least demanding transient: here the number of TH channels and the type of mapping do not have a significant impact on average plant behaviour, and the general plant response is always well recreated.

Special attention was also paid to core stability performance, in both the global and the regional mode. It was found that, for unstable cores, a low number of TH channels significantly suppresses the instability; in these cases the number of TH channels is very important, and at least half of the core has to be modeled to have confidence in the predicted DR and FR. In the case of regional instability, in order to capture the out-of-phase oscillations correctly, it is recommended to use a full-scale model; if this is not possible, a mapping that mixes the first power mode and the power peaking factors should be used.

The general conclusions and recommendations are summarized at the end of this thesis. The development of these recommendations was one of the purposes of this investigation, and they should be taken into consideration when designing new coupled TH/NK models and choosing a mapping strategy for a new transient analysis.
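As an illustration of the clustering idea described above, here is a minimal sketch, not the thesis' implementation, of how fuel assemblies might be grouped into TH channels by hierarchical clustering of a single assembly feature; the feature values, the choice of Ward linkage, and the channel count are all assumptions.

```python
# Illustrative sketch: map ~400 fuel assemblies onto 20 TH channels by
# hierarchically clustering an assembly feature (here a made-up power
# peaking factor). A real model would use actual core data and features.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
n_assemblies = 400
peaking = rng.uniform(0.6, 1.4, size=(n_assemblies, 1))   # assumed feature

tree = linkage(peaking, method="ward")                     # build dendrogram
channel = fcluster(tree, t=20, criterion="maxclust")       # cut into 20 channels

# mapping: TH channel id -> indices of the fuel assemblies it represents
mapping = {c: np.flatnonzero(channel == c) for c in range(1, 21)}
print({c: len(idx) for c, idx in mapping.items()})         # channel sizes
```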


16

Young, Whitney Nash. "Supporting Elementary Teachers In Effective Writing Instruction Through Professional Development." ScholarWorks, 2015. https://scholarworks.waldenu.edu/dissertations/1637.

Abstract:
Common Core State Standards (CCSS) for writing have created a challenge for teachers at an urban elementary school as they struggle to provide effective writing instruction that supports the rigorous expectations of the standards. The purpose of this study was to explore elementary teachers' lived experiences of instruction and to better understand instructional writing procedures and strategies. The conceptual framework of this study was based on Dennick's work on incorporating educational theory into teaching practices, which combines elements of constructivist, experiential, and humanist learning theories. The research questions investigated how teachers perceived the impact of the CCSS writing standards on their practice and what kinds of support they needed in order to support writing instruction effectively. A phenomenological design was selected to capture the lived experiences of participants directly associated with CCSS writing instruction. The study included 6 individual teacher interviews and a focus group session of 6 teachers who met the criteria for experience in Grades 3-5 at the elementary school. Data were coded and then analyzed to determine common themes that surfaced from the lived experiences of teachers, including the need for training in writing instruction, the impact of the Common Core standards on the increased rigor of current writing instruction, a lack of professional development at the local school, and instructor challenges with differentiated writing instruction. A job-embedded professional development model was designed to support teachers in effective writing instruction and improve teacher practice at the local school, the district, and beyond. When fully implemented, this professional development may provide elementary teachers with research-based writing strategies that support the rigor of the CCSS and college and career readiness.
17

Johnson, Stuart Clark. "Section 103(b) (4) (A) of the internal revenue code: can the tax code provide an efficient and effective low income "housing program"? ; (an economic analysis)." Thesis, Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/94470.

Abstract:
Section 103(b)(4)(A) of the Internal Revenue Code was examined to determine its effectiveness in helping to achieve the goal of the federal government's low income housing policy--"a decent home and a suitable living environment for every American family." A theoretical analysis of the general excise subsidy model on which this program is based highlighted certain empirical factors on which to focus to determine the potential effectiveness of the program. A theoretical analysis of the particular mechanism used resulted in a measure of the effectiveness of providing a subsidy through tax-exempt bond financing. Empirical analysis showed that the mechanism is essentially ineffective. Therefore, recent recommendations to abolish Section 103(b)(4)(A) are sound.
M.A.
18

Diany, Mohammed. "Détermination de la largeur effective des joints d'étanchéité utilisée dans le code ASME pour le calcul des brides /." Montréal : École de technologie supérieure, 2005. http://wwwlib.umi.com/cr/etsmtl/fullcit?pMR06030.

Abstract:
Thesis (M.Ing.)--École de technologie supérieure, Montréal, 2005.
Thesis presented to the École de technologie supérieure in partial fulfilment of the requirements for the master's degree in mechanical engineering. Bibliography: leaves [139]-141. Also available in electronic version.
19

Diany, Mohammed. "Détermination de la largeur effective des joints d'étanchéité utilisée dans le code ASME pour le calcul des brides." Mémoire, École de technologie supérieure, 2005. http://espace.etsmtl.ca/351/1/DIANY_Mohammed.pdf.

Abstract:
The design and calculation of bolted flange assemblies fitted with gaskets are governed by standardized codes established through research or through user experience. These codes are nevertheless continually subject to criticism and comment as their weaknesses are discovered. In the current ASME code procedure for the design of bolted flange assemblies, the concept of effective width is introduced to account for the non-uniform radial distribution of the contact stress caused by flange rotation. The code sets a threshold value of gasket width above which an adjustment of the gasket contact width is required. The origin of this concept has never been disclosed, and the validity of the threshold has never been verified. Under the normal operating conditions of flanges used with flat gaskets, the definition of this threshold is independent of the bolt load, the average gasket stress, the internal pressure, and the flexibility of the assembly. In this work, a new approach to calculating this effective width is presented, based on the results of an experimental investigation undertaken in parallel with a numerical finite element study. A study of new limits for the effective gasket width concept is carried out. This approach accounts for the non-uniform distribution of the gasket stress, the flexibility of the flanges, and the leakage behaviour of the assembly. Finally, an approximate mathematical model is proposed for calculating the effective width as a function of the average stress applied to the gasket, the flange rotation, and the gasket contact width.
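For orientation, the ASME threshold discussed above is commonly quoted in the following form (in U.S. customary units, with $b_0$ the basic gasket seating width); the constants and the per-facing definitions of $b_0$ are quoted here from standard references rather than from the thesis, and should be verified against the code itself:

```latex
% Effective gasket seating width b as a function of the basic width b_0:
\[
  b =
  \begin{cases}
    b_0, & b_0 \le \tfrac{1}{4}\ \text{in.},\\[2pt]
    \tfrac{1}{2}\sqrt{b_0}, & b_0 > \tfrac{1}{4}\ \text{in.}
  \end{cases}
\]
```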
20

Clark, Donald S. "Components of effective reading instruction for reading disabled students, an evaluation of a program combining code- and strategy-instruction." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/NQ63618.pdf.

21

Van, Zyl A. J. P. (Andries Jakobus Petrus). "Synthesis, characterization and testing of nano-structured particles for effective impact modification of glassy amorphous polymers." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53609.

Abstract:
Thesis (PhD)--Stellenbosch University, 2003.
The synthesis of structured nanoparticles, in particular core/shell particles, is of great technological and economical importance to modern materials science. One of the advantages of structured particles is that they can be synthesized with either a solid core (whether soft or hard) or a liquid core (of varying viscosity). This adds to the versatility of structured particles and their relevance to a majority of industrial and commercial end-applications.

The synthesis of core/shell particles with liquid cores was investigated for the effective impact modification of glassy amorphous polymers. Poly(butyl acrylate) was chosen as the shell due to its rubbery nature. Hexadecane functioned as the core oil and, being a suitable hydrophobe for the miniemulsion synthesis, also provided osmotic stability. Polymer synthesis was preceded by prediction of the particle morphology using thermodynamic prediction models. Core/shell particles with liquid cores were synthesized via miniemulsion polymerization, which allows the direct introduction of core oil and monomer into the miniemulsion droplets. Polymerization was achieved in situ, resulting in the formation of particles with the desired morphology. For additional strength, stability, and matrix mixing capability, methyl methacrylate (MMA) was grafted onto the initial core/shell particles.

The obtained morphology contradicted the predicted morphology, pointing to strong kinetic influences during the polymerization process. These influences could be attributed to surface anchoring of polymer chains due to the initiator (KPS) used, the establishment of the polymerization locus, as well as the increase in viscosity at the polymerization locus. To test these influences, a surface-inactive initiating species (AIBN) and an interfacial redox initiating species (cumyl hydroperoxide/Fe²⁺) were used. Use of the former resulted in the formation of solid polymer particles due to homogeneous polymerization throughout the droplet, leading to an inverse core/shell morphology as a result of thermodynamic considerations. The redox initiator promoted kinetic influences as a result of fast polymerization kinetics at the droplet/water interface; this, together with the increase in viscosity, facilitated the production of core/shell particles.

To obtain core/shell particles of the desired size, the influence of surfactant concentration was investigated. Capillary hydrodynamic fractionation (CHDF) was used to determine the particle size of the initial core/shell particles as well as that of the MMA-grafted core/shell particles. The area stabilized per surfactant molecule was calculated stoichiometrically and compared to "classical" miniemulsion results, i.e. data generated from the synthesis of polymeric latexes in the presence of a hydrophobe, but at a much lower hydrophobe:monomer ratio than was used here. The influence of methanol, as well as the possibility of scaling up the process, was also investigated.

The study was further expanded to the investigation of living miniemulsion polymerization techniques to control the molecular architecture of the synthesized core/shell latexes. The influence of different RAFT agents, initiators, and monomers on the core/shell formation properties of the investigated systems was examined. The combined effects of establishing the polymerization locus and of increased polymerization kinetics, which raise the viscosity at the polymerization locus, led to the successful formation of liquid-filled core/shell particles.

To conclude, the ability of the synthesized core/shell particles to induce impact modification in glassy amorphous polymers was investigated. The results showed that incorporation of these particles can effectively modify the intrinsic properties of the investigated polymers, resulting in a brittle-to-ductile transition. Improved impact results for the investigated glassy matrix were obtained.

Keywords: core/shell, liquid-filled, RAFT, miniemulsion, impact modification
22

Mbele, Nomalizo Constance. "The psychological experiences of learners affected by HIV/AIDS pandemic / Nomalizo Constance Mbele." Thesis, North-West University, 2005. http://hdl.handle.net/10394/3107.

Abstract:
This study investigates the psychological needs of orphans affected by HIV/AIDS and how these learners can be supported in order to cope effectively with the challenges posed by the HIV/AIDS pandemic. The study sought to understand the psychological well-being of learners affected or orphaned by HIV/AIDS, their general performance at school, the nature and extent of the social support they receive from their families, communities, and societies, and their physical well-being. Suggestions were made for an ecosystemic theoretical framework to be infused into all psycho-social support programmes geared to strengthening the psycho-social well-being of AIDS orphans. Orphans are affected by the HIV/AIDS pandemic emotionally, physically, spiritually, and socially. Affected learners have fewer opportunities for schooling and education and may suffer from malnutrition; they are themselves often highly vulnerable to HIV infection and are at higher risk of developing psychological problems. In this study, a case study design was followed. Interviews were conducted with a sample of participants including orphaned learners living in a child-headed household, a class educator, an aunt, and a health worker in Soweto. The researcher recruited participants by means of snowball sampling. The results revealed that learners orphaned by HIV/AIDS suffer emotional trauma and grief, illness, and stress; they have scholastic problems, suffer stigmatization and discrimination, miss out on educational opportunities, and experience poverty. This indicates a need for social support. It is for this reason that an ecosystemic support programme, which schools can adopt and adapt in order to develop the psychological and social resilience of learners affected and orphaned by the HIV/AIDS pandemic, is proposed.
Thesis (M.Ed.)--North-West University, Vaal Triangle Campus, 2006.
23

Chanel, Clément. "Diffusion électromagnétique par un sol : Prise en compte d'un fil enfoui par l'introduction d'une impédance effective dans un code FDTD." Nantes, 2015. https://archive.bu.univ-nantes.fr/pollux/show/show?id=504ca20a-3305-424a-9871-beabcbbf294e.

Abstract:
This thesis studies electromagnetic wave scattering from a slightly rough soil in the presence of a buried wire. The problem is assumed to be two-dimensional (the rough surface depends on only one space variable), and the media on either side of the boundary are assumed to be homogeneous. First, a sensor was built to measure profiles of the rough soil in order to determine its statistical characteristics, such as the standard deviation of the heights and the correlation length. These parameters allow us to choose the asymptotic electromagnetic scattering model adequate for our application: the Small Perturbation Method (SPM), valid for surface heights much smaller than the radar wavelength. Then, from the SPM, the surface currents are expressed analytically according to the order of the perturbative expansion and to the nature of the incident wave; the specific case of an incident plane wave is studied. In addition, the numerical results are compared to those obtained by a rigorous numerical method, the Method of Moments. Finally, the analytical expressions of the scattered field obtained by the SPM allow us to derive the coherent reflection coefficient and the associated effective surface impedance. The purpose is to implement this impedance in a 3D FDTD simulation platform, in which the buried wire is taken into account.
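For context, an effective surface impedance of this kind typically enters an FDTD solver through a Leontovich-type impedance boundary condition on a flattened interface (sign and normal conventions vary between references):

```latex
% Tangential fields on the equivalent flat surface, with unit normal n:
\[
  \mathbf{E}_{\tan} = Z_{\mathrm{eff}}\ \hat{\mathbf{n}} \times \mathbf{H} ,
\]
% with Z_eff chosen so that the flat impedance surface reproduces the
% coherent reflection coefficient derived from the SPM.
```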
24

Bensadoun, Gilbert. "Étude statistique de vingt paramètres biologiques effectuée chez six cent trente-sept nouveau-nés en Côte d'Ivoire." Aix-Marseille 2, 1989. http://www.theses.fr/1989AIX22978.

25

Strentz, Thomas. "An Evaluation of Two Training Programs Designed to Enable Hostages to Cope More Effectively with Captivity Stress." VCU Scholars Compass, 1986. https://scholarscompass.vcu.edu/etd/5532.

Abstract:
In the present study, airline employees undergoing highly realistic but simulated captivity as hostages were given one of three types of pre-stress training programs. One group of subjects was given Problem (P)-focused training, which emphasized activities that would be useful in actively manipulating the stress situation. A second group was given Emotion (E)-focused training, which emphasized techniques designed to help them directly modulate the fear and anxiety associated with the situation. A third (control) group was given no specific stress management training. Retrospective data from the Ways of Coping Check List indicated that subjects tended to engage in the type of coping activity for which they were trained. Data from the STAI State-Anxiety scale indicated that stress levels fluctuated dramatically over the course of the experiment, with the greatest changes observed for subjects classified as externals on the Locus of Control Scale who had received P-focused training. This group of subjects also showed the poorest adjustment (as measured by the SCL-90). Overall, subjects who received E-focused training showed the best adjustment (as measured by the SCL-90 and the PIP behavioral rating scale). Better-adjusting subjects also tended to be perceived as high in Friendliness and Dominance and low in Submissiveness and Hostility by their captors, and they tended to perceive their captors as Friendly and Dominant (as measured by the Impact Message Inventory). The findings were discussed in terms of the stress and coping literature and their implications for implementation in future stress management programs for potential hostages.
26

Patel, Seema, Hallie Rhoads, Bre Stuart, and Haley DeRosa. "Effectively Navigating Your Way Through the Death of a Child Using Family Stress Theory." Digital Commons @ East Tennessee State University, 2019. https://dc.etsu.edu/secfr-conf/2019/schedule/22.

Abstract:
This overview was made to discuss coping in families who have lost a child or sibling, focusing specifically on children in preschool and elementary school. This subject can be daunting and difficult for parents to navigate; however, understanding the importance of communication, involvement, and proper coping techniques is vital to the child's development and perception of death. This educational poster discusses ways to tackle the issues that arise when losing a child and gives parents further insight into young minds dealing with tragedy. We look at Family Stress Theory to further explain assumptions about families, how families manage conflict and stress, the stressors family systems undergo, and other related concepts.
APA, Harvard, Vancouver, ISO, and other styles
27

Isildak, Murat. "Use Of Helical Wire Core Truss Members In Space Structures." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610553/index.pdf.

Full text
Abstract:
In an effort to achieve lighter and more economical space structures, a new patented steel composite member has been suggested and used in the construction of some steel roof structures. This special element has a sandwich construction composed of strips of steel plate placed longitudinally along a helical wire core. The function of the helical core is to transfer the shear between the flange plates and to increase the sectional inertia of the resulting composite member by keeping the flange plates at a desired distance from each other. Because of the lack of research, design engineers usually treat such elements as solid members, as if there were full shear transfer between the flanges. However, a detailed analysis shows that this is not a valid assumption and leads to very unsafe results. In this context, the purpose of this study is to investigate the behavior of such members under axial compression and to determine their effective sectional flexural rigidity by taking into account the shear deformations. This study applies an analytical investigation to a specific form of such elements with four flange plates placed symmetrically around a helical wire core. Five independent parameters of such a member are selected for this purpose: the spiral core and core wire diameters, the pitch of the spiral core, and the flange plate dimensions. Elements with varying combinations of the selected parameters are first analyzed in detail by the finite element method, and design charts are generated for determining the effective sectional properties to be used in structural analysis and for the buckling loads. For this purpose, an alternative closed-form approximate analytical solution is also suggested.
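As a point of comparison (a classical result quoted for illustration, not the thesis' closed-form solution), built-up columns with weak shear coupling are often checked with Engesser's shear-modified Euler load:

    $P_{cr} \;=\; \frac{P_E}{1 + P_E/(GA_s)}, \qquad P_E = \frac{\pi^2 (EI)_{\mathrm{eff}}}{L^2},$

where $(EI)_{\mathrm{eff}}$ is the effective flexural rigidity of the flange plates and $GA_s$ the shear stiffness provided by the helical core. Treating the member as solid amounts to letting $GA_s \to \infty$, which inflates $P_{cr}$; this is precisely the unsafe assumption the abstract warns against.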
APA, Harvard, Vancouver, ISO, and other styles
28

Di, Chicco Augusto. "Optimization of a calculation scheme through the parametric study of effective nuclear cross sections and application to the estimate of neutronic parameters of the ASTRID fast nuclear reactor." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
This thesis presents a project for the optimization of the APOLLO3® neutronic calculation scheme applied to the 4th-generation fast neutron reactor ASTRID. APOLLO3® is the new multipurpose neutronic platform developed by the CEA. It incorporates many of the previous-generation codes used in the French reactor core design supply chain. Like all deterministic codes, APOLLO3® solves the neutron transport equation with a discretization of the variables of interest: the multigroup method for energy, discrete ordinates and spherical harmonics for the angular variable, and collision probability and method-of-characteristics solvers for the spatial variable. The resolution of the transport equation yields useful quantities, such as the neutron flux, the multiplication factor, fission rates and cross sections, for understanding the physical behaviour of the reactor core. Currently it is not possible to use deterministic codes to simulate an entire reactor with a heterogeneous 3D geometry and a fine energy description, so, to simplify the study of the complete neutron field at core level, the calculation scheme is divided into two phases: lattice and core calculations. The main purpose of this work is to find an optimal degree of approximation of the calculation scheme given the physical effect to be evaluated and the user's constraints. In order to reach this optimum, several studies have been carried out with different levels of approximation. The results have been benchmarked against those obtained using the stochastic code TRIPOLI4®, used as a reference to ensure good accuracy. Furthermore, several sensitivity studies have been carried out to understand how the different approximations affect the macroscopic cross-section evaluation, because these dependences are not yet fully understood.
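For readers unfamiliar with the lattice step, the central approximation it controls is the condensation of continuous-energy cross sections into multigroup constants (the generic textbook definition, not APOLLO3®'s specific implementation):

    $\sigma_g \;=\; \frac{\displaystyle\int_{E_g}^{E_{g-1}} \sigma(E)\,\phi(E)\,dE}{\displaystyle\int_{E_g}^{E_{g-1}} \phi(E)\,dE},$

chosen so that the group reaction rate $\sigma_g\,\phi_g$ reproduces that of the weighting flux $\phi(E)$; the sensitivity studies mentioned above probe how the group structure and the weighting flux propagate into these effective macroscopic cross sections.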
APA, Harvard, Vancouver, ISO, and other styles
29

Raddo, Thiago Roberto. "Next generation access networks: flexible OCDMA systems and cost-effective chaotic VCSEL sources for secure communications." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18155/tde-31082017-093005/.

Full text
Abstract:
The significant advances in fiber-optic technology have broadened the optical network's reach into end-user business premises and even homes, allowing new services and technologies to be delivered to customers. The next wave of innovation will certainly generate numerous opportunities provided by the widespread popularity of emerging solutions and applications such as the tactile Internet, telemedicine and real-time 3-D content generation, making them part of everyday life. Nevertheless, to support such an unprecedented and insatiable demand for data traffic, higher capacity and security, flexible bandwidth allocation and cost-efficiency have become crucial requirements for technologies that are candidates for future optical access networks. To this aim, optical code-division multiple-access (OCDMA) technology is considered a prospective candidate, particularly due to features like asynchronous transmission, flexible as well as conscious bandwidth resource distribution, and support for differentiated services at the physical layer, to name but a few. In this context, this thesis proposes new mathematical formalisms for bit error rate, packet throughput and packet delay to assess the performance of flexible OCDMA networks capable of providing multiservice, multirate transmissions according to users' requirements. The proposed analytical formalisms do not require a priori knowledge of the users' code sequences, which means that the network performance can be addressed in a simple and straightforward manner using the code parameters only. In addition, the developed analytical formalisms account for a general number of distinct user classes as well as a general probability of interference among users. Hence, these formalisms can be successfully applied to the performance evaluation of flexible OCDMA networks not only under any number of user classes in a network, but also for most spreading codes with good correlation properties. The packet throughput expression is derived assuming Poisson, binomial and Markov chain approaches for the composite packet arrivals, with the latter defined as the benchmark. It is then shown via numerical simulation that the Poisson-based expression is not appropriate for a reliable throughput estimate when compared to the benchmark (Markov) results. The binomial-based throughput equation, in its turn, provides results as accurate as the benchmark. In addition, the binomial-based throughput is numerically more convenient and computationally more efficient than the Markov chain approach, which is computationally expensive, particularly if the number of users is large. The bit error rate (BER) expressions are derived considering Gaussian and binomial distributions for the multiple-access interference, and it is shown via numerical simulations that accurate performance of flexible OCDMA networks is only obtained with the binomial-based BER expression. This thesis also proposes and investigates a network architecture for Internet protocol traffic over flexible OCDMA with support for multiservice, multirate transmissions, which is independent of the employed spreading code and does not require any new optical processing technology. In addition, the network performance assumes users transmitting asynchronously using receivers based on intensity-modulation direct-detection schemes. Numerical simulations show that the proposed network performs well when its users are assigned high-weight codes or when the channel utilization is low.
The BER and packet throughput performance of an OCDMA network that provides multirate transmissions via a multicode technique, with two codes assigned to each user, is also addressed. Numerical results show that this technique outperforms classical techniques based on multilength codes. Finally, this thesis addresses a new breakthrough technology that might lead to higher levels of security at the physical layer of optical networks. This technology consists in the generation of deterministic chaos from a commercial free-running vertical-cavity surface-emitting laser (VCSEL). The chaotic dynamics is generated by means of mechanical strains loaded onto an off-the-shelf quantum-well VCSEL using a simple and easily replicable holder. Deterministic chaos is thus achieved, for the first time, without the additional complexity of optical feedback, parameter modulation or optical injection. The simplicity of the proposed system, which is based entirely on low-cost and easily available components, opens the way to the widespread use of commercial free-running VCSEL devices for chaos-based applications. This off-the-shelf, cost-effective optical chaos generator has the potential not only to pave the way towards new security platforms in optical networks, for example by successfully hiding the user information in an unpredictable, random-like signal against eventual eavesdroppers, but also to enable emerging chaos applications previously limited or infeasible due to the lack of low-cost solutions. Furthermore, it leads the way to the future realization of emerging applications with high integrability and scalability, such as two-dimensional arrays of chaotic devices comprising hundreds of individual sources, to meet increasing requirements in random bit generation, cryptography or large-scale quantum networks.
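To make the binomial-versus-Gaussian point concrete, the following minimal sketch compares the two BER evaluations under a textbook on-off-keying, optical-orthogonal-code hit model; the model, the code length L, the weight w and the user counts are our illustrative assumptions, not the thesis' formalism:

    # BER of OOK OCDMA: exact binomial multiple-access interference
    # versus its Gaussian approximation (illustrative parameters).
    import math

    def ber_binomial(n_users, L, w):
        # Each of the n_users - 1 interferers causes a 'hit' with probability q;
        # a decision error needs at least w hits when a '0' was sent.
        q = w * w / (2.0 * L)
        n = n_users - 1
        p_err = sum(math.comb(n, i) * q**i * (1.0 - q)**(n - i)
                    for i in range(w, n + 1))
        return 0.5 * p_err

    def ber_gaussian(n_users, L, w):
        # Same model, interference replaced by a Gaussian of equal mean/variance.
        q = w * w / (2.0 * L)
        n = n_users - 1
        mu, var = n * q, n * q * (1.0 - q)
        return 0.25 * math.erfc((w - mu) / math.sqrt(2.0 * var))

    for k in (10, 20, 30):
        print(k, ber_binomial(k, L=181, w=5), ber_gaussian(k, L=181, w=5))

The divergence between the two columns as the load grows mirrors the abstract's conclusion that only the binomial-based expression is reliable.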
APA, Harvard, Vancouver, ISO, and other styles
30

Abdul, Hamid Nor Hayati. "Seismic damage avoidance design of warehouse buildings constructed using precast hollow core panels." Thesis, University of Canterbury. Civil Engineering, 2006. http://hdl.handle.net/10092/1153.

Full text
Abstract:
Precast prestressed hollow core units are commonly used in the construction of flooring systems in precast buildings. In this research, units without transverse reinforcement bars are designed to resist seismic loading as a replacement for fixed-base precast wall panels in the construction of warehouse buildings. Thus, this research seeks to investigate the seismic performance of the units constructed as a subassemblage (single wall) subjected to biaxial loading and as a superassemblage (multi-panel) subjected to quasi-static lateral loading. A design procedure for warehouse buildings using precast hollow core walls under Damage Avoidance Design (DAD) is proposed. In addition, a risk assessment under Performance-Based Earthquake Engineering (PBEE) is carried out using the latest computational tool, known as Incremental Dynamic Analysis (IDA). A comparative risk assessment between precast hollow core walls and fixed-base monolithic precast wall panels is also performed. Experimental results demonstrate that rocking precast hollow core walls with steel armouring do not suffer any non-structural damage up to 2.0% drift and only minor structural damage at 4.0% drift. Results revealed that the wall with unbonded fuse-bars and 50% initial prestressing of unbonded tendons performed best compared with other types of energy dissipators. Furthermore, a 12 mm diameter fuse-bar is recommended, as there is then no uplifting of the foundation beam during ground shaking. Hence, this type of energy dissipator is used for the construction of seismic wall panels in warehouse buildings. One of the significant findings is that the capacity reduction factor (φ), which relates to the global uncertainty of seismic performance, is approximately equal to 0.6. This value can be used to estimate the 90th percentile of the structures without performing IDA. Therefore, structural engineers are only required to compute the Rapid-IDA curve along with the proposed design procedure.
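Read concretely (our paraphrase, with notation that is not the thesis'): if $\hat{C}$ denotes the median seismic capacity obtained from Incremental Dynamic Analysis, the reported factor gives the 90th-percentile estimate

    $C_{90\%} \;\approx\; \phi\,\hat{C}, \qquad \phi \approx 0.6,$

without requiring a full IDA for each structure.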
APA, Harvard, Vancouver, ISO, and other styles
31

Khan, Alamgir 1979. "Cálculo eficiente de alta qualidade Ab Initio e DFT das atividades Raman de espalhamento dependentes da frequência de moléculas de interesse ambiental." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/249082.

Full text
Abstract:
Advisor: Pedro Antonio Muniz Vazquez
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Química
Abstract: In this work, new methodologies for calculating the absolute Raman intensities of gas-phase molecules were developed for a set of small test molecules, using ab initio quantum-mechanical (CCSD) and density functional (PBE0, LB94 and CAM-B3LYP functionals) methods within Placzek's polarizability theory. The speed-up in computation, along with the economy in computational resources, was studied using the newly polarized effective-core-potential basis sets pSBKJC and pStuttgart developed by our group through Sadlej's electric polarization procedure. The results of the proposed methodology, in comparison with the Sadlej-pVTZ reference basis set at the CCSD and DFT levels, show quite good quantitative agreement in the properties, with a valuable reduction in computational time and resources. In the second part of this work, the methodologies being assessed were applied to a series of large organochlorinated pesticides, namely DDT and five structurally related pesticides, and to five pesticides containing the norbornene group, using DFT methods (PBE0 and CAM-B3LYP functionals). The basis sets allowed the reduction of the number of electrons from 6 to 4 for carbon, 8 to 6 for oxygen, 16 to 6 for sulfur and 17 to 7 for chlorine atoms. These reductions in the number of electrons give more than 50% savings in computer resources and time for the calculations of the optical properties of reference molecules of environmental interest.
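For context, in Placzek's polarizability theory the Raman scattering activity of normal mode $j$ is assembled from derivatives of the mean polarizability $\bar{\alpha}$ and the anisotropy $\gamma$ with respect to the normal coordinate (the standard expression, independent of the basis sets developed here):

    $S_j \;=\; 45\,\bar{\alpha}_j'^{\,2} \;+\; 7\,\gamma_j'^{\,2},$

which is why cheap yet accurate polarizability derivatives from the polarized pSBKJC and pStuttgart sets translate directly into accurate Raman intensities.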
Doctorate
Physical Chemistry
Doctor of Sciences
APA, Harvard, Vancouver, ISO, and other styles
32

Introïni, Clément. "Interaction entre un fluide à haute température et un béton : contribution à la modélisation des échanges de masse et de chaleur." Thesis, Toulouse, INPT, 2010. http://www.theses.fr/2010INPT0074/document.

Full text
Abstract:
In the late phases of some scenarios of hypothetical severe accidents in Pressurized Water Reactors, a molten mixture of core and vessel structures, called corium, comes to interact with the concrete basemat. The safety numerical tools are lumped-parameter codes. They are based on a large-scale averaged description of heat and mass transfers, which raises some uncertainties about the multi-scale description of the exchanges but also about the adopted boundary layer structure in the vicinity of the ablation front. In this context, the aim of this work is to tackle the problem of the boundary layer structure by means of direct numerical simulation. This work belongs to the more general framework of a multi-scale description and modeling, namely from the local scale associated with the vicinity of the ablation front to the scale associated with the lumped-parameter codes. Such a multi-scale description raises not only the problem of the local description of the multiphase multicomponent flow but also the problem of the upscaling between the local scale and the macro-scale, which is associated with the convective structures within the pool of corium. Here, we are particularly interested in building effective boundary conditions, or wall laws, for macro-scale models. The difficulty of the multiphase multicomponent problem at the local scale leads us to consider a relatively simplified problem. Effective boundary conditions are built in the frame of a domain decomposition method, and numerical experiments are performed for a natural convection problem in a stamp-shaped cavity to assess the validity of the proposed wall laws. Even if the treated problem is still far from the target applications, this contribution can be viewed as a first step of a multi-scale modeling of the exchanges for the molten core-concrete interaction issue. In the more complicated case of multiphase multicomponent flows, it is necessary to have a direct numerical simulation tool of the flow at the local scale to build wall laws for macro-scale models. Here, the developed tool corresponds to a Cahn-Hilliard/Navier-Stokes model for a two-phase compositional system. It relies on a description of the system by three volume fractions and on a free energy composed of a two-phase part and a compositional part. The governing equations are derived in the framework of the thermodynamics of irreversible processes. They are solved on the basis of a finite element application of the object-oriented software component library PELICANS. Several numerical experiments illustrate the validity and the application potential of this tool on two-phase compositional problems. Finally, using the developed tool, we tackle by means of direct numerical simulation the problem of the boundary layer structure in the vicinity of the ablation front for siliceous and limestone-sand concretes.
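As background, a generic single-order-parameter Cahn-Hilliard/Navier-Stokes system (the thesis' three-order-parameter, two-phase compositional model extends this form) reads:

    $F[c] = \int_\Omega \Big( f_0(c) + \tfrac{\epsilon^2}{2}\,|\nabla c|^2 \Big)\,dx, \qquad \mu = f_0'(c) - \epsilon^2 \Delta c,$
    $\partial_t c + \mathbf{u}\cdot\nabla c = \nabla\cdot\big(M\,\nabla \mu\big),$

where $c$ is an order parameter such as a volume fraction, $\mu$ the chemical potential derived from the free energy $F$, $M$ a mobility, and $\mathbf{u}$ a velocity field governed by Navier-Stokes equations with a capillary coupling term.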
APA, Harvard, Vancouver, ISO, and other styles
33

Tran, Chi Thanh. "The Effective Convectivity Model for Simulation and Analysis of Melt Pool Heat Transfer in a Light Water Reactor Pressure Vessel Lower Head." Doctoral thesis, Stockholm : Division of Nuclear Power Safety, Royal Institute of Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10671.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Okonkwo, Ejike C. "An Investigation of the Skill Sets Needed by Information Systems Managers to Cope Effectively with the Transition from Legacy Systems to Client/Server and Distributed Computing Environments." NSUWorks, 2003. http://nsuworks.nova.edu/gscis_etd/756.

Full text
Abstract:
The problem investigated in this study was the specific nature of management issues in the information system (IS) data conversion process: extended project time, high staff turnover, cost overrun, adherence to procedure and user disagreement. Data conversion involves the transfer of computer programs and data files from one computer system to another. Managing data conversion projects has posed problems and difficulties. A thorough comprehension of these issues has systematically eluded information technology (IT) professionals, and this may be related to unsuccessful outcomes of data conversion. Presently, most successful data conversion outcomes are ad hoc solutions rather than a more permanent strategy that would improve the success rate of conversion outcomes. Few of these data have been analyzed concerning the human elements of the organization. Reports from the IS literature have indicated that data conversion tends to have more managerial than technical problems. Secondly, IT experts have warned that automated tools and experience alone may not guarantee immunity from data conversion headaches. In addition, studies have shown that the cyclical nature of the IT industry suggests that data conversion traumas (problems and difficulties) still lurk ahead. The researcher's goal in this dissertation was to investigate management issues during the information systems change process in order to determine the relationships between attribution factors and styles. A second goal was to analyze relationships, if any, among the study variables. The researcher used attribution theory to investigate various relationships among management issues, using a validated instrument called the Occupational Attribution Style Questionnaire (OASQ), developed by Adrian Furnham, Valda Sadka and Chris Brewin. The validity and reliability of this instrument were established previously, with a Cronbach's alpha of 0.92. Mail-in questionnaires were distributed to a stratified sample of 300 IT managers and professionals from companies, government agencies, colleges and universities. The survey results were analyzed using descriptive and inferential statistics to determine the relationships between attribution factors and styles. Analysis of the descriptive data indicated that the factors were perceived to be very important, with mean scores ranging from a minimum of 2.14 to a maximum of 4.56. A factor analysis resulted in the identification of 5 items that loaded significantly on three factors: (1) internality, (2) externality, (3) chance. Correlation analysis was conducted to test the hypotheses and to identify associations between these stated factors. Conclusive evidence from these analyses showed the following: (a) there was a positive correlation between attribution factors and management attribution, (b) there was a positive correlation between attribution style and project success, (c) there was a positive correlation between salary and position, (d) there was a negative correlation between gender and education, (e) there was a positive correlation between salary and education. The conclusions of the researcher in this study contributed to the base of knowledge by providing empirically tested information for assisting management in industry, academia and government in implementing data conversion programs. In addition, the results of this research highlighted a variety of decision-making skills and professional practices among IT professionals.
These results can be used to implement techniques and strategies for increasing the success rate of data conversion projects.
APA, Harvard, Vancouver, ISO, and other styles
35

Tran, Chi Thanh. "Development, validation and application of an effective convectivity model for simulation of melt pool heat transfer in a light water reactor lower head." Licentiate thesis, Stockholm : Fysik, Kungliga Tekniska högskolan, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Riou, Jerôme. "Étude de l'influence de l'enseignement du code alphabétique sur la qualité des apprentissages des élèves de cours préparatoire." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAL024/document.

Full text
Abstract:
Our doctoral research focuses on the influence of phonics instruction on first-grade students' progress. Its purpose is to identify effective teaching practices and to contribute to the professional training of teachers. This research is part of a larger study conducted by Roland Goigoux, which aimed to assess the influence of reading and writing instruction on the quality of learning. The first part of our research examines causal relationships between the characteristics of phonics instruction and students' performance in decoding and spelling. First, we study the influence of the speed of teaching of grapheme-phoneme relationships (tempo) and of the decodable part of the texts used to teach reading (rendement effectif). Our results reveal a significant influence of these two variables on the quality of learning, this influence differing according to students' initial levels. In addition, we propose a progression for phonics instruction based on the theoretical frequency of grapheme-phoneme correspondences in texts written in standard French, which can serve as a reference for teachers. We also study the effects of the teaching time allocated to encoding tasks on reading achievement, effects which appear to be significant and positive but which vary according to the nature of the tasks and to students' characteristics. In the second part of our dissertation, we analyze and document the teaching practices of experienced first-grade teachers for training purposes. We analyze a reference situation of the teaching of reading and writing from the video recordings of thirty-six whole-class reading sessions. Then, we describe prototypical teaching scenarios and lay the foundations for a training program intended to develop the professional skills of teachers. Specifically, we raise the issue of the relationship between solving decoding tasks and comprehension tasks, and of the decoding autonomy afforded to the students. We finally present the digital platform we designed, which computes the decodable part of the texts used during reading instruction. This platform, named Anagraph, helps teachers plan the study of grapheme-phoneme correspondences and choose texts adapted to the teaching of reading.
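The 'decodable part' of a text lends itself to a compact illustration. The sketch below is a toy reconstruction under our own simplifying assumptions (greedy longest-match segmentation and an invented set of taught graphemes), not Anagraph's actual algorithm:

    # Share of words in a text fully decodable with the graphemes taught so far.
    TAUGHT = {"ch", "ou", "a", "e", "i", "l", "m", "r", "s", "t"}  # illustrative

    def is_decodable(word, taught):
        # Greedy longest-match segmentation into taught graphemes.
        i = 0
        while i < len(word):
            for size in (3, 2, 1):            # try longest graphemes first
                if word[i:i + size] in taught:
                    i += size
                    break
            else:
                return False                  # a letter is not yet taught
        return True

    def decodable_share(text, taught):
        words = [w for w in text.lower().split() if w.isalpha()]
        return sum(is_decodable(w, taught) for w in words) / len(words)

    print(decodable_share("le rat a sali la moto", TAUGHT))  # 5 of 6 words -> 0.83

A teacher-facing tool would report this share for a candidate text and flag the graphemes that block the remaining words.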
APA, Harvard, Vancouver, ISO, and other styles
37

Pokharel, Narayan. "Behaviour and design of sandwich panels subject to local buckling and flexural wrinkling effects." Thesis, Queensland University of Technology, 2003. https://eprints.qut.edu.au/15890/1/Narayan_Pokharel_Thesis.pdf.

Full text
Abstract:
Sandwich panels comprise a thick, light-weight core of plastic foam such as polyurethane or polystyrene, or of mineral wool, sandwiched between two relatively thin steel faces. One or both steel faces may be flat, lightly profiled or fully profiled. Until recently, sandwich panel construction in Australia had been limited to cold-storage buildings due to the lack of design methods and data. However, in recent times its use has increased significantly due to widespread structural applications in building systems. Structural sandwich panels generally used in Australia comprise a polystyrene foam core and thinner (0.42 mm), high strength (minimum yield stress of 550 MPa, reduced ductility) steel faces bonded together using separate adhesives. Sandwich panels exhibit various types of buckling behaviour depending on the types of faces used. Three buckling modes can be observed: local buckling of plate elements of fully profiled faces, flexural wrinkling of flat and lightly profiled faces, and mixed-mode buckling of lightly profiled faces due to the interaction of local buckling and flexural wrinkling. To study the structural performance and develop appropriate design rules for sandwich panels, all these buckling failure modes have to be investigated thoroughly. A well-established analytical solution exists for the design of flat-faced sandwich panels; however, the design solutions for local buckling of fully profiled sandwich panels and mixed-mode buckling of lightly profiled sandwich panels are not adequate. Therefore an extensive research program was undertaken to investigate the local buckling behaviour of fully profiled sandwich panels and the mixed-mode buckling behaviour of lightly profiled sandwich panels. The first phase of this research was based on a series of laboratory experiments and numerical analyses of 50 foam-supported steel plate elements to study the local buckling behaviour of fully profiled sandwich panels made of thin steel faces and polystyrene foam core, covering a wide range of b/t ratios. The current European design standard recommends the use of a modified effective width approach to include local buckling effects in design. However, the experimental and numerical results revealed that this design method can predict reasonable strength for sandwich panels with low b/t ratios (< 100), but it predicts unconservative strengths for panels with slender plates (high b/t ratios). The use of sandwich panels with high b/t ratios is very common in practical design due to the increasing use of thinner, high strength steel plates. Therefore an improved design rule was developed based on the numerical results that can be used for fully profiled sandwich panels with any practical b/t ratio up to 600. The new improved design rule was validated using six full-scale experiments on profiled sandwich panels and hence can be used to develop safe and economical design solutions. The second phase of this research was based on a series of laboratory experiments and numerical analyses on lightly profiled sandwich panels to study the mixed-mode buckling behaviour due to the interaction of local buckling and flexural wrinkling. The current wrinkling formula, which is a simple modification of the methods utilized for flat panels, does not consider the possible interaction between these two buckling modes.
As the rib depth and the width of the flat plates between the ribs increase, flat plate buckling can occur, leading to the failure of the entire panel due to the interaction between the local buckling and wrinkling modes. Experimental and numerical results from this research confirmed that the current wrinkling formula for lightly profiled sandwich panels, based on the elastic half-space method, is inadequate in its present form. Hence an improved equation was developed, based on validated finite element analysis results, to take into account the interaction of the two buckling modes. This new interactive buckling formula can be used to determine the true value of the interactive buckling stress for safe and economical design of lightly profiled sandwich panels. This thesis presents the details of the experimental investigations and finite element analyses conducted to study the local buckling behaviour of fully profiled sandwich panels and the mixed-mode buckling behaviour of lightly profiled sandwich panels. It includes the development and validation of suitable numerical and experimental models, and the results. Current design rules are reviewed and new improved design rules are developed based on the results from this research.
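For reference, the effective width approach that the European standard modifies descends from the classical Winter-type relation for a plate element of width b and thickness t (the generic form; the improved rule developed in the thesis for b/t up to 600 is different and is not reproduced here):

    $\sigma_{cr} = \frac{k\,\pi^2 E}{12(1-\nu^2)}\Big(\frac{t}{b}\Big)^2, \qquad \bar{\lambda} = \sqrt{f_y/\sigma_{cr}},$
    $b_{\mathrm{eff}} = \rho\, b, \qquad \rho = \frac{1}{\bar{\lambda}}\Big(1 - \frac{0.22}{\bar{\lambda}}\Big) \le 1,$

with $k$ the plate buckling coefficient; for foam-supported faces the elastic foundation raises $\sigma_{cr}$, which is where the modification for sandwich panels enters.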
APA, Harvard, Vancouver, ISO, and other styles
38

Pokharel, Narayan. "Behaviour and Design of Sandwich Panels Subject to Local Buckling and Flexural Wrinkling Effects." Queensland University of Technology, 2003. http://eprints.qut.edu.au/15890/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

McAllister, Steve Randolph. "Implementation of Food Safety Regulations in Food Service Establishments." ScholarWorks, 2018. https://scholarworks.waldenu.edu/dissertations/5902.

Full text
Abstract:
Food service businesses in the United States have experienced millions of dollars in losses caused by foodborne illness outbreaks, which can lead to bankruptcy and business closures. More than 68% of all foodborne illness outbreaks occur in food service establishments. The purpose of this descriptive case study was to explore the strategies leaders of food service establishments use to implement food safety regulations. Force field analysis was the conceptual framework for this study. The population for the study consisted of 3 leaders of food service establishments located in the southeastern region of the United States. Data were collected using semistructured interviews and a review of the business policies and procedures that support compliance with critical food safety regulations. The methodological triangulation approach was used to assist in correlating the interview responses with company policies and procedures during the data analysis process. Yin's 5-step data analysis approach resulted in 3 themes: (a) organizational performance analysis for improvements in food safety, (b) strategies applied to improve food safety, and (c) stability of new strategies for food safety. The key strategies identified included adhering to the guidelines of food code and regulation, conducting employee training and awareness building, and working closely with food safety inspectors. The implications for positive social change include the potential to add knowledge to businesses, employees, and communities on the use of effective food safety strategies to minimize foodborne illnesses. Such results may lead to the improvement of service performance and long-term growth and sustainability of food service establishments.
APA, Harvard, Vancouver, ISO, and other styles
40

Warnock, Teresa Georgeanne. "School System Improvement through Building Leadership, Adult Learning, and Capacity: A Consideration of Instructional Rounds as a Systemic Improvement Practice." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1062801/.

Full text
Abstract:
The problem of the study was determining the supportive conditions related to instructional rounds (rounds) in order to better understand what conditions may allow for sustained systemic improvement over time. Three Texas school districts were studied to understand the perceptions of district leaders, principals, teacher leaders, and teachers with regard to the sustainability of instructional rounds as a systemic improvement practice, the supportive conditions necessary for sustainability, the salient characteristics that differentiated rounds from other improvement practices, and the potential of rounds to build organizational capacity. Observations of network rounds visits and document analysis were conducted to determine the alignment of perceptions with observations and documents. Findings include perceptions, themes, and critical factors for the sustainability of rounds as an effective systemic improvement practice. Supportive conditions emerged as the most significant perception expressed by the participants. Implications for action for school districts beginning or continuing implementation of instructional rounds are suggested based upon findings from participant perceptions and observation of networks. Suggestions for future research are shared. With supportive conditions in place, instructional rounds has the potential to serve as an effective systemic improvement practice.
APA, Harvard, Vancouver, ISO, and other styles
41

Sarai, Leandro. "Análise jurídica das medidas prudenciais preventivas no âmbito do sistema financeiro nacional." Universidade Presbiteriana Mackenzie, 2014. http://tede.mackenzie.br/jspui/handle/tede/1117.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The current stage of capitalism is characterized by the financialization of the economy. This fact, combined with the importance that financial institutions already had in the financial system, reinforces their relevance and at the same time raises many questions about the appropriate treatment they should receive so that they remain in normal operation and, in the eventual and natural crises, about how to reduce the negative effects and contain contagion. The universal character of financial activity clashes with the local nature of sovereignty, which controls money and the operation of institutions within its territory. An international consensus leads to a pursuit of convergence in financial regulation, in order to avoid regulatory arbitrage and competitive problems, as shown mainly through the recommendations of the Basel Committee on Banking Supervision. Among these recommendations are the Core Principles for Effective Banking Supervision, which, in turn, support the need for flexible and quick instruments allowing supervisors to adopt prompt measures to keep the institutions of the financial system operating in a prudential manner, while avoiding situations in which a special regime would be the only alternative, with the problems associated with it. These are the preventive prudential measures, which are analyzed in this dissertation according to Brazilian law.
APA, Harvard, Vancouver, ISO, and other styles
42

Hliwa, Mohamed. "Traitement simplifie des interactions moleculaires en chimie quantique." Toulouse 3, 1988. http://www.theses.fr/1988TOU30038.

Full text
Abstract:
Ab initio calculations on the highly degenerate CrH system: demonstration of a strong coupling between ionic and neutral states, and analysis of the wave functions in a diabatic description. Proposal of a perturbative method for computing the dispersion energies between a versatile system A (described in a large basis set) and a quasi-passive system B (treated in the frozen-core approximation and characterized by its polarizability); SCF + CI calculation of (A + frozen B), of the electric field exerted by A on B, and of its fluctuations, by means of an effective Hamiltonian; application to the study of the potential curves of the first excited states of the diatomic molecules formed by Ar with Na, K or Mg. Use of pseudopotential theory and model potentials to compute repulsive potentials of inert atoms that are transferable to molecular systems; from these potentials, calculation of dispersion energies applicable to the spectroscopy of alkali atoms in rare-gas matrices.
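In compact modern notation, the perturbative scheme described above estimates the dispersion energy from the fluctuations of the electric field $\mathbf{F}$ exerted by A on B, scaled by B's polarizability $\alpha_B$ (our restatement of the abstract, not a formula quoted from the thesis):

    $E_{\mathrm{disp}} \;\approx\; -\tfrac{1}{2}\,\alpha_B\,\big(\langle \mathbf{F}^2\rangle - \langle \mathbf{F}\rangle^2\big),$

with the averages taken over the SCF + CI wave function of A in the presence of the frozen system B.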
APA, Harvard, Vancouver, ISO, and other styles
43

Gold, Daniel. "Lobbying Regulation in Canada and the United States: Political Influence, Democratic Norms and Charter Rights." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40908.

Full text
Abstract:
Lobbying should be strictly regulated – that is the major finding of this thesis. The thesis presents many reasons to enact stricter regulations, the principal one being that, as lightly regulated as it is, lobbying is corroding democracy in both Canada and the United States. The thesis opens with a deep investigation of how lobbying works in both countries. There are examples taken from the literature, as well as original qualitative interviews of Canadian lobbyists, former politicians, and officials. Together, these make it clear that there is an intimate relationship between lobbying and campaign financing. The link between the two is sufficiently tight that lobbying and campaign financing should be considered mirrors of each other for the purposes of regulatory design and constitutional jurisprudence. They both have large impacts on government decision-making. Left lightly regulated, lobbying and campaign financing erode the processes of democracy, damage policy-making, and feed an inequality spiral into plutocracy. These have become major challenges of our time. The thesis examines the lobbying regulations currently in place. It finds the regulatory systems of both countries wanting. Since stricter regulation is required to protect democracy and equality, the thesis considers what constitutional constraints, if any, would stand in the way. This, primarily, is a study of how proposed stronger lobbying regulations would interact with the Canadian Charter of Rights and Freedoms, s. 2 (free expression and association rights) and s. 3 (democratic rights). The principal findings are that legislation which restricted lobbying as proposed would probably be upheld by the Canadian court, but struck down by the American court, due to differences in their constitutional jurisprudence. The thesis contends that robust lobbying regulations would align with Canadian Charter values, provide benefits to democracy, improve government decision-making, increase equality, and create more room for citizen voices. The thesis concludes with a set of proposed principles for lobbying reform and an evaluation of two specific reforms: limits on business lobbying and funding for citizen groups. Although the thesis focuses on Canadian and American lobbying regulations, its lessons are broadly applicable to any jurisdiction that is considering regulating lobbying.
APA, Harvard, Vancouver, ISO, and other styles
44

Fujdiak, Radek. "Analýza a optimalizace datové komunikace pro telemetrické systémy v energetice [Analysis and optimization of data communication for telemetry systems in the energy sector]." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-358408.

Full text
Abstract:
Keywords: Telemetry system, Optimisation, Sensor networks, Smart Grid, Internet of Things, Sensors, Information security, Cryptography, Cryptographic algorithms, Cryptosystem, Confidentiality, Integrity, Authentication, Data freshness, Non-repudiation.
APA, Harvard, Vancouver, ISO, and other styles
45

"The effective cone on symmetric powers of curves." STATE UNIVERSITY OF NEW YORK AT STONY BROOK, 2009. http://pqdtopen.proquest.com/#viewpdf?dispub=3338163.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ho, Chang-Hung, and 何常弘. "Estimation of the Effective Dose of Cone Beam CT Using the CTDI Phantom." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/18273474415557072327.

Full text
Abstract:
Master's thesis
Central Taiwan University of Science and Technology
Department and Graduate Institute of Medical Imaging and Radiological Sciences
Academic year 103 (2014–2015)
CBCT is a new modality in dental radiology that has rapidly become popular. When selecting an appropriate examination technique for each patient, the ALARA principle should be followed. The purpose of this study was to evaluate three methods of estimating the effective dose of a cone beam CT (CBCT) device: the CT dose index (CTDI), the dose–area product (DAP), and thermoluminescent dosimeters (TLDs). CTDI100 measurements were performed in a CT head dose phantom with a pencil ionization chamber. The DAP value was determined with a plane-parallel transmission ionization chamber connected to an electrometer. Organ dose measurements were performed using TLDs placed at the 19 most radiosensitive organ sites in the maxillofacial and neck area. The effective doses derived from the CTDI phantom (43.51–188.05 µSv) and from the DAP measurements (55.06–195.8 µSv) were close to those obtained with the TLDs (68.3–218.28 µSv). The results reinforce the need for constant review of protocols and for rational use of new technologies.
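For readers unfamiliar with the dose quantities compared above: an effective dose is a weighted sum of organ (equivalent) doses, E = Σ_T w_T H_T, using the ICRP tissue weighting factors w_T. The minimal sketch below illustrates only the arithmetic; the weighting factors are the published ICRP 103 values, but the organ list and organ doses are hypothetical illustration values, not the thesis's data.

    # Minimal sketch of an ICRP-style effective dose calculation (Python).
    # w_T: ICRP 103 tissue weighting factors; H_T: hypothetical organ
    # equivalent doses in microsievert -- illustration values only.
    w_T = {"thyroid": 0.04, "bone_marrow": 0.12, "salivary_glands": 0.01,
           "brain": 0.01, "skin": 0.01, "remainder": 0.12}
    H_T = {"thyroid": 150.0, "bone_marrow": 30.0, "salivary_glands": 800.0,
           "brain": 60.0, "skin": 40.0, "remainder": 50.0}

    # Effective dose: weighted sum over the measured tissues only
    # (a full ICRP calculation sums over all listed tissues).
    E = sum(w_T[t] * H_T[t] for t in w_T)
    print(f"Effective dose ~ {E:.1f} uSv")   # -> 24.6 uSv for these values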
APA, Harvard, Vancouver, ISO, and other styles
47

Phanzu, Bwanga. "Effective dose of radiation on the eye, thyroid and pelvic region resulting from exposures to the Galileos comfort cone beam computerized tomographic scanner." Thesis, 2015. http://hdl.handle.net/10539/17496.

Full text
Abstract:
Degree of Master of Science in Dentistry by coursework and dissertation. A research report submitted to the Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Dentistry. Johannesburg, 2014
Introduction: Dental cone beam CT has encountered great success in diagnostics and treatment planning in dentistry. However, it makes use of ionizing radiation, and considerable concern has been raised about the effects of X-rays on vital organs of the head and neck region. Clarity on the amount of radiation received by these specific organs will contribute to better use of this emerging technology. Aim: The aim of this study is to determine the potential dose of radiation received by the eye and thyroid and to quantify the amount of potential scatter on the gonads during CBCT examinations. Materials and Methods: Calibrated lithium fluoride thermoluminescent dosimeters were inserted inside an anthropomorphic phantom at the sites of the eye, thyroid, and gonads. After the phantom was submitted to CBCT examinations at high and standard resolution under the same scanning protocol, the dose of radiation received by each organ was calculated according to the ICRP guidelines. Results: An equivalent dose of 0.059 mGy was calculated for the eye. Compared to the threshold dose of 0.5 Gy set by ICRP 2007, this can be considered relatively low. The thyroid, with an effective dose contribution of 23.5 μSv, represented 20% of the full-body effective dose reported in the literature. The gonads absorbed an effective dose of 0.05 μSv, which was considered negligible. Conclusion: The doses calculated were considered relatively low. However, dentists must be aware of the risks of cumulative exposure. Therefore adherence to the ALARA principle and consideration of the clinical indication for CBCT remain a priority.
APA, Harvard, Vancouver, ISO, and other styles
48

Sung, Chun-Ying, and 宋純潁. "The comparison of effective dose in residual thyroid gland between Cone Beam CT and Spiral CT in SPECT/CT imaging in patients of thyroid cancer with status post I-131 treatment." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/38697595281819531839.

Full text
Abstract:
Master's thesis
Kaohsiung Medical University
In-service Master's Program, Department of Medical Imaging and Radiological Sciences
Academic year 105 (2016–2017)
Purpose: Hybrid imaging systems have become widely used in nuclear medicine in recent years. They offer combined functional and anatomical information and thus increase the sensitivity and specificity of examinations. However, patients may receive higher radiation exposure from SPECT/CT than from traditional SPECT. This study evaluates the effective dose to residual thyroid tissue in thyroid cancer patients after I-131 treatment for two different types of SPECT/CT and assesses the associated risk of secondary cancer. Materials and methods: TLD-100H dosimeters were used to measure the absorbed dose to the thyroid. Before the study, the TLDs were calibrated with an Elekta Axesse linac to obtain a calibration curve. The absorbed dose from the CT exposure alone was acquired by placing TLDs on the thyroid surface of an RT Humanoid phantom, simulating the examination protocols of SPECT/CBCT and SPECT/spiral CT respectively. To assess the internal dose, 40 patients with differentiated thyroid carcinoma receiving I-131 treatment were selected: half were examined on the BrightView XCT SPECT/CBCT system (Philips Healthcare, Cleveland, OH) and the other half on the Discovery NM/CT 670 SPECT/spiral CT system (GE Healthcare, USA). The TLDs were placed on the surface of each patient's thyroid gland and measured for 30 minutes. The internal dose and the CT dose were combined to assess the effective dose and calculate the risk of secondary cancer. Results: For patients examined on the BrightView XCT SPECT/CBCT scanner, the effective dose to the residual thyroid tissue was 0.56 ± 0.08 mSv, corresponding to a secondary cancer risk of 1.8 × 10⁻⁶ for the whole population and 5.0 × 10⁻⁷ for the working population. For patients examined on the Discovery NM/CT 670 SPECT/spiral CT scanner, the effective dose was 0.33 ± 0.08 mSv, corresponding to risks of 1.1 × 10⁻⁶ and 3.0 × 10⁻⁷ respectively. Conclusion: For both types of SPECT/CT examination, the effective dose delivered to patients by the CT scan and the I-131 is below the 1 mSv annual dose limit set by Taiwan's Atomic Energy Council for the general population. Medical exposure is justified by the diagnosis and treatment of the patient's disease; since the benefits outweigh the risks, these tools can be used reasonably.
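As a hedged cross-check of the risk figures quoted above: they are consistent with multiplying the effective dose by a nominal thyroid cancer risk coefficient of about 3.3 × 10⁻³ Sv⁻¹ for the whole population and 0.9 × 10⁻³ Sv⁻¹ for the working population (the ICRP 103 nominal values; the thesis may use slightly different coefficients):

    # Back-of-envelope check of the quoted secondary cancer risks (Python).
    # Risk coefficients assumed: ICRP 103 nominal thyroid cancer values.
    RISK_WHOLE_POP = 3.3e-3  # per Sv, whole population (assumption)
    RISK_WORKERS = 0.9e-3    # per Sv, adult working population (assumption)

    for scanner, dose_msv in [("SPECT/CBCT", 0.56), ("SPECT/spiral CT", 0.33)]:
        dose_sv = dose_msv * 1e-3  # convert mSv to Sv
        print(scanner,
              f"whole population: {dose_sv * RISK_WHOLE_POP:.1e},",
              f"workers: {dose_sv * RISK_WORKERS:.1e}")

Under these assumptions the script prints 1.8e-06 / 5.0e-07 and 1.1e-06 / 3.0e-07, matching the quoted risks to the precision given.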
APA, Harvard, Vancouver, ISO, and other styles
49

Chang, Shih-Sheng, and 張仕昇. "Superconductor layer-reduced TMR effect in NiFe/CoFe/AlO/CoFe/Nb." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/80633817999408111596.

Full text
Abstract:
Master's thesis
National Taiwan University
Institute of Physics
Academic year 92 (2003–2004)
In treating electron transport through a magnetic tunnel junction, solving for the wave function directly is tedious. By means of the transfer matrix, much simpler relations for the transmission amplitude of the tunneling electron can be derived. In a double-barrier tunnel junction, the TMR ratio calculated from spin-dependent transport oscillates with the thickness of the middle ferromagnetic layer, indicating that the TMR ratio can be tuned by adjusting that thickness. We also relate the effective spin polarization of the electrons to the Fermi wave numbers in this system. Not only the electrical properties but also the magnetic structure strongly influence the value of the TMR. A superconducting layer is sputtered onto the pseudo-spin-valve magnetic tunnel junction to analyze the influence of superconducting ordering on the magnetization of the ferromagnetic layer. Electrical transport in a superconductor is dominated by Cooper pairs, each consisting of two electrons of opposite spin. In the experiment, magnetic and superconducting layers of different thicknesses and shapes are designed to investigate how they interact. By monitoring the resistance of the junction and of the superconducting layer, a clear decrease in the TMR ratio is observed with a 500 nm thick superconducting layer, whether the top ferromagnetic layer is 2 nm or 30 nm thick. A pair-breaking effect is observed by changing the magnetization orientations of the two ferromagnetic layers: at large applied magnetic fields the superconducting state is less stable, resulting in a higher Nb resistance than at zero applied field.
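The transfer-matrix idea mentioned at the start of the abstract can be seen in miniature on a single rectangular barrier: 2 × 2 matrices match the plane-wave amplitudes at each interface and propagate them across the barrier, and the transmission probability falls out of the matrix product. The sketch below is a generic one-dimensional illustration with arbitrary parameters (units with ħ = m = 1), not the spin-dependent model of the thesis:

    import numpy as np

    # Transfer-matrix transmission through a single rectangular barrier
    # (generic 1D illustration; E < V0 so the barrier region is evanescent).
    E, V0, a = 1.0, 2.0, 1.5            # energy, barrier height, barrier width
    k = np.sqrt(2 * E)                   # wave number outside the barrier
    kappa = np.sqrt(2 * (V0 - E))        # decay constant inside the barrier

    def interface(k1, k2):
        """2x2 matrix matching psi and psi' at a step from k1 to k2."""
        r = k1 / k2
        return 0.5 * np.array([[1 + r, 1 - r],
                               [1 - r, 1 + r]])

    # Propagation across the barrier: effective wave number i*kappa.
    prop = np.diag([np.exp(-kappa * a), np.exp(kappa * a)])

    # Total transfer matrix: enter the barrier, cross it, leave it.
    M = interface(1j * kappa, k) @ prop @ interface(k, 1j * kappa)

    T = 1.0 / abs(M[1, 1]) ** 2          # transmission probability (det M = 1)
    print(f"T = {T:.4f}")

For E = 1, V0 = 2, a = 1.5 this reproduces the textbook result T = [1 + V0² sinh²(κa) / (4E(V0 − E))]⁻¹ ≈ 0.056; in a spin-dependent calculation the same machinery is applied per spin channel with spin-split band parameters.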
APA, Harvard, Vancouver, ISO, and other styles
50

De, Beer Morris. "Aspects of the design and behaviour of road structures incorporating lightly cementitious layers." Thesis, 1990. http://hdl.handle.net/2263/26753.

Full text
APA, Harvard, Vancouver, ISO, and other styles