Dissertations / Theses on the topic 'Mixed scheme'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Mixed scheme.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Dimagiba, Richard Raymond N. "Application of the Boundary Element Method to three-dimensional mixed-mode elastoplastic fracture mechanics." Thesis, University of Sheffield, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310815.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Toner, Michael F. "MADBIST : a scheme for built-in self-test of mixed analog-digital integrated circuits." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=40451.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Consumers are demanding more and more value for each dollar spent on new electronic equipment. Built-In Self-Test (BIST) of electronic circuits and equipment will help to satiate the demand for self test, self diagnostics, and self repair. This dissertation explores a technique for a Mixed Analog Digital BIST (MADBIST) on a mixed-signal Integrated Circuit (IC). Specifically, on-chip tests for the Analog-to-Digital Converter and Digital-to-Analog Converter on the mixed-signal IC are developed. (The digital portion of the IC can be tested using digital BIST techniques). The tests implemented include Frequency Response, Signal-to-Noise Ratio, Gain Tracking, Inter-Modulation Distortion, and Harmonic Distortion. A precision analog test stimulus is efficiently generated on-chip using digital circuitry. The test stimulus itself is encoded within a Pulse-Density-Modulated bit stream. A narrow-band digital filter is employed to extract the measurement results. Experimental results from a test chip and a prototype circuit board are provided. Some of the engineering and economical trade-offs associated with the design of the tests are considered. The overhead required to implement several types of tests is dealt with. We also explore the relationship between the accuracy achieved by the test and the amount of resources required to implement it.
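The measurements named above (SNR, gain tracking, harmonic and intermodulation distortion) are typically extracted from the spectrum of a sampled sine-wave response. As a rough software analogue only — not the MADBIST circuitry itself, which uses an on-chip PDM stimulus and a narrow-band digital filter, and with harmonics here lumped into the noise — an SNR-style measurement could be sketched in Python as follows.

    # Illustration only: a software analogue of a sine-wave SNR test; not the
    # thesis circuitry. Harmonics are counted as noise in this simple estimate.
    import numpy as np

    def estimate_snr_db(samples, skirt_bins=1):
        """Estimate an SNR-like figure (dB) from a sampled sine wave via an FFT."""
        n = len(samples)
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(n))) ** 2
        spectrum[0] = 0.0                          # ignore the DC bin
        k = int(np.argmax(spectrum))               # fundamental bin
        lo, hi = max(k - skirt_bins, 1), k + skirt_bins + 1
        signal_power = spectrum[lo:hi].sum()       # fundamental plus leakage skirt
        noise_power = spectrum.sum() - signal_power
        return 10.0 * np.log10(signal_power / noise_power)

    # Example: a 10-bit quantised 10.25 kHz tone sampled at 1 MHz
    t = np.arange(4096) / 1e6
    codes = np.round(np.sin(2 * np.pi * 10.25e3 * t) * 511) / 511
    print(round(estimate_snr_db(codes), 1), "dB")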
3

Hanson, Coral Lucy. "Advancing understanding of effective exercise on referral : a mixed methods evaluation of the Northumberland scheme." Thesis, Durham University, 2017. http://etheses.dur.ac.uk/12162/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Exercise on Referral Schemes (ERS) are a widespread community intervention in which health professionals refer patients to a programme of supervised exercise at leisure facilities. National guidance states routine data should be collected and made available for analyses, and that there is a need to better understand what elements of ERS work, for which subgroups of the population. This thesis examines what elements of behaviour change provision within ERS work, for whom and in what circumstances, in order to gain a better understanding of what influences referral to, engagement with, and adherence to such schemes. First the thesis presents a quantitative evaluation study of 2233 referrals to the 24-week Northumberland ERS in nine leisure facilities between July 2009 and September 2010. Main outcome measures were uptake, 12-week adherence, 24-week completion, and changes in self-reported physical activity, blood pressure, body mass index (BMI) and waist circumference. Two qualitative studies follow, one examining pre-scheme perceptions of 15 referrals and the second following them through the scheme. Data from semi-structured interviews conducted in both studies are presented as three narrative typologies of the referral journey. This research demonstrated that demographics and other factors related to referral minimally increased ability to predict engagement. Completion resulted in significant increases in self-reported physical activity and significant, but small, reductions in BMI and waist circumference. Participants had complex social circumstances, multiple personal reasons for referral and high expectations of positive health changes. Staff and peer support were influential to success, especially if expectations were not met. The narrative typologies help to identify those for whom ERS currently works well, those for whom ERS works but who may struggle with sustained behaviour change, and those for whom it does not work. This novel approach to classifying likelihood of success is used to discuss potential improvements to ERS.
4

Mills, Hayley. "A mixed method investigation into the perception and measurement of success in the Healthwise Exercise Referral Scheme." Thesis, University of Gloucestershire, 2008. http://eprints.glos.ac.uk/3173/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Owusu-Asamoah, Kwasi. "Modelling an information management system for the National Health Insurance Scheme in Ghana." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/16415.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The National Health Insurance Scheme (NHIS) in Ghana was introduced to alleviate the problem of citizens having to pay for healthcare at the point of delivery, given that many did not have the financial resources needed to do so, and as such were unable to adequately access healthcare services. The scheme is managed from the national headquarters in the capital Accra, through satellite offices located in districts right across the length and breadth of the country. It is the job of these offices to oversee the operations of the scheme within that particular district. Current literature however shows us that there is a digital divide that exists between the rural and urban areas of the country which has led to differences in the management of information within urban-based and rural-based districts. This thesis reviews the variables affecting the management of information within the scheme, and proposes an information management model to eliminate identified bottlenecks in the current information management model. The thesis begins by reviewing the theory of health insurance, information management and then finally the rural-urban digital divide. In addition to semi-structured interviews with key personnel within the scheme and observation, a survey questionnaire was also handed out to staff in nine different district schemes to obtain the raw data for this study. In identifying any issues with the current information management system, a comparative analysis was made between the current information management model and the real-world system in place to determine the changes needed to improve the current information management system in the NHIS. The changes discovered formed an input into developing the proposed information management system with the assistance of Natural Conceptual Modelling Language (NCML). The use of a mixed methodology in conducting the study, in addition to the employment of NCML was an innovation, and is the first of its kind in studying the NHIS in Ghana. This study is also the first to look at the differences in information management within the NHIS given the rural-urban digital divide.
6

Jeong, Minsoo. "Asymptotics for the maximum likelihood estimators of diffusion models." College Station, Tex. : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schütz, Jochen [Verfasser]. "A hybrid mixed finite element scheme for the compressible Navier-Stokes equations and adjoint-based error control for target functionals / Jochen Schütz." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2011. http://d-nb.info/1018186158/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hurst, Gemma Louise. "'Walk to Beijing' : a mixed methods evaluation of a financial incentive scheme aimed at encouraging physical activity participation in Sandwell, West Midlands." Thesis, Staffordshire University, 2013. http://eprints.staffs.ac.uk/2003/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background. The many health benefits of physical activity are well established. In response to the low levels of activity in Sandwell, UK, the 'Walk to Beijing' (WTB) intervention aimed to increase lifestyle physical activity using financial incentives (in combination with a health assessment, pedometer and brief advice). Aim. To examine the benefits of a financial incentive scheme to promote physical activity, specifically walking, in sedentary adults. Methods. A mixed methods evaluation comprised: (1) outcome evaluation employing a pre-post intervention design to measure three- and six-month changes in physical activity, physiological and self-reported health; (2) process evaluation using semi-structured interviews to explore participant experiences, motivations towards physical activity, incentivised health schemes and WTB participation; and (3) mixed methods case-study approach using data collected at six- and 12-month follow-up to further explore sustainability of behaviour change. Results. Three-month data were available for 1082 participants (64.5% of baseline sample). A statistically significant positive change from baseline to three-month follow-up was observed for stage of change (p<.001, d=.63), which was maintained (but not further improved) at six months (p<.001, d=.64). Significant three- and six-month improvements were also found in objective (e.g., BMI, waist-hip ratio, waist circumference and blood pressure) and subjective (e.g., EQ-5D, SF12v2 and Theory of Planned Behaviour constructs) measures of health status. At baseline, 41.7% of participants cited the financial incentive as influencing their decision to take part. Qualitative data also identified that the financial incentive was the primary motivator for some, but not all, individuals; other intervention components were also motivators. Conclusion. Data suggested that financial incentives may promote participation in lifestyle physical activity through aiding uptake and sustaining engagement; however, other intervention components were also important. This research is the first to conduct an evaluation of a financial incentive scheme to promote physical activity comprising a combination of quantitative, qualitative and longitudinal case study methods to gain a unique and detailed insight into the area. Important implications for future research and practice were identified.
9

Malik, Muhammad Usman. "Learning multimodal interaction models in mixed societies : a novel focus encoding scheme for addressee detection in multiparty interaction using machine learning algorithms." Thesis, Normandie, 2020. http://www.theses.fr/2020NORMIR18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Human-agent interaction and machine learning are two different research domains. Human-agent interaction refers to techniques and concepts involved in developing smart agents, such as robots or virtual agents, capable of seamless interaction with humans, to achieve a common goal. Machine learning, on the other hand, exploits statistical algorithms to learn data patterns. The proposed research work lies at the crossroads of these two research areas. Human interactions involve multiple modalities, which can be verbal, such as speech and text, as well as non-verbal, i.e. facial expressions, gaze, head and hand gestures, etc. To mimic real-time human-human interaction within human-agent interaction, multiple interaction modalities can be exploited. With the availability of multimodal human-human and human-agent interaction corpora, machine learning techniques can be used to develop various interrelated human-agent interaction models. In this regard, our research work proposes original models for addressee detection, turn change and next speaker prediction, and finally visual focus of attention behaviour generation, in multiparty interaction. Our addressee detection model predicts the addressee of an utterance during interaction involving more than two participants. The addressee detection problem has been tackled as a supervised multiclass machine learning problem. Various machine learning algorithms have been trained to develop addressee detection models. The results achieved show that the proposed addressee detection algorithms outperform a baseline. The second model we propose concerns turn change and next speaker prediction in multiparty interaction. Turn change prediction is modelled as a binary classification problem, whereas next speaker prediction is considered as a multiclass classification problem. Machine learning algorithms are trained to solve these two interrelated problems. The results show that the proposed models outperform the baselines. Finally, the third proposed model concerns the visual focus of attention (VFOA) behaviour generation problem for both speakers and listeners in multiparty interaction. This model is divided into various sub-models that are trained via machine learning as well as heuristic techniques. The results testify that our proposed systems yield better performance than the baseline models developed via random and rule-based approaches. The proposed VFOA behaviour generation model is currently implemented as a series of four modules to create different interaction scenarios between multiple virtual agents. For the purpose of evaluation, recorded videos from the VFOA generation models for speakers and listeners are presented to users, who evaluate the baseline, the real VFOA behaviour and the proposed VFOA models on various naturalness criteria. The results show that the VFOA behaviour generated via the proposed VFOA model is perceived as more natural than the baselines and as natural as the real VFOA behaviour.
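Since the abstract frames addressee detection as supervised multiclass classification over multimodal features, a minimal sketch of that framing could look like the following Python snippet. The features, labels and classifier here are purely hypothetical placeholders, not the corpus, focus encoding scheme or algorithms used in the thesis.

    # Illustration only: addressee detection posed as multiclass classification
    # over hypothetical per-utterance multimodal features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 500
    # Hypothetical features, e.g. gaze proportion towards each participant,
    # head-orientation agreement, utterance duration.
    X = rng.random((n, 5))
    y = rng.integers(0, 3, size=n)   # addressee label: participant 0, 1 or 2

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print("Cross-validated accuracy:", scores.mean().round(3))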
10

Hao, Chengcheng. "Explicit Influence Analysis in Crossover Models." Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-107703.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This dissertation develops influence diagnostics for crossover models. Mixed linear models and generalised mixed linear models are utilised to investigate continuous and count data from crossover studies, respectively. For both types of models, changes in the maximum likelihood estimates of parameters, particularly in the estimated treatment effect, due to minor perturbations of the observed data, are assessed. The novelty of this dissertation lies in the analytical derivation of influence diagnostics using decompositions of the perturbed mixed models. Consequently, the suggested influence diagnostics, referred to as the delta-beta and variance-ratio influences, provide new findings about how the constructed residuals affect the estimation in terms of different parameters of interest. The delta-beta and variance-ratio influence in three different crossover models are studied in Chapters 5-6, respectively. Chapter 5 analyses the influence of subjects in a two-period continuous crossover model. Possible problems with observation-level perturbations in crossover models are discussed. Chapter 6 extends the approach to higher-order crossover models. Furthermore, not only the individual delta-beta and variance-ratio influences of a subject are derived, but also the joint influences of two subjects from different sequences. Chapters 5-6 show that the delta-beta and variance-ratio influences of a particular parameter are decided by the special linear combination of the constructed residuals. In Chapter 7, explicit delta-beta influence on the estimated treatment effect in the two-period count crossover model is derived. The influence is related to the Pearson residuals of the subject. Graphical tools are developed to visualise information of influence concerning crossover models for both continuous and count data. Illustrative examples are provided in each chapter.
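For readers unfamiliar with the terminology, the 'delta-beta' and 'variance-ratio' influences referred to above can be read, in their generic case-perturbation form, roughly as follows (a standard formulation for orientation only; the thesis derives explicit analytical expressions for crossover models rather than these generic diagnostics):

    \Delta\hat{\beta}_{(i)} = \hat{\beta} - \hat{\beta}_{(i)}, \qquad
    \mathrm{VR}_{(i)} = \frac{\widehat{\operatorname{Var}}\big(\hat{\beta}_{(i)}\big)}{\widehat{\operatorname{Var}}\big(\hat{\beta}\big)},

where \hat{\beta} is the maximum likelihood estimate from the full data and \hat{\beta}_{(i)} the estimate obtained after perturbing (or deleting) the observations of subject i.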
11

Moore, Graham Francis. "Developing a mixed methods framework for process evaluations of complex interventions : the case of the National Exercise Referral Scheme Policy Trial in Wales." Thesis, Cardiff University, 2010. http://orca.cf.ac.uk/55051/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Where possible, policies to improve public health should be evidence-based. Where political pressures and shortage of evidence force action in advance of evidence, effectiveness can be evaluated during policy rollout. Because the aetiology of public health issues is complex, successful policies will likely be complex in their design, their implementation and their interaction with their contexts and target audiences. Process evaluation is therefore crucial in order to inform consistent implementation, and alongside outcomes evaluation, in order to understand how outcomes are produced. However, limited methodological guidance exists for process evaluation. This thesis develops a mixed-method framework exploring programme theory, diffusion, implementation, participant experiences and reach, which is applied to the evaluation of the Welsh National Exercise Referral Scheme (NERS). A logic model is developed via discussions with policy representatives. Diffusion is explored via qualitative interviews with policy representatives and local coordinators. Implementation checks draw on routine data, observation and self-report. Participant experiences are explored via qualitative interviews. Social patterning in reach is explored using routine monitoring data. The study identifies challenges diffusing NERS into local practice, in relation to communication structures, support, training provision and the mutual adaptation of the scheme and its contexts. Implementation checks indicate a common core of discounted, supervised, group-based exercise, though some divergence from programme theory emerged, with unfamiliar activities such as motivational interviewing and patient follow-up protocols delivered poorly. Nevertheless, relatively high adherence rates were achieved. Key perceived active ingredients in practice included professional supervision, enabling patients to build confidence and learn to exercise safely, and the patient-only environment, seen as providing an empathic context and realistic role models. However, lower uptake emerged amongst non-car owners, with higher adherence amongst patients already moderately active at baseline, older patients and non-mental health patients. Implications for ERS implementation, outcomes interpretation and process evaluation methodology are discussed.
12

Binous, Mohamed Sabeur. "Simulations numériques d'écoulements anisothermes turbulents : application à la cavité ventilée." Thesis, Perpignan, 2017. http://www.theses.fr/2017PERP0031/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This work presents a numerical study of anisothermal incompressible flows confined in a cavity. We first model heat transfer in a wall where one of its faces is covered with a thin layer of phase change material (PCM). This modelling is based on a Signorini boundary condition. The transfer equations are solved by a specific iterative procedure. This procedure is then applied to a differentially heated cavity, one of whose walls is covered with a thin layer of PCM. The air transfer equations are solved by a semi-implicit, second-order finite difference method and the projection algorithm. We validate the procedure by applying it to the lid-driven cavity, the backward-facing step, the flow around a square-section bar and natural convection in a differentially heated cavity. In a second step, a study of incompressible turbulent flows in a ventilated cavity was carried out using a parallel high-precision solver developed at LAMPS. The transfer equations are solved by a compact finite difference scheme and the projection algorithm. It is shown in particular that the heat flux applied to the lower wall of the cavity strongly influences the structure of the flow and the heat transfers, as well as the mean and fluctuating fields of velocity and temperature.
13

Ramos, Reynaldo Perez. "Analysis of mixed-use schemes in regeneration areas." Thesis, Ulster University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.675930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Mixed-use development is not a new phenomenon in urban planning and real estate management. It is currently promoted to revitalise inner towns and cities, particularly by utilising unproductive urban spaces referred to as 'brownfield' and 'greyfield' (underutilised) land, for social, economic and environmental benefits. Mixed-use development also challenges various players (planners, policy makers, investors and developers) in terms of the diversity of uses, the density of the scheme, the inclusion of high quality urban form and design, and the delivery of optimum utilisation of the development scheme in terms of highest and best use. This research explores the underlying factors in the promotion of mixed-use schemes (MUS) in response to the emerging challenges of the urban regeneration agenda towards achieving sustainable communities. The study presents the findings from case studies carried out in the UK and the Republic of Ireland using a set of variables identified from the literature, towards establishing success indicators that make a strong contribution to the overall occupancy level of MUS against single-use or mono-use developments in the revitalisation of urban centres. Finally, the application of Multiple Regression Modelling shows that the component mix (number of uses), the balance of uses (space allocations), site condition and integration with the neighbouring uses are essential elements in accomplishing the maximum potential for viability and success of mixed-use developments. These findings offer invaluable inputs for carrying out further investigations of various mixed-use schemes to fully understand the determining qualitative and quantitative factors in the feasibility and performance of this type of development in regeneration areas. The results from the MRA also support relevant judgements in assessing the optimal composition of the mix of uses, which enhances scheme promotion in regeneration areas and could lead to the potential optimisation of mixed use and of policy decision making.
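As an illustration of the multiple regression modelling mentioned above — with entirely hypothetical variable names and synthetic data, not the study's dataset or estimated coefficients — the occupancy model might be set up along these lines in Python.

    # Illustration only: regressing overall occupancy on hypothetical
    # mixed-use scheme variables with ordinary least squares.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 60
    df = pd.DataFrame({
        "component_mix": rng.integers(2, 6, n),    # number of uses in the scheme
        "balance_of_uses": rng.random(n),          # evenness of space allocation
        "site_condition": rng.integers(1, 6, n),   # 1 (poor) to 5 (excellent)
        "integration": rng.integers(1, 6, n),      # fit with neighbouring uses
    })
    df["occupancy"] = (40 + 5 * df["component_mix"] + 20 * df["balance_of_uses"]
                       + 3 * df["site_condition"] + 4 * df["integration"]
                       + rng.normal(0, 5, n)).clip(0, 100)

    model = sm.OLS(df["occupancy"],
                   sm.add_constant(df.drop(columns="occupancy"))).fit()
    print(model.summary())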
14

Demay, Charles. "Modélisation et simulation d'écoulements transitoires diphasiques eau-air dans les circuits hydrauliques." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM100/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The present work is dedicated to the mathematical and numerical modelling of transient air-water flows in pipes which occur in piping systems of several industrial areas such as nuclear or hydroelectric power plants or sewage pipelines. It deals more specifically with the so-called mixed flows which involve stratified regimes driven by slow gravity waves, pressurized or dry regimes (pipe full of water or air) driven by fast acoustic waves and entrapped air pockets. An accurate modelling of these flows is necessary to guarantee the operability of the related hydraulic system. While most of available models in the literature focus on the water phase neglecting the air phase, a compressible two-layer model which accounts for air-water interactions is proposed herein. The derivation process relies on a depth averaging of the isentropic Euler set of equations for both phases where the hydrostatic constraint is applied on the water pressure gradient. The resulting system is hyperbolic and satisfies an entropy inequality in addition to other significant mathematical properties, including the uniqueness of jump conditions and the positivity of heights and densities for each layer. Regarding the discrete level, the simulation of mixed flows with the compressible two-layer model raises key challenges due to the discrepancy of wave speeds characterizing each regime combined with the fast underlying relaxation processes and with phase vanishing when the flow becomes pressurized or dry. Thus, an implicit-explicit fractional step method is derived. It relies on the fast pressure relaxation in addition to a mimetic approach with the shallow water equations for the slow dynamics of the water phase. In particular, a relaxation method provides stabilization terms activated according to the flow regime. Several test cases are performed and attest the ability of the compressible two-layer model to deal with mixed flows in pipes involving air pocket entrapment
15

Kheriji, Walid. "Méthodes de correction de pression pour les équations de Navier-Stokes compressibles." Phd thesis, Université de Provence - Aix-Marseille I, 2011. http://tel.archives-ouvertes.fr/tel-00804116.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis deals with the development of semi-implicit fractional-step schemes for the compressible Navier-Stokes equations; these schemes belong to the class of pressure-correction methods. The chosen spatial discretisation is of staggered type: non-conforming mixed finite elements (Crouzeix-Raviart or Rannacher-Turek finite elements) or the classical MAC scheme. An upwind finite-volume discretisation of the mass balance guarantees the positivity of the density. The positivity of the internal energy is obtained by discretising the continuous internal energy balance with an upwind finite-volume method and by coupling this discrete internal energy balance to the pressure-correction step. A specific finite-volume discretisation of the velocity convection term in the momentum balance, performed on a dual mesh, together with a pressure renormalisation step, guarantees control over time of the integral of the total energy over the domain. These a priori estimates, combined with a topological degree argument, also imply the existence of a discrete solution. Applying this scheme to the Euler equations raises an additional difficulty: obtaining correct shock speeds requires the scheme to be consistent with the total energy balance equation, a property we obtain as follows. First, we establish a discrete (local) kinetic energy balance. This balance contains source terms, which we then compensate in the internal energy balance. The kinetic and internal energy equations are associated with the dual and primal meshes respectively, and therefore cannot be added to obtain a total energy balance; this last equation is nevertheless recovered, in its continuous form, at convergence: assuming that a sequence of discrete solutions converges as the time and space steps tend to zero, we show, at least in 1D, that the limit satisfies a weak form of it. These theoretical results are supported by numerical tests. Similar results are obtained for the barotropic Navier-Stokes equations.
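Schematically, a pressure-correction (fractional-step) scheme of the kind discussed above splits each time step into a velocity prediction and a pressure/velocity correction. The semi-discrete sketch below is only illustrative (notation and details simplified; the thesis works with staggered space discretisations of the compressible equations):

    \frac{\tilde{\rho}\,\tilde{u}^{n+1} - \rho^{n}u^{n}}{\delta t}
      + \nabla\cdot\big(\rho^{n} u^{n}\otimes \tilde{u}^{n+1}\big)
      - \nabla\cdot\tau\big(\tilde{u}^{n+1}\big) + \nabla p^{n} = 0
      \quad \text{(prediction)},

    \frac{\rho^{n+1}u^{n+1} - \tilde{\rho}\,\tilde{u}^{n+1}}{\delta t}
      + \nabla\big(p^{n+1}-p^{n}\big) = 0, \qquad
    \frac{\rho^{n+1}-\rho^{n}}{\delta t} + \nabla\cdot\big(\rho^{n+1}u^{n+1}\big) = 0
      \quad \text{(correction)},

where eliminating u^{n+1} from the correction step yields a (nonlinear) elliptic problem for the pressure increment.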
16

Shi, Chen Yang. "High order compact schemes for fractional differential equations with mixed derivatives." Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Tain, Cyril. "Modelling of type II superconductors : implementation with FreeFEM." Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMIR40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis we present four models for type II superconductors: the London model, the time dependent Ginzburg-Landau (TDGL) model, the steady state Ginzburg-Landau model and an Abelian-Higgs model. For the London model a problem with cylindrical symmetry was considered. A hydrodynamic formulation of the problem was established through the introduction of a stream function. Well-posedness of the problem was proved. The external magnetic field was computed for 2D and 3D domains. In 3D a boundary element method was implemented using a recent feature of FreeFem. For the TDGL model two codes based on two variational formulations were proposed and tested on classical benchmarks of the literature in 2D and 3D. In the steady state GL model a Sobolev gradient technique was used to find the equilibrium state. The results were compared with the ones given by the TDGL model. In the Abelian-Higgs model a 1D finite differences code written in Fortran was developed and tested with the construction of a manufactured system. The model was used to retrieve some of the properties of magnetization of superconductors
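For reference, one common dimensionless form of the time-dependent Ginzburg-Landau (TDGL) system mentioned above is the following (sign, scaling and gauge conventions vary between references, and this is not necessarily the exact formulation used in the thesis):

    \frac{\partial \psi}{\partial t} + i\kappa\phi\,\psi
      + \Big(\frac{i}{\kappa}\nabla + \mathbf{A}\Big)^{2}\psi
      + \big(|\psi|^{2}-1\big)\psi = 0,

    \sigma\Big(\frac{\partial \mathbf{A}}{\partial t} + \nabla\phi\Big)
      + \nabla\times\nabla\times\mathbf{A}
      + \frac{i}{2\kappa}\big(\psi^{*}\nabla\psi - \psi\nabla\psi^{*}\big)
      + |\psi|^{2}\mathbf{A} = \nabla\times\mathbf{H},

where \psi is the complex order parameter, \mathbf{A} the magnetic vector potential, \phi the electric potential, \kappa the Ginzburg-Landau parameter and \mathbf{H} the applied magnetic field.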
18

ROSSI, ELENA. "Balance Laws: Non Local Mixed Systems and IBVPs." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2016. http://hdl.handle.net/10281/103090.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Scalar hyperbolic balance laws in several space dimensions play a central role in this thesis. First, we deal with a new class of mixed parabolic-hyperbolic systems on all R^n: we obtain the basic well-posedness theorems, devise an ad hoc numerical algorithm, prove its convergence and investigate the qualitative properties of the solutions. The extension of these results to bounded domains requires a deep understanding of the initial boundary value problem (IBVP) for hyperbolic balance laws. The last part of the thesis provides rigorous estimates on the solution to this IBVP, under precise regularity assumptions. In Chapter 1 we introduce a predator-prey model. A non local and non linear balance law is coupled with a parabolic equation: the former describes the evolution of the predator density, the latter that of prey. The two equations are coupled both through the convective part of the balance law and the source terms. The drift term is a non local function of the prey density. This allows the movement of predators to be directed towards the regions where the concentration of prey is higher. We prove the well-posedness of the system, hence the existence and uniqueness of solution, the continuous dependence from the initial data and various stability estimates. In Chapter 2 we devise an algorithm to compute approximate solutions to the mixed system introduced above. The balance law is solved numerically by a Lax-Friedrichs type method via dimensional splitting, while the parabolic equation is approximated through explicit finite-differences. Both source terms are integrated by means of a second order Runge-Kutta scheme. The key result in Chapter 2 is the convergence of this algorithm. The proof relies on a careful tuning between the parabolic and the hyperbolic methods and exploits the non local nature of the convective part in the balance law. This algorithm has been implemented in a series of Python scripts. Using them, we obtain information about the possible order of convergence and we investigate the qualitative properties of the solutions. Moreover, we observe the formation of a striking pattern: while prey diffuse, predators accumulate on the vertices of a regular lattice. The analytic study of the system above is on all R^n. However, both possible biological applications and numerical integrations suggest that the boundary plays a relevant role. With the aim of studying the mixed hyperbolic-parabolic system in a bounded domain, we noticed that for balance laws known results lack some of the estimates necessary to deal with the coupling. In Chapter 3 we then focus on the IBVP for a general balance law in a bounded domain. We prove the well-posedness of this problem, first with homogeneous boundary condition, exploiting the vanishing viscosity technique and the doubling of variables method, then for the non homogeneous case, mainly thanks to elliptic techniques. We pay particular attention to the regularity assumptions and provide rigorous estimates on the solution.
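Schematically, the mixed parabolic-hyperbolic predator-prey system described in Chapter 1 has the structure below. This is a generic sketch consistent with the description, not the exact equations of the thesis: a non-local balance law for the predator density u coupled with a parabolic equation for the prey density w,

    \partial_t u + \nabla\cdot\big(u\, v(w)\big) = f(u,w), \qquad
    \partial_t w - \mu\,\Delta w = g(u,w),

where the drift v(w) depends on w non-locally, for instance through a spatial convolution w * \eta with a smoothing kernel \eta, so that predators move towards regions of higher prey concentration.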
19

Casner, Bill. "A Mixed Method Study on Schema-Based Instruction, Mathematical Problem Solving Skills, and Students with an Educational Disability." Thesis, Lindenwood University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10244398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

The purpose of this study was to determine the student outcomes of implementing schema-based instruction on students in grades 3-8 identified with an educational disability and ascertain how students developed mathematical problem solving skills. After special education teachers in a metropolitan school district in the Midwest administered a pre-assessment, the researcher used the results to select 21 students with an educational disability to participate in the mixed-methods study. Special education teachers implemented Asha K. Jitendra’s (2007) educational program titled Solving Math Word Problems: Teaching Students with Learning Disabilities Using Schema-Based Instruction, during the 2013-2014 school year and taught participants using these techniques. The researcher measured student achievement by using both a pre and post-assessment and M-CAP benchmark scores on mathematical problem solving. In addition, the researcher gathered perceptions of schema-based instruction via surveys and interviews with special education teachers, general education teachers, and student participants. The analysis of quantitative data from the pre and post-assessments of students participating in the schema-based program as well as the analysis of qualitative data from student participant surveys supported a positive outcome on the use of schema-based instruction with students with an educational disability; the findings of this study reinforced the then-current literature. However, the student participants' M-CAP assessment data did not demonstrate the same amount of growth as the assessment data from the schema-based program. In addition, the analysis of survey and interview data from the two teacher groups also displayed discrepancies between special education teachers’ and general education teachers’ overall perceptions of the schema-based instructional program. Despite this, the preponderance of evidence demonstrated most students who participated in the study did learn as a result of the schema-based instruction and developed mathematical problem-solving skills. Therefore, the findings of this study corroborated the then-current literature and supported the continual use of the researched program, Solving Math Word Problems: Teaching Students with Learning Disabilities Using Schema-Based Instruction, by Jitendra (2007). The researcher concluded this program to be a valid research-based intervention to increase mathematical problem solving skills for students with an educational disability.

20

Celebi, Emre. "MODELS OF EFFICIENT CONSUMER PRICING SCHEMES IN ELECTRICITY MARKETS." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Suppliers in competitive electricity markets regularly respond to prices that change hour by hour or even more frequently, but most consumers respond to price changes on a very different time scale, i.e. they observe and respond to changes in price as reflected on their monthly bills. This thesis examines mixed complementarity programming models of equilibrium that can bridge the speed of response gap between suppliers and consumers, yet adhere to the principle of marginal cost pricing of electricity. It develops a computable equilibrium model to estimate the time-of-use (TOU) prices that can be used in retail electricity markets. An optimization model for the supply side of the electricity market, combined with a price-responsive geometric distributed lagged demand function, computes the TOU prices that satisfy the equilibrium conditions. Monthly load duration curves are approximated and discretized in the context of the supplier's optimization model. The models are formulated and solved by the mixed complementarity problem approach. It is intended that the models will be useful (a) in the regular exercise of setting consumer prices (i.e., TOU prices that reflect the marginal cost of electricity) by a regulatory body (e.g., Ontario Energy Board) for jurisdictions (e.g., Ontario) where consumers' prices are regulated, but suppliers offer into a competitive market, (b) for forecasting in markets without price regulation, but where consumers pay a weighted average of wholesale price, (c) in evaluation of the policies regarding time-of-use pricing compared to the single pricing, and (d) in assessment of the welfare changes due to the implementation of TOU prices.
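For readers unfamiliar with the formulation, a mixed complementarity problem (MCP) of the kind referred to above asks, for a function F and bounds l \le u, for a vector x with l \le x \le u such that, componentwise,

    x_i = l_i \;\Rightarrow\; F_i(x) \ge 0, \qquad
    l_i < x_i < u_i \;\Rightarrow\; F_i(x) = 0, \qquad
    x_i = u_i \;\Rightarrow\; F_i(x) \le 0.

In the one-sided case (l = 0, u = +\infty) this reduces to the familiar complementarity conditions 0 \le x \perp F(x) \ge 0, the form that arises when supply-side optimality conditions and the demand response are stacked into a single equilibrium system.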
21

Heuer, Christof. "High-order compact finite difference schemes for parabolic partial differential equations with mixed derivative terms and applications in computational finance." Thesis, University of Sussex, 2014. http://sro.sussex.ac.uk/id/eprint/49800/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is concerned with the derivation, numerical analysis and implementation of high-order compact finite difference schemes for parabolic partial differential equations in multiple spatial dimensions. All of these partial differential equations contain mixed derivative terms. The resulting schemes have been applied to equations appearing in computational finance. First, we develop and study essentially high-order compact finite difference schemes in a general setting, with option pricing in stochastic volatility models on non-uniform grids as the application. The schemes are fourth-order accurate in space and second-order accurate in time for vanishing correlation. In the numerical study we obtain high-order numerical convergence also for non-zero correlation and non-smooth payoffs, which are typical in option pricing. In all numerical experiments a comparative standard second-order discretisation is significantly outperformed. We conduct a numerical stability study which indicates unconditional stability of the scheme. Second, we derive and analyse high-order compact schemes with an n-dimensional spatial domain in a general setting. We obtain fourth-order accuracy in space and second-order accuracy in time. A thorough von Neumann stability analysis is performed for spatial domains of dimension two and three. We prove that a necessary stability condition holds unconditionally, without additional restrictions on the choice of the discretisation parameters, for vanishing mixed derivative terms. We also give partial results for non-vanishing mixed derivative terms. As a first example, Black-Scholes basket options are considered. In all numerical experiments, where the initial conditions were smoothed using the smoothing operators developed by Kreiss, Thomée and Widlund, a comparative standard second-order discretisation is significantly outperformed. As a second example, the multi-dimensional Heston basket option is considered for n independent Heston processes, where for each Heston process there is a non-vanishing correlation between the stock and its volatility.
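As a one-dimensional illustration of the high-order compact idea (the thesis treats multi-dimensional problems with mixed derivative terms, so this is only the simplest prototype), the classical fourth-order compact approximation of a second derivative on a uniform grid with spacing h couples neighbouring derivative values:

    \frac{1}{12}\Big(u''_{i-1} + 10\,u''_{i} + u''_{i+1}\Big)
      = \frac{u_{i+1} - 2u_{i} + u_{i-1}}{h^{2}} + \mathcal{O}(h^{4}),

so that fourth-order accuracy is obtained on the same three-point stencil as the standard second-order central difference, at the cost of solving a tridiagonal system.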
22

Al-Shawabkeh, Rami. "The role of sustainable urban design principles in delivering high density mixed use schemes in Jordan : using Amman as a case study." Thesis, University of Brighton, 2015. https://research.brighton.ac.uk/en/studentTheses/35e734bb-741a-4b0d-8a5c-ea72231d19e3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This research is about the role of sustainable urban design principles in delivering high density mixed use schemes in Amman. It builds on previous work developed in the 2010 Amman Master Plan to propose, a first for the city, sustainable high density mixed use (HDMU) development in three distinct geographical areas of the city. High density mixed use development conceived as part of the master plan is a new approach for the city of Amman and for Jordan.
23

Li, Ru. "Numerical simulations of natural or mixed convection in vertical channels : comparisons of level-set numerical schemes for the modeling of immiscible incompressible fluid flows." Phd thesis, Université Paris-Est, 2012. http://tel.archives-ouvertes.fr/tel-00806510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of this research dissertation is to study natural and mixed convection of fluid flows, and to develop and validate numerical schemes for interface tracking in order to subsequently treat incompressible and immiscible fluid flows. In a first step, an original numerical method, based on Finite Volume discretizations, is developed for modeling low Mach number flows with large temperature gaps. Three physical applications on air flowing through vertical heated parallel plates were investigated. We showed that the optimum spacing corresponding to the peak heat flux transferred from an array of isothermal parallel plates cooled by mixed convection is smaller than that for natural or forced convection when the pressure drop at the outlet is kept constant. We also proved that mixed convection flows resulting from an imposed flow rate may exhibit unexpected physical solutions; an alternative model based on a prescribed total pressure at the inlet and a fixed pressure at the outlet section gives more realistic results. For channels heated by a heat flux on one wall only, surface radiation tends to suppress the onset of recirculations at the outlet and to unify the wall temperatures. In a second step, the mathematical model coupling the incompressible Navier-Stokes equations and the Level-Set method for interface tracking is derived. Improvements in fluid volume conservation by using high-order discretization (ENO-WENO) schemes for the transport equation and variants of the signed distance equation are discussed.
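For context, the Level-Set method referred to above represents the interface as the zero level set of a function \varphi, usually initialised as a signed distance, and transports it with the flow; a standard formulation (not specific to this thesis) is

    \partial_t \varphi + \mathbf{u}\cdot\nabla\varphi = 0, \qquad
    \Gamma(t) = \{\,x : \varphi(x,t) = 0\,\},

with periodic reinitialisation towards a signed distance function via

    \partial_\tau \varphi + \operatorname{sign}(\varphi_0)\big(|\nabla\varphi| - 1\big) = 0.

The ENO/WENO schemes mentioned above are high-order upwind discretisations of these transport equations, intended to limit the artificial loss of fluid volume.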
24

Rue, Robert A. ""Mixed Taste," Cosmopolitanism, and Intertextuality in Georg Philipp Telemann's Opera Orpheus." Bowling Green State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1483456936606681.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Aggett, Jonathan Edward. "Financial Analysis of Restoring Sustainable Forests on Appalachian Mined Lands for Wood Products, Renewable Energy, Carbon Sequestration, and Other Ecosystem Services." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/36096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Public Law 95-87, the Surface Mining Control and Reclamation Act of 1977 (SMCRA), mandates that mined land be reclaimed in a fashion that renders the land at least as productive after mining as it was before mining. In the central Appalachian region, where prime farmland and economic development opportunities for mined land are scarce, the most practical land use choices are hayland/pasture, wildlife habitat, or forest land. Since 1977, the majority of mined land has been reclaimed as hayland/pasture or wildlife habitat, which is less expensive to reclaim than forest land, since there are no tree planting costs. As a result, there are now hundreds of thousands of hectares of grasslands and scrublands in various stages of natural succession located throughout otherwise forested mountains in the U.S. The purpose of this study is to develop a framework for understanding/calculating the economic implications of converting these reclaimed mined lands to forests under various silvicultural regimes, and to demonstrate the economic/decision-making implications of an incentive scheme on such a land use conversion. The economic feasibility of a range of land-use conversion scenarios was analyzed for both mixed hardwoods and white pine, under a set of low product prices and under a set of high product prices. Economic feasibility was based on land expectation values. Further, three types of incentive schemes were investigated: 1) lump sum payment at planting (and equivalent series of annual payments), 2) revenue incentive at harvest and 3) payment based on carbon volume.

Mixed hardwood LEVs ranged from -$2416.71/ha (low prices) to $3955.72/ha (high prices). White pine LEVs ranged from -$2330.43/ha (low prices) to $3746.65/ha (high prices). A greater percentage of white pine scenarios yielded economically feasible land-use conversions than did the mixed hardwood scenarios, and it seems that a conversion to white pine forests would, for the most part, be the more appealing option. It seems that, for both mixed hardwoods and white pine, it would be in the best interests of the landowner to invest in the highest quality sites first. For a conversion to mixed hardwood forests, a low intensity level of site preparation seems economically optimal for most scenarios. For a conversion to white pine forests, a medium intensity level of site preparation seems economically optimal for most scenarios.

Mixed hardwoods lump sum payments, made at the time of planting, ranged from $0/ha to $2416.71/ha (low prices). White pine lump sum payments, made at the time of planting, ranged from $0/ha to $2330.53/ha (low prices). Mixed hardwoods benefits based on an increase in revenue at harvest, ranged from $0/ha to $784449.52/ha (low prices). White pine benefits based on an increase in revenue at harvest ranged from $0/ha to $7011.48/ha (high prices). Annual mixed hardwood benefits, based on total stand carbon volume present at the end of a given year, ranged from $0/ton of carbon to $5.26/ton carbon (low prices). White pine benefits based on carbon volume ranged from $0/ton of carbon to $18.61/ton of carbon (high prices). It appears that, for white pine scenarios, there is not much difference between incentive values for lump sum payments at planting, revenue incentives at harvest, and total carbon payments over a rotation. For mixed hardwoods, however, it appears that the carbon payment incentive is by far the cheapest option of encouraging landowners to convert land.
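The land expectation values quoted above are conventionally computed with the Faustmann formula, which capitalises an infinite series of identical rotations (shown here in its standard textbook form; the study's exact cost and revenue items are not reproduced):

    \mathrm{LEV} = \frac{\sum_{t=0}^{T}\big(R_t - C_t\big)\,(1+i)^{\,T-t}}{(1+i)^{T} - 1},

where R_t and C_t are the revenues and costs incurred in year t of a rotation of length T years and i is the real discount rate. A negative LEV indicates that the land-use conversion is not economically viable at that discount rate, which is why the incentive payments above are sized to offset the shortfall.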
Master of Science

26

Souza, Grazione de. "Modelagem computacional de escoamentos com duas e três fases em reservatórios petrolíferos heterogêneos." Universidade do Estado do Rio de Janeiro, 2008. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Fundação Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro
We consider in this work a mathematical model for two- and three-phase flow problems in petroleum reservoirs and the computational modeling of the governing equations for its numerical solution. We consider two-phase (water-oil) and three-phase (water-gas-oil) incompressible, immiscible flow problems, and the reservoir rock is considered to be heterogeneous. In our model, we also take into account the hysteresis effects in the oil relative permeability functions. In the case of three-phase flow, the choice of general expressions for the relative permeability functions may lead to the loss of strict hyperbolicity and, therefore, to the existence of an elliptic region or umbilic points for the system of nonlinear hyperbolic conservation laws describing the convective transport of the fluid phases. As a consequence, the loss of hyperbolicity may lead to the existence of nonclassical shocks (also called transitional shocks or undercompressive shocks) in three-phase flow solutions. We present a new, accurate fractional time-step method based on an operator splitting technique for the numerical solution of a system of partial differential equations modeling two-phase, immiscible water-oil flow problems in heterogeneous petroleum reservoirs. An efficient two-phase water-oil numerical simulator developed by our research group was successfully extended to take into account hysteresis effects under the hypotheses stated above. The numerical results obtained with the proposed procedure provide strong numerical evidence that the method at hand can be extended to related three-phase water-gas-oil flow problems. A two-level operator splitting technique allows for the use of distinct time steps for the four problems defined by the splitting procedure: convection, diffusion, pressure-velocity and relaxation for hysteresis. The convective transport (hyperbolic) of the fluid phases is approximated by a high resolution, nonoscillatory, second-order, conservative central difference scheme in the convection step. This scheme is combined with locally conservative mixed finite elements for the numerical solution of the diffusive transport (parabolic) and the pressure-velocity (elliptic) problems. The time discretization of the parabolic problem is performed by means of the implicit Backward Euler method. An ordinary differential equation is solved (analytically) for the relaxation related to hysteresis. Two-phase water-oil numerical results in one space dimension, which are in very good agreement with semi-analytical results available in the literature, were computationally reproduced; new numerical results in two-dimensional heterogeneous media are also presented, and the extension of this technique to three-phase water-gas-oil flow problems is proposed.
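The two-level splitting described in this abstract can be pictured with a toy sketch. The Python fragment below (hypothetical, not the thesis simulator) advances a 1D water-oil saturation profile with several small explicit convection sub-steps inside one larger pressure-velocity step; a first-order upwind update and Buckley-Leverett fractional flow stand in for the second-order central scheme and mixed finite elements used in the thesis, and the capillary-diffusion and hysteresis-relaxation steps are only indicated by comments.

```python
import numpy as np

def frac_flow(s, mu_w=1.0, mu_o=5.0):
    """Buckley-Leverett fractional flow of water (quadratic relative permeabilities)."""
    krw, kro = s**2, (1.0 - s)**2
    return (krw / mu_w) / (krw / mu_w + kro / mu_o)

def convection_substep(s, v, dx, dt):
    """First-order upwind update of s_t + v*f(s)_x = 0 for v > 0 (placeholder for the
    second-order central scheme of the thesis)."""
    flux = v * frac_flow(s)
    s_new = s.copy()
    s_new[1:] -= dt / dx * (flux[1:] - flux[:-1])
    return s_new

def splitting_step(s, v, dx, dt_pressure, n_conv):
    """One large pressure-velocity step containing n_conv smaller convection sub-steps."""
    # pressure-velocity step: in 1D incompressible flow the total velocity is constant,
    # so nothing is recomputed here; in the thesis this is a mixed-FEM elliptic solve
    dt_c = dt_pressure / n_conv
    for _ in range(n_conv):
        s = convection_substep(s, v, dx, dt_c)
    # a capillary-diffusion step and the hysteresis-relaxation ODE step would follow here
    return s

s = np.where(np.linspace(0.0, 1.0, 200) < 0.1, 1.0, 0.0)   # water slug injected at the left
s = splitting_step(s, v=1.0, dx=1.0 / 200, dt_pressure=1e-3, n_conv=10)
```

The point of the two-level structure is that the convection sub-step size can be chosen from a CFL condition independently of the larger pressure-velocity (and diffusion) steps.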
27

Mint, brahim Maimouna. "Méthodes d'éléments finis pour le problème de changement de phase en milieux composites." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0157/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Dans ces travaux de thèse on s’intéresse au développement d’un outil numérique pour résoudre le problème de conduction instationnaire avec changement de phase dans un milieu composite constitué d’une mousse de graphite infiltrée par un matériau à changement de phase tel que le sel, dans le contexte du stockage de l’énergie thermique solaire. Au chapitre 1, on commence par présenter le modèle sur lequel on va travailler. Il est séparé en trois sous-parties : un problème de conduction de chaleur dans la mousse, un problème de changement de phase dans les pores remplis de sel et une condition de résistance thermique de contact entre les deux matériaux qui est traduite par une discontinuité du champ de température. Au chapitre 2, on étudie le problème stationnaire de conduction thermique dans un milieu composite avec résistance de contact. Ceci permet de se focaliser sur la plus grande difficulté présente dans le problème qui est le traitement de la condition de saut à l’interface. Deux méthodes d’éléments finis sont proposées pour résoudre ce problème : une méthode basée sur les éléments finis Lagrange P1 et une méthode hybride-duale utilisant les éléments finis Raviart-Thomas d’ordre 0 et P0. L’analyse numérique des deux méthodes est effectuée et les résultats de tests numériques attestent de l’efficacité des deux méthodes [10]. Les matériaux à changement de phase qu’on étudie dans le cadre de cette thèse sont des matériaux purs, par conséquent le changement de phase s’effectue en une valeur de température fixe qui est la température de fusion. Ceci est modélisé par un saut dans la fonction fraction liquide et par conséquent dans la fonction enthalpie du matériau. Cette discontinuité représente une difficulté numérique supplémentaire qu’on propose de surmonter en introduisant un intervalle de régularisation autour de la température de fusion. Cette procédure est présentée dans le chapitre 3 où une étude analytique et numérique montre que l’erreur sur la température se comporte comme ε en dehors de la zone de mélange, où ε est la largeur de l’intervalle de régularisation. Cependant, à l’intérieur l’erreur se comporte comme √ε et on montre que cette estimation est optimale. Cette diminution de vitesse de convergence est due à l’énergie qui reste bloquée dans la zone de mélange [58]. Dans le chapitre 4 on présente quatre des schémas les plus utilisés pour le traitement de la non-linéarité due au changement de phase : mise à jour du terme source, linéarisation de l’enthalpie, la capacité thermique apparente et le schéma de Chernoff. Différents tests numériques sont réalisés afin de tester et comparer ces quatre méthodes pour différents types de problèmes. Les résultats montrent que le schéma de linéarisation de l’enthalpie est le plus précis à chaque pas de temps tandis que le schéma de la capacité thermique apparente donne de meilleurs résultats au bout d’un certain temps de calcul. Cela indique que si l’on s’intéresse aux états transitoires du matériau le premier schéma est le meilleur choix. Cependant, si l’on s’intéresse au comportement thermique asymptotique du matériau le second schéma est plus adapté.
Les résultats montrent également que le schéma de Chernoff est le plus rapide parmi les quatre schémas en termes de temps de calcul et donne des résultats comparables à ceux des deux plus précis. Enfin, dans le chapitre 5 on utilise le schéma de Chernoff avec la méthode d’éléments finis hybride-duale Raviart-Thomas d’ordre 0 et P0 pour résoudre le problème non-linéaire de conduction thermique dans un milieu composite réel avec matériau à changement de phase. Le but est de déterminer si un matériau composite avec une distribution uniforme de pores est assimilable à un matériau à changement de phase homogène avec des propriétés thermo-physiques équivalentes. Pour toutes les expériences numériques exposées dans ce manuscrit on a utilisé le logiciel libre d’éléments finis FreeFem++ [41]
In this thesis we aim to develop a numerical tool that allows us to solve the unsteady heat conduction problem in a composite medium with a graphite foam matrix infiltrated with a phase change material such as salt, in the framework of latent heat thermal energy storage. In chapter 1, we start by explaining the model that we are studying, which is separated into three sub-parts: a heat conduction problem in the foam, a phase change problem in the pores of the foam which are filled with salt, and a contact resistance condition at the interface between both materials which results in a jump in the temperature field. In chapter 2, we study the steady heat conduction problem in a composite medium with contact resistance. This allows us to focus on the main difficulty here, which is the treatment of the thermal contact resistance at the interface between the carbon foam and the salt. Two finite element methods are proposed in order to solve this problem: a finite element method based on Lagrange P1 elements and a hybrid-dual finite element method using the lowest-order Raviart-Thomas elements for the heat flux and P0 for the temperature. The numerical analysis of both methods is conducted and numerical examples are given to assert the analytic results. The work presented in this chapter has been published in the Journal of Scientific Computing [10]. The phase change materials that we study here are mainly pure materials, and as a consequence the change in phase occurs at a single point, the melting temperature. This introduces a jump in the liquid fraction and consequently in the enthalpy. This discontinuity represents an additional numerical difficulty that we propose to overcome by introducing a smoothing interval around the melting temperature. This is explained in chapter 3, where an analytical and numerical study shows that the error on the temperature behaves like ε outside of the mushy zone, where ε is the width of the smoothing interval. However, inside the mushy zone the error behaves like √ε, and we prove that this estimate is optimal due to the energy trapped in the mushy zone. This chapter has been published in Communications in Mathematical Sciences [58]. The next step is to determine a suitable time discretization scheme that allows us to handle the non-linearity introduced by the phase change. For this purpose we present in chapter 4 four of the most used numerical schemes to solve the non-linear phase change problem: the update source method, the enthalpy linearization method, the apparent heat capacity method and the Chernoff method. Various numerical tests are conducted in order to test and compare these methods for various types of problems. Results show that the enthalpy linearization is the most accurate at each time step, while the apparent heat capacity gives better results after a given time. This indicates that if we are interested in the transitory states, the first scheme is the best choice. However, if we are interested in the asymptotic thermal behavior of the material, the second scheme is better. Results also show that the Chernoff scheme is the fastest in terms of calculation time and gives results comparable to those given by the first two methods. Finally, in chapter 5 we use the Chernoff method combined with the hybrid-dual finite element method with P0 and the lowest-order Raviart-Thomas elements to solve the non-linear heat conduction problem in a realistic composite medium with a phase change material.
Numerical simulations are realised using 2D cuts of X-ray images of two real graphite matrix foams infiltrated with a salt. The aim of these simulations is to determine if the studied composite materials could be assimilated to an equivalent homogeneous phase change material with equivalent thermo-physical properties. For all simulations conducted in this work we used the free finite element software FreeFem++ [41].
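As a rough illustration of the regularisation idea summarised above, the sketch below (with placeholder values for the melting temperature T_m, the smoothing width eps, the heat capacity c and the latent heat L) smooths the liquid-fraction jump over an interval of width eps, which makes the enthalpy continuous and gives the apparent heat capacity c + L/eps inside the mushy zone.

```python
import numpy as np

def liquid_fraction(T, T_m=0.0, eps=0.5):
    """Piecewise-linear smoothed liquid fraction: 0 below T_m - eps/2, 1 above T_m + eps/2."""
    return np.clip((T - (T_m - eps / 2.0)) / eps, 0.0, 1.0)

def enthalpy(T, c=1.0, L=10.0, T_m=0.0, eps=0.5):
    """Regularised enthalpy H(T) = c*T + L*chi_eps(T); continuous for eps > 0."""
    return c * T + L * liquid_fraction(T, T_m, eps)

def apparent_heat_capacity(T, c=1.0, L=10.0, T_m=0.0, eps=0.5):
    """dH/dT: equals c outside the mushy zone and c + L/eps inside it."""
    inside = np.abs(T - T_m) <= eps / 2.0
    return np.where(inside, c + L / eps, c)

T = np.linspace(-2.0, 2.0, 9)
print(enthalpy(T))
print(apparent_heat_capacity(T))
```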
28

Abreu, Eduardo Cardoso de. "Modelagem e simulação computacional de escoamentos trifásicos em reservatórios de petróleo heterogêneos." Universidade do Estado do Rio de Janeiro, 2007. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Neste trabalho é apresentado um novo método acurado com passo de tempo fracionário, baseado em uma técnica de decomposição de operadores, para a solução numérica de um sistema governante de equações diferenciais parciais que modela escoamento trifásico água-gás-óleo imiscível em reservatórios de petróleo heterogêneos no qual os efeitos de compressibilidade do gás não foram levados em conta. A técnica de decomposição de operadores em dois níveis permite o uso de passos de tempo distintos para os três problemas definidos pelo procedimento de decomposição: convecção, difusão e pressão-velocidade. Um sistema hiperbólico de leis de conservação que modela o transporte convectivo das fases fluidas é aproximado por um esquema central de diferenças finitas explícito, conservativo, não oscilatório e de segunda ordem. Este esquema é combinado com elementos finitos mistos, localmente conservativos, para a aproximação numérica dos sistemas de equações parabólico e elíptico associados aos problemas de transporte difusivo e de pressão-velocidade, respectivamente. O operador temporal associado ao sistema parabólico é resolvido fazendo-se uso de uma estratégia implícita de solução (Backward Euler). O modelo matemático para escoamento trifásico considerado neste trabalho leva em conta as forças de capilaridade e expressões gerais para as funções de permeabilidade relativa, campos variáveis de porosidade e de permeabilidade e os efeitos da gravidade. A escolha de expressões gerais para as funções de permeabilidade relativa pode levar à perda de hiperbolicidade escrita e, desta maneira, à existência de uma região elíptica ou de pontos umbílicos para o sistema não linear de leis de conservação hiperbólicas que descreve o transporte convectivo das fases fluidas. Como consequência, a perda de hiperbolicidade pode levar à existência de choques não clássicos (também chamados de choques transicionais ou choques subcompressivos) nas soluções de escoamentos trifásicos. O novo procedimento numérico foi usado para investigar a existência e a estabilidade de choques não clássicos, com respeito ao fenômeno de fingering viscoso, em problemas de escoamentos trifásicos bidimensionais em reservatórios heterogêneos, estendendo deste modo resultados disponíveis na literatura para problemas de escoamentos trifásicos unidimensionais. Experimentos numéricos, incluindo o estudo de estratégias de injeção alternada de água e gás (Water-Alternating-Gas (WAG)), indicam que o novo procedimento numérico proposto conduz com eficiência computacional a resultados numéricos com precisão. Perspectivas para trabalhos de pesquisa futuros são também discutidas, tomando como base os desenvolvimentos reportados nesta tese.
We present a new, accurate fractional time-step method based on an operator splitting technique for the numerical solution of a system of partial differential equations modeling three-phase immiscible water-gas-oil flow problems in heterogeneous petroleum reservoirs, in which the compressibility effects of the gas were not taken into account. A two-level operator splitting technique allows for the use of distinct time steps for the three problems defined by the splitting procedure: convection, diffusion and pressure-velocity. A system of hyperbolic conservation laws modelling the convective transport of the fluid phases is approximated by a high resolution, nonoscillatory, second-order, conservative central difference scheme in the convection step. This scheme is combined with locally conservative mixed finite elements for the numerical solution of the parabolic and elliptic problems associated with the diffusive transport of fluid phases and the pressure-velocity problem, respectively. The time discretization of the parabolic problem is performed by means of the implicit backward Euler method. The mathematical model for the three-phase flow considered in this work takes into account capillary forces and general expressions for the relative permeability functions, variable porosity and permeability fields, and the effect of gravity. The choice of general expressions for the relative permeability functions may lead to the loss of strict hyperbolicity and, therefore, to the existence of an elliptic region or umbilic points for the system of nonlinear hyperbolic conservation laws describing the convective transport of the fluid phases. As a consequence, the loss of hyperbolicity may lead to the existence of nonclassical shocks (also called transitional shocks or undercompressive shocks) in three-phase flow solutions. The numerical procedure was used in an investigation of the existence and stability of nonclassical shocks with respect to viscous fingering in heterogeneous two-dimensional flows, thereby extending previous results for one-dimensional three-phase flow available in the literature. Numerical experiments, including the study of Water-Alternating-Gas (WAG) injection strategies, indicate that the proposed new numerical procedure leads to computational efficiency and accurate numerical results. Directions for further research are also discussed, based on the developments reported in this thesis.
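The "high resolution, nonoscillatory, second-order, conservative central difference scheme" referred to here belongs to the staggered Nessyahu-Tadmor family; a minimal scalar sketch is given below, with Burgers' flux and periodic boundaries standing in for the three-phase fractional-flow system, so it illustrates the scheme family rather than the thesis code.

```python
import numpy as np

def minmod(a, b):
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def nt_step(u, f, lam):
    """One staggered Nessyahu-Tadmor step for u_t + f(u)_x = 0 on a periodic grid, lam = dt/dx."""
    fu = f(u)
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))      # limited slope of u
    df = minmod(np.roll(fu, -1) - fu, fu - np.roll(fu, 1))  # limited slope of f(u)
    u_half = u - 0.5 * lam * df                             # midpoint predictor
    f_half = f(u_half)
    up1, dup1, fhp1 = np.roll(u, -1), np.roll(du, -1), np.roll(f_half, -1)
    # corrector: staggered cell average on [x_j, x_{j+1}]
    return 0.5 * (u + up1) + 0.125 * (du - dup1) - lam * (fhp1 - f_half)

f = lambda u: 0.5 * u**2                 # Burgers' flux as a stand-in for fractional flow
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2 * np.pi * x)
for _ in range(100):                     # CFL number lam*max|f'(u)| kept below 1/2
    u = nt_step(u, f, lam=0.4)
```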
29

Hsu, Cheng-Tien, and 許政天. "A Global Optimization Scheme for Mixed-Integer Nonlinear Programming Problems." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/23216204565902734184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
碩士
逢甲大學
化學工程學所
93
This thesis aims at the development of a global optimization algorithm for the solution of mixed-integer nonlinear programming (MINLP) problems. A novel two-stage global optimization scheme is proposed. In the first stage, a modified simulated annealing (SA) algorithm is used to locate the values of the discrete variables, while in the second stage the global optimal solution is obtained by making use of information theory, a chaotic algorithm and a feasible point strategy. The applicability and effectiveness of the proposed global optimization scheme have been tested on some typical MINLP problems, and extensive comparisons with existing SA and/or information theory-based algorithms have also been performed in this work. Simulation results reveal that, owing to the advantages of the chaotic algorithm and the two-stage solution approach, the proposed global optimization scheme is more efficient and outperforms the conventional SA and/or information theory-based algorithms. To extend the proposed global optimization scheme to the solution of dynamic MINLP problems, we introduce an orthogonal collocation strategy for converting the original dynamic problem into a conventional MINLP problem. With this conversion, each dynamic constraint is reformulated into a set of equivalent discrete constraints with decision variables defined at the pre-specified collocation points. This makes the proposed two-stage global optimization scheme directly applicable to the solution of dynamic MINLP problems and renders the solution procedure quite straightforward. For demonstration, we applied the solution scheme to several optimal control problems of dynamic chemical processes involving both continuous and discrete variables. Extensive simulation results again corroborate the effectiveness and advantages of the proposed global optimization scheme for the solution of MINLP problems.
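A minimal sketch of the two-stage idea on a toy problem with two binary and two continuous variables: simulated annealing flips the binary variables, and a crude random continuous search stands in for the information-theory/chaotic refinement of the thesis. All names and constants below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x, y):
    """Toy MINLP objective; x holds two continuous variables in [0, 1], y two binaries."""
    return (x[..., 0] - 0.3) ** 2 + (x[..., 1] - 0.7) ** 2 + 1.5 * y[0] + 0.5 * y[1] + y[0] * x[..., 0]

def inner_continuous(y, n=2000):
    """Crude random search standing in for the chaotic/feasible-point continuous stage."""
    xs = rng.random((n, 2))
    vals = objective(xs, y)
    k = int(vals.argmin())
    return xs[k], float(vals[k])

def two_stage(n_iter=100, T0=1.0, cool=0.97):
    y = rng.integers(0, 2, size=2)
    x, cur = inner_continuous(y)
    best = (cur, y.copy(), x)
    T = T0
    for _ in range(n_iter):
        y_new = y.copy()
        y_new[rng.integers(0, 2)] ^= 1           # flip one binary variable
        x_new, val = inner_continuous(y_new)
        if val < cur or rng.random() < np.exp(-(val - cur) / T):
            y, x, cur = y_new, x_new, val        # accept the move
            if cur < best[0]:
                best = (cur, y.copy(), x)
        T *= cool
    return best

print(two_stage())                               # expect y = (0, 0) and x near (0.3, 0.7)
```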
30

ZHAN, ZHI-JANG, and 詹志堅. "NUMERICAL INVESTIGATION OF SUPERSONIC MIXED-COMPRESSIONINLET USING A ROBUST UPWIND SCHEME." Thesis, 1990. http://ndltd.ncl.edu.tw/handle/72683206588831123740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
博士
國立成功大學
航空太空工程研究所
79
The problem of a typical supersonic mixed compression inlet that is characterized by multiple oblique shocks reflected in the supersonic diffuser and a terminating normal shock standing in the subsonic diffuser downstream of the throat is difficult to study. The complexities in this kind of intake system of a high-speed vehicle are numerically investigated. The interaction between shocks and the boundary layer plays an important role in the development of the boundary layer, which in turn influences the performance of the inlet. The rapid increase in the boundary layer thickness due to a large adverse pressure gradient across the shocks may cause flow separation, which can be eliminated by a careful design of the inlet contour and the incorporation of a bleed system. For the inlet studied here, the static pressure ratio through the inlet passage is as high as 31 for a design flight Mach number of 3.0. An improved scheme based on the second-order implicit upwind scheme of Coakley is presented. The scheme is in finite volume form, solving the Navier-Stokes equations. Several test cases are carried out to show the accuracy and efficiency of the present method. Calculations of an inviscid, one-dimensional, transonic nozzle flow and of a two-dimensional transonic channel flow show better shock-capturing capability of the scheme than a conventional central differencing scheme or a flux vector splitting upwind scheme, and are comparable to the TVD scheme. Further verifications of the scheme, both on the static pressure distribution and the skin friction distribution along the wall in the shock/laminar flat plate boundary layer interaction problem, reveal that the present scheme is accurate and stable. The flowfields of an oblique shock impinging on a convex surface for different types of flow (i.e., laminar, transitional, and turbulent) and different bleed amounts and locations are investigated for a better understanding of the effects of bleeding for boundary layer control. The objective of simulating a realistic mixed-compression supersonic inlet is finally accomplished. Problems encountered in the simulations, such as inlet unstart and overspeed analysis, are presented and discussed in detail. The detailed flowfields, surface pressure distributions, the effect of bleed, the simulation of a vortex generator (enhancing the eddy viscosity to prevent flow reversal at the engine face) and inlet performance parameters such as the total pressure recovery and the flow distortion are presented. The predictions compare well with experimental data. It is believed and recommended that the present method, due to its accuracy and robustness, can be used as a good tool for the design of a supersonic intake system. 對於一具有多重斜震波反射於超音速擴散器及正震波形成於喉部區後次音速擴散器內的超音速進氣道流場的分析是一件困難的工作。本文利用數值方法,藉以研究此類進氣道系統的複雜流場。由於震波與邊界層之間的交互作用對邊界層的成長構成主要的影響因素,因此,也進而影響到進氣道的性能。當邊界層承受由震波所引起之較大的反向壓力梯度時,邊界層極容易流離物體表面,而此種現象可以以改進的內部外形設計及有效利用吸氣裝置予以消除。針對本文所計算的設計點馬赫數為3.0的進氣道而言,出口與入口的靜壓比可以高達31.5,因此可見逆向壓力梯度的重要性。本文根據Coakley所發展的二階準確隱式上游數值法,針對其缺點提出改良的方法,並採用有限體積法之技巧以解可壓縮的Navier-Stokes方程式。為了顯示本法的效率及其準確性,測試相關的問題。對於非黏性一維及二維穿音速管流的問題,展現出本法對於解析震波之能力,優於傳統之中央差分法及通向量分叉上游法,並且可以與先進之TVD法比擬。進一步的黏性流模擬於震波與平板層流邊界層交互作用之問題,更顯現本法之準確性及穩定性。最後本文完成了模擬一實際混合壓縮進氣道的流場及其分析。詳細的流場分析,壓力在壁表面的分佈,吸流對於控制邊界層及正震波之穩定性的影響,渦流產生器的數學模型模擬以及各種進氣道性能的參數指標,如全壓回復及流體扭曲都有詳盡的研究。結果與實驗值所量測比較都相當吻合。我們相信並推薦本法,由於它本身的準確性及穩定性,實可作為研究超音速進氣道系統之一有效分析工具。
31

Hong, Jing-neng, and 洪敬能. "A Novel PWM/PAM Mixed Laser Modulation Scheme for Laser Scanning Projection System." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/89628896222360097524.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
碩士
國立臺灣科技大學
光電工程研究所
98
Owing to the advancement of projection technology, micro-projectors are becoming more popular. The main technologies currently applied to projection displays are Digital Light Processing (DLP), Liquid Crystal on Silicon (LCoS), and MEMS mirror scanning. MEMS mirror scanning has several advantages over the others, such as lower energy consumption, more compact volume, and lower cost. The laser modulation technology previously applied to MEMS mirror scanning is mainly amplitude modulation (AM/PAM). This modulation type causes variations in color and brightness because the laser output power depends on the ambient temperature. This thesis proposes a novel laser modulation method to remedy these defects; it can also enhance display uniformity. We design a new laser driver circuit and build a digital video pattern conversion system. Finally, we use this experimental platform to verify the proposed laser modulation scheme and show that it outperforms amplitude modulation (AM/PAM) when the ambient temperature changes.
32

Alzahrani, Hasnaa H. "Mixed, Nonsplit, Extended Stability, Stiff Integration of Reaction Diffusion Equations." Thesis, 2016. http://hdl.handle.net/10754/617606.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A tailored integration scheme is developed to treat stiff reaction-diffusion problems. The construction adapts a stiff solver, namely VODE, to treat reaction implicitly, together with explicit treatment of diffusion. The second-order Runge-Kutta-Chebyshev (RKC) scheme is adjusted to integrate diffusion. The spatial operator is discretised by second-order finite differences on a uniform grid. The overall solution is advanced over S fractional stiff integrations, where S corresponds to the number of RKC stages. The behavior of the scheme is analyzed by applying it to three simple problems. The results show that it achieves second-order accuracy, thus preserving the formal accuracy of the original RKC. The presented development sets the stage for future extensions, particularly to multidimensional reacting flows with detailed chemistry.
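The construction can be pictured with a much-simplified splitting sketch, assuming SciPy is available: a stiff BDF integrator (standing in for VODE) treats the reaction term implicitly, while explicit sub-steps (standing in for the RKC stages) treat diffusion. The grid, source term and parameters below are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def reaction(t, u, k=100.0):
    """Stiff cubic source term (toy chemistry)."""
    return -k * u * (u - 1.0) * (u - 0.5)

def diffusion_substeps(u, D, dx, dt, n_sub):
    """Explicit diffusion sub-steps on a 1D grid with crude zero-flux ends."""
    for _ in range(n_sub):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]
        u = u + (dt / n_sub) * D * lap
    return u

def split_step(u, D, dx, dt, n_sub=10):
    u = diffusion_substeps(u, D, dx, dt / 2, n_sub)           # half diffusion step
    sol = solve_ivp(reaction, (0.0, dt), u, method="BDF")     # implicit stiff reaction
    return diffusion_substeps(sol.y[:, -1], D, dx, dt / 2, n_sub)

x = np.linspace(0.0, 1.0, 101)
u = 0.5 + 0.4 * np.sin(2 * np.pi * x)
u = split_step(u, D=1e-3, dx=x[1] - x[0], dt=1e-2)
```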
33

Hsieh, Chih-Chiang, and 謝智強. "Design of A Mixed Mode Code Acquisition Scheme for Direct-Sequence Spread Spectrum CDMA Systems." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/57978520366371765475.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
碩士
國立臺灣科技大學
電機工程系
89
Matched filters are widely used in the receiver design of direct-sequence spread spectrum (DS-SS) systems to implement the pseudo-noise (PN) code despreader. Among the various types of matched filter, the serial-parallel architecture yields a better trade-off between acquisition speed and circuit complexity. The main purpose of this thesis is to design a fast PN code acquisition detector to rapidly achieve initial synchronization between the locally generated PN code and the received PN code in a DS-SS system. There are two methods commonly employed in the serial search technique: single dwell search and double dwell search. The double dwell search method is superior to the single dwell search method in acquisition performance when the input SNR is relatively good, while the single dwell search method outperforms the double dwell search method when the input SNR is poor. Since the input SNR is closely related to the number of users accessing the DS-SS system and to noise interference, and fast acquisition is always pursued in a DS-SS system, it is desirable to have a code synchronizer that can achieve rapid code acquisition in both peak and non-peak hours. To this end, this thesis also proposes a mixed mode code synchronizer employing both single and double dwell search methods, in which single dwell search is adopted when the input SNR is poor while double dwell search is applied when the input SNR is relatively good. This thesis also presents an SNR estimation circuit to determine when to employ single dwell search or double dwell search.
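A hypothetical sketch of the mixed-mode logic on a simulated baseband signal: an SNR estimate selects a single long dwell when the SNR is poor and a short-then-long double dwell otherwise. The PN sequence, thresholds and dwell lengths below are placeholders, not the thesis design.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlate(rx, pn, offset, n):
    """Partial correlation of the received samples with the local PN code at a trial offset."""
    return abs(np.dot(rx[:n], np.roll(pn, offset)[:n])) / n

def test_cell(rx, pn, offset, snr_db, thr=0.5):
    if snr_db < 0.0:                     # poor SNR: a single long dwell
        return correlate(rx, pn, offset, 1024) > thr
    # good SNR: double dwell -- a cheap short dwell, verified by a longer second dwell
    if correlate(rx, pn, offset, 256) <= thr:
        return False
    return correlate(rx, pn, offset, 1024) > thr

pn = rng.choice([-1.0, 1.0], size=2048)          # stand-in PN sequence
true_offset, snr_db = 37, 3.0
noise_std = 10 ** (-snr_db / 20.0)
rx = np.roll(pn, true_offset) + noise_std * rng.normal(size=pn.size)
hits = [off for off in range(pn.size) if test_cell(rx, pn, off, snr_db)]
print(hits)                                      # should contain the true offset, 37
```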
34

Chang, Chih-Chien, and 張志健. "A Mixed-Signal Calibration Scheme for the Fully Differential Successive Approximation Analog-to-Digital Converter." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/36862163230714711690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
碩士
國立交通大學
電控工程研究所
99
It is the capacitor mismatch due to process variation that limits the resolution of a conventional capacitive SAR ADC. To address this issue, this thesis proposes a mixed-signal calibration scheme for the fully differential SAR ADC. The proposed calibration scheme first estimates the ratio errors of the capacitors under calibration in the binary-weighted capacitor array. The errors are then digitized and stored. When the SAR ADC operates in normal conversion, the calibration scheme compensates for the errors caused by the DAC in an analog way. The calibration scheme proposed in this thesis is able to calibrate the ratio errors of the capacitors regardless of whether the ratio error is positive or negative. With the proposed calibration scheme, the effective number of bits of the SAR ADC can be enhanced. Simulation results show that the INL values are improved from -11~+11 LSB to -0.9~+0.9 LSB after calibration, and the SNDR and ENOB values are enhanced from 51.7dB and 8.3 bits to 71.6dB and 11.6 bits after calibration. The results show that the ADC's performance can be significantly improved with the proposed calibration scheme. We implemented a 12-bit 1.8V SAR ADC with the proposed calibration scheme in a TSMC 0.18µm 1P6M CMOS process. Measurement results show that the SAR ADC achieves an SNDR of 53.3dB and an ENOB of 8.6 bits at 10 MS/s before calibration. The ADC consumes 5.94mW at 1.8V. The measurement results also show that this calibration method can still be improved; the main issue is the mismatch between the main DAC and the sub DAC in the segmented structure, which we leave as future work.
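A behavioural sketch of the calibration idea (not the thesis circuit): the conversion is performed with mismatched binary-weighted capacitors, and the raw bit decisions are then re-weighted with the measured capacitor weights; this digital re-weighting stands in for the analog compensation described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 12
ideal = 2.0 ** np.arange(N - 1, -1, -1)               # ideal binary weights (MSB first)
actual = ideal * (1.0 + 0.003 * rng.normal(size=N))   # capacitors with small ratio errors

def sar_convert(vin, weights, vref=1.0):
    """Plain successive approximation using the (possibly mismatched) DAC weights."""
    total = weights.sum()
    bits, acc = np.zeros(N, dtype=int), 0.0
    for i in range(N):
        trial = acc + weights[i] / total * vref
        if trial <= vin:
            bits[i], acc = 1, trial
    return bits

def decode(bits, weights, vref=1.0):
    """Reconstruct the input estimate from the bit decisions and a set of weight estimates."""
    return float(np.dot(bits, weights) / weights.sum() * vref)

vin = 0.371
bits = sar_convert(vin, actual)          # the physical DAC uses the mismatched weights
print(decode(bits, ideal))               # uncalibrated: ideal weights assumed
print(decode(bits, actual))              # calibrated: measured weights compensate the mismatch
```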
35

Zhang, Ming Hua, and 張明化. "The coexistance analysis of CDMA and TDMA scheme with overlaid/underlaid mixed macrocell/microcell cellular structures." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/92530178896708832622.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Jhong, Ren-Kai, and 鍾仁凱. "Study on Mixed Supercritical and Subcritical Flows Using Explicit Finite Analytic Method and MacCormack Hybrid Scheme." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/67690760628981645748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
碩士
國立交通大學
土木工程系所
97
This study extends Hsu and Yeh’s (1996) one-dimensional explicit finite analytic model (EFA) for simulating supercritical and mixed supercritical-subcritical flows. The essence of the EFA is the adoption of the concept of the method of characteristics in the momentum equation for solving the local analytic solution of the dependent variables (i.e., discharge and cross-sectional area of flow). To ensure stability of the scheme, the Courant condition should be obeyed. The dependent variables at the upstream and downstream boundaries are obtained through the method of characteristics. For the interior boundary condition in mixed supercritical and subcritical flows, the locations of hydraulic jumps are determined according to the values of the Froude number, and the water depths for the supercritical regime at the downstream boundaries are calculated with the MacCormack scheme and the method of characteristics, utilizing the water surface elevations of the neighboring interior computational points. Mixed supercritical and subcritical flow fields in laboratory flumes and natural rivers are simulated and evaluated with the proposed model.
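The interior-boundary logic hinges on a Froude-number test; a minimal sketch for a rectangular channel is given below (hypothetical depths and unit discharge), flagging the cell where the flow switches from supercritical to subcritical and a hydraulic jump would be placed.

```python
import numpy as np

def froude(q, h, g=9.81):
    """Froude number for discharge per unit width q (m^2/s) and depth h (m)."""
    return q / (h * np.sqrt(g * h))

def jump_locations(q, h):
    """Indices where the flow switches from supercritical (Fr > 1) to subcritical (Fr < 1)."""
    fr = froude(q, h)
    return np.where((fr[:-1] > 1.0) & (fr[1:] < 1.0))[0]

q = np.full(8, 1.0)                                   # unit discharge, m^2/s
h = np.array([0.15, 0.16, 0.17, 0.18, 0.55, 0.60, 0.62, 0.65])
print(jump_locations(q, h))                           # cell index just upstream of the jump
```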
37

簡勁舟. "Run-to-run control scheme for a mixed-run process with single tool and multiple products." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/94620779000262420122.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
碩士
國立清華大學
統計學研究所
99
The exponentially weighted moving average (EWMA) controller has been widely used in semiconductor manufacturing processes. In the literature, several papers have addressed the long-term stability and short-term performance of the EWMA control scheme. However, all the existing models are only valid for a single-product, single-tool production style. In practical applications, a multiple-product and multiple-tool production style is common in the practical implementation of run-to-run (R2R) control schemes. Recently, a product-based approach and a tool-based approach have been proposed to handle this mixed-run production problem. Although these control schemes will eventually bring the process output to the desired targets, they may suffer from a larger rework rate (RR) due to larger process variations. To overcome this difficulty, we propose a modified single EWMA controller to tackle this problem. We first address the stability conditions of the proposed controller. The result demonstrates that the stability conditions of the proposed controller hold under some specific assumptions. Furthermore, we also use a simulation study to investigate the performance of the proposed controller via the total mean square error (TMSE) and RR criteria. The results demonstrate that the proposed controller outperforms existing models.
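For reference, a minimal sketch of a single-EWMA run-to-run controller of the kind discussed here, for a linear process with an intercept disturbance; the gains, EWMA weight and noise level are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 2.0, 1.5          # true (unknown) process intercept and gain
b = 1.2                         # gain assumed by the controller (may be off)
T, w = 10.0, 0.3                # target and EWMA weight
a = 0.0                         # initial intercept estimate
u = (T - a) / b                 # initial recipe

outputs = []
for t in range(50):
    y = alpha + beta * u + 0.1 * rng.normal()   # run the process
    a = w * (y - b * u) + (1.0 - w) * a         # EWMA update of the intercept estimate
    u = (T - a) / b                             # recipe for the next run
    outputs.append(y)

print(np.mean(outputs[-10:]))   # should settle near the target T
```

The closed loop is stable as long as 0 < w*beta/b < 2, which is the kind of condition the stability analyses cited above make precise.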
38

Kang, Jun Won 1975. "A mixed unsplit-field PML-based scheme for full waveform inversion in the time-domain using scalar waves." Thesis, 2010. http://hdl.handle.net/2152/ETD-UT-2010-05-1263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We discuss a full-waveform based material profile reconstruction in two-dimensional heterogeneous semi-infinite domains. In particular, we try to image the spatial variation of shear moduli/wave velocities, directly in the time-domain, from scant surficial measurements of the domain's response to prescribed dynamic excitation. In addition, in one-dimensional media, we try to image the spatial variability of elastic and attenuation properties simultaneously. To deal with the semi-infinite extent of the physical domains, we introduce truncation boundaries, and adopt perfectly-matched-layers (PMLs) as the boundary wave absorbers. Within this framework we develop a new mixed displacement-stress (or stress memory) finite element formulation based on unsplit-field PMLs for transient scalar wave simulations in heterogeneous semi-infinite domains. We use, as is typically done, complex-coordinate stretching transformations in the frequency-domain, and recover the governing PDEs in the time-domain through the inverse Fourier transform. Upon spatial discretization, the resulting equations lead to a mixed semi-discrete form, where both displacements and stresses (or stress histories/memories) are treated as independent unknowns. We propose approximant pairs, which numerically, are shown to be stable. The resulting mixed finite element scheme is relatively simple and straightforward to implement, when compared against split-field PML techniques. It also bypasses the need for complicated time integration schemes that arise when recent displacement-based formulations are used. We report numerical results for 1D and 2D scalar wave propagation in semi-infinite domains truncated by PMLs. We also conduct parametric studies and report on the effect the various PML parameter choices have on the simulation error. To tackle the inversion, we adopt a PDE-constrained optimization approach, that formally leads to a classic KKT (Karush-Kuhn-Tucker) system comprising an initial-value state, a final-value adjoint, and a time-invariant control problem. We iteratively update the velocity profile by solving the KKT system via a reduced space approach. To narrow the feasibility space and alleviate the inherent solution multiplicity of the inverse problem, Tikhonov and Total Variation (TV) regularization schemes are used, endowed with a regularization factor continuation algorithm. We use a source frequency continuation scheme to make successive iterates remain within the basin of attraction of the global minimum. We also limit the total observation time to optimally account for the domain's heterogeneity during inversion iterations. We report on both one- and two-dimensional examples, including the Marmousi benchmark problem, that lead efficiently to the reconstruction of heterogeneous profiles involving both horizontal and inclined layers, as well as of inclusions within layered systems.
39

Su, Jiun-Jang, and 蘇俊彰. "Design and Performance Analysis of A Mixed Mode Code Acquisition Scheme for Direct-Sequence Spread Spectrum CDMA Systems." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/04463634739109821463.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
碩士
國立臺灣科技大學
電機工程系
89
The purpose of this thesis is to design a fast PN code acquisition detector to rapidly achieve initial synchronization between the locally generated PN code and the received PN code in a direct-sequence spread-spectrum (DS-SS) code division multiple access (CDMA) system. There are two methods commonly employed in the serial search technique: single dwell and multiple dwell (mostly double dwell). The double dwell search method is superior to the single dwell search method in acquisition performance when the input SNR is relatively good, while the single dwell search method outperforms the double dwell search method when the input SNR is poor. Since the input SNR is closely related to the number of users accessing the CDMA system, and fast acquisition is always pursued in a DS-SS CDMA system, it is desirable to have a code synchronizer that can achieve rapid code acquisition in both peak and non-peak hours. To this end, this research proposes a mixed mode code synchronizer employing both single and double dwell search methods, in which single dwell search is adopted when the input SNR is poor while double dwell search is applied when the input SNR is relatively good. This thesis analyzes the acquisition performance of the proposed code acquisition scheme in terms of SNR, detection probability and false alarm rate.
40

Yan, Shiang-Kun, and 顏翔崑. "Large Eddy Simulations of Turbulent Mixed Convection in a Vertical Plane Channel Using a New Fully-Conservative Scheme." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/k3d46b.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
博士
國立成功大學
航空太空工程學系碩博士班
90
A central difference (dissipation-free) scheme has to conserve kinetic energy to avoid numerical instability when a long-term time integration for incompressible flow such as large eddy simulation or direct numerical simulation is performed. A scheme is called fully conservative if it can simultaneously conserve mass, momentum, and kinetic energy in the discrete sense. A theoretical analysis is performed to extend the fully conservative schemes to non-uniform grid systems without sacrificing any conservative properties. The main step is to design a convective scheme which conserves momentum and kinetic energy simultaneously. An analysis is made for the second-order accurate scheme of Harlow and Welsh for staggered grid systems and it is found that the flux velocities need to be viewed as mass fluxes across control surfaces to conserve kinetic energy. To extend the analysis to higher order schemes, it is necessary to work in computational space. The contravariant-Cartensian velocity formulation for the convection term in computational space has the similar structure for the proposed fully conservative second-order scheme in physical space. Using the velocity formulation, the higher order convective schemes of Morinishi are extended to non-uniform staggered grid systems for the advective, divergence and skew-symmetric forms. The higher order schemes for scalar variables which conserve the square of the scalar variables are also derived. Several numerical tests are used to validate the conservative properties, accuracy and performance of the proposed higher order schemes. A series of LESs of turbulent heat transfer in channel flow to study the contributions of SGS motions and the influences of grid number on turbulent statistics. Large eddy simulations are performed to study fully developed turbulent mixed convection in a vertical plane channel, Re_b = 5600 and Pr = 0.71, with uniform heating or cooling from both walls. The main features of turbulent mixed convection are produced. For aiding flow, a transition Gr_q number, Gr_q = 1.40x10^8, exists. Before the transition number, the turbulence is generated mostly by the shear force driven by the pressure gradient. The turbulent statistics are similar in shape to those for forced convection while the magnitudes reduce slightly in the near-wall region for all turbulent statistics and the friction coefficient and the Nusselt number also decrease gradually. The buoyancy production term in the budget of turbulent kinetic energy remain small and negative over the whole channel. Around the transition Gr_q number, the regeneration process of near-wall structures are destroyed mostly. Second order statistics show the severest deterioration in the near-wall region and the turbulence generated by buoyancy becomes apparent on turbulent statistics away from the wall. The friction coefficient and the Nusselt number decline to 85% and 45%, respectively, of that at Gr_q=0. The point of the maximum mean velocity begins to shift away from the channel center and the Reynolds shear stress and streamwise turbulent heat flux change sign nearly at the location of the maximum mean velocity. The buoyancy production term changes sign, and thus the term becomes a main producing term while y is larger than the zero point. The similarity between u' and theta' begins to deteriorate. After the transition Gr_q number, turbulence generated by buoyancy gradually increases its influence on turbulent statistics. 
The magnitudes increase gradually in the near-wall region for all turbulent statistics, the friction coefficient and the Nusselt number with increasing Gr_q number. The dissimilarity between u' and theta' increases gradually and the thermal plumes become the main structures at highest simulated Gr_q. For opposing flow, the contributions of the buoyant force and Reynolds shear stress are in the opposite direction, and thus the turbulence intensity increases as the buoyant force increases. The turbulent statistics are similar in shape to those for forced convection while the magnitudes increase in the near-wall region for all turbulent statistics except for the mean streamwise velocity and the friction coefficient, and the Nusselt number also increases gradually with increasing Gr_q number. The near-wall streaky structures are similar to those of Gr_q = 0, but the dissimilarity between u' and theta' is observed at the highest simulated Gr_q.
41

Shih, Wun-Cai, and 施文財. "Digital Signal Processing Scheme for Wearable Devices-Using Mixed Fixed-Point and IEEE-754 Floating-Point Digital Filter Implementations." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/40791590197410585385.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
碩士
亞洲大學
光電與通訊學系碩士在職專班
104
This thesis presents a hybrid architecture of floating-point and fixed-point computations using a normal-form transformation for finite-precision infinite impulse response (IIR) digital filter state-space implementations. The proposed method provides a good compromise between operational speed and output performance. To obtain higher operational speed, the state equation of the digital filter is implemented in fixed-point format. Conversely, to reduce the distortion arising from the underflow effect, floating-point format is used in the output equation. The developed digital filter implementation method is suitable for use in narrow-band implementations, such as a narrow stop-band IIR digital filter for eliminating power line interference, or an extremely low pass-band filter to reject the baseline wander in a wearable mini-ECG. We have found that the underflow effect is associated with the bandwidth and the sampling frequency in digital filter implementations. Numerical examples illustrate the effectiveness of our proposed approach.
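A rough behavioural sketch of the mixed-format idea: the state recursion is evaluated with fixed-point (Q15-like) rounding while the output equation is kept in floating point. The filter coefficients below are placeholders, not the normal-form designs of the thesis.

```python
import numpy as np

FRAC = 15                                     # fractional bits of the fixed-point format

def to_fixed(x):
    """Round to the nearest representable Q15-like value (quantisation of the state)."""
    return np.round(x * (1 << FRAC)) / (1 << FRAC)

A = np.array([[0.95, -0.30], [0.30, 0.95]])   # toy stable state matrix (|eigenvalues| < 1)
B = np.array([1.0, 0.0])
C = np.array([0.05, 0.05])
D = 0.0

def filter_mixed(u):
    x = np.zeros(2)
    y = np.empty_like(u)
    for k, uk in enumerate(u):
        y[k] = float(C @ x + D * uk)          # output equation in floating point
        x = to_fixed(A @ x + B * uk)          # state equation quantised to fixed point
    return y

u = np.sin(2 * np.pi * 5 * np.arange(1000) / 1000.0)
print(filter_mixed(u)[:5])
```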
42

Shobiye, Hezekiah Olayinka. "The determinants of insurance participation: a mixed-methods study exploring the benefits, challenges and expectations among healthcare providers in Lagos, Nigeria." Thesis, 2018. https://hdl.handle.net/2144/32684.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
BACKGROUND: In order to accelerate universal health coverage, Nigeria’s National Health Insurance Scheme (NHIS) decentralized the implementation of government health insurance to the States in 2014. Lagos has passed its State Health Scheme (LSHS) into law with a statewide roll out set to commence in 2018. The LSHS aims to improve access to quality care by reducing the financial burden of obtaining care for Lagos residents. Public and private healthcare providers are a critical component of this ambitious insurance roll out. Yet, little or no understanding exists on how to engage providers, the factors that influence their participation in insurance and expectations from the LSHS. In addition, little is known about the geographic distribution of NHIS accredited facilities and enrollees in Lagos State. METHODS: This study used a mixed-methods cross sectional design to analyze primary and secondary data. Primary data included both quantitative and qualitative data and were collected from representatively selected 60 healthcare providers in 6 Local Government Areas (LGAs) in Lagos State through questionnaires probing issues on the challenges and benefits of insurance participation, capacity pressure, resource availability and changes in financial management. Secondary data were obtained from NHIS and Lagos State inventory of health facilities, and household survey reports, and were visually mapped using a geographic information system (GIS) software. RESULTS: Facilities participating in insurance were more likely to be bigger with mid to very high patient volume and workforce. In addition, private were more likely than public facilities to participate in insurance. Furthermore, increase in patient volume and revenue were motivating factors for providers to participate in insurance, while low tariffs, delay and denial of payments, and patients’ unrealistic expectations were inhibiting factors. Also, NHIS enrollees were more likely to be located in the urban than rural LGAs. However, many urban LGAs have larger population sizes and as a result, were also characterized with higher number of non-NHIS enrollees and fewer NHIS accredited facilities. For the LSHS, many private facilities anticipate an increased patient volume and revenue but also worry that low tariffs without guaranteeing a high patient volume would be a major challenge. For many public facilities, inadequate infrastructure, lack of workforce, and insufficient drugs and commodities remain major challenges. CONCLUSION: For the LSHS to be successful, effective contracting of healthcare providers especially those in the low income and densely populated LGAs is essential. However, this would require that provider payment is adequate and regular. In addition, the government would need to invest heavily in improving the infrastructure and the amount of workforce, drugs and commodities available to public facilities.
43

Gau, Chi-Yuan, and 高啟元. "Impulse, Gaussian, and Mixed Noise Suppression Schemes for Digital Images." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/22696020259766744626.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
碩士
國立交通大學
電機與控制工程系
89
Noise is commonly observed in many practical systems, and a system’s output is greatly affected by its presence. It is therefore necessary and important to remove the noise. This thesis introduces several schemes for image noise removal; the noise discussed here includes impulse noise, Gaussian noise, and mixtures of both. Filters were first designed to process a single noise type. However, noise usually appears in mixed form in a practical system, and when it does, the single-noise-type filters mentioned above cannot provide effective filtering. Different from the designs above, we propose a method that can remove mixed impulse and Gaussian noise from an image. First, each image pixel is examined to check whether or not it is contaminated by impulse noise. If the pixel is corrupted by impulse noise, a new median filtering is applied to replace the noisy pixel, which avoids distorting the original image pixels. The processed image is then filtered by a fuzzy rule-based filter to remove the remaining noise, which is mostly of Gaussian type. The fuzzy rule-based filter’s output is a weighted-average sum based on three parameters: the gray-level difference between pixels, the spatial distance and direction between pixels, and the variance in the local window. Using the LMS algorithm we determine the membership functions for the filter. The simulation results demonstrate the effectiveness and robustness of the proposed scheme in comparison with other filters for image noise removal.
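A simplified sketch of the two-stage idea: pixels flagged as impulse-corrupted are replaced by the local median, and a distance/intensity-weighted average (a crude stand-in for the fuzzy rule-based filter with LMS-trained membership functions) then suppresses the remaining, mostly Gaussian, noise. The thresholds and weight widths are placeholders.

```python
import numpy as np

def remove_mixed_noise(img, impulse_thr=60.0, sigma_d=1.0, sigma_r=20.0):
    pad = np.pad(img.astype(float), 1, mode="edge")
    out = img.astype(float).copy()
    h, w = img.shape
    offs = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)]   # 3x3 window offsets
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            med = np.median(win)
            centre = out[i, j]
            if abs(centre - med) > impulse_thr:      # impulse detected
                centre = med                         # replace by the local median
            # bilateral-style weighted average over the window
            wts = np.array([np.exp(-(di**2 + dj**2) / (2 * sigma_d**2))
                            * np.exp(-(win[di + 1, dj + 1] - centre)**2 / (2 * sigma_r**2))
                            for di, dj in offs])
            out[i, j] = np.dot(wts, win.ravel()) / wts.sum()
    return out

noisy = np.clip(128 + 10 * np.random.default_rng(4).normal(size=(32, 32)), 0, 255)
noisy[5, 5] = 255.0                                  # an isolated impulse
print(remove_mixed_noise(noisy)[5, 5])               # the impulse is suppressed
```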
44

Dalmia, Kamal. "Built-in jitter test schemes for mixed-signal integrated circuits." Thesis, 1996. http://hdl.handle.net/2429/7771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recent years have seen an unparalleled growth in the speed and complexity of VLSI circuits. Analog and mixed-signal circuits are going through a resurgence and continue to pose new challenges to VLSI test engineers. The state-of-the-art in the mixed-signal and analog test domain is to use application-specific test methodologies to tackle individual problems. The same is true for testing the high-speed clock signals used in present day integrated circuits (ICs) for their analog attributes. Jitter is one of the ways of quantifying the accuracy of a clock signal. Present day digital automatic test equipment (ATE) does not possess enough resolution to be suitable for jitter tests of high-speed clock signals such as SONET's (Synchronous Optical Network) 155.52 MHz and 622.08 MHz. In this thesis, the jitter test problem of high-speed clocks is approached with a built-in self-test (BIST) perspective. A BIST scheme is presented for the jitter tolerance test of clock and data recovery units typically found in data transceiver ICs. A cost-effective scheme based on the utilization of existing components for test purposes is presented. Some possible variations of the presented scheme are discussed. A second BIST scheme, focused on jitter testing of clock signals in a sampling-based digital signal processing (DSP) environment, is presented. Again, the focus is on the re-use of typically existing blocks on such ICs.
45

Knight, Emma. "Improved iterative schemes for REML estimation of variance parameters in linear mixed models." 2008. http://hdl.handle.net/2440/49425.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Residual maximum likelihood (REML) estimation is a popular method of estimation for variance parameters in linear mixed models, which typically requires an iterative scheme. The aim of this thesis is to review several popular iterative schemes and to develop an improved iterative strategy that will work for a wide class of models. The average information (AI) algorithm is a computationally convenient and efficient algorithm to use when starting values are in the neighbourhood of the REML solution. However when reasonable starting values are not available, the algorithm can fail to converge. The expectation-maximisation (EM) algorithm and the parameter expanded EM (PXEM) algorithm are good alternatives in these situations but they can be very slow to converge. The formulation of these algorithms for a general linear mixed model is presented, along with their convergence properties. A series of hybrid algorithms are presented. EM or PXEM iterations are used initially to obtain variance parameter estimates that are in the neighbourhood of the REML solution, and then AI iterations are used to ensure rapid convergence. Composite local EM/AI and local PXEM/AI schemes are also developed; the local EM and local PXEM algorithms update only the random effect variance parameters, with the estimates of the residual error variance parameters held fixed. Techniques for determining when to use EM-type iterations and when to switch to AI iterations are investigated. Methods for obtaining starting values for the iterative schemes are also presented. The performance of these various schemes is investigated for several different linear mixed models. A number of data sets are used, including published data sets and simulated data. The performance of the basic algorithms is compared to that of the various hybrid algorithms, using both uninformed and informed starting values. The theoretical and empirical convergence rates are calculated and compared for the basic algorithms. The direct comparison of the AI and PXEM algorithms shows that the PXEM algorithm, although an improvement over the EM algorithm, still falls well short of the AI algorithm in terms of speed of convergence. However, when the starting values are too far from the REML solution, the AI algorithm can be unstable. Instability is most likely to arise in models with a more complex variance structure. The hybrid schemes use EM-type iterations to move close enough to the REML solution to enable the AI algorithm to successfully converge. They are shown to be robust to choice of starting values like the EM and PXEM algorithms, while demonstrating fast convergence like the AI algorithm.
Thesis (Ph.D.) - University of Adelaide, School of Agriculture, Food and Wine, 2008
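The hybrid strategy can be sketched numerically for a one-way random-effects model: a few EM-REML fixed-point updates move the variance components toward the solution, after which average-information (AI) updates finish the job. The dense-matrix formulas below follow the standard P-matrix expressions for the REML score and average information; the data, starting values and iteration counts are illustrative, and this is not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
q, reps = 8, 6                                   # 8 groups, 6 observations per group
n = q * reps
Z = np.kron(np.eye(q), np.ones((reps, 1)))       # random-effect design matrix
X = np.ones((n, 1))                              # fixed effects: intercept only
y = 2.0 + Z @ (1.5 * rng.normal(size=q)) + 0.8 * rng.normal(size=n)

def P_matrix(s2_u, s2_e):
    V = s2_u * Z @ Z.T + s2_e * np.eye(n)
    Vi = np.linalg.inv(V)
    XtViX = X.T @ Vi @ X
    return Vi - Vi @ X @ np.linalg.solve(XtViX, X.T @ Vi)

def em_update(s2_u, s2_e):
    """EM-type fixed-point update; its fixed point satisfies the REML score equations."""
    P = P_matrix(s2_u, s2_e)
    Py = P @ y
    s2_u += (s2_u**2 / q) * (Py @ Z @ Z.T @ Py - np.trace(Z.T @ P @ Z))
    s2_e += (s2_e**2 / n) * (Py @ Py - np.trace(P))
    return max(s2_u, 1e-6), max(s2_e, 1e-6)

def ai_update(s2_u, s2_e):
    """Average-information (quasi-Newton) update of the REML log-likelihood."""
    P = P_matrix(s2_u, s2_e)
    Py = P @ y
    dV = [Z @ Z.T, np.eye(n)]                    # derivatives of V w.r.t. (s2_u, s2_e)
    score = np.array([-0.5 * (np.trace(P @ d) - Py @ d @ Py) for d in dV])
    AI = 0.5 * np.array([[Py @ di @ P @ dj @ Py for dj in dV] for di in dV])
    step = np.linalg.solve(AI, score)
    return max(s2_u + step[0], 1e-6), max(s2_e + step[1], 1e-6)

s2_u, s2_e = 1.0, 1.0
for _ in range(10):                              # robust EM-type warm-up iterations
    s2_u, s2_e = em_update(s2_u, s2_e)
for _ in range(5):                               # fast AI iterations near the solution
    s2_u, s2_e = ai_update(s2_u, s2_e)
print(s2_u, s2_e)
```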
46

Knight, Emma Jane. "Improved iterative schemes for REML estimation of variance parameters in linear mixed models." 2008. http://digital.library.adelaide.edu.au/dspace/handle/2440/49425.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

"Redefining Situation Schema Under Chronic Stress: A Mixed Methods Construct Validation of Positive Cognitive Shift." Doctoral diss., 2018. http://hdl.handle.net/2286/R.I.50113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
abstract: Cognitive reappraisal, or redefining the meaning of a stressful circumstance, is useful in regulating emotional responses to acute stressors and may be mobilized to up- or down- regulate the stressors’ emotional salience. A conceptually-related but more targeted emotion regulation strategy to that offered by cognitive reappraisal, termed positive cognitive shift, was examined in the current study. Positive cognitive shift (“PCS”) is defined as a point of cognitive transformation during a chronic, stressful situation that alters the meaning and emotional salience of the situation for the individual. Key aspects of the PCS that differentiate it from the broader reappraisal construct are that it 1) is relevant to responses to chronic (versus acute) aversive events, 2) is deployed when there is a mismatch between coping and stressors, and 3) involves insight together with redefinition in meaning of the situation generating stress. The current study used qualitative and quantitative analyses to 1) examine whether PCS is an observable, reliable, and valid experience in response to a stressful event that occurred in the past year, and 2) test whether PCS moderates the relations between the number of past-year stressful life circumstances and subsequent emotional well-being and functional health. A community sample of 175 middle-aged individuals were interviewed regarded a past chronic stressor and completed questionnaires regarding number of past year stressors and health outcomes. Theory-based coding of interviews was conducted to derive reliable scores for PCS, and findings indicated that PCS was evident in 37.7 % of participant responses. Furthermore, PCS scores were related positively to openness, personal growth from one’s most difficult lifetime event, and affect intensity-calm, in line with predictions. Also in line with prediction, PCS moderated the relations between number of past-year life events and health outcomes, such that the deleterious relations between past year stressful events and cognitive functioning, wellbeing, positive affect, and negative affect were weaker among individuals higher versus lower in PCS. Of note, PCS moderation effects diminished as the number of stressful events increased.
Dissertation/Thesis
Doctoral Dissertation Psychology 2018
48

Wu, Lingyi, and 吳玲儀. "A simulation Study of Various Run-to-Run Control Schemes for a Mixed-Run Production System." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/95057108326736547055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
碩士
靜宜大學
財務與計算數學系
101
The exponentially weighted moving average (EWMA) is commonly used in run-to-run (R2R) control of semiconductor manufacturing processes. The EWMA feedback controller is a popular model-based controller which primarily uses data from past process runs to adjust the input recipe for the next run. In the literature, several papers have addressed the long-term stability and short-term performance of the EWMA control scheme. However, in practical applications, a multiple-tool and multiple-product production style is common in the practical implementation of R2R control schemes. A single EWMA controller cannot be directly used to monitor the production process. Thus, for such a mixed production style, the product-based controller, the tool-based controller, the MTB controller, the modified single EWMA controller and the DREWMA controller have been proposed. Therefore, in this work, we adopt the disturbance-retuned EWMA (DREWMA) controller to discuss the improvement of the total mean square error (TMSE), to explore different combinations of process disturbances under various controllers and their relative efficiency, and to compare and analyze their merits and drawbacks, so as to efficiently use the existing controllers to significantly reduce the TMSE.
49

Lu, Ting-Yuan, and 陸庭元. "The Coexistance Analysis of FH/DS Schemes in Overlaid/Underlaid Architectures with Mixed Macrocell/Microcell Cellular Structure." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/86926369931472780966.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Liaw, Cai-Pin, and 廖采頻. "The Null-Field Methods and Conservative schemes of Laplace’s Equation for Dirichlet and Mixed Types Boundary Conditions." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/13133549896617901995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's
National Sun Yat-sen University
Department of Applied Mathematics
99
In this thesis, boundary errors are defined for the NFM to explore the convergence rates, and condition numbers are derived for simple cases to explore numerical stability. The optimal (or exponential) convergence rates are discovered numerically. This thesis is also devoted to seeking a better choice of locations for the field nodes of the FS expansions. It is found that the location of the field nodes Q does not affect the convergence rates much, but does have an influence on stability. Let δ denote the distance of Q to ∂S. The larger δ is chosen, the worse the instability of the NFM becomes. As a result, δ = 0 (i.e., Q ∈ ∂S) is the best for stability. However, when δ > 0, the errors are slightly smaller. Therefore, a small δ is a favorable choice for both high accuracy and good stability. This new discovery enhances the proper application of the NFM. However, even for the Dirichlet problem of Laplace’s equation, when the logarithmic capacity (transfinite diameter) C_Γ = 1, the solutions may not exist, or may not be unique if they exist, which causes a singularity of the discrete algebraic equations. The problem with C_Γ = 1 in the BEM is called the degenerate scale problem. The original explicit algebraic equations do not satisfy the conservation law, and may fall into the degenerate scale problem discussed in Chen et al. [15, 14, 16], Christiansen [35] and Tomlinson [42]. An analysis of the degenerate scale problem of the NFM is carried out in this thesis. New conservative schemes are derived, in which an equation between two unknown variables must be satisfied, so that one of the unknowns can be eliminated to yield the conservative schemes. The conservative schemes always bypass the degenerate scale problem, but they cause severe instability. To restore good stability, the overdetermined system and the truncated singular value decomposition (TSVD) are proposed. Moreover, the overdetermined system is more advantageous due to its simpler algorithms and slightly better performance in error and stability. More importantly, such numerical techniques can also be used to deal with the degenerate scale problems of the original NFM in [15, 14, 16]. For the boundary integral equation (BIE) of the first kind, trigonometric functions are used in Arnold [3], and an error analysis is made for infinitely smooth solutions, to derive the exponential convergence rates. In Cheng’s Ph.D. dissertation [18], for the BIE of the first kind, the source nodes are located outside the solution domain, linear combinations of fundamental solutions are used, and the error analysis is made only for circular domains. So far, no error analysis seems to exist for the new NFM of Chen, which is one of the goals of this thesis. First, the solution of the NFM is shown to be equivalent to that of the Galerkin method involving the trapezoidal rule, so the relevant analysis can be drawn from finite element theory. In this thesis, error bounds are derived for the Dirichlet problem, the Neumann problem and their mixed types. For solutions with certain regularity, the optimal convergence rates are derived under certain circumstances. Numerical experiments are carried out to support the error analysis.
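As a rough illustration of the stabilisation step mentioned at the end of the abstract, the sketch below solves an ill-conditioned, possibly overdetermined linear system by truncated singular value decomposition in Python; the matrix A, the right-hand side b and the tolerance rel_tol are placeholders rather than quantities produced by the NFM discretisation.

import numpy as np

def tsvd_solve(A, b, rel_tol=1e-10):
    # Truncated SVD: discard singular values below rel_tol times the largest
    # one, then apply the resulting pseudo-inverse to b.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_tol * s[0]
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (s_inv * (U.T @ b))

# Demonstration on a nearly rank-deficient overdetermined system.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
A[:, -1] = A[:, 0] * (1 + 1e-13)
x_true = rng.standard_normal(20)
b = A @ x_true
print("residual norm:", np.linalg.norm(A @ tsvd_solve(A, b) - b))

The truncation is what restores stability: the near-zero singular values arising from the (nearly) degenerate system are simply not inverted, at the cost of discarding the corresponding solution components.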
