Dissertations / Theses on the topic 'CHP model'

Consult the top 50 dissertations / theses for your research on the topic 'CHP model.'


1

Brofman, Eduardo Gus. "Estudo de cogeração em hotéis." Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/108526.

Full text
Abstract:
This work studies the application of a cogeneration (CHP, Combined Heat and Power) system in hotels located in the city of Porto Alegre, RS, Brazil. The implementation of this type of system was analysed from both an economic and an energy point of view. Economic viability was assessed through quantitative methods, focusing on the payback time of the investment. For the energy analysis, the annual consumption and demand of the building's operation were studied using whole-building thermo-energetic simulation; the chosen software was EnergyPlus. Both analyses were carried out by comparing the hotel without the CHP system to the hotel with the CHP system. The hypothetical hotel that was simulated was defined from a survey of information on the energy performance of hotels operating in Porto Alegre. In addition to the energy and economic studies, parametric variations of the hotel were performed to cover a range of possible scenarios and verify their economic viability. It was found that cogeneration can reduce operating costs even without lowering annual energy consumption. In some scenarios the payback time was below six years, which was considered a good investment option.
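The economic criterion used in this study is the payback time of the CHP investment. A minimal sketch of that calculation is shown below; the capital cost and the annual operating costs with and without cogeneration are illustrative assumptions, not figures from the thesis, which derives them from EnergyPlus simulations.

```python
def simple_payback_years(capital_cost, annual_cost_without_chp, annual_cost_with_chp):
    """Simple (undiscounted) payback time of a CHP retrofit, in years."""
    annual_savings = annual_cost_without_chp - annual_cost_with_chp
    if annual_savings <= 0:
        return float("inf")  # the investment never pays back
    return capital_cost / annual_savings

# Illustrative numbers only (BRL); the thesis obtains these from its simulated scenarios.
payback = simple_payback_years(capital_cost=900_000.0,
                               annual_cost_without_chp=620_000.0,
                               annual_cost_with_chp=450_000.0)
print(f"Payback: {payback:.1f} years")  # below six years would be considered attractive
```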
APA, Harvard, Vancouver, ISO, and other styles
2

Franceschin, Giada. "BIOETHANOL: A CONTRIBUTION TO BRIDGE THE GAP BETWEEN FIRST AND SECOND GENERATION PROCESSES." Doctoral thesis, Università degli studi di Padova, 2010. http://hdl.handle.net/11577/3427333.

Full text
Abstract:
In this PhD thesis, a number of alternative solutions were developed to overcome some of the main obstacles to the widespread diffusion of bioethanol production plants. The work comprised modelling, process simulation and optimization, and an experimental part concerning the pretreatment of lignocellulosic materials. A first generation ethanol production process was simulated and optimized. Key sensitivities to the improvements most likely to be expected in the coming years, and the possibility of using supercritical CO2 extraction (SFE) instead of distillation, were addressed; it was demonstrated that SFE is not economical because of the high capital investment and high operating costs. Regarding second generation bioethanol processes, simulation and energy optimization allowed the best distillation configuration to be identified and demonstrated that the process is energetically self-sufficient even when an energy-intensive pretreatment such as liquid hot water (LHW) is chosen. The simulation tool was also used to investigate the economic consequences of adding a second, high-value product obtained from xylose (xylitol) to the traditional second generation bioethanol process; it was found that this co-product could make even a medium-size plant competitive. The experimental work focused on the pretreatment of both wheat bran and office paper with the liquid hot water technique. The tests on wheat bran were carried out in Germany at TUHH (Hamburg), while a lab-scale reactor was built at DIPIC, Università di Padova, to study the pretreatment of paper. In both cases it was demonstrated that LHW pretreatment followed by enzymatic treatment makes it possible to obtain monomeric sugars from the biomass. A model of the semibatch reactor was also developed to reproduce the experimental data on biomass solubilization.
Because of the fluctuating price of oil, the substantial environmental impact of the massive use of fossil fuels and the increasingly concrete possibility that these energy sources are about to run out, interest in the question of energy supply has been renewed in recent years. Predictions about the year in which the Hubbert peak will occur (the point of maximum production, beyond which oil production can only decline) are subject to uncertainties deriving from different assumptions about world population growth, per capita consumption and the energy policies adopted by different countries, but the need to find an alternative to fossil fuels as soon as possible is a fact. Trying to replace, at least in part, products of fossil origin with others based on renewable resources may be the short- and medium-term solution for reducing dependence on oil and avoiding an economic crisis with unforeseeable consequences. The energy problem concerns in particular the sudden growth in the demand for petroleum products for the transport sector, which has been further aggravated by the massive entry of emerging countries such as China and India into the international crude oil market. In fact, while technological alternatives already exist to meet the demand for electricity and heat (such as wind, solar and geothermal energy and biomass), for transport fuels the choice is narrower because a fluid fuel with a high energy density is needed. If, in addition, a short-term alternative is sought that allows the current logistics infrastructure and existing technology to be kept in use, the possibilities narrow further. Bioethanol and biodiesel are the most likely candidates to replace gasoline and diesel, essentially because they can be used in currently available engines and because their production processes are already well known. The cost of biodiesel is the main obstacle to its commercialisation and is mainly due to the fact that the vegetable oils used as raw materials are very expensive. The use of waste oils as feedstock, the possibility of continuous transesterification processes and the recovery of glycerol with a high degree of purity are the first steps to be considered to overcome this problem. An even more serious obstacle to the wide diffusion of biodiesel, however, is the low productivity of vegetable oils per hectare. To replace even just 5.75% of the 49.1 million tonnes of oil equivalent consumed annually in Italy as fuel, at least 3.2 million hectares of cultivated land would have to be converted to energy crops (Russi, 2008). Another problem to be solved would be the disposal of the 0.4 million tonnes of glycerine produced. Microalgae appear to be the only plant species potentially capable of replacing a significant share of fossil fuel, but their large-scale production has not yet been achieved with satisfactory yields. Bioethanol is the other main candidate for replacing gasoline.
Today first generation bioethanol (obtained from corn and sugar cane) is characterised by a mature market and well-known technologies, and it is in fact the biofuel produced in the largest quantities worldwide. In particular, ethanol obtained from sugar cane is economically advantageous (the production cost is around 0.22 $/L) and has high yields. Sugar cane, however, grows only in tropical or sub-tropical climates and requires at least 600 mm of annual rainfall. Consequently, in countries such as the United States and in Europe this raw material cannot be taken into consideration (The Economist, 2007). In these cases bioethanol is obtained from corn but, because of the more complex production process and the higher cost of the raw material, the production cost is 30% higher. Bioethanol produced from corn has played, and is playing, a very important role in opening the way to biofuels, but it cannot be considered the solution in either the long or the medium term, for the reasons already mentioned and above all for ethical issues deriving from the use of a food resource for energy purposes. Second generation bioethanol appears to be the only solution able to overcome this problem. In this case the raw materials can be residues from the agro-forestry, wood and paper industries, or they can be obtained from marginal crops able to grow on land unsuitable for other crops and with a reduced amount of water (Detchon et al., 2005). The raw materials do not compete with the food industry, and the production process, overall, produces less carbon dioxide than first generation processes (Deurwaarder, 2005). Unfortunately, although bioethanol from lignocellulosic materials is attracting the attention of both research and the policies of many countries, its development on an industrial scale has not yet taken place. At present the main problem is the high production cost, caused mainly by the high cost of the enzymes used in the process (Balat and Balat, 2009). The great interest of the international scientific community in the energy question and the conclusion that bioethanol is, in the short term, one of the most likely candidates for the partial replacement of fossil fuels were the reasons that led to this thesis. Considering that bioethanol from sugar cane is already economical and the process already widely optimised, attention was directed to the production of ethanol from corn and from lignocellulosic materials. The aim was to study the production processes, focusing on the aspects that limit economical production in the first case and industrial-scale diffusion in the second. In Chapter 1 the two production processes are presented together with the innovations introduced in recent years and the state of the art. In Chapter 2 a typical first generation plant is presented in detail, thanks to the results obtained from process simulation with the Aspen Plus software. Once the model had been developed, the plant was optimised from the energy point of view and some sensitivity analyses were performed.
In particular, the possible influence of future innovations (corn with a higher starch content and yeasts more resistant to high ethanol concentrations) on process performance was examined. Economic considerations, based on the simulation results, identified the cost of corn as the largest contribution to the final production cost (68.8%), followed by the energy requirements of the process (16.2%). Given the impossibility of acting on the cost of corn, which follows market rules, attention was focused on the possibility of reducing the energy requirements of the process, in particular those of distillation. The literature review presented in Chapter 3 identified the extraction of ethanol with supercritical CO2 as a possible alternative to traditional distillation: the water-ethanol azeotrope can be eliminated in the presence of supercritical CO2, so anhydrous bioethanol could be obtained in a single step. After implementing the ternary equilibrium in the Aspen Plus software, supercritical extraction was integrated into the first generation process. The simulation results and the economic analysis presented in Chapter 4 led to the conclusion that this solution, although presented in the literature as a valid alternative to distillation, is disadvantageous because of the high capital investment required and the high operating costs. In the following chapters the second generation processes were examined. Among all the types of pretreatment, pretreatment with pressurised hot water was identified as one of the most promising and was therefore chosen as the basis for this research. In second generation processes the raw material has a much smaller impact on the final production cost, since waste materials can also be used, so the energy aspects assume greater importance. In particular, hot water pretreatment has the advantage of not using other chemicals, but the water must be brought to high temperature and pressure, with a consequent increase in the energy demand. In Chapter 5 the process for producing bioethanol from straw was simulated in detail, demonstrating that the solid residues of the process are able to sustain its energy requirements even with hot water pretreatment. Another major problem of second generation bioethanol is its lack of economic competitiveness. In Chapter 6 the impact of a second, high-value product on the profitability of the whole process was assessed. The results of the techno-economic analysis of the simultaneous production of bioethanol (from six-carbon sugars) and xylitol (from xylose) showed that even medium-size plants can become competitive if this option is considered. Chapters 7 and 8 present the experimental results obtained from the pretreatment of bran and paper with hot water. In both cases it was demonstrated that pretreatment followed by enzymatic hydrolysis makes it possible to obtain monomeric sugars, which can then be fermented to ethanol. Finally, in Chapter 9 two simple models are proposed to represent hot water pretreatment in a semi-continuous reactor. These models are able to reproduce quantitatively the trend of biomass solubilisation at different temperatures and to predict the concentrations of monomeric sugars and of the degradation products.
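The semibatch reactor model from Chapter 9 is not reproduced here; the sketch below is only a generic first-order solubilisation model with an Arrhenius rate, with placeholder kinetic parameters, to illustrate the kind of temperature-dependent fit to biomass-solubilisation data that the abstract describes.

```python
import numpy as np

def solubilised_fraction(t_min, temp_K, A=1.0e7, Ea=80e3):
    """First-order solubilisation X(t) = 1 - exp(-k t), with k from an Arrhenius law.
    A (1/min) and Ea (J/mol) are placeholder values, not the thesis's fitted constants."""
    R = 8.314  # J/(mol K)
    k = A * np.exp(-Ea / (R * temp_K))
    return 1.0 - np.exp(-k * np.asarray(t_min, dtype=float))

# Solubilised fraction over 60 min at two pretreatment temperatures
t = np.linspace(0, 60, 7)
print(solubilised_fraction(t, 453.15))  # 180 degrees C
print(solubilised_fraction(t, 473.15))  # 200 degrees C
```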
APA, Harvard, Vancouver, ISO, and other styles
3

Horák, Jakub. "Moderní bioplynová stanice jako součást „Smart Regions“." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232158.

Full text
Abstract:
This thesis deals with the design of a computational model of a biogas plant and its use in the concept of an intelligent region, with a focus on the district heating and cooling network. The introduction contains a review of the technology used in biogas plants, covering the description of modern biogas plants and the determination of the energy and technology parameters for the computational model. The next part of the thesis analyses the dynamics of operation and the possibilities of using waste heat from the biogas plant. The last, and most important, part covers the design of the computational model of the biogas plant and the design of its connection to the district heating and cooling network.
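A computational model of a biogas-plant CHP unit typically starts from an energy balance on the engine-generator set; the sketch below shows that balance and how much waste heat would be available for a district heating and cooling network. The efficiencies, methane content and gas flow are illustrative assumptions, not parameters from the thesis.

```python
def chp_energy_balance(biogas_flow_m3_h, ch4_fraction=0.55,
                       eta_el=0.40, eta_th=0.45, lhv_ch4_kwh_m3=9.97):
    """Electrical and recoverable thermal output of a biogas CHP engine.
    Efficiencies and methane content are illustrative assumptions."""
    fuel_power_kw = biogas_flow_m3_h * ch4_fraction * lhv_ch4_kwh_m3
    return {"fuel_kW": fuel_power_kw,
            "electric_kW": eta_el * fuel_power_kw,
            "heat_kW": eta_th * fuel_power_kw}   # heat usable in the network

print(chp_energy_balance(biogas_flow_m3_h=250.0))
```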
APA, Harvard, Vancouver, ISO, and other styles
4

Boehnke, Jasper. "Business models for Micro CHP in residential buildings." kostenfrei, 2007. http://www.unisg.ch/www/edis.nsf/wwwDisplayIdentifier/3375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ruthberg, Richard, and Sebastian Wogenius. "Stochastic Modeling of Electricity Prices and the Impact on Balancing Power Investments." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192111.

Full text
Abstract:
Introducing more intermittent renewable energy sources in the energy system makes the role of balancing power more important. An increased infeed from intermittent renewable sources also creates lower and more volatile electricity prices. Investing in balancing power is therefore exposed to high risk with respect to expected profits, which is why a good representation of electricity prices is vital to motivate future investments. We propose a stochastic multi-factor model for simulating the long-run dynamics of electricity prices as input to the investment valuation of power generation assets. In particular, the proposed model is used to assess the impact of electricity price dynamics on investment decisions with respect to balancing power generation, where a combined heat and power plant is studied in detail. Since the main goal of the framework is to create a long-term representation of electricity prices that preserves their distributional characteristics, commonly cited as seasonality, mean reversion and spikes, the model is evaluated in terms of yearly duration, which describes the distribution of electricity prices over time. The core of the framework is derived from the mean-reverting Pilipovic model of commodity prices, extended in a multi-factor setting by adding a functional link to the supply of and demand for power as well as outdoor temperature. On average, using the proposed model to represent future prices yields at most 9 percent over- and under-prediction of duration, a result far better than those obtained by simpler models such as a seasonal profile or mean estimates, which do not incorporate the full characteristics of electricity prices. Using the different aspects of the model, we show that variations in electricity prices have a large impact on the investment decision with respect to balancing power. The realized value of the flexibility to produce electricity in a combined heat and power plant is calculated, yielding a valuation close to historical realized values; compared with simpler models, this is a significant improvement. Finally, we show that including characteristics such as non-constant volatility and spiky behaviour in investment decisions increases the expected value of balancing power generators, such as combined heat and power plants.
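The full multi-factor specification (the functional link to supply, demand and outdoor temperature) is not reproduced here. The sketch below simulates a simplified one-factor mean-reverting log-price around a deterministic seasonal level with occasional spikes, which is the Pilipovic-style behaviour the abstract refers to; all parameter values are assumptions.

```python
import numpy as np

def simulate_prices(days=365, s0=30.0, kappa=0.05, sigma=0.04,
                    spike_prob=0.01, spike_size=1.0, seed=0):
    """Simplified daily spot-price path: seasonal mean level, mean-reverting
    log-deviation (Ornstein-Uhlenbeck step) and occasional multiplicative spikes."""
    rng = np.random.default_rng(seed)
    t = np.arange(days)
    seasonal = s0 * (1.0 + 0.2 * np.cos(2 * np.pi * t / 365.0))  # winter premium
    x = np.zeros(days)                                           # log-deviation from the seasonal level
    for k in range(1, days):
        x[k] = x[k - 1] - kappa * x[k - 1] + sigma * rng.standard_normal()
        if rng.random() < spike_prob:
            x[k] += spike_size                                   # short-lived price spike
    return seasonal * np.exp(x)

prices = simulate_prices()
print(prices.mean(), prices.max())
```

A duration curve, as used in the thesis's evaluation, is then simply the simulated prices sorted in descending order and compared with the historical curve.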
APA, Harvard, Vancouver, ISO, and other styles
6

Hasasneh, Nabil M. "Chip multi-processors using a micro-threaded model." Thesis, University of Hull, 2006. http://hydra.hull.ac.uk/resources/hull:13609.

Full text
Abstract:
Most microprocessor chips today use an out-of-order (OOO) instruction execution mechanism. This mechanism allows superscalar processors to extract reasonably high levels of instruction level parallelism (ILP). The most significant problem with this approach is a large instruction window and the logic to support instruction issue from it. This includes generating wake-up signals to waiting instructions and a selection mechanism for issuing them. A wide issue width also requires a large multi-ported register file, so that each instruction can read and write its operands simultaneously. Neither structure scales well with issue width, leading to poor performance relative to the gates used. Furthermore, to obtain this ILP, the execution of instructions must proceed speculatively. An alternative, which avoids this complexity in instruction issue and eliminates speculative execution, is the microthreaded model. This model fragments sequential code at compile time and executes the fragments OOO while maintaining in-order execution within the fragments. The fragments of code are called microthreads and they capture ILP and loop concurrency. Fragments can be interleaved on a single processor to give tolerance to latency in operands, or distributed to many processors to achieve speedup. The major advantage of this model is that it provides sufficient information to implement a penalty-free distributed register file organisation. However, the scalability of the microthreaded register file, in terms of the number of required logical read and write ports, is not yet clear. In this thesis, we looked at the distribution and frequency of access to the asynchronous (non-pipeline) ports in the synchronising memory and provide a detailed analysis and evaluation of this issue. Using an analysis of a range of different code kernels, it is concluded that a distributed shared synchronising memory could be implemented with 5 ports per processor, where three ports provide single instruction issue per cycle and the other two asynchronous ports are able to manage all other demands on the local register file. Also, in the microthreaded CMP a broadcast bus is used for thread creation and to replicate the compiler-defined global state to each processor's local register file. This is done instead of accessing a centralised register file for global variables. The key problem is that accessing this bus by multiple processors simultaneously causes contention and unfair communication between processors. Therefore, to avoid processor contention and to take advantage of asynchronous communication, this thesis presents a scalable and partitionable asynchronous bus arbiter for use with chip multiprocessors (CMP) and its corresponding pre-layout simulation results using VHDL. It is shown in this thesis that this arbiter can be extended easily to support large numbers of processors and can be used for chip multiprocessor arbitration purposes. Furthermore, the microthreaded model requires dynamic register allocation and a hardware scheduler which can support hundreds of microthreads per processor and their associated microcontexts. The scheduler must support thread creation, context switching and thread rescheduling on every machine cycle to fully support this model, which is a significant challenge. In this thesis, scalable implementations and evaluations of these support structures are presented, and the feasibility of large-scale CMPs is investigated by giving detailed area estimates of these structures using 0.07-micron technology.
APA, Harvard, Vancouver, ISO, and other styles
7

Zhou, Feng. "Contaminated Chi-square Modeling and Its Application in Microarray Data Analysis." UKnowledge, 2014. http://uknowledge.uky.edu/statistics_etds/7.

Full text
Abstract:
Mixture modeling has numerous applications. One particular interest is microarray data analysis. My dissertation research is focused on the Contaminated Chi-Square (CCS) Modeling and its application in microarray. A moment-based method and two likelihood-based methods including Modified Likelihood Ratio Test (MLRT) and Expectation-Maximization (EM) Test are developed for testing the omnibus null hypothesis of no contamination of a central chi-square distribution by a non-central Chi-Square distribution. When the omnibus null hypothesis is rejected, we further developed the moment-based test and the EM test for testing an extra component to the Contaminated Chi-Square (CCS+EC) Model. The moment-based approach is easy and there is no need for re-sampling or random field theory to obtain critical values. When the statistical models are complicated such as large mixtures of dimensional distributions, MLRT and EM test may have better power than moment based approaches, and the MLRT and EM tests developed herein enjoy an elegant asymptotic theory.
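As a rough illustration of the contaminated chi-square setup, the sketch below estimates by EM the mixing proportion of a two-component mixture of a central and a non-central chi-square density, with the degrees of freedom and non-centrality treated as known. This is a simplification of the estimation and testing machinery (moment-based, MLRT and EM tests) developed in the dissertation.

```python
import numpy as np
from scipy.stats import chi2, ncx2

def em_mixing_weight(x, df=1, nc=4.0, n_iter=200):
    """EM estimate of the contamination weight w in
    f(x) = (1 - w) * chi2(df) + w * ncx2(df, nc), with df and nc assumed known."""
    w = 0.5
    f0 = chi2.pdf(x, df)
    f1 = ncx2.pdf(x, df, nc)
    for _ in range(n_iter):
        resp = w * f1 / (w * f1 + (1 - w) * f0)  # E-step: posterior probability of contamination
        w = resp.mean()                          # M-step: update the mixing weight
    return w

rng = np.random.default_rng(1)
x = np.concatenate([chi2.rvs(1, size=800, random_state=rng),
                    ncx2.rvs(1, 4.0, size=200, random_state=rng)])
print(em_mixing_weight(x))  # should be close to the true contamination proportion 0.2
```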
APA, Harvard, Vancouver, ISO, and other styles
8

Dyer, Nigel. "Informative sequence-based models for fragment distributions in ChIP-seq, RNA-seq and ChIP-chip data." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/49963/.

Full text
Abstract:
Many high throughput sequencing protocols for RNA and DNA require that the polynucleic acid is fragmented so that the identity of a limited number of nucleic acids of one or both of the ends of the fragments can be determined by sequencing. The nucleic acid sequence allows the fragment to be located within the genome, and the fragment distribution can then be used for a variety of different purposes. In the case of DNA this includes identifying the locations where specific proteins are bound to the genome. In the case of RNA this includes quantifying the expression levels of different gene variants or transcripts. If the locations of the polynucleic acid fragments are partly determined by the underlying nucleic acid sequence this could bias any results derived from the data. Unfortunately, such sequence dependencies have already been observed in the distribution of both RNA and DNA fragments. Previous analyses of such data in order to reduce the bias have examined the role of regional characteristics such as GC bias, or the bias towards a specific sequence at the start of the fragments. This thesis introduces a new method for modelling the bias which considers the degree to which the nucleotide sequence affects the likelihood of a fragment originating at that location. This shows that there is often not a single bias characteristic, but multiple, alternative sequence biases that coexist within a single dataset. This also shows that the nucleotide sequence immediately proximal to the fragment also has a significant effect on the fragment likelihood. This new approach highlights characteristics that were previously hidden and provides a more powerful basis for correcting such bias. Multiple alternative sequence biases are observed when both RNA and DNA are fragmented, but the more detailed information provided by the new technique shows in detail how the characteristics are different for RNA and DNA and indicates that very different molecular mechanisms are responsible for the biases in the two processes. This thesis also shows how removing the effect of this bias in ChIP-seq experiments can reveal more subtle features of the distribution of the fragments. This can provide information on the nature of the binding between proteins and the DNA with per-nucleotide precision, revealed through the change in likelihood of the DNA fragmenting at each position in the binding site. It is also shown how the model fitting technique developed to analyse sequence bias can also be used to obtain additional information from the results of ChIP-chip experiments. The approach is used to find the nucleotide sequence preference of DNA binding proteins, and also the cooperative effects associated with binding at multiple binding sites in close proximity.
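The bias models in the thesis consider how the local nucleotide sequence changes the likelihood of a fragment starting at a given position. A minimal sketch of that idea is shown below: per-offset nucleotide log-odds around observed fragment start sites relative to a background composition. The real models are considerably richer (multiple coexisting alternative biases, proximal sequence effects); the function name and the uniform background are assumptions.

```python
import numpy as np

BASES = "ACGT"

def positional_log_odds(genome, start_positions, flank=5, background=None):
    """Log-odds (base 2) of seeing each base at each offset around fragment
    start sites, relative to a background composition (uniform by default)."""
    if background is None:
        background = {b: 0.25 for b in BASES}
    counts = np.ones((2 * flank, len(BASES)))          # pseudocounts to avoid zeros
    for s in start_positions:
        window = genome[s - flank: s + flank]
        if len(window) != 2 * flank:
            continue                                   # skip sites too close to an end
        for offset, base in enumerate(window):
            if base in BASES:
                counts[offset, BASES.index(base)] += 1
    freqs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(freqs / np.array([background[b] for b in BASES]))

genome = "ACGT" * 2500
print(positional_log_odds(genome, start_positions=[100, 204, 512, 1024]).shape)
```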
APA, Harvard, Vancouver, ISO, and other styles
9

Rocha, Erika da Justa Teixeira. "Modelagem da Intrusão Salina Utilizando Analise de Sensitividade Adjunta – Estudo de Caso: Cap-Bon/Tunisia." Universidade Federal do Ceará, 2011. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=6526.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Nowadays water is a natural resource that limits socioeconomic development and even the subsistence of the population. In an attempt to minimise the problem of water scarcity, the exploitation of groundwater has been used. However, this growth in the use of groundwater has taken place in a disorderly way and with inadequately constructed wells, a practice that has put the quality of groundwater at risk. Thus, the management of groundwater resources has become a major challenge. This thesis proposes the development of a numerical model for the simulation of water flow and mass transport for transient problems in coastal aquifers subject to saline intrusion. A sensitivity analysis is then developed with the goal of enabling, through better knowledge of the local parameters and their influences, a better fit of the model to reality.
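The coupled flow, transport and adjoint sensitivity machinery of the thesis is not reproduced here. The sketch below is only a generic explicit finite-difference step for one-dimensional advection-dispersion of salt concentration, the kind of transport equation such models discretise; the grid, velocity, dispersion coefficient and boundary values are assumptions.

```python
import numpy as np

def advect_disperse_step(c, v, D, dx, dt):
    """One explicit step of dc/dt = -v dc/dx + D d2c/dx2 (upwind advection).
    Stability requires v*dt/dx <= 1 and D*dt/dx**2 <= 0.5."""
    cn = c.copy()
    cn[1:-1] = (c[1:-1]
                - v * dt / dx * (c[1:-1] - c[:-2])
                + D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2]))
    cn[0], cn[-1] = 35.0, 0.0   # fixed salinity at the sea boundary, fresh water inland
    return cn

c = np.zeros(101); c[0] = 35.0            # g/L, seawater initially only at the coast
for _ in range(2000):
    c = advect_disperse_step(c, v=1e-5, D=1e-4, dx=1.0, dt=100.0)
print(c[:10].round(2))                     # salinity profile near the coastal boundary
```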
APA, Harvard, Vancouver, ISO, and other styles
10

Ricca, Steven. "Using a one-chip microcomputer to control an automated warehouse model." Ohio : Ohio University, 1988. http://www.ohiolink.edu/etd/view.cgi?ohiou1182869918.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Singleton, Roger. "Utilisation of chip thickness models in grinding." Thesis, University of Sheffield, 2012. http://etheses.whiterose.ac.uk/2541/.

Full text
Abstract:
Grinding is now a well-established process utilised for both stock removal and finishing applications. Although significant research is performed in this field, grinding still experiences problems with burn and high forces, which can lead to poor quality components and damage to equipment. This generally occurs when the process deviates from its safe working conditions. In milling, chip thickness parameters are utilised to predict and maintain process outputs, leading to improved control of the process. This thesis looks to further the knowledge of the relationship between chip thickness and the grinding process outputs, to provide an increased predictive and maintenance modelling capability. Machining trials were undertaken using different chip thickness parameters to understand how these affect the process outputs. The chip thickness parameters were maintained at different grinding wheel diameters for a constant-productivity process, to determine the impact of chip thickness at a constant material removal rate. Additional testing using a modified pin-on-disc test rig was performed to provide further information on process variables. The different chip thickness parameters provide control of different process outputs in the grinding process. These relationships can be described using contact layer theory and heat flux partitioning. The contact layer is defined as the layer immediately beneath the contact arc at the wheel-workpiece interface. The size of the layer governs the force experienced during the process, and the rate of contact layer removal directly impacts the net power required from the system. It was also found that the specific grinding energy of a process depends more on the productivity of the grinding process than on the value of chip thickness. Changes in chip thickness at constant material removal rate result in microscale changes in the rate of contact layer removal when compared to changes in process productivity. This is a significant piece of information in relation to specific grinding energy, where conventional theory states that it is primarily dependent on chip thickness.
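Chip thickness in grinding is commonly estimated from the kinematic maximum undeformed chip thickness; one widely cited form of that expression is sketched below. The cutting-point density C and the chip width-to-thickness ratio r are wheel-specific values that are assumed here, and the thesis itself works with several chip thickness parameters rather than this single expression.

```python
import math

def max_undeformed_chip_thickness(vw, vs, ae, de, C=5e6, r=15.0):
    """Kinematic maximum undeformed chip thickness (m), using the commonly cited
    h_max = sqrt( (4*vw) / (vs*C*r) * sqrt(ae/de) ).
    vw: workpiece speed (m/s), vs: wheel speed (m/s),
    ae: depth of cut (m), de: equivalent wheel diameter (m),
    C: active cutting points per unit area (1/m^2), r: chip width-to-thickness ratio."""
    return math.sqrt((4.0 * vw) / (vs * C * r) * math.sqrt(ae / de))

# Illustrative surface-grinding conditions (assumed, not from the thesis trials)
print(max_undeformed_chip_thickness(vw=0.2, vs=30.0, ae=20e-6, de=0.2))  # of the order of a micrometre
```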
APA, Harvard, Vancouver, ISO, and other styles
12

Dou, Jialin. "A compiler cost model for speculative multithreading chip-multiprocessor architectures." Thesis, University of Edinburgh, 2006. http://hdl.handle.net/1842/24532.

Full text
Abstract:
This thesis proposes a novel compiler static cost model of speculative multithreaded execution that can be used to predict the resulting performance. This model attempts to predict the expected speedups, or slowdowns, of the candidate speculative sections based on the estimation of the combined run-time effects of various speculation overheads, and taking into account the scheduling restrictions of most speculative execution environments. The model is based on estimating the likely execution duration of threads and considers all the possible permutations of these threads when scheduled on a multiprocessor. The proposed cost model was implemented in a research compiler development framework. The model seamlessly uses the compiler’s intermediate representation and integrates with the control and data flow analyses. The resulting framework was tested and evaluated on a collection of SPEC benchmarks, which include large real-world scientific and engineering applications. The framework was found to be very stable and efficient with moderate compilation times. Initially, the proposed framework is evaluated on a number of loops that suffer mainly from load imbalance and thread dispatch and commit overheads. Experimental results show that the framework can identify on average 68% of the loops that cause slowdowns and on average 97% of the loops that lead to speedups. In fact, the framework predicts the speedups or slowdowns with an error of less than 20% for an average of 44% of the loops across the benchmarks, and with an error of less than 50% for an average of 84% of the loops. Overall, the framework leads to a performance improvement of 5% on average, and as high as 38%, over a naïve approach that attempts to speculatively parallelize all the loops considered. The proposed framework is also evaluated on loops that may suffer from data dependence violations. Experimental results with all loops show that prediction accuracy is lower when loops with violations are included. Nevertheless, accuracy is still very high for a static model.
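The toy sketch below captures the flavour of such a cost model for a speculatively parallelised loop on P processors: it charges dispatch and commit overheads per batch of threads and an expected re-execution cost for dependence violations. It is a simplification under assumed parameters, not the thesis's actual cost model.

```python
def predicted_speedup(iter_times, num_procs, dispatch=50, commit=50,
                      violation_prob=0.0):
    """Estimated speedup of speculative loop parallelisation over sequential
    execution. Times are in cycles; overheads and violation_prob are assumptions."""
    sequential = sum(iter_times)
    parallel = 0.0
    for start in range(0, len(iter_times), num_procs):
        batch = iter_times[start:start + num_procs]
        # threads in a batch commit in order, so the batch costs at least its longest thread
        cost = max(batch) + dispatch + commit
        # expected penalty for squashed (violating) threads that must re-execute
        cost += violation_prob * sum(batch)
        parallel += cost
    return sequential / parallel

times = [400, 420, 380, 650, 410, 430, 390, 405]   # per-iteration cycle estimates (assumed)
print(predicted_speedup(times, num_procs=4, violation_prob=0.1))
```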
APA, Harvard, Vancouver, ISO, and other styles
13

Brooks, Zachary Edward. "Mechanical Stresses on Nasal Mucosa Using Nose-On-Chip Model." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1578492176817977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Muncy, Jennifer V. "Predictive Failure Model for Flip Chip on Board Component Level Assemblies." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5131.

Full text
Abstract:
Environmental stress tests, or accelerated life tests, apply stresses to electronic packages that exceed the stress levels experienced in the field. In theory, these elevated stress levels are used to generate the same failure mechanisms that are seen in the field, only at an accelerated rate. The methods of assessing the reliability of electronic packages can be classified into two categories: a statistical, failure-based approach and a physics-of-failure approach. This research uses a statistical methodology to identify the critical factors in the reliability performance of a flip chip on board component level assembly, and a physics-of-failure approach to develop a low-cycle, strain-based fatigue equation for flip chip component level assemblies. The critical factors in determining reliability performance were established via experimental investigation and their influence quantified via regression analysis. This methodology differs from other strain-based fatigue approaches because it is not an empirical fit to experimental data; it utilizes regression analysis and least squares to obtain correction factors, or correction functions, and constants for a strain-based fatigue equation, where the total inelastic strain is determined analytically. The end product is a general flip chip on board equation rather than one that is specific to a certain test vehicle or material set.
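The underlying form of such a low-cycle, strain-based fatigue relation is of the Coffin-Manson type, sketched below with placeholder coefficients; the dissertation obtains its own constants and correction functions by regression, and the inelastic strain range would come from its analytical strain estimate.

```python
def cycles_to_failure(inelastic_strain_range, eps_f=0.325, c=-0.5):
    """Coffin-Manson style low-cycle fatigue life:
    delta_eps_in / 2 = eps_f * (2*Nf)**c  =>  Nf = 0.5 * (delta_eps_in / (2*eps_f))**(1/c).
    eps_f (fatigue ductility coefficient) and c (fatigue ductility exponent)
    are placeholder values, not the constants fitted in the dissertation."""
    return 0.5 * (inelastic_strain_range / (2.0 * eps_f)) ** (1.0 / c)

for strain in (0.005, 0.01, 0.02):
    print(strain, round(cycles_to_failure(strain)))  # larger strain range, shorter life
```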
APA, Harvard, Vancouver, ISO, and other styles
15

Zampieri, Marília. "Investigação do monitoramento metacognitivo de crianças diante de medidas de capacidades intelectuais." Universidade Federal de São Carlos, 2012. https://repositorio.ufscar.br/handle/ufscar/6027.

Full text
Abstract:
Financiadora de Estudos e Projetos
Metacognition can be defined as the knowledge people have about their own cognitive processes, which helps them plan, monitor, regulate and assess their cognitive activities. Studies in the field have produced instruments and measures designed to assess metacognitive performance, many of them based on Flavell's model of cognitive monitoring as well as on Nelson and Narens's model; some of these instruments are self-report scales. The present study focused on metacognitive monitoring, which is usually assessed through judgements, another tool for measuring metacognition. The aim was to investigate children's metacognitive monitoring during the execution of three subtests of the BMI (Multidimensional Battery of Child Intelligence), which is based on the Cattell-Horn-Carroll model of intelligence: Mathematics Achievement, General Vocabulary and Induction, which assess quantitative knowledge, crystallized intelligence and fluid intelligence, respectively. Participants were 44 fifth-year primary school students, each evaluated individually. After each subtest, the children were asked to estimate their performance; these judgements correspond to metacognitive monitoring. The results showed that the children's repertoire already included metacognitive monitoring abilities, and some monitoring measures were significantly better for the quantitative knowledge (mathematics) subtest. When the monitoring abilities were compared according to cognitive performance, the children with the highest scores on the intelligence subtests showed better metacognitive monitoring. These results, obtained with Brazilian children, confirm findings reported in the international literature and are discussed in terms of the importance of encouraging and training metacognitive abilities.
APA, Harvard, Vancouver, ISO, and other styles
16

Alvarez, Gustavo Adolfo Patiño. "Caracterização analítica de carga de trabalho baseada em cenários de aplicações multimídia." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/3/3140/tde-26072013-171107/.

Full text
Abstract:
Classical methods for performance analysis of multiprocessor systems-on-chip (MPSoC) are usually described in terms of the worst-case execution times (WCET) of the executed tasks. Nevertheless, in real-world applications the running time of tasks varies due to the different input events that trigger the system, imposing varying execution demands on the system resources. A workload model is usually an integral part of the performance model used to evaluate a system, and how good the workload model is largely determines the quality of the design solutions and the accuracy of the performance estimates based on it. This thesis addresses the problem of modeling the workload for the design of real-time systems whose functionality involves processing multimedia streams, i.e., data streams representing audio, images or video. The problem is addressed under the premise that an accurate characterization of the timing behavior of embedded software allows the designer to identify, at design time, the variable execution demands that both the individual operation of the software tasks and the overall execution of the application present to the architectural resources of the system. The characterization of each task's behavior was defined from a timing analysis of the software code associated with the application tasks, in order to identify the multiple operating modes the code can exhibit on a processor. This characterization is done through a static analysis of the paths of the executable code, so that for each execution path found the extreme execution times (WCET and BCET) are estimated, based on a model of the microarchitecture of an on-chip processor. Each path of the executable code, together with its execution times, thus constitutes an operating mode of the analyzed code. In order to group the operating modes that show a degree of similarity, from the perspective of the amount of processing required on the modeled processor, the concept of scenario was used, which differentiates the behavior of each task with respect to the inputs the application under analysis may receive. Starting from this timing characterization of each software task, the execution demands of the overall application are represented by an analytical event model. The model considers the tasks as timed actors of a synchronous dataflow graph, so that the different operating scenarios of the application are defined in terms of the variable execution times previously identified in the task characterization. A mathematical description of this model, based on Max-Plus algebra, allows the event streams between the input and the output of the application, as well as the event streams between tasks, to be characterized analytically, taking into account the changes in processing demands associated with the previously identified scenarios. This analytical characterization of the input and output event streams is the basis for a model of scenario-based workload curves and a model of scenario-based service curves, which make it possible to characterize the behavioral dynamism of the analyzed application, determined by the diversity of input events that can activate different system behaviors at run time.
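The analytical event model characterises event streams with Max-Plus algebra, in which "addition" is the maximum and "multiplication" is ordinary addition. A minimal sketch of that recursion for event time-stamps is given below; the matrix of inter-actor delays is an arbitrary example, not one of the application scenarios analysed in the thesis.

```python
import numpy as np

NEG_INF = -np.inf  # the Max-Plus "zero" element (no edge)

def maxplus_matvec(A, x):
    """Max-Plus product: (A (x) x)_i = max_j (A[i, j] + x[j])."""
    return np.max(A + x[None, :], axis=1)

# x(k) = A (x) x(k-1): time-stamps of the k-th events produced by each actor,
# where A[i, j] holds the delay contributed by actor j to actor i.
A = np.array([[2.0,     NEG_INF, NEG_INF],
              [3.0,     1.0,     NEG_INF],
              [NEG_INF, 4.0,     2.0]])
x = np.zeros(3)
for k in range(5):
    x = maxplus_matvec(A, x)
    print(k + 1, x)   # the long-run growth rate of x gives the stream's throughput
```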
APA, Harvard, Vancouver, ISO, and other styles
17

Malmquist, Hampus, and Anton Hansson. "Januarieffekten inom large cap och mid cap bolag : En studie på svenska börsmarknaden." Thesis, Linnéuniversitetet, Institutionen för ekonomistyrning och logistik (ELO), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-95572.

Full text
Abstract:
The stock market has received a fair amount of attention in the media recently as a result of the ongoing covid-19 pandemic. The question arose whether there is one month in the year that outperforms all other months in the stock market. A well-known anomaly in the world of finance, referred to as the January effect, came up for discussion. Earlier studies of this subject have reached different results and conclusions. Therefore, this study aims to examine whether the January effect exists in mid cap and large cap companies on the Swedish stock market. To achieve this, one large cap portfolio and one mid cap portfolio, both equally weighted with ten companies each, were created. These two portfolios were analyzed with, among others, a well-known regression model for seasonal anomalies. The results of this study conclude that the January effect does not exist in either of the portfolios.
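A common regression specification for the January effect regresses monthly portfolio returns on a January dummy variable; a minimal sketch with simulated returns is shown below. The study itself uses the actual large cap and mid cap portfolio returns and a fuller seasonal regression model.

```python
import numpy as np

def january_effect_ols(monthly_returns, months):
    """OLS of r_t = a + b * 1{month == January} + e_t.
    Returns the intercept a and the January premium b."""
    jan = (np.asarray(months) == 1).astype(float)
    X = np.column_stack([np.ones_like(jan), jan])
    coef, *_ = np.linalg.lstsq(X, np.asarray(monthly_returns), rcond=None)
    return coef  # [a, b]; a significantly positive b would indicate a January effect

rng = np.random.default_rng(0)
months = np.tile(np.arange(1, 13), 10)                  # ten years of monthly observations
returns = rng.normal(0.005, 0.04, months.size)          # simulated returns with no real effect
print(january_effect_ols(returns, months))
```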
APA, Harvard, Vancouver, ISO, and other styles
18

Weldezion, Awet Yemane. "Exploring the Scalability and Performance of Networks-on-Chip with Deflection Routing in 3D Many-core Architecture." Doctoral thesis, KTH, Industriell och Medicinsk Elektronik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-179694.

Full text
Abstract:
Three-dimensional (3D) integration of circuits based on die and wafer stacking using through-silicon vias is a critical technology in enabling "more than Moore", i.e. functional integration of devices beyond pure scaling ("more Moore"). In particular, the scaling from multi-core to many-core architecture is an excellent candidate for such integration. 3D system design is a challenging and complex design process involving the integration of heterogeneous technologies. It is also expensive to prototype, because the 3D industrial ecosystem is not yet complete and ready for low-cost mass production. Networks-on-chip (NoCs) efficiently facilitate the communication of massively integrated cores in 3D many-core architectures. In this thesis, scalability and performance issues of NoCs are explored in terms of the architecture, organization and functionality of many-core systems. First, we evaluate on-chip network performance in massively integrated many-core architectures as the network size grows. We propose link and channel models to analyze the network traffic and hence the performance. We develop a NoC simulation framework to evaluate the performance of a deflection routing network as the architecture scales up to 1000 cores. We propose and perform a comparative analysis of 3D processor-memory model configurations in scalable many-core architectures. Second, we investigate how deflection routing NoCs can be designed to maximize the benefit of the fast TSVs through clock pumping techniques. We propose multi-rate models for inter-layer communication and quantify the performance benefit through cycle-accurate simulations for various configurations of 3D architectures. Finally, the complexity of massively integrated many-core architecture by itself brings a multitude of design challenges, such as the high cost of prototyping, the increasing complexity of the technology, the irregularity of the communication network, and the lack of reliable simulation models. We formulate a zero-load average distance model that accurately predicts the performance of deflection routing networks in the absence of data flow by capturing the average distance of a packet with spatial and temporal probability distributions of traffic. The research goals of the thesis are to explore the design space of vertical integration for many-core applications and to provide solutions to 3D technology challenges through architectural innovations. We believe the research findings presented in this thesis contribute to addressing a few of the many challenges in the combined field of many-core architectural design and 3D integration technology.
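For uniform random traffic, a zero-load average distance reduces to the expected hop count between two uniformly chosen nodes. The brute-force sketch below computes that quantity for a 3D mesh with Manhattan distances; the thesis's model additionally captures deflection-routing behaviour through spatial and temporal traffic distributions, which is not reproduced here, and the mesh sizes are illustrative.

```python
from itertools import product

def zero_load_avg_distance(nx, ny, nz):
    """Average Manhattan hop distance between two uniformly random nodes
    of an nx x ny x nz mesh (source == destination pairs included)."""
    nodes = list(product(range(nx), range(ny), range(nz)))
    total = sum(abs(a - d) + abs(b - e) + abs(c - f)
                for (a, b, c) in nodes for (d, e, f) in nodes)
    return total / len(nodes) ** 2

print(zero_load_avg_distance(4, 4, 4))    # e.g. a 64-core stack: 4x4 mesh layers on 4 dies
print(zero_load_avg_distance(8, 8, 2))    # an alternative 128-core organisation
```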

APA, Harvard, Vancouver, ISO, and other styles
19

Qiu, Yi. "An investigation into the microplane constitutive model for concrete." Thesis, University of Sheffield, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Preibisch, Jan. "Rozhodovací situace v pokerových turnajích." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-113706.

Full text
Abstract:
This thesis deals with factors that are important for making decisions in the game of poker. The goal is to find a way to improve players' chances of success in this game. The first two chapters describe the rules of poker and the basics and assumptions of game theory. The following chapters analyze some mathematical models and the assumptions for applying these models to the game. These models should find the optimal solution for individuals in decision-making situations. One can consider a static situation, where the behavior of each player is predetermined and the decision maker tries to find an appropriate strategy, or a dynamic situation, where all players react to each other, which leads to an equilibrium solution. As a consequence of the rising popularity of poker tournaments, many strategy books have appeared, as well as analytic software. Nevertheless, it is, and will probably remain, impossible to solve all decision situations that can occur. Very important factors of success are a player's attitude, experience and mental skills. Mathematical knowledge, however, will become more and more important. This thesis will help the reader understand the basics of mathematical models and their application in the game of poker.
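At its simplest, the kind of mathematical model referred to above is an expected-value comparison of the available actions; the sketch below computes the expected value of calling a bet given an estimated probability of winning the hand. The numbers are illustrative only and ties are ignored.

```python
def ev_of_call(pot, bet_to_call, p_win):
    """Expected value of calling a bet.
    pot: chips in the middle before the opponent's bet;
    bet_to_call: the opponent's bet that must be matched;
    win the pot plus the bet with probability p_win, lose the call otherwise."""
    return p_win * (pot + bet_to_call) - (1.0 - p_win) * bet_to_call

# Facing a 50-chip bet into a 100-chip pot with an estimated 30% chance of winning
print(ev_of_call(pot=100, bet_to_call=50, p_win=0.30))  # > 0 means calling is profitable on average
```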
APA, Harvard, Vancouver, ISO, and other styles
21

Creyx, Marie. "Étude théorique et expérimentale d’une unité de micro-cogénération biomasse avec moteur Ericsson." Thesis, Valenciennes, 2014. http://www.theses.fr/2014VALE0026/document.

Full text
Abstract:
La micro-cogénération, production simultanée d’électricité et de chaleur à échelle domestique, se développe actuellement en Europe du fait notamment de son intérêt en termes d’économie d’énergie primaire. L’utilisation d’un combustible biomasse dans un système de micro-cogénération contribue à augmenter la part d’énergie renouvelable dans le mix énergétique. L’objet de ce travail est le développement d’un banc d’essai d’une unité de micro-cogénération biomasse composée d’une chaudière à pellets, d’un moteur à air chaud de type Ericsson (décomposé en une partie compression et une partie détente) et d’un échangeur gaz brûlés-air pressurisé inséré dans la chaudière. Des modèles de chacun de ces composants ont été établis pour caractériser leur fonctionnement sur la plage de réglage des paramètres influents et pour dimensionner l’unité prototype. Deux modèles du moteur Ericsson, en régime permanent et en régime dynamique, ont été mis en place. Ils ont montré l’influence prépondérante sur les performances du moteur des conditions de température et pression de l’air en entrée de détente et des réglages des instants de fermeture des soupapes. L’effet de la prise en compte des pertes dynamiques (pertes de charge, pertes thermiques à la paroi du cylindre, frottements mécaniques) sur l’estimation des performances du moteur a été étudié. Deux modélisations de l’échangeur ont permis de caractériser les transferts thermiques qui le traversent, incluant le rayonnement et l’encrassement par des particules de suie du côté des gaz brûlés. Le banc d’essai de l’unité de micro-cogénération mis en place
Nowadays, micro combined heat and power (micro-CHP) systems are developing in Europe, in particular because of their interest in terms of primary energy savings. The use of biomass fuel in micro-CHP systems increases the share of renewable energy in the energy mix. The objective of this work is to develop a test bench for a biomass-fuelled micro-CHP unit composed of a pellet boiler, an Ericsson-type hot air engine (decomposed into a compression part and an expansion part) and a burned-gas/pressurized-air heat exchanger inserted in the boiler. Models of each component have been established to characterize its operation over the setting range of the influential parameters and to size the prototype unit. Two models of the Ericsson engine, in steady-state and in dynamic regimes, were implemented. They show the preponderant influence of the temperature and pressure conditions at the inlet of the expansion cylinder, and of the timing of valve closing, on the engine performance. The dynamic model shows the effect of considering the dynamic losses (pressure losses, heat transfer at the cylinder wall, mechanical friction) on the estimation of engine performance. Two models of the heat exchanger allow the characterization of the heat transfers crossing it, taking into account radiation and fouling by soot particles on the combustion gas side. Experimental measurements obtained from the test bench of the micro-CHP unit were used in the developed models.
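For orientation only, the following sketch evaluates the textbook ideal Ericsson cycle (isothermal compression and expansion with perfect regeneration), whose efficiency equals the Carnot limit; the temperatures and pressure ratio are assumed values, and this is not the steady-state or dynamic engine model developed in the thesis:

```python
# Sketch: ideal Ericsson cycle with perfect regeneration. Net specific work
# comes from the two isothermal processes; efficiency equals the Carnot
# efficiency. Textbook idealization only, with assumed example values.
import math

R_AIR = 287.0          # J/(kg.K), specific gas constant of air

def ericsson_specific_work(t_hot_k, t_cold_k, pressure_ratio):
    """Net specific work (J per kg of air) of an ideal Ericsson cycle."""
    w_expansion = R_AIR * t_hot_k * math.log(pressure_ratio)
    w_compression = R_AIR * t_cold_k * math.log(pressure_ratio)
    return w_expansion - w_compression

def carnot_efficiency(t_hot_k, t_cold_k):
    return 1.0 - t_cold_k / t_hot_k

print(round(ericsson_specific_work(900.0, 320.0, 3.0)), "J/kg")  # assumed temperatures
print(round(carnot_efficiency(900.0, 320.0), 3))                 # upper-bound efficiency
```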
APA, Harvard, Vancouver, ISO, and other styles
22

Gérémie, Lauriane. "Development of an in-vitro intestinal model featuring peristaltic motion." Thesis, Sorbonne université, 2019. http://accesdistant.sorbonne-universite.fr/login?url=http://theses-intra.upmc.fr/modules/resources/download/theses/2019SORUS118.pdf.

Full text
Abstract:
Le Gut-on-chip fait partie d'un thème de recherche plus générale, appelez Organ-on-chip qui a pour objectif de développer des modèles in-vitro qui récapitulent des caractéristiques essentielles de l'organe d'intérêt. Dans le cas de l'intestin, les Gut-on-chip plateformes ont été principalement développés pour reconstituer soit l'architecture 3D de l'intestin, soit sa dynamique et plus particulièrement le péristaltisme. Durant ma thèse j'ai développé une nouvelle et polyvalente Gut-on-chip, présentant ces deux aspects du micro-environnement intestinale. Cette Peristalsis-on-chip nous a permis d'étudier l'influence du mouvement péristaltique sur le comportement cellulaire en fonction de la géométrie de la structure. Pour cette étude nous avons ensemencé des cellules Caco2 sur des substrats 2D ou 3D recouvert de laminine et les avons soumis à un étirement cyclique (à 0.2 Hz et 10\%) pendant 2, 5, 8, 16, 24 et 48 heures. Lors de ces expériences nous avons pu observer une réorientation cellulaire perpendiculaire à l'axe d'étirement que nous avons caractérisé en fonction des conditions de recouvrement, de la confluence initiale, du temps d'étirement et de la géométrie de la structure. Il est intéressant de noter que la réponse cellulaire la plus importante a été obtenue par la combinaison de la géométrie 3D et de l'étirement, ce qui illustre bien le besoin de ces deux éléments pour mieux mimer les conditions intestinales in vivo
My PhD work is part of the organ-on-chip field, and more precisely of the gut-on-chip field. It is in line with the main objective of this field, which is the development of in-vitro models recapitulating as faithfully as possible the intestinal micro-environment. During my PhD I first developed a versatile gut-on-chip platform recapitulating the intestinal 3D architecture as well as its dynamic micro-environment. This platform therefore allows us to study the influence of intestinal dynamics, especially peristalsis, on cellular behavior as a function of the 3D architecture of the scaffold. For this study, Caco2 cells were seeded either on a 2D or a 3D scaffold coated with laminin and submitted to cyclic stretching (at 0.2 Hz and 10%) for 2, 5, 8, 16, 24 and 48 hours. Our main observation was the cellular reorientation induced by the stretching; we therefore characterized the cell behavior as a function of the coating condition, the initial confluency, the stretching time and the scaffold geometry. Interestingly, the strongest cellular response was obtained when the 3D geometry and the stretching were combined, illustrating the need for these two stimuli to better mimic the intestinal in vivo conditions.
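As an illustrative aside on how such a reorientation could be quantified, the sketch below computes a simple orientation order parameter relative to the stretch axis; the choice of measure and the example angles are our own assumptions, not the analysis pipeline of the thesis:

```python
# Sketch: quantifying cell reorientation relative to the stretch axis with a
# simple order parameter S = <cos 2*theta>, where theta is the angle between
# a cell's long axis and the stretch direction.
# S ~ +1: aligned with the stretch, S ~ -1: perpendicular, S ~ 0: random.
import math

def orientation_order_parameter(angles_deg):
    thetas = [math.radians(a) for a in angles_deg]
    return sum(math.cos(2 * t) for t in thetas) / len(thetas)

# Example: most cells roughly perpendicular (~90 deg) to the stretch axis.
print(round(orientation_order_parameter([85, 92, 88, 95, 90, 78, 101]), 3))
```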
APA, Harvard, Vancouver, ISO, and other styles
23

Adhipathi, Pradeep. "Model based approach to Hardware/ Software Partitioning of SOC Designs." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/9986.

Full text
Abstract:
As the IT industry marks a paradigm shift from the traditional system design model to System-On-Chip (SOC) design, the design of custom hardware, embedded processors and associated software have become very tightly coupled. Any change in the implementation of one of the components affects the design of other components and, in turn, the performance of the system. This has led to an integrated design approach known as hardware/software co-design and co-verification. The conventional techniques for co-design favor partitioning the system into hardware and software components at an early stage of the design and then iteratively refining it until a good solution is found. This method is expensive and time consuming. A more modern approach is to model the whole system and rigorously test and refine it before the partitioning is done. The key to this method is the ability to model and simulate the entire system. The advent of new System Level Modeling Languages (SLML), like SystemC, has made this possible. This research proposes a strategy to automate the process of partitioning a system model after it has been simulated and verified. The partitioning idea is based on systems modeled using Process Model Graphs (PmG). It is possible to extract a PmG directly from a SLML like SystemC. The PmG is then annotated with additional attributes like IO delay and rate of activation. A complexity heuristic is generated from this information, which is then used by a greedy algorithm to partition the graph into different architectures. Further, a command line tool has been developed that can process textually represented PmGs and partition them based on this approach.
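To make the greedy-partitioning idea concrete, here is a hypothetical sketch: processes annotated with a complexity score and a hardware cost are assigned to hardware, highest complexity first, until an area budget is exhausted. All names, scores and the exact heuristic are our illustrative assumptions, not the tool described in the thesis:

```python
# Sketch: greedy hardware/software partitioning of a process model graph.
# Each process carries a hypothetical complexity score (e.g. derived from
# I/O delay and activation rate) and a hardware cost; processes are mapped
# to hardware, highest complexity first, until the budget is exhausted.
def greedy_partition(processes, hw_budget):
    """processes: dict name -> (complexity, hw_cost). Returns (hw, sw) sets."""
    hw, sw, used = set(), set(), 0.0
    ranked = sorted(processes, key=lambda p: processes[p][0], reverse=True)
    for name in ranked:
        complexity, cost = processes[name]
        if used + cost <= hw_budget:
            hw.add(name)
            used += cost
        else:
            sw.add(name)
    return hw, sw

procs = {"fft": (9.0, 40), "ctrl": (2.0, 10), "codec": (7.5, 35), "ui": (1.0, 5)}
# fft is mapped to hardware first; codec no longer fits the remaining budget.
print(greedy_partition(procs, hw_budget=60))
```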
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
24

Wong, Darrell. "Particleboard simulation model to improve machined surface quality." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/247.

Full text
Abstract:
Particleboard (PB) is a widely used panel material because of its physical properties and low cost. Unfortunately, cutting can degrade its surface creating rejects and increasing manufacturing costs. A major challenge is PB’s internal variability. Different particle and glue bond strength combinations can sometimes create high quality surfaces in one area and defects such as edge chipping in nearby areas. This research examines methods of improving surface quality by examining PB characteristics and their interactions with the cutting tool. It also develops an analytical model and software tool that allows the effects of these factors to be simulated, thereby giving practical guidance and reducing the need for costly experiments. When PB is cut and the glue bond strength is weaker than the particle strength, particles are pulled out, leading to surface defects. When instead the glue bond strength is stronger than the particle strength, particles are smoothly cut, leading to a high quality surface. PB is modeled as a matrix of particles each with stochastically assigned material and glue bond strengths. The PB model is layered allowing particles to be misaligned. Voids are modeled as missing particles. PB cutting is modeled in three zones. In the finished material and tool tip zones, particles are compressed elastically and then crushed at constant stress. After failure, chip formation occurs in the chip formation zone. At large rake angles, the chip is modeled as a transversely loaded beam that can fail by cleavage at its base or tensile failure on its surface. At small rake angles, the chip is modeled as the resultant force acting on the plane from the tool tip through to the panel surface. Experimental and simulation results show that cutting forces increase with depth of cut, glue content and particle strength. They decrease with rake angle. Glue bond strength can be increased to the equivalent particle strength through the selection of particle geometry and the subsequent increased glue bond efficiency, which increases the cut surface quality without the need for additional glue. Minimizing the size and frequency of voids and using larger rake angles can also increase surface quality.
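As a toy illustration of the pull-out versus clean-cut criterion described above, the following Monte Carlo sketch draws particle and glue bond strengths from assumed lognormal distributions and counts pulled-out particles; the distributions and parameters are placeholders, not measured particleboard properties:

```python
# Sketch: Monte Carlo illustration of the criterion described in the abstract.
# A particle is pulled out (surface defect) when its glue bond strength is
# below the particle strength, and cut cleanly otherwise.
import random

def surface_defect_rate(n_particles=100_000, seed=1):
    rng = random.Random(seed)
    defects = 0
    for _ in range(n_particles):
        particle_strength = rng.lognormvariate(mu=1.0, sigma=0.3)   # placeholder
        glue_bond_strength = rng.lognormvariate(mu=0.9, sigma=0.4)  # placeholder
        if glue_bond_strength < particle_strength:
            defects += 1          # particle pulled out -> rough surface
    return defects / n_particles

print(f"estimated fraction of pulled-out particles: {surface_defect_rate():.3f}")
```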
APA, Harvard, Vancouver, ISO, and other styles
25

Martín, Badia Júlia. "Cap a l’autonomia de l’adolescent: model d’acompanyament per a professionals assistencials." Doctoral thesis, Universitat de Barcelona, 2019. http://hdl.handle.net/10803/667813.

Full text
Abstract:
[cat] El respecte a l’autonomia dels pacients ha esdevingut un principi fonamental de la bioètica que ha marcat canvis legislatius i de model assistencial, tanmateix el cas dels pacients menors és especialment controvertit: no s’aprofita la presa de decisions sobre la seva salut per acompanyar-los en el procés de maduració i és difícil respectar l’autonomia que no es reconeix ni es fomenta. La causa d’aquest fet és doble. D’una banda, s’ha tingut i es té una visió de la persona menor com immadura, sense capacitat de raonament. A més, en l’àmbit sanitari sovint es té una visió biocèntrica del pacients. Això impedeix que els professionals sanitaris prenguin consciència del seu rol educatiu i, conseqüentment, la relació assistencial no és apoderadora, sinó paternalista o adultista. D’altra banda, el marc legal en què es recolzen els professionals basa la capacitat decisòria de la persona menor en criteris ambigus per subjectius (maduresa i interès superior del menor) i el seu únic criteri objectiu (l’edat), que ofereix seguretat jurídica, no és estandarditzable. D’aquesta manera, si els adults no assumeixen el deure de garantir que les persones menors puguin exercir els seus drets, el discurs dels drets dels menors queda buit de contingut. Atenent a aquesta situació, es proposa un model d’acompanyament en la forja de l’autonomia pensat per a pacients menors d’entre 12 i 15 anys (franja del menor madur), és a dir, un model d’acompanyament dels pacients menors en el procés de forja de l’autonomia. Aquest model té com a objectiu la forja de l’autonomia de l’adolescent, entesa com el dret i la capacitat de prendre decisions que, en l’àmbit sanitari, van destinades a l’autocura. Per tant, caldrà ajudar-lo a apoderar-se, a desenvolupar capacitats. I l’estratègia per fer-ho no pot ser altra que la participació, en la mesura que les capacitats s’adquireixen exercint-les. L’acompanyament consistirà, doncs, en un cercle virtuós entre autonomia, participació i capacitats. És un model que ha de ser assumible per a qualsevol professional que treballi amb adolescents, per tal que afavoreixi la coordinació entre diversos àmbits (sanitari, educatiu, social...) i, conseqüentment, una visió integradora de la persona menor. Alhora, ha de ser aplicable a les especificitats de cada àmbit. És un model centrat en l’adolescent i la família, que requereix que els professionals el posin en pràctica amb habilitats de dues menes: comunicatives i educatives. Aquest model té tres condicions. Primera, cal una visió biopsicosocial de l’adolescent. Segona, cal exercir una responsabilitat apoderadora vers ell. I tercera, l’acompanyament ha de ser comunitari. A més, es basarà en principis ètics essencials com la dignitat, la vulnerabilitat, la justícia i la solidaritat. I tindrà tres objectius: un, la forja de la identitat, que és narrativa i relacional; dos, l’apoderament, que tindrà a veure amb el desenvolupament de capacitats i de consciència moral; i tres, la cura, entesa com l’atenció a la veu i al cos de l’adolescent. En definitiva, el model que proposem entén que l’acompanyament és el reconeixement de l’adolescent com a subjecte de necessitats, com a subjecte de drets i deures, i com a subjecte de capacitats per forjar l’autocura, l’autonomia i el seu projecte vital. 
Per garantir l’aplicabilitat del model a la pràctica diària de qualsevol professional que treballi amb adolescents proposem un procediment deliberatiu de presa de decisions que consta de 9 passos i incloem un capítol final amb recomanacions per als diferents nivells assistencials.
[eng] The respect for patients' autonomy has become a fundamental principle of bioethics, which has led to legal changes and a shift of the healthcare model, but the case of minor patients is very controversial: medical decisions are not taken advantage of in order to support these patients in their maturing process, so it is difficult to respect an autonomy which is neither recognized nor promoted. There is a double cause for this. On the one hand, minors have been and are seen as immature, as having no reasoning capacity and, in the medical field, in a biocentric way. This has prevented healthcare professionals from gaining awareness of their educative role and, consequently, the current healthcare relationship is not an empowering one but a paternalistic or adultist one. On the other hand, the legislation upon which professionals rely establishes three criteria for dealing with minors' decisional capacity, two of which are ambiguous because they are subjective (maturity and the best interests of the child). The third one, age, is objective, so it gives legal security, but it is not standardisable. In this way, if adults do not assume their duty to ensure that these rights are exercisable, the discourse of the rights of the child has no content. Taking this situation into account, this thesis suggests a model of autonomy promotion in minors of 12 to 15 years old (the age bracket called the "mature minor"), that is to say, a model for accompanying minor patients in their process of forging autonomy. The aim of this model is the forging of autonomy, understood as the right and capacity to make decisions which, in the medical field, are intended to develop self-care. Hence, the adolescent will need help to empower himself and to develop basic capacities. And the strategy to do so must be participation, as long as capacities can only be acquired by exercising them. Supporting adolescents will then consist of a virtuous circle between autonomy, participation and capacities. This model has to be assumable by any professional working with adolescents, in order to foster coordination between fields (healthcare, education, social work) and, therefore, an integrative view of minors. But, at the same time, it has to be applicable to the specificities of each field, so as not to make any professional go beyond his profession. This model is adolescent- and family-centred, which requires two types of abilities from professionals: communicative and pedagogical. This model has three requirements. First, having a biopsychosocial view of adolescents. Second, exercising an empowering responsibility towards them. And third, understanding that accompanying adolescents is a community process. In addition, it is based on essential ethical principles, such as dignity, vulnerability or solidarity. And it has three ends: one, forging identity, which is narrative and relational; two, empowerment, which has to do with developing capacities and moral awareness; and three, care, which should be understood as caring for the adolescent's voice and body. In short, the suggested model understands that supporting adolescents means recognizing them as subjects of needs, subjects of rights and duties, and subjects of capacities to forge self-care, autonomy and their life project.
In order to ensure that the model is applicable to the daily practice of any professional working with adolescents, we propose a deliberative decision-making procedure consisting of 9 steps, and we include a final chapter of recommendations for professionals according to each healthcare service.
APA, Harvard, Vancouver, ISO, and other styles
26

Hoelzle, James B. "Neuropsychological Assessment and the Cattell-Horn-Carroll (CHC)Cognitive Abilities Model." Connect to full text in OhioLINK ETD Center, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1216405861.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Cacciari, Matteo <1984&gt. "Model predictive control in thermal management of multiprocessor systems-on-chip." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5771/1/CACCIARI_MATTEO_TESI.pdf.

Full text
Abstract:
MultiProcessor Systems-on-Chip (MPSoCs) are the core of today's and next-generation computing platforms. Their relevance in the global market continuously increases, as they occupy an important role both in everyday-life products (e.g. smartphones, tablets, laptops, cars) and in strategic market sectors such as aviation, defense, robotics and medicine. Despite the remarkable performance improvements of recent years, processor manufacturers have had to deal with issues, commonly called “Walls”, that have hindered processor development. After the famous “Power Wall”, which limited the maximum frequency of a single core and marked the birth of the modern multiprocessor system-on-chip, the “Thermal Wall” and the “Utilization Wall” are the current key limiters of performance improvements. The former concerns the damaging effects on the chip of the high temperatures caused by the dissipation of large power densities, whereas the latter refers to the impossibility of fully exploiting the computing power of the processor due to limitations on power and temperature budgets. In this thesis we face these challenges by developing efficient and reliable solutions able to maximize performance while limiting the maximum temperature below a fixed critical threshold and saving energy. This has been made possible by exploiting the Model Predictive Control (MPC) paradigm, which solves an optimization problem subject to constraints in order to find the optimal control decisions for the future interval. A fully distributed MPC-based thermal controller with far lower complexity than a centralized one has been developed. The control feasibility, and properties useful for simplifying the control design, have been proved by studying a partial differential equation thermal model. Finally, the controller has been efficiently included in more complex control schemes able to minimize energy consumption and deal with mixed-criticality tasks.
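For intuition only, the sketch below shows a heavily simplified, one-step-horizon version of the constrained-optimization idea behind MPC thermal capping, using an assumed first-order thermal model for a single core; it is not the fully distributed MPC formulation developed in the thesis:

```python
# Sketch: one-step "MPC-like" power cap for a single core with an assumed
# first-order thermal model  T[k+1] = a*T[k] + b*P[k] + c*T_amb.
# At each step we pick the largest feasible power, up to the core's demand,
# that keeps the predicted temperature below T_crit.
A, B, C = 0.95, 0.8, 0.05       # assumed discrete-time thermal parameters
T_AMB, T_CRIT = 35.0, 80.0      # ambient and critical temperatures (deg C)

def power_cap(temp_now: float, demanded_power: float) -> float:
    # Largest P such that A*T + B*P + C*T_amb <= T_CRIT, clipped to [0, demand].
    p_max = (T_CRIT - A * temp_now - C * T_AMB) / B
    return max(0.0, min(demanded_power, p_max))

temp = 60.0
for step in range(5):
    p = power_cap(temp, demanded_power=12.0)
    temp = A * temp + B * p + C * T_AMB
    print(f"step {step}: power {p:.2f} W, temperature {temp:.2f} C")
```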
APA, Harvard, Vancouver, ISO, and other styles
28

Cacciari, Matteo <1984&gt. "Model predictive control in thermal management of multiprocessor systems-on-chip." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5771/.

Full text
Abstract:
MultiProcessor Systems-on-Chip (MPSoCs) are the core of today's and next-generation computing platforms. Their relevance in the global market continuously increases, as they occupy an important role both in everyday-life products (e.g. smartphones, tablets, laptops, cars) and in strategic market sectors such as aviation, defense, robotics and medicine. Despite the remarkable performance improvements of recent years, processor manufacturers have had to deal with issues, commonly called “Walls”, that have hindered processor development. After the famous “Power Wall”, which limited the maximum frequency of a single core and marked the birth of the modern multiprocessor system-on-chip, the “Thermal Wall” and the “Utilization Wall” are the current key limiters of performance improvements. The former concerns the damaging effects on the chip of the high temperatures caused by the dissipation of large power densities, whereas the latter refers to the impossibility of fully exploiting the computing power of the processor due to limitations on power and temperature budgets. In this thesis we face these challenges by developing efficient and reliable solutions able to maximize performance while limiting the maximum temperature below a fixed critical threshold and saving energy. This has been made possible by exploiting the Model Predictive Control (MPC) paradigm, which solves an optimization problem subject to constraints in order to find the optimal control decisions for the future interval. A fully distributed MPC-based thermal controller with far lower complexity than a centralized one has been developed. The control feasibility, and properties useful for simplifying the control design, have been proved by studying a partial differential equation thermal model. Finally, the controller has been efficiently included in more complex control schemes able to minimize energy consumption and deal with mixed-criticality tasks.
APA, Harvard, Vancouver, ISO, and other styles
29

Chen, Ji. "ON-CHIP SPIRAL INDUCTOR/TRANSFORMER DESIGN AND MODELING FOR RF APPLICATIONS." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4115.

Full text
Abstract:
Passive components are indispensable in the design and development of microchips for high-frequency applications. Inductors in particular are used frequently in radio-frequency (RF) ICs such as low-noise amplifiers and oscillators. A high-performance inductor has become one of the critical components in voltage-controlled oscillator (VCO) design, because its quality factor (Q) directly affects the VCO phase noise. Optimization of the inductor layout can improve its performance, but the improvement is limited by the selected technology: inductor performance is bounded by the thin routing metal and the small distance from the lossy substrate. On the other hand, inaccurate inductor modeling further limits the optimization process. The on-chip inductor has been an important research topic since it was first proposed in the early 1990s. A significant amount of study has been accomplished and reported in the literature, whereas some methods have been used in industry but not released to the public; there is no doubt that a comprehensive solution does not yet exist. A comprehensive review of previous work is first presented. The author then points out the inadequacy of the skin effect and the proximity effect as explanations of the current crowding in the inductor metal. A modeling method embedding a new explanation of current crowding is proposed, and its applicability to differential inductors and baluns is validated. This study leads to a robust optimization routine that improves inductor performance without any additional technology cost or development.
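As a first-order illustration of why the series resistance of the thin routing metal bounds inductor performance, the sketch below evaluates the simplest quality-factor approximation Q = 2*pi*f*L/Rs; the component values are assumed, and substrate loss and self-resonance, which the thesis modeling addresses, are ignored:

```python
# Sketch: first-order quality factor of an on-chip spiral inductor using the
# simplest series model Q = omega * L / R_s (no substrate loss, no self-resonance).
import math

def series_q(freq_hz: float, inductance_h: float, series_res_ohm: float) -> float:
    return 2 * math.pi * freq_hz * inductance_h / series_res_ohm

# Assumed example: a 2 nH spiral with 4 ohm series resistance at 2.4 GHz.
print(round(series_q(2.4e9, 2e-9, 4.0), 2))
```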
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
30

Flórez, Martha Johanna Sepúlveda. "Estimativa de desempenho de uma NoC a partir de seu modelo em SYSTEMC-TLM." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/3/3140/tde-14122006-152854/.

Full text
Abstract:
The wide variety of interconnection structures available nowadays for SoCs (Systems-on-Chip), buses and Networks-on-Chip (NoCs), each of them with a wide set of setup parameters, provides a huge number of design alternatives. Although the interconnection structure is a key SoC component, there are few design tools to help set the appropriate configuration parameters for a given application. An efficient SoC project may require an exploration stage among the possible solutions for the communication structure during the first steps of the design process. The absence of appropriate tools for that exploration makes the designer's judgment critical. The present study aims to enhance the design of the SoC communication structure when a NoC is used. This work proposes a methodology that allows the establishment of the NoC communication parameters using a high-level model (timed SystemC TLM). Our approach analyzes and evaluates the NoC performance under a wide variety of traffic conditions. The experimental stage was conducted employing a network model represented in timed SystemC TLM (Hermes_Temp). Parametric and pseudo-random generators control the network traffic. The analysis was carried out with a tool designed for this purpose, which generates a group of performance metrics. The results elucidate the global and internal network behavior. The performance values are useful for heterogeneous and homogeneous NoC design projects, broadening the scope of performance evaluation studies.
APA, Harvard, Vancouver, ISO, and other styles
31

Li, Qingsen. "A Synthesizable VHDL Behavioral Model of A DSP On Chip Emulation Unit." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1723.

Full text
Abstract:

This thesis describes the VHDL behavioral model design of a DSP On Chip Emulation Unit. The prototype of this design is the OnCE port of the Motorola DSP56002.

Capabilities of this On Chip Emulation Unit are accessible through four pins, which allow the user to step through a program, to set breakpoints that stop program execution at a specific address, and to examine the contents of registers, memory, and pipeline information. The detailed design, including input/output signals and sub-blocks, is presented in this thesis.

The user will interact with the DSP through a GUI on the host computer via the RS232 port. An interface between the RS232 and On Chip Emulation Unit is therefore designed as well.

The functionality is designed to be the same as described by Motorola, and it is verified by a test bench. The test bench, test sequences and results are also presented.

APA, Harvard, Vancouver, ISO, and other styles
32

Tsang, Moses T. "3-D finite element beam/connector model for a glulam dome cap." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-09052009-040556/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

An, Feng-Chen. "Modelling of FRP-concrete interfacial bond behaviour." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10511.

Full text
Abstract:
External bonding of fibre-reinforced polymer (FRP) strips or sheets has become a popular strengthening method for reinforced concrete structures over the last two decades. For most such strengthened concrete beams and slabs, failure occurs at or near the FRP-concrete interface due to FRP debonding. The objective of this thesis is to develop a deeper understanding of the debonding behaviour of the FRP-concrete interface through mesoscale finite element simulation. Central to the investigation is the use of the concrete damaged plasticity (CDP) model for modelling the concrete; the FRP is treated as an elastic material. The numerical simulation is focused on the single shear test of FRP-concrete bonded joints. This problem is known to be highly nonlinear and presents many difficulties in achieving a converged solution using standard static loading procedures. A dynamic loading procedure is applied in this research and various parameters such as the time step, loading rate etc. are investigated. In particular, the effect of the damping ratio is investigated in depth and an appropriate selection is recommended for solving such problems. It has been identified that the concrete damage model can have a significant effect on the numerical predictions for the present problem. Various empirical concrete damage models are assessed using cyclic test data and simulation of the single shear test of the FRP-concrete bonded joint, and it is proposed that Birtel and Mark's (2006) model is the most appropriate one for use in the present problem. Subsequently, the effects of other aspects of the concrete behaviour on the FRP-concrete bond behaviour are investigated. These include the tensile fracture energy, compression strain energy and different concrete compression stress-strain models. These lead to the conclusion that the CEB-FIP 1990 model is the most appropriate one for the problem. An important issue to recognise is that the actual behaviour of FRP-concrete bonded joints is three-dimensional (3D), but most numerical simulations have treated the problem as two-dimensional (2D), which has a number of limitations. True 3D simulation is, however, computationally very expensive and impractical. This study proposes a simple procedure for modelling the joint in 2D with the 3D behaviour properly considered. Numerical results show that the proposed method can successfully overcome the limitations of the traditional 2D simulation method. The FE model established above is then applied to simulate a large number of test specimens. The bond stress-slip relationship is extracted from the mesoscale FE simulation results. An alternative bond-slip model is proposed based on these results, which is shown to be advantageous compared with existing models. This new model provides the basis for further investigation of debonding failures in FRP-strengthened concrete structures in the future.
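For readers unfamiliar with bond-slip laws, the following sketch evaluates a generic bilinear local bond-slip relationship; the parameter values are placeholders and this is not the calibrated model proposed in the thesis:

```python
# Sketch: a generic bilinear FRP-concrete bond-slip law tau(s): linear ascent
# to tau_max at slip s1, then linear softening to zero at s_f, zero afterwards.
def bilinear_bond_stress(slip_mm, tau_max=5.0, s1=0.05, s_f=0.25):
    if slip_mm <= 0.0:
        return 0.0
    if slip_mm <= s1:
        return tau_max * slip_mm / s1                    # elastic branch
    if slip_mm <= s_f:
        return tau_max * (s_f - slip_mm) / (s_f - s1)    # softening branch
    return 0.0                                           # fully debonded

for s in (0.0, 0.02, 0.05, 0.15, 0.25, 0.30):
    print(f"slip {s:.2f} mm -> bond stress {bilinear_bond_stress(s):.2f} MPa")
```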
APA, Harvard, Vancouver, ISO, and other styles
34

Young, Antony, and antony young@rmit edu au. "Accountants' acceptance of a cashless monetary system using an implantable chip." RMIT University. Accounting and Law, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080618.093806.

Full text
Abstract:
A logical control extension surrounding cashless means of exchange is a permanent personal verification mark. An implanted microchip, such as the ones that have already been successfully implanted into humans, could identify and store information. Connected with global positioning satellites and a computer system, a cashless monetary system could be formed in the future. The system would provide complete and continual real-time records for individuals, businesses and regulators. It would be possible for all trading to occur in this way in the future. A modified Technology Acceptance Model was developed based on Davis' (1989) model and Fishbein and Ajzen's (1975) theory to test the acceptance level of the new monetary system by professional accountants in Australia. The model includes perceived ease of use, perceived usefulness, perceived risk, and a subjective norm component. 523 accountants were surveyed in December 2003, with a response rate of 27%. 13% either strongly agreed or agreed that they would accept the implantable chip. The analysis showed that Perception of Risk, Subjective Norm and Perception of Usefulness were all significant in explaining the dependent variable at the 95% confidence level. The Perception of Ease of Use was not proved to be significant. In consideration of response bias, it was found that with respect to the perception of usefulness at the 0.01 level, two elements were not significant, those being
APA, Harvard, Vancouver, ISO, and other styles
35

Otoom, Mwaffaq Naif. "Capacity Metric for Chip Heterogeneous Multiprocessors." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26332.

Full text
Abstract:
The primary contribution of this thesis is the development of a new performance metric, Capacity, which evaluates the performance of Chip Heterogeneous Multiprocessors (CHMs) that process multiple heterogeneous channels. Performance metrics are required in order to evaluate any system, including computer systems. A lack of appropriate metrics can lead to ambiguous or incorrect results, something discovered while developing the secondary contribution of this thesis, that of workload modes for CHMs, or Workload Specific Processors (WSPs). For many decades, computer architects and designers have focused on techniques that reduce latency and increase throughput. The change in modern computer systems built around CHMs that process multi-channel communications in the service of single users calls this focus into question. Modern computer systems are expected to integrate tens to hundreds of processor cores onto single chips, often used in the service of single users, potentially as a way to access the Internet. Here, the design goal is to integrate as much functionality as possible during a given time window. Without the ability to correctly identify optimal designs, not only will the best performing designs not be found, but resources will be wasted and there will be a lack of insight into what leads to better performing designs. To address performance evaluation challenges of the next generation of computer systems, such as multicore computers inside of cell phones, we found that a structurally different metric is needed and proceeded to develop such a metric. In contrast to single-valued metrics, Capacity is a surface with dimensionality related to the number of input streams, or channels, processed by the CHM. We develop some fundamental Capacity curves in two dimensions and show how Capacity shapes reveal the interaction of not only programs and data, but also the interaction of multiple data streams as they compete for access to resources on a CHM. For the analysis of Capacity surface shapes, we propose the development of a demand characterization method whose output is in the form of a surface. By overlaying demand surfaces over Capacity surfaces, we are able to identify when a system meets its demands and by how much. Using the Capacity metric, computer performance optimization is evaluated against workloads in the service of individual users instead of individual applications, aggregate applications, or parallel applications. Because throughput was originally derived by drawing analogies between processor design and pipelines in the automobile industry, we introduce our Capacity metric for CHMs by drawing an analogy to automobile production, signifying that Capacity is the successor to throughput. In this work, we make the following major contributions:
• Definition and development of the Capacity metric as a surface with dimensionality related to the number of input streams, or channels, processed by the CHM.
• Techniques for analysis of the Capacity metric.
Since the Capacity metric was developed out of necessity, while pursuing the development of WSPs, this work also makes the following minor contributions:
• Definition and development of three foundations in order to establish an experimental foundation: a CHM model, a multimedia cell phone example, and a Workload Specific Processor (WSP).
• Definition of Workload Modes, which was the original objective of this thesis.
• Definition and comparison of two approaches to workload mode identification at run time: the Workload Classification Model (WCM) and another model that is based on Hidden Markov Models (HMMs).
• Development of a foundation for analysis of the Capacity metric, so that the impact of architectural features in a CHM may be better understood. In order to do this, we develop a Demand Characterization Method (DCM) that characterizes the demand of a specific usage pattern in the form of a curve (or a surface in general). By doing this, we are able to overlay demand curves over Capacity curves of different architectures to compare their performance and thus identify optimal performing designs.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
36

Fan, Xiaolin. "Material flow in a wood-chip refiner." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Zhou, Hao. "A mathematical model for the deformations achievable by a slightly extensible spherical cap." Thesis, University of Hawaii at Manoa, 2003. http://hdl.handle.net/10125/7055.

Full text
Abstract:
The shape of a deep seismic zone is thought to represent that of the descending slab of the lithosphere. The lithosphere before subduction is a spherical cap, and the shape of the descending slab is the result of the deformation of the spherical lithosphere at the subduction zone. It was found that the actual shape of the descending slab may be approximated by a slightly extensible spherical cap for a shallow subduction zone, while for a deep subduction zone the cap appears to be too inflexible to form a steep dipping angle. A technique was developed to explore the relationship between the subducted slab shape and the slab deformation.
vii, 35 leaves
APA, Harvard, Vancouver, ISO, and other styles
38

Rydén, Linda. "The EU common agricultural policy and its effects on trade." Thesis, Högskolan i Jönköping, Internationella Handelshögskolan, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-21403.

Full text
Abstract:
The common agricultural policy (CAP) is a much discussed policy in the European Union (EU). It allocates great sums to the European agricultural sector every year and has been accused of being trade-distorting and outdated. This thesis takes a closer look at which protectionist measures the CAP has used. The policy's effects on trade are assessed employing the sugar industry as a reference case. Sugar is heavily protected and is one of the most distorted sectors in agriculture. The CAP's effects on trade in the sugar industry for ten countries in and outside the EU from 1991 to 2011 are estimated using a gravity model. This particular type of estimation has, to the author's knowledge, not been performed for the sugar industry before, which makes the study unique. The results of the empirical testing indicate that trade diversion occurs if one country is a member of the CAP and its trading partner is not. When both trading partners are outside the CAP cooperation, they are estimated to have a higher trade volume. This result indicates that the CAP decreases trade. Current economic theory, in particular the North-South model of trade developed by Krugman (1979), suggests that protection of non-competitive sectors should be abolished and funds should instead be directed to innovation and new technology. The CAP is in this sense not adapted to modern economic thought.
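As a hedged illustration of the estimation strategy (not the thesis's data or exact specification), the sketch below fits a log-linear gravity equation with a CAP membership dummy by ordinary least squares on made-up observations:

```python
# Sketch: log-linear gravity model of bilateral sugar trade with a dummy for
# "exactly one partner inside the CAP", estimated by OLS on toy data.
#   ln(T_ij) = b0 + b1*ln(GDP_i*GDP_j) + b2*ln(dist_ij) + b3*one_in_CAP
import numpy as np

# columns: GDP_i, GDP_j, distance_km, one_in_CAP, trade value (all invented)
data = np.array([
    [3.0e12, 5.0e11, 1200, 0, 9.0e8],
    [2.5e12, 8.0e11,  800, 1, 2.1e8],
    [1.2e12, 6.0e11, 1500, 1, 1.5e8],
    [3.5e12, 9.0e11, 2000, 0, 6.5e8],
    [0.8e12, 4.0e11,  600, 0, 3.0e8],
    [2.0e12, 7.0e11, 1800, 1, 1.0e8],
])
X = np.column_stack([
    np.ones(len(data)),
    np.log(data[:, 0] * data[:, 1]),
    np.log(data[:, 2]),
    data[:, 3],
])
y = np.log(data[:, 4])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("coefficients [const, ln GDPi*GDPj, ln dist, one-in-CAP]:", beta.round(3))
```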
APA, Harvard, Vancouver, ISO, and other styles
39

Wahlström, Rickard. "Validation of docking performance in the context of a structural water molecule using model system." Thesis, Department of Physics, Chemistry and Biology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-19525.

Full text
Abstract:

In silico ligand docking is a versatile and common technique for predicting ligands and inhibitors for protein binding sites. The various docking programmes aim to calculate binding energies and to predict interactions, thus identifying potential ligands. The currently available programmes lack satisfying means by which to account for structural water molecules, which can either mediate protein-ligand contacts or be displaced upon ligand binding. The present project aims to generate data to facilitate the global effort of developing scoring functions in docking programmes that account for the contribution of structural water molecules to ligand binding, and thereby fill the said void. This is done by validating the performance of docking using a simple model system (cytochrome c peroxidase (CCP) W191G) containing four well-ordered, deeply buried structural water molecules which are known to either interact with a ligand or to be displaced upon ligand binding. Known ligands were docked into eight (crystallographically determined) receptor set-ups comprising the receptor and no, one or two of the water molecules. The performance was validated by comparison of the binding modes of the docked ligands and the crystal structures, comparison of docking scores of the ligands in the different set-ups, enrichment of the ligands from a database of decoys and, finally, by predicting new ligands from the decoy database. In addition, a high-resolution crystal structure of CCP W191G in complex with 3-aminopyridine (3AP) was determined in order to resolve ambiguities in the binding mode of this ligand.

APA, Harvard, Vancouver, ISO, and other styles
40

Bai, Di. "A feminist brave new world : the cultural revolution model theater revisited." Connect to resource, 1997. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1129217899.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 1997.
Advisors: Kirk Denton and Marlene Longenecker, Interdisciplinary Program. Includes bibliographical references (leaves 196-202). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO, and other styles
41

Zhou, Kan. "Demand Response in Smart Grid." Thesis, 2015. http://hdl.handle.net/1828/5973.

Full text
Abstract:
Conventionally, to support varying power demand, the utility company must prepare to supply more electricity than actually needed, which causes inefficiency and waste. With the increasing penetration of renewable energy, which is intermittent and stochastic, balancing power generation and demand becomes even more challenging. Demand response, which reschedules part of the elastic load on the users' side, is a promising technology to increase power generation efficiency and reduce costs. However, how to coordinate all the distributed heterogeneous elastic loads efficiently is a major challenge and has sparked numerous research efforts. In this thesis, we investigate different methods to provide demand response and improve power grid efficiency. First, we consider how to schedule the charging process of all the plug-in hybrid electric vehicles (PHEVs) so that demand peaks caused by PHEV charging are flattened. Existing solutions are either centralized, which may not be scalable, or decentralized based on real-time pricing (RTP), which may not be applicable immediately in many markets. Our proposed PHEV charging approach does not need complicated, centralized control and can be executed online in a distributed manner. In addition, we extend our approach and apply it to the distribution grid to solve the bus congestion and voltage drop problems by controlling the access probability of PHEVs. One of the advantages of our algorithm is that it does not need accurate predictions of base load and future users' behaviors. Furthermore, it is deployable even when the grid size is large. Different from PHEVs, whose future arrivals are hard to predict, there is another category of elastic load, such as Heating, Ventilation and Air-Conditioning (HVAC) systems, whose future status can be predicted based on the current status and control actions. How to minimize the power generation cost using this kind of elastic load is also an interesting topic for power companies. Existing work usually used HVAC systems for load following or load shaping based on given control signals or objectives. However, optimal external control signals may not always be available. Without such control signals, how to make a tradeoff between the fluctuation of non-renewable power generation and the limited demand response potential of the elastic load, while guaranteeing user comfort, is still an open problem. To solve this problem, we first model the temperature evolution process of a room and propose an approach to estimate the key parameters of the model. Then, based on model predictive control, a centralized and a distributed algorithm are proposed to minimize the fluctuation and maximize the user comfort level. In addition, we propose a dynamic water level adjustment algorithm to make the demand response always available in both directions. Extensive simulations based on practical data sets show that the proposed algorithms can effectively reduce the load fluctuation. Both the randomized PHEV charging and HVAC control algorithms discussed above belong to direct or centralized load shaping, which has been heavily investigated. However, it is usually not clear how users are compensated for providing load shaping services. In the last part of this thesis, we investigate indirect load shaping in a distributed manner. On one hand, we aim to reduce the users' energy cost by investigating how to fully utilize the battery pack and the water tank of Combined Heat and Power (CHP) systems.
We first formulate the queueing models for the CHP systems, and then propose an algorithm based on the Lyapunov optimization technique which does not need any statistical information about the system dynamics. The optimal control actions can be obtained by solving a non-convex optimization problem; we then discuss when it can be converted into a convex optimization problem. On the other hand, based on the users' reaction model, we propose an algorithm, with a time complexity of O(log n), to determine the RTP for the power company to effectively coordinate all the CHP systems and provide distributed load shaping services.
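As an illustration of the access-probability idea mentioned above (the probability rule, feeder limit and vehicle numbers are our own assumptions, not the thesis algorithm), the following sketch lets each waiting PHEV charge in a time slot with a probability that shrinks as the feeder headroom shrinks, which statistically flattens the aggregate peak:

```python
# Sketch: randomized PHEV charging admission. Each vehicle charges in the
# current slot with probability proportional to the remaining feeder headroom.
import random

def access_probability(base_load_kw, feeder_limit_kw, per_ev_kw, n_waiting):
    headroom = max(0.0, feeder_limit_kw - base_load_kw)
    expected_ev_load = max(1e-9, n_waiting * per_ev_kw)
    return min(1.0, headroom / expected_ev_load)

def simulate_slot(base_load_kw, feeder_limit_kw=500.0, per_ev_kw=7.0,
                  n_waiting=80, seed=0):
    rng = random.Random(seed)
    p = access_probability(base_load_kw, feeder_limit_kw, per_ev_kw, n_waiting)
    admitted = sum(rng.random() < p for _ in range(n_waiting))
    return p, base_load_kw + admitted * per_ev_kw

for base in (250.0, 350.0, 450.0):
    p, total = simulate_slot(base)
    print(f"base {base:.0f} kW -> access prob {p:.2f}, total load ~{total:.0f} kW")
```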
Graduate
APA, Harvard, Vancouver, ISO, and other styles
42

Grosberg, Anna. "A Bioinspired Computational Model of Cardiac Mechanics: Pathology and Development." Thesis, 2008. https://thesis.library.caltech.edu/2263/13/Ch2.pdf.

Full text
Abstract:

In this work we study the function and development of the myocardium by creating models that have been stripped down to their essentials. The model for the adult myocardium is based on the double helical band formation of the heart muscle fibers, observed in both histological studies and advanced DTMRI images. The muscle fibers in the embryonic myocardium are modeled as a helical band wound around a tubular chamber. We model the myocardium as an elastic body, utilizing the finite element method for the computations. We show that when the spiral band architecture is combined with spatial wave excitations the structure is twisted, thus driving the development of the embryonic heart into an adult heart. The double helical band model of the adult heart allows us to gain insight into the long-standing paradox between the modest ability of muscle fibers to contract (by only 15%) and the large left ventricular volume ejection fraction of 60%. We show that the double helical band structure is the essential factor behind such efficiency. Additionally, when the double helical band model is excited following the path of the Purkinje nerve network, physiological twist behavior is reproduced. As an additional validation, we show that when the stripped-down double helical band is placed inside a sac of soft collagen-like tissue, it is capable of producing physiologically high pressures.

We further develop the model to understand the different factors behind the loss of efficiency in hearts with a common pathology such as dilated cardiomyopathy. Using the stripped-down model we are able to show that the change in fiber angle is a much more important factor for heart function than the change in gross geometry. This finding has the potential to greatly impact the strategy used in certain surgical procedures.

APA, Harvard, Vancouver, ISO, and other styles
43

Zaid, Farid. "VIEWER-CENTRIC MOBILE SERVICES- A Framework and a Query Model." Phd thesis, 2011. https://tuprints.ulb.tu-darmstadt.de/2389/3/Farid_Zaid_Dissertation_Ch1-Ch5.pdf.

Full text
Abstract:
Over the last decade, the reach of the internet to mobile platforms has redefined the usage of mobile devices from voice-centric to data-centric. Parallel to this trend, location is becoming significantly more native to mobile devices. This can be attributed mainly to the increasing prevalence of GPS-empowered handsets. Nowadays, Location-based Services (LBS) feature many useful consumer services like navigation, location-based search, and location-based advertising. With the increased integration of high-end GPS units in mobile devices, combined with the integration of orientation sensors like digital compasses and accelerometers, a new class of mobile Location-based Services is emerging: viewer-centric mobile services. With their focus on viewer-centricity, these services aim at providing the user with information and services that are not only relevant to her current location, but also relevant to her field of view. A viewer-centric service can, for example, tell the user which restaurant is located in her viewing range and what food menu it is offering, all by pointing the mobile device to scan the surrounding objects. The main goal of this thesis is to develop a framework for viewer-centric mobile services and a query model, with the main focus on modeling viewer-centric queries from the GPS and orientation sensors existing in modern mobile devices. As these sensors, regardless of their fabrication quality, suffer from runtime uncertainties, e.g. blockage of the GPS signal by buildings or distortion of the magnetic field by nearby metallic structures, a key concept of the proposed framework, iVu.KOM, is to recognize the effects of such uncertainties on the overall accuracy of the modeled queries and to offer a means to correct the queries when needed. As the view of the user is also affected by obstacles, e.g. buildings, that exist in the user's surroundings and limit the viewing range, the framework also aims at efficient execution of the viewer-centric queries against the world geometry model. For this purpose, the proposed iVu query model suggests including in the query result both the retrieved points of interest and a simplified description of the surrounding geometry. This query model allows executing the queries promptly on the client and with minimal need to contact the server, even when the visibility changes due to the user's motion. To evaluate the feasibility of the proposed framework and model, a prototypical implementation of iVu.KOM was realized for a modern mobile platform featuring the required sensors.
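To give a concrete flavour of a viewer-centric query primitive, the sketch below tests whether a point of interest lies within an assumed field of view and range, given a GPS position and compass heading; it uses a flat-earth approximation, ignores obstacles and sensor uncertainty, and is not the iVu.KOM implementation:

```python
# Sketch: is a point of interest (POI) inside the user's field of view?
# Local flat-earth approximation; coordinates and thresholds are assumed values.
import math

def in_field_of_view(user_lat, user_lon, heading_deg, poi_lat, poi_lon,
                     fov_deg=60.0, max_range_m=300.0):
    # Local east/north offsets in metres (small-distance approximation).
    dlat = math.radians(poi_lat - user_lat)
    dlon = math.radians(poi_lon - user_lon)
    north = dlat * 6_371_000.0
    east = dlon * 6_371_000.0 * math.cos(math.radians(user_lat))
    dist = math.hypot(north, east)
    bearing = math.degrees(math.atan2(east, north)) % 360.0
    delta = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
    return dist <= max_range_m and delta <= fov_deg / 2.0

# Assumed user position, facing north-east; POI ~150 m away to the north-east.
print(in_field_of_view(49.8728, 8.6512, 45.0, 49.8738, 8.6527))
```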
APA, Harvard, Vancouver, ISO, and other styles
44

Liu, En-Tsyr, and 劉恩慈. "Improvement of RELAP5/MOD3 CHF Model." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/47405775182304825353.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Yang, Ling-Shiang, and 楊凌翔. "Conditional probability prediction model for landslides induced by Chi-Chi earthquake." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/20195995026775436983.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Civil Engineering
93
Taiwan is located in the circum-Pacific seismic zone, with frequent earthquake activity which can induce hazardous landslides. An effective landslide prediction map can provide an important reference for policymaking on land-use regulation and for drafting mitigation measures for potentially disastrous areas. The geographic information system database of the research area was constructed by collecting geology and geomorphology data and the landslide scars triggered by the Chi-Chi earthquake in the research area. Furthermore, the conditional probability method was utilized to construct the landslide potential model and the prediction model. Based on the results of the landslide potential analysis, the best factor combination for the landslide prediction analysis was determined. Verification of the results from the landslide potential and prediction analyses was performed using landslide scars of the research area, and the success rate of the analysis could be quantified. The results of the landslide prediction analysis indicate that using the aspect, slope and geology factors can properly build a discriminating landslide prediction model. The landslide scars in the landslide prediction map coincide well with the high landslide probability areas. Furthermore, the results of the comparisons also prove the suitability of the verification method used in this research.
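As a minimal illustration of the conditional-probability idea (the factor classes and cell counts below are invented, not data from the study), the landslide probability of each factor class can be computed as the ratio of landslide cells to all cells in that class:

```python
# Sketch: conditional landslide probability per factor class,
#   P(landslide | class) = landslide cells in class / all cells in class,
# shown here for a single "slope" factor with made-up cell counts.
# Combining several factors (aspect, slope, geology) follows the same idea.
slope_classes = {           # class -> (landslide cells, total cells), toy data
    "0-15 deg":  (120,  50_000),
    "15-30 deg": (980,  40_000),
    "30-45 deg": (2300, 25_000),
    ">45 deg":   (1500, 10_000),
}

for name, (slide_cells, total_cells) in slope_classes.items():
    p = slide_cells / total_cells
    print(f"P(landslide | slope {name}) = {p:.4f}")
```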
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Chun-Hsiang, and 王俊翔. "Optical Model Establishment for Blue LED Flip Chip." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/14320983822292203576.

Full text
Abstract:
Master's
National Chung Hsing University
Graduate Institute of Precision Engineering
105
Nowadays, blue LED flip chip simulation usually models the chip as a single surface. In this thesis, we treat the LED chip as having six surfaces and measure the flux and light pattern of each surface. General research does not discuss in detail the luminous flux and optical distribution of the front and lateral sides. The front-surface and lateral-surface light patterns are combined in the OptisWorks software to establish the optical model for the blue LED flip chip. It can provide a reference template for optical designers and, in the future, can also be applied to flip-chip packaging.
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, Jeng-Yang, and 陳政揚. "Building the Model of Blue LED Flip Chip." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/52630568647296632993.

Full text
Abstract:
Master's
National Chung Hsing University
Graduate Institute of Precision Engineering
103
LEDs are promising light sources with the characteristics of low power consumption, small volume and no mercury pollution. Two well-known methods to increase luminous efficiency are roughening the LED surface and designing geometrically patterned sapphire substrates, but no LED chip model has been proposed so far. Motivated by this point, this work employs the optical simulation software OptisWorks and optics principles to design a model of the blue LED chip. With verification by simulation and experiment, we succeed in building the model of the blue LED flip chip. Through optical microscopy and simulation we find that the intensity distribution over the surface regions of the LED is not very uniform. From the side view of the LED it is observed that the luminous exitance varies substantially at different angles and is more concentrated at the bottom. The flip chip model investigated and built in this work improves the understanding of the emission characteristics of the top and side surfaces, which is helpful for future packaging and applications of blue LED flip chips.
APA, Harvard, Vancouver, ISO, and other styles
48

Hong, Sheng-Jhong, and 洪勝忠. "Integrity Testing of Model Piles with Pile Cap." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/83126524854761132090.

Full text
Abstract:
Master's thesis
Chaoyang University of Technology
Master's Program, Department of Construction Engineering
101
Non-destructive testing (NDT) methods have been successfully applied to integrity testing of newly built piles, but experience with integrity testing of piles with a pile cap is still lacking, mainly because of the effect of the cap. Existing piles are often arranged as pile groups with a pile cap, which makes the test signal very complex and the integrity evaluation difficult. In this study, the sonic echo (SE), impulse response (IR), and ultra-seismic (US) methods were used for integrity testing of model piles with a pile cap. The model piles contain defects of various sizes, locations, and types, and the objective of the study is to investigate the feasibility of these three NDT methods in detecting them. The results indicate that, even with the influence of the pile cap, the SE and IR methods can still distinguish intact piles from piles with major defects, although the errors in the estimated pile length are greater than for piles without a cap and the signals are more difficult to interpret. If the receiver is placed properly, the US method can clearly distinguish an intact pile from a broken pile.
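A minimal sketch of the travel-time relation underlying the sonic echo method described above, where the reflector depth equals wave speed times echo time divided by two; the synthetic trace, sampling interval, and assumed concrete wave speed are illustrative assumptions, not the study's measurements.

```python
import numpy as np

def pile_length_from_echo(signal, dt, wave_speed):
    """Estimate pile (or defect) depth from a sonic-echo trace.

    signal:     hammer-blow response recorded at the pile head (1-D array).
    dt:         sampling interval in seconds.
    wave_speed: assumed longitudinal wave speed in concrete, m/s.

    The stress wave travels down and back, so depth = wave_speed * t_echo / 2.
    """
    # Skip the first samples dominated by the impact itself,
    # then take the strongest later arrival as the echo.
    start = int(0.0005 / dt)                    # ignore the first 0.5 ms
    echo_index = start + np.argmax(np.abs(signal[start:]))
    t_echo = echo_index * dt
    return wave_speed * t_echo / 2.0

# Hypothetical usage: a synthetic trace with an echo 5 ms after impact and a
# typical concrete wave speed of about 4000 m/s, giving a depth of ~10 m.
dt = 1e-5
trace = np.zeros(2000)
trace[0] = 1.0          # impact
trace[500] = 0.4        # echo at 5 ms
print(pile_length_from_echo(trace, dt, 4000.0))
```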
APA, Harvard, Vancouver, ISO, and other styles
49

Wu, Mingqi. "Population SAMC, ChIP-chip Data Analysis and Beyond." 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8752.

Full text
Abstract:
This dissertation research consists of two topics: population stochastic approximation Monte Carlo (Pop-SAMC) for Bayesian model selection problems, and ChIP-chip data analysis. The following two paragraphs give a brief introduction to each topic. Although reversible jump MCMC (RJMCMC) can traverse the space of possible models in Bayesian model selection problems, it is prone to becoming trapped in local modes when the model space is complex. SAMC, proposed by Liang, Liu and Carroll, essentially overcomes the difficulty of dimension-jumping moves by introducing a self-adjusting mechanism, but this learning mechanism has not yet reached its maximum efficiency. In this dissertation, we propose a Pop-SAMC algorithm: it works on a population of SAMC chains, which provides a more efficient self-adjusting mechanism and makes use of the crossover operator from genetic algorithms to further increase efficiency. Under mild conditions, the convergence of the algorithm is proved. The effectiveness of Pop-SAMC in Bayesian model selection problems is examined through a change-point identification example and a large-p linear regression variable selection example. The numerical results indicate that Pop-SAMC significantly outperforms both single-chain SAMC and RJMCMC. In the ChIP-chip data analysis study, two methodologies were developed to identify transcription factor binding sites: a Bayesian latent model and a population-based test. The former models the neighboring dependence of probes by introducing a latent indicator vector; the latter provides a nonparametric method for evaluating test scores in a multiple hypothesis test by making use of population information from the samples. Both methods are applied to real and simulated datasets. The numerical results indicate that the Bayesian latent model can outperform existing methods, especially when the data contain outliers, and that the use of population information can significantly improve the power of multiple hypothesis tests.
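A toy single-chain sketch of the SAMC self-adjusting mechanism that Pop-SAMC extends to a population of chains: the sample space is partitioned into subregions, and the log-weight of the currently visited subregion is raised so the sampler keeps escaping local modes. The target density, subregion partition, and gain sequence below are illustrative assumptions, not the dissertation's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multimodal target: log-density of a well-separated Gaussian mixture.
def log_psi(x):
    return np.logaddexp(-0.5 * (x + 5.0) ** 2, -0.5 * (x - 5.0) ** 2)

# Partition the sample space into subregions, as SAMC does.
edges = np.linspace(-10.0, 10.0, 21)            # 20 subregions over [-10, 10]
def region(x):
    return int(np.clip(np.searchsorted(edges, x) - 1, 0, len(edges) - 2))

m = len(edges) - 1
pi = np.full(m, 1.0 / m)     # desired sampling frequency of each subregion
theta = np.zeros(m)          # self-adjusted log-weights
x = 0.0
t0 = 100.0                   # gain sequence: gamma_t = t0 / max(t, t0)

for t in range(1, 20001):
    # Metropolis step targeting psi(x) / exp(theta[region(x)]).
    y = x + rng.normal(scale=2.0)
    if -10.0 <= y <= 10.0:
        log_acc = (log_psi(y) - theta[region(y)]) - (log_psi(x) - theta[region(x)])
        if np.log(rng.random()) < log_acc:
            x = y
    # Self-adjusting update: raise the weight of the visited subregion.
    gamma = t0 / max(t, t0)
    e = np.zeros(m)
    e[region(x)] = 1.0
    theta += gamma * (e - pi)

# theta now estimates, up to an additive constant, the log-integral of psi
# over each subregion, and the chain visits all subregions roughly equally.
print(np.round(theta - theta.max(), 2))
```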
APA, Harvard, Vancouver, ISO, and other styles
50

Chang, Chun-Ping, and 張鈞評. "Building the Chip Scale Package Model With Fluorescent powder." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/65938952897467800501.

Full text
APA, Harvard, Vancouver, ISO, and other styles