Dissertations on the topic "Complex drives"

To see other types of publications on this topic, follow the link: Complex drives.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 dissertations for your research on the topic "Complex drives".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, where these are available in the work's metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Fernandez Fournier, Philippe. "Complex spider webs as habitat patches: environmental filtering drives species composition." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58947.

Abstract:
Metacommunity theory has advanced understanding of mechanisms shaping community structure. Four main models (neutral, patch-dynamics, species-sorting, and mass-effects) have been recognized to explain these mechanisms, differing in their assumptions about the effects of environmental filtering and species traits on community composition. Here, I focus on complex, three-dimensional spider webs of two social and two solitary species as habitat patches for associated arthropods in a tropical rainforest in Ecuador. I used variance partitioning and various analyses of metacommunity structure to study the role of environmental filtering and dispersal in this system. I found that local patch characteristics, such as patch size and host species, predominantly affected local community composition. Webs of social spider species had higher richness, more variable communities, and proportionally more aggressive (i.e. predatory) web associates. Behavioral characteristics of the host spiders, such as sociality and aggressiveness, seemed to play an important role, as well, in shaping community composition on these patches. In a colonization experiment, there was indication of high dispersal rates at a short temporal scale and some evidence of species dominance at a longer temporal scale. I conclude that environmental filtering is responsible for the patterns of species distribution and that, given the conjunction of high dispersal and species specialization, the metacommunity patterns in this system seem to best be explained by a combination of the species sorting and mass effects models.
Faculty of Science, Department of Zoology
2

Unicomb, Samuel Lee. "Threshold driven contagion on complex networks." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN003.

Abstract:
Networks arise frequently in the study of complex systems, since interactions among the components of such systems are critical. Networks can act as a substrate for dynamical processes, such as the diffusion of information or disease throughout populations. Network structure can determine the temporal evolution of a dynamical process, including the characteristics of the steady state. The simplest representation of a complex system is an undirected, unweighted, single-layer graph. In contrast, real systems exhibit heterogeneity of interaction strength and type. Such systems are frequently represented as weighted multiplex networks, and in this work we incorporate these heterogeneities into a master equation formalism in order to study their effects on spreading processes. We also carry out simulations on synthetic and empirical networks, and show that spreading dynamics, in particular the speed at which contagion spreads via threshold mechanisms, depend non-trivially on these heterogeneities. Further, we show that an important family of networks undergo reentrant phase transitions in the size and frequency of global cascades as a result of these interactions. A challenging feature of real systems is their tendency to evolve over time, since the changing structure of the underlying network is critical to the behaviour of overlying dynamical processes. We show that one aspect of temporality, the observed "burstiness" in interaction patterns, leads to non-monotonic changes in the spreading time of threshold-driven contagion processes. The above results shed light on the effects of various network heterogeneities with respect to dynamical processes that evolve on these networks.
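To make the threshold mechanism concrete, here is a minimal sketch (my own illustration, not code from the thesis) of a Watts-style cascade on the simplest case the abstract contrasts against, an unweighted single-layer graph: a node activates once the fraction of its active neighbours reaches its threshold. The graph model, threshold, and seed fraction are assumptions chosen for illustration.

```python
# Hedged sketch of threshold-driven contagion (Watts-style), illustrative only.
import random
import networkx as nx

def threshold_cascade(G, threshold=0.2, seed_fraction=0.05, seed=42):
    rng = random.Random(seed)
    active = {n for n in G if rng.random() < seed_fraction}  # initial adopters
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for n in G:
            if n in active:
                continue
            neigh = list(G[n])
            # activate once the active-neighbour fraction reaches the threshold
            if neigh and sum(v in active for v in neigh) / len(neigh) >= threshold:
                active.add(n)
                changed = True
    return active

G = nx.erdos_renyi_graph(1000, 0.01, seed=1)
print(len(threshold_cascade(G)))        # size of the resulting cascade
```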
3

Wu, Xiaolei. "Coordination-Driven Self-Assembly of Terpyridine-Based Supramolecules." University of Akron / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=akron1490372164176458.

4

Lee, Donald Kwun Kuen. "Data-driven models for complex medical systems." 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

5

Sage, Aled. "Observation-driven configuration of complex software systems." Thesis, University of St Andrews, 2004. http://hdl.handle.net/10023/6479.

Abstract:
The ever-increasing complexity of software systems makes them hard to comprehend, predict and tune due to emergent properties and non-deterministic behaviour. Complexity arises from the size of software systems and the wide variety of possible operating environments: the increasing choice of platforms and communication policies leads to ever more complex performance characteristics. In addition, software systems exhibit different behaviour under different workloads. Many software systems are designed to be configurable so that policies (e.g. communication, concurrency and recovery strategies) can be chosen to meet the needs of various stakeholders. For complex software systems it can be difficult to accurately predict the effects of a change and to know which configuration is most appropriate. This thesis demonstrates that it is useful to run automated experiments that measure a selection of system configurations. Experiments can find configurations that meet the stakeholders' needs, find interesting behavioural characteristics, and help produce predictive models of the system's behaviour. The design and use of ACT (Automated Configuration Tool) for running such experiments is described, in combination with a number of search strategies for deciding on the configurations to measure. Design Of Experiments (DOE) is discussed, with emphasis on Taguchi Methods. These statistical methods have been used extensively in manufacturing, but have not previously been used for configuring software systems. The novel contribution here is an industrial case study, applying the combination of ACT and Taguchi Methods to DC-Directory, a product from Data Connection Ltd (DCL). The case study investigated the applicability of Taguchi Methods for configuring complex software systems. Taguchi Methods were found to be useful for modelling and configuring DC-Directory, making them a valuable addition to the techniques available to system administrators and developers.
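As a flavour of how a Taguchi-style orthogonal array keeps the number of measured configurations small, here is a hedged sketch (not ACT itself; the factor names and the response function are invented) of an L4(2^3) design estimating the main effects of three two-level configuration options:

```python
# L4(2^3) orthogonal array: 4 runs screen 3 two-level factors instead of 2^3 = 8.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def measure(cache_on, async_io, batch_writes):
    # Stand-in for running and timing the system under one configuration.
    return 100 - 20 * cache_on - 10 * async_io + 5 * (cache_on and batch_writes)

results = [measure(*run) for run in L4]
for factor, name in enumerate(["cache_on", "async_io", "batch_writes"]):
    high = sum(r for run, r in zip(L4, results) if run[factor]) / 2
    low = sum(r for run, r in zip(L4, results) if not run[factor]) / 2
    print(f"{name}: estimated main effect {high - low:+.1f}")
```

The balanced columns of the array are what allow each main effect to be estimated from only half of the runs.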
6

Schoner, Bernd. "Probabilistic characterization and synthesis of complex driven systems." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/62352.

Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2000.
Includes bibliographical references (leaves 194-204).
Real-world systems that have characteristic input-output patterns but don't provide access to their internal states are as numerous as they are difficult to model. This dissertation introduces a modeling language for estimating and emulating the behavior of such systems given time series data. As a benchmark test, a digital violin is designed from observing the performance of an instrument. Cluster-weighted modeling (CWM), a mixture density estimator around local models, is presented as a framework for function approximation and for the prediction and characterization of nonlinear time series. The general model architecture and estimation algorithm are presented and extended to system characterization tools such as estimator uncertainty, predictor uncertainty and the correlation dimension of the data set. Furthermore a real-time implementation, a Hidden-Markov architecture, and function approximation under constraints are derived within the framework. CWM is then applied in the context of different problems and data sets, leading to architectures such as cluster-weighted classification, cluster-weighted estimation, and cluster-weighted sampling. Each application relies on a specific data representation, specific pre and post-processing algorithms, and a specific hybrid of CWM. The third part of this thesis introduces data-driven modeling of acoustic instruments, a novel technique for audio synthesis. CWM is applied along with new sensor technology and various audio representations to estimate models of violin-family instruments. The approach is demonstrated by synthesizing highly accurate violin sounds given off-line input data as well as cello sounds given real-time input data from a cello player.
by Bernd Schoner.
Ph.D.
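In its standard published form, cluster-weighted modeling expands the joint input-output density over K clusters, each carrying a local model, and predicts via the resulting conditional expectation. A hedged reconstruction (notation assumed, not quoted from the thesis):

```latex
% Standard cluster-weighted modeling form: K clusters c_k, local models f_k.
p(\mathbf{y},\mathbf{x}) \;=\; \sum_{k=1}^{K} p(\mathbf{y}\mid\mathbf{x},c_k)\,
                               p(\mathbf{x}\mid c_k)\, p(c_k),
\qquad
\hat{\mathbf{y}}(\mathbf{x}) \;=\; \mathbb{E}[\mathbf{y}\mid\mathbf{x}]
 \;=\; \frac{\sum_{k} \mathbf{f}_k(\mathbf{x})\, p(\mathbf{x}\mid c_k)\, p(c_k)}
            {\sum_{k} p(\mathbf{x}\mid c_k)\, p(c_k)}
```

Here each f_k is the mean of the k-th local model, so the prediction is a data-driven blend of simple local fits.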
7

Klus, Stefan. "Data-driven analysis of complex dynamical systems." Berlin: Freie Universität Berlin, 2020. http://d-nb.info/1221599895/34.

8

Abou, Jaoude Dany. "Computationally Driven Algorithms for Distributed Control of Complex Systems." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/85965.

Abstract:
This dissertation studies the model reduction and distributed control problems for interconnected systems, i.e., systems that consist of multiple interacting agents/subsystems. The study of the analysis and synthesis problems for interconnected systems is motivated by the multiple applications that can benefit from the design and implementation of distributed controllers. These applications include automated highway systems and formation flight of unmanned aircraft systems. The systems of interest are modeled using arbitrary directed graphs, where the subsystems correspond to the nodes, and the interconnections between the subsystems are described using the directed edges. In addition to the states of the subsystems, the adopted frameworks also model the interconnections between the subsystems as spatial states. Each agent/subsystem is assumed to have its own actuating and sensing capabilities. These capabilities are leveraged in order to design a controller subsystem for each plant subsystem. In the distributed control paradigm, the controller subsystems interact over the same interconnection structure as the plant subsystems. The models assumed for the subsystems are linear time-varying or linear parameter-varying. Linear time-varying models are useful for describing nonlinear equations that are linearized about prespecified trajectories, and linear parameter-varying models allow for capturing the nonlinearities of the agents, while still being amenable to control using linear techniques. It is clear from the above description that the size of the model for an interconnected system increases with the number of subsystems and the complexity of the interconnection structure. This motivates the development of model reduction techniques to rigorously reduce the size of the given model. In particular, this dissertation presents structure-preserving techniques for model reduction, i.e., techniques that guarantee that the interpretation of each state is retained in the reduced order system. Namely, the sought reduced order system is an interconnected system formed by reduced order subsystems that are interconnected over the same interconnection structure as that of the full order system. Model reduction is important for reducing the computational complexity of the system analysis and control synthesis problems. In this dissertation, interior point methods are extensively used for solving the semidefinite programming problems that arise in analysis and synthesis.
Ph. D.
The work in this dissertation is motivated by the numerous applications in which multiple agents interact and cooperate to perform a coordinated task. Examples of such applications include automated highway systems and formation flight of unmanned aircraft systems. For instance, one can think of the hazardous conditions created by a fire in a building and the benefits of using multiple interacting multirotors to deal with this emergency situation and reduce the risks on humans. This dissertation develops mathematical tools for studying and dealing with these complex systems. Namely, it is shown how controllers can be designed to ensure that such systems perform in the desired way, and how the models that describe the systems of interest can be systematically simplified to facilitate performing the tasks of mathematical analysis and control design.
9

Hong, Seong-Kwan. "Performance driven analog layout compiler." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/15037.

10

Fu, Chao-ying. "Compiler-Driven Value Speculation Scheduling." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010508-151111.

Abstract:

Modern microprocessors utilize several techniques for extracting instruction-level parallelism (ILP) to improve the performance. Current techniques employed in the microprocessor include register renaming to eliminate register anti- and output (false) dependences, branch prediction to overcome control dependences, and data disambiguation to resolve memory dependences. Techniques for value prediction and value speculation have been proposed to break register flow (true) dependences among operations, so that dependent operations can be speculatively executed without waiting for producer operations to finish. This thesis presents a new combined hardware and compiler synergy, value speculation scheduling (VSS), to exploit the predictability of operations to improve the performance of microprocessors. The VSS scheme can be applied to dynamically-scheduled machines and statically-scheduled machines. To improve the techniques for value speculation, a value speculation model is proposed as solving an optimal edge selection problem in a data dependence graph. Based on three properties observed from the optimal edge selection problem, an efficient algorithm is designed and serves as a new compilation phase of benefit analysis to know which dependences should be broken to obtain maximal benefits from value speculation. A pure software technique is also proposed, so that existing microprocessors can employ software-only value speculation scheduling (SVSS) without adding new value prediction hardware and modifying processor pipelines. Hardware-based value profiling is investigated to collect highly predictable operations at run-time for reducing the overhead of program profiling and eliminating the need of profile training inputs.
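The check-and-recover idea behind value speculation can be paraphrased in a few lines of Python (a conceptual sketch only: in VSS this transformation is carried out by the compiler and the processor pipeline, not written in source code):

```python
def speculative_consume(predict, produce, consume):
    guess = predict()             # break the true dependence with a predicted value
    speculative = consume(guess)  # dependent work proceeds without waiting
    actual = produce()            # the producer completes later
    if actual == guess:
        return speculative        # prediction correct: keep the speculative result
    return consume(actual)        # misprediction: recover by re-executing
```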

11

Mo, Monica L. "Characterizing complex phenotypes in metabolism: an "omics"-driven systems approach." Diss., [La Jolla]: University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3380446.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed January 12, 2010). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 92-104).
12

Coore, Daniel. "Automatic profiler-driven probabilistic compiler optimization." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/35396.

13

Afzal, Nasrin. "Aging processes in complex systems." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23901.

Abstract:
Recent years have seen remarkable progress in our understanding of physical aging in nondisordered systems with slow, i.e. glassy-like dynamics. In many systems a single dynamical length L(t), that grows as a power-law of time t or, in much more complicated cases, as a logarithmic function of t, governs the dynamics out of equilibrium. In the aging or dynamical scaling regime, these systems are best characterized by two-times quantities, like dynamical correlation and response functions, that transform in a specific way under a dynamical scale transformation. The resulting dynamical scaling functions and the associated non-equilibrium exponents are often found to be universal and to depend only on some global features of the system under investigation. We discuss three different types of systems with simple and complex aging properties, namely reaction diffusion systems with a power growth law, driven diffusive systems with a logarithmic growth law, and a non-equilibrium polymer network that is supposed to capture important properties of the cytoskeleton of living cells. For the reaction diffusion systems, our study focuses on systems with reversible reaction diffusion and we study two-times functions in systems with power law growth. For the driven diffusive systems, we focus on the ABC model and a related domain model and measure two-times quantities in systems undergoing logarithmic growth. For the polymer network model, we explain in some detail its relationship with the cytoskeleton, an organelle that is responsible for the shape and locomotion of cells. Our study of this system sheds new light on the non-equilibrium relaxation properties of the cytoskeleton by investigating through a power law growth of a coarse grained length in our system.
Ph. D.
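In one common convention (exponent names assumed here, not taken from the thesis), the two-times quantities mentioned in the abstract scale with the single growing length L(t) as:

```latex
% Standard aging/dynamical-scaling forms for two-times quantities.
C(t,s) \;=\; s^{-b}\, f_C\!\left(\frac{L(t)}{L(s)}\right),
\qquad
R(t,s) \;=\; s^{-1-a}\, f_R\!\left(\frac{L(t)}{L(s)}\right),
\qquad t > s
```

where C and R are the two-times correlation and response functions and f_C, f_R are the universal scaling functions the abstract refers to.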
14

Tofield, Matthew Ian. "Visual attention in complex environments in relation to the older driver." Thesis, University of Reading, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270830.

15

Allan, Lucy. "A system tool for a complex world : a data-driven approach." Thesis, University of Bristol, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.715738.

16

Hermas, Wael. "Approach to coverage-driven functional verification of complex multimillion gate ASICs." Thesis, University of Ottawa (Canada), 2006. http://hdl.handle.net/10393/27370.

Abstract:
Today's ASIC designs are very complex, each consisting of multiple millions of gates. This creates a difficult, almost impossible task for verification engineers to verify the whole design thoroughly. The complexity of the verification effort grows exponentially. The most common verification methodology used today is deterministic testing. For today's large designs, hundreds of deterministic test cases are required to verify the whole design. This is very time consuming and requires a great deal of engineering manpower. Since time to market is essential, engineers are forced to send the ASIC for fabrication even though many functionalities are not covered. A solution to this problem is Coverage-Driven Functional Verification (CDV). The CDV approach is based on ASIC functionalities. The verification process is done in the early stages of the design; in fact, it is done right after the ASIC specifications are outlined and in parallel with RTL development. Functional coverage is performed by collecting coverage items that correspond to ASIC functionalities. Using functional coverage minimizes the number of test cases and enhances the verification process. In this thesis, the "Ethernet IP Core" Verilog design from OpenCores is used. The Ethernet IP Core is a 10/100 Media Access Controller (MAC). It consists of a synthesizable Verilog RTL core that provides all features necessary to implement the Layer 2 protocol of the Ethernet standard. Both the deterministic testing approach and Coverage-Driven Functional Verification are applied to the Ethernet IP Core design; the two methodologies are compared, and a conclusion is drawn that CDV is the way to verify today's complex multi-million-gate ASICs. Cadence Specman (the e language, IEEE 1647) was chosen as the verification tool because it provides features supporting the Coverage-Driven Functional Verification methodology.
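The idea of functional coverage, scoring how many behaviours in a coverage model the test cases have exercised, can be sketched in a few lines. (This is a conceptual illustration in Python, not Specman e; the field names and bins are invented.)

```python
from itertools import product

frame_kind = ["unicast", "multicast", "broadcast"]
length_bin = ["short", "nominal", "long"]
goal = set(product(frame_kind, length_bin))     # the coverage model: 9 cross bins
hit = set()

def sample(kind, length):
    """Record the coverage bin exercised by one observed transaction."""
    bin_ = "short" if length < 64 else "long" if length > 1500 else "nominal"
    hit.add((kind, bin_))

sample("unicast", 512)
sample("broadcast", 60)
print(f"functional coverage: {100 * len(hit) / len(goal):.0f}%")   # 2 of 9 bins hit
```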
17

Huang, Guorong. "Cost Modeling Based on Support Vector Regression for Complex Products During the Early Design Phases." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/28825.

Abstract:
The purpose of a cost model is to provide designers and decision-makers with accurate cost information to assess and compare multiple alternatives for obtaining the optimal solution and controlling cost. The cost models developed in the design phases are the most important and the most difficult to develop. Therefore it is necessary to identify appropriate cost drivers and employ appropriate modeling techniques to accurately estimate cost for directing designers. The objective of this study is to provide higher predictive accuracy of cost estimation for directing designers in the early design phases of complex products. After a generic cost estimation model is presented and the existing methods for identification of cost drivers and different cost modeling techniques are reviewed, the dissertation first proposes new methodologies to identify and select the cost drivers: Causal-Associated (CA) method and Tabu-Stepwise selection approach. The CA method increases understanding and explanation of the cost analysis and helps avoid missing some cost drivers. The Tabu-Stepwise selection approach is used to select significant cost drivers and eliminate irrelevant cost drivers under nonlinear situations. A case study is created to illustrate their procedure and benefits. The test data show they can improve predictive capacity. Second, this dissertation introduces Tabu-SVR, a nonparametric approach based on support vector regression (SVR) for cost estimation for complex products in the early design phases. Tabu-SVR determines the parameters of SVR via a tabu search algorithm improved by the author. For verification and validation of the performance of Tabu-SVR, the five common basic cost characteristics are summarized: accumulation, linear function, power function, step function, and exponential function. Based on these five characteristics and the Flight Optimization Systems (FLOPS) cost module (engine part), seven test data sets are generated to test Tabu-SVR and are used to compare it with other traditional methods (parametric modeling, neural networking and case-based reasoning). The results show Tabu-SVR significantly improves the performance compared to SVR based on empirical study. The radial basis function (RBF) kernel, which is much more robust, often performs better than linear and polynomial kernel functions. Compared with other traditional cost estimating approaches, Tabu-SVR with the RBF kernel function has strong predictive capability and is able to capture nonlinearities and discontinuities along with interactions among cost drivers. The third part of this dissertation focuses on semiparametric cost estimating approaches. Extensive studies are conducted on three semiparametric algorithms based on SVR. Three data sets are produced by combining the aforementioned five common basic cost characteristics. The experiments show Semiparametric Algorithm 1 is the best approach under most situations. It has better cost estimating accuracy than the pure nonparametric approach and the pure parametric approach. The model complexity influences the estimating accuracy for Semiparametric Algorithm 2 and Algorithm 3. If inexact function forms are used as the parametric component of a semiparametric algorithm, they often do not bring any improvement of cost estimating accuracy over the pure nonparametric approach and can even worsen the performance.
The last part of this dissertation introduces two existing methods for sensitivity analysis to improve the explanation capability of the cost estimating approach based on SVR. These methods are able to show the contribution of cost drivers, to determine the effect of cost drivers, to establish the profiles of cost drivers, and to conduct monotonic analysis. Finally, they can help designers conduct trade-off studies and answer "what-if" questions.
Ph. D.
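For a feel of the base estimator, here is a minimal sketch of SVR with an RBF kernel on synthetic cost data (standard scikit-learn with fixed hyperparameters; Tabu-SVR, by contrast, tunes the SVR parameters with a tabu search, and the cost drivers here are placeholders):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))     # three notional cost drivers
# Synthetic cost mixing power, step, and linear characteristics plus noise.
y = 50 + 30 * X[:, 0] ** 2 + 10 * np.floor(4 * X[:, 1]) + 5 * X[:, 2] \
    + rng.normal(0, 1, 200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
model.fit(X[:150], y[:150])
print("holdout MAE:", np.mean(np.abs(model.predict(X[150:]) - y[150:])))
```

The RBF kernel is what lets the fit bend around the step discontinuity, which a linear kernel cannot do.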
18

Pittman, Grant Falwell. "Drivers of demand, interrelationships, and nutritional impacts within the nonalcoholic beverage complex." Texas A&M University, 2004. http://hdl.handle.net/1969.1/2673.

Abstract:
This study analyzes the economic and demographic drivers of household demand for at-home consumption of nonalcoholic beverages in 1999. Drivers of available intake of calories, calcium, vitamin C, and caffeine associated with the purchase of nonalcoholic beverages also are analyzed. The 1999 ACNielsen HomeScan Panel, purchased by the U.S. Department of Agriculture, Economic Research Service, is the source of the data for this project. Many different classifications of beverages were analyzed including milk (whole, reduced fat, flavored, and non-flavored), regular and low-calorie carbonated soft drinks, powdered soft drinks, isotonics (sports drinks), juices (orange, apple, vegetable, and other juices), fruit drinks, bottled water, coffee (regular and decaffeinated), and tea (regular and decaffeinated). Probit models were used to find demographic drivers that affect the choice to purchase a nonalcoholic beverage. Heckman sample selection models and cross tabulations were used to find demographic patterns pertaining to the amount of purchase of the nonalcoholic beverages. The nutrient analysis indicated that individuals receive 211 calories, 217 mg of calcium, 45 mg of vitamin C, and 95 mg of caffeine per day from all nonalcoholic beverages. A critical finding for the nutrient analysis was that persons within households below 130% of poverty were receiving more calories and caffeine from nonalcoholic beverages compared to persons within households above 130% of poverty. Likewise, persons in households below 130% of poverty were receiving less calcium and vitamin C from nonalcoholic beverages compared to persons in households above 130% of poverty. Price and cross-price elasticities were examined using the LA/AIDS model. Methodological concerns of data frequency, beverage aggregations, and censoring techniques were explored and discussed. Own-price and cross-price elasticities for the beverages were uncovered. Price elasticities by selected demographic groups also were investigated. Results indicated that price elasticities varied by demographics, specifically for race, region, and presence of children within the household. The information uncovered in this dissertation helps to update consumer demand knowledge and nutritional intake understanding in relation to nonalcoholic beverages. The information can be used as a guide for marketing strategists for targeting and promotion as well as for policy makers looking to improve nutritional intake received from nonalcoholic beverages.
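The first stage of such an analysis, a probit model of the decision to purchase a beverage category, looks roughly like the following (a hedged sketch using standard statsmodels on synthetic data, not the ACNielsen panel; the variable names are invented):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
income = rng.normal(50, 15, 500)                # household income, $1000s
kids = rng.integers(0, 2, 500)                  # children present (0/1)
X = sm.add_constant(np.column_stack([income, kids]))

# Synthetic purchase decision generated from a latent index.
latent = -2.0 + 0.03 * income + 0.8 * kids + rng.normal(0, 1, 500)
buys = (latent > 0).astype(int)

probit = sm.Probit(buys, X).fit(disp=0)
print(probit.params)                            # demographic drivers of the choice
```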
19

Wang, Fulin. "Combined fouling of pressure-driven membranes treating feed waters of complex composition." Diss., 2008.

20

Reilly, David James. "Experimental study of shock-driven, variable-density turbulence using a complex interface." Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54456.

Abstract:
The overarching goal of this work is to advance the current knowledge of hydrodynamic instabilities (namely, Richtmyer-Meshkov and Kelvin-Helmholtz instabilities) and associated turbulent mixing phenomena, which are important for several emerging technologies and verification/validation of numerical models being developed to study these phenomena. Three experimental campaigns were designed to focus on understanding the evolution of the instability under different impulsive acceleration histories and highlight the impact of initial conditions on the developing turbulent flow environment. The first campaign highlights the importance of initial baroclinic torque distribution along the developing shocked interface in a twice-shocked variable-density flow environment. The second campaign is a parametric study which aims at providing a large dataset for validating models in literature as well as simulations. In the last study, a new type of initial condition was designed to study the effect of initial conditions on late time turbulent flows. A description of the optical diagnostic techniques developed in our laboratory in order to complete these studies will be given. Now each campaign will be introduced. In the first campaign, an inclined interface perturbation is used as the initial condition. The Mach number (1.55), angle of inclination (60 degrees), and gas pair (N2/CO2) were held constant. The parameter which changed was the distance that the initial condition was placed relative to the end of the shock tube (i.e., the end of the test section). Three distances were used. The vorticity distribution was found to be very different for the most developed case after reshock. Furthermore, the most developed case started to develop an inertial range before reshock. The second campaign is parametric and seeks to test a proposed inclined interface scaling technique. The data is also useful for comparing to Ares simulation results. The parameter space covered Mach number (1.55 and 2.01), inclination angle (60 degrees and 80 degrees), and Atwood number (0.23 and 0.67). PLIF was developed and used to collect data for four cases before and after reshock. Linear and nonlinear cases developed very differently before reshock, but their mixing widths converged after reshock. The last campaign involves a new perturbation technique which generates what will be referred to as a complex interface. Counter-flowing jets were placed near the interface exit ports to create shear. The perturbation was made more complex by also injecting light (heavy) gas into the heavy (light) one. Density and velocity statistics were collected simultaneously. The complex case retained a signature of the inclined interface perturbation at late time before reshock and developed a larger inertial range than its inclined interface counterpart. Important parameters for a variable-density turbulence model are also presented.
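For reference, the Atwood numbers quoted in the parameter space (0.23 and 0.67) follow the standard definition of the nondimensional density contrast across the interface:

```latex
% Standard Atwood number definition for the two gases across the interface.
A \;=\; \frac{\rho_2 - \rho_1}{\rho_2 + \rho_1}
```

where \(\rho_1\) and \(\rho_2\) are the densities of the light and heavy gases, respectively.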
21

Tournavitis, Georgios. "Profile-driven parallelisation of sequential programs." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5287.

Abstract:
Traditional parallelism detection in compilers is performed by means of static analysis and more specifically data and control dependence analysis. The information that is available at compile time, however, is inherently limited and therefore restricts the parallelisation opportunities. Furthermore, applications written in C – which represent the majority of today’s scientific, embedded and system software – utilise many low-level features and an intricate programming style that forces the compiler into even more conservative assumptions. Despite the numerous proposals to handle this uncertainty at compile time using speculative optimisation and parallelisation, the software industry still lacks any pragmatic approach that extracts coarse-grain parallelism to exploit the multiple processing units of modern commodity hardware. This thesis introduces a novel approach for extracting and exploiting multiple forms of coarse-grain parallelism from sequential applications written in C. We utilise profiling information to overcome the limitations of static data and control-flow analysis, enabling more aggressive parallelisation. Profiling is performed using an instrumentation scheme operating at the Intermediate Representation (IR) level of the compiler. In contrast to existing approaches that depend on low-level binary tools and debugging information, IR profiling provides precise and direct correlation of profiling information back to the IR structures of the compiler. Additionally, our approach is orthogonal to existing automatic parallelisation approaches and additional fine-grain parallelism may be exploited. We demonstrate the applicability and versatility of the proposed methodology using two studies that target different forms of parallelism. First, we focus on the exploitation of loop-level parallelism that is abundant in many scientific and embedded applications. We evaluate our parallelisation strategy against the NAS and SPEC FP benchmarks and two different multi-core platforms (a shared-memory Intel Xeon SMP and a heterogeneous distributed-memory IBM Cell blade). Empirical evaluation shows that our approach not only yields significant improvements when compared with state-of-the-art parallelising compilers, but comes close to and sometimes exceeds the performance of manually parallelised codes. On average, our methodology achieves 96% of the performance of the hand-tuned parallel benchmarks on the Intel Xeon platform, and a significant speedup for the Cell platform. The second study addresses the problem of partially sequential loops, typically found in implementations of multimedia codecs. We develop a more powerful whole-program representation based on the Program Dependence Graph (PDG) that supports profiling, partitioning and code generation for pipeline parallelism. In addition we demonstrate how this enhances conventional pipeline parallelisation by incorporating support for multi-level loops and pipeline stage replication in a uniform and automatic way. Experimental results using a set of complex multimedia and stream processing benchmarks confirm the effectiveness of the proposed methodology that yields speedups of up to 4.7 on an eight-core Intel Xeon machine.
22

Fenacci, Damon. "Compiler-driven data layout transformations for network applications." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6210.

Abstract:
This work approaches the little-studied topic of compiler optimisations directed at network applications. It starts by investigating whether there exist any fundamental differences between application domains that justify the development and tuning of domain-specific compiler optimisations. It shows an automated approach that is capable of identifying domain-specific workload characterisations and presenting them in a readily interpretable format based on decision trees. The generated workload profiles summarise key resource utilisation issues and enable compiler engineers to address the highlighted bottlenecks. By applying this methodology to a data-intensive network infrastructure application, it shows that data organisation is the key obstacle to overcome in order to achieve high performance. It therefore proposes and evaluates three specialised data transformations (structure splitting, array regrouping, and software caching) against the industrial EEMBC networking benchmarks and real-world data sets. It demonstrates on one hand that speedups of up to 2.62 can be achieved, but on the other that no single solution performs equally well across different network traffic scenarios. Hence, to address this issue, an adaptive software caching scheme for high frequency route lookup operations is introduced and its effectiveness evaluated once more against the EEMBC networking benchmarks and real-world data sets, achieving speedups of up to 3.30 and 2.27. The results clearly demonstrate that adaptive data organisation schemes are necessary to ensure optimal performance under varying network loads. Finally, this research addresses another issue introduced by data transformations such as array regrouping and software caching, i.e., the need for static analysis to allow efficient resource allocation. This thesis proposes a static code analyser that allows the automatic resource analysis of source code containing lists and tree structures. The tool applies a combination of amortised analysis and separation logic methodology to real code and is able to evaluate type and resource usage of existing data structures, which can be used to compute global resource consumption values for full data-intensive network applications.
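Of the three transformations, structure splitting is the easiest to picture: the array-of-structures layout is broken into one densely packed array per field, so traversals of a hot field stay contiguous in memory. A Python/NumPy stand-in for the C-level transformation (the field names are invented for illustration):

```python
import numpy as np

n = 1_000_000
# Array-of-structures: one record per packet, fields interleaved in memory.
aos = np.zeros(n, dtype=[("dst_addr", np.uint32), ("ttl", np.uint8),
                         ("length", np.uint16)])

# Structure splitting: each field in its own contiguous array.
dst_addr = aos["dst_addr"].copy()   # hot field, e.g. for route lookups
ttl = aos["ttl"].copy()             # cold fields kept out of the hot loop's cache lines
length = aos["length"].copy()

hot_sum = int(dst_addr.sum())       # scans 4*n contiguous bytes, not 7*n interleaved
```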
23

Hu, Bo. "Model compiler driven device modeling and circuit simulation." Thesis, 2006. http://hdl.handle.net/1773/6054.

24

Labatut, Patrick. "Labeling of data-driven complexes for surface reconstruction." Paris 7, 2009. http://www.theses.fr/2009PA077106.

Abstract:
This thesis introduces a new flexible framework for surface reconstruction from acquired point sets. This framework casts the surface reconstruction problem as a binary labeling problem on the cells of a point-guided cell complex, under a combination of visibility constraints. This problem can be solved by computing a simple minimum s-t cut, allowing an optimal visibility-consistent surface to be found efficiently. In the first part of this thesis, the framework is used for general surface reconstruction problems. A first application leads to an extremely robust surface reconstruction algorithm for dense point clouds from range data. A second application consists of a key component of a dense multi-view stereo reconstruction pipeline, combined with a carefully designed photometric variational refinement. The whole pipeline is suitable for large-scale scenes and achieves state-of-the-art results both in completeness and accuracy of the obtained reconstructions. In the second part of this thesis, the problem of directly reconstructing geometrically simple models from point clouds is addressed. A robust algorithm is proposed to hierarchically cluster a dense point cloud into shapes from a predefined set of classes. If this set of classes is reduced to planes only, the concise reconstruction of models of extremely low combinatorial complexity is achieved. The extension to more general shapes trades this conciseness for a more verbose reconstruction, with the added feature of handling more challenging point clouds.
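The cut formulation is easy to see on a toy graph (standard networkx, not the thesis code): visibility evidence becomes source/sink capacities, a surface penalty becomes finite capacities between neighbouring cells, and the minimum s-t cut returns the optimal inside/outside labeling.

```python
import networkx as nx

# Evidence: c1 looks empty (outside), c3 looks solid (inside); c2 is undecided.
G = nx.DiGraph()
s, t = "source", "sink"
G.add_edge(s, "c1", capacity=5.0)    # visibility evidence that c1 is outside
G.add_edge("c3", t, capacity=5.0)    # evidence that c3 is inside
for a, b in [("c1", "c2"), ("c2", "c3")]:
    G.add_edge(a, b, capacity=1.0)   # surface penalty between adjacent cells
    G.add_edge(b, a, capacity=1.0)

cut_value, (outside, inside) = nx.minimum_cut(G, s, t)
print(cut_value, sorted(outside - {s}), sorted(inside - {t}))
# -> 1.0 ['c1'] ['c2', 'c3']: the surface passes between c1 and c2
```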
25

Chakraborty, Shatakshi. "A study on context driven human activity recognition framework." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439308572.

26

Van, der Merwe Hendrik Naude. "Remote sensing driven lithological discrimination within nappes of the Naukluft Nappe Complex, Namibia." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/97147.

Abstract:
Thesis (MSc)--Stellenbosch University, 2015.
Geological remote sensing is a powerful tool for lithological discrimination, especially in arid regions with minimal vegetative cover to obscure rock exposures. Commercial multispectral imaging satellites provide a broad spectral range with which to target specific rock types. Landsat ETM+ (7), ASTER, and SPOT 5 multispectral images were acquired and digitally processed: band ratioing, principal components analysis, and maximum likelihood supervised classification. The sensors were evaluated on the ability to discriminate between sedimentary rocks in a structurally complex setting. The study focusses on the formations of the Naukluft Nappe Complex, Namibia. Previous work on the area had to be consulted in order to identify the main target rock types. Dolomite, limestone, quartzite, and shale were determined to make up the majority of rock types in the area. Landsat, ASTER, and SPOT 5 imagery were acquired and pre-processed. Each was subjected to transform techniques: band ratios and PCA. Band ratios were tailored to highlight target rock types, along with a number of control ratios to ensure the integrity of important ratios. PCA components were inspected to find the most useful ones, which were combined into FCCs. Transform results, expert knowledge, and a geological map were consulted to identify training and accuracy samples for the supervised classifications. All three classifications made use of the same set of training and accuracy samples to facilitate useful comparisons. Transform results were promising for Landsat and ASTER images, while SPOT 5 struggled. The limited spectral resolution of SPOT 5 limited its use for identifying target rock types, with the superior spatial resolution contributing very little. Landsat benefitted from good spectral resolution. This allowed for good performance with highlighting limestone and dolomite, while being less successful with shale. Quartzite was a real problem as the spectral resolution of Landsat could not cover this range as well. ASTER, having the highest spectral resolution, could distinguish between all four target rock types. Landsat and ASTER results suffered in areas where formations were relatively thin (smaller than sensor spatial resolution). The supervised classification results were similar to the transforms in that both Landsat and ASTER provided useful results, while SPOT 5 failed to yield definitive results. Accuracy assessment determined that ASTER performed the best at 98.72%. Landsat produced an accuracy of 93.29%, while SPOT 5 was 80.17% accurate. Landsat completely overestimated the amount of quartzite present, while all results classified significant proportions of Quaternary sediments as shale. Limestone was well represented in even the poorest results, while dolomite usually struggled in areas where it was in close association with quartzite. Silica yields relatively strong responses in the TIR spectrum, which could lead to misclassification of dolomite, which also has strong TIR signatures.
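The band-ratio transform at the core of this workflow is a per-pixel division of one band by another, chosen so that the target lithology's spectral contrast is amplified. A minimal sketch (the band choices and index name are illustrative, not the thesis's exact ratios):

```python
import numpy as np

def band_ratio(num_band, den_band, eps=1e-6):
    """Per-pixel ratio image; eps guards against division by zero."""
    return num_band.astype(np.float64) / (den_band.astype(np.float64) + eps)

rng = np.random.default_rng(0)
band4 = rng.uniform(0.05, 0.9, (512, 512))   # placeholder reflectance rasters
band6 = rng.uniform(0.05, 0.9, (512, 512))
carbonate_index = band_ratio(band4, band6)   # hypothetical carbonate-highlighting ratio
```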
27

Tao, Li. "Understanding the performance of healthcare services: a data-driven complex systems modeling approach." HKBU Institutional Repository, 2014. https://repository.hkbu.edu.hk/etd_oa/89.

Abstract:
Healthcare is of critical importance in maintaining people's health and wellness. It has attracted policy makers, researchers, and practitioners around the world to find better ways to improve the performance of healthcare services. One of the key indicators for assessing that performance is to show how accessible and timely the services will be to specific groups of people in distinct geographic locations and in different seasons, which is commonly reflected in the so-called wait times of services. Wait times involve multiple related impact factors, called predictors, such as demographic characteristics, service capacities, and human behaviors. Some impact factors, especially individuals' behaviors, may have mutual interactions, which can lead to tempo-spatial patterns in wait times at a systems level. The goal of this thesis is to gain a systematic understanding of healthcare services by investigating the causes and corresponding dynamics of wait times. This thesis presents a data-driven complex systems modeling approach to investigating the causes of tempo-spatial patterns in wait times from a self-organizing perspective. As the predictors of wait times may have direct, indirect, and/or moderating effects, referred to as complex effects, a Structural Equation Modeling (SEM)-based analysis method is proposed to discover the complex effects from aggregated data. Existing regression-based analysis techniques are only able to reveal pairwise relationships between observed variables, whereas this method allows us to explore the complex effects of observed and/or unobserved (latent) predictors on wait times simultaneously. This thesis then considers how to estimate the variations in wait times with respect to changes in specific predictors and their revealed complex effects. An integrated projection method using the SEM-based analysis, projection, and a queuing model analysis is developed. Unlike existing studies that either make projections based primarily on pairwise relationships between variables, or queuing model-based discrete event simulations, the proposed method enables us to make a more comprehensive estimate by taking into account the complex effects exerted by multiple observed and latent predictors, and thus gain insights into the variations in the estimated wait times over time. This thesis further presents a method for designing and evaluating service management strategies to improve wait times, which are determined by service management behaviors. Our proposed strategy for allocating time blocks in operating rooms (ORs) incorporates historical feedback information about ORs and can adapt to the unpredictable changes in patient arrivals and hence shorten wait times. Existing time block allocations are somewhat ad hoc and are based primarily on the allocations in previous years, and thus result in inefficient use of service resources. Finally, this thesis proposes a behavior-based autonomy-oriented modeling method for modeling and characterizing the emergent tempo-spatial patterns at a systems level by taking into account the underlying individuals' behaviors with respect to various impact factors. This method uses multi-agent Autonomy-Oriented Computing (AOC), a computational modeling and problem-solving paradigm with a special focus on addressing the issues of self-organization and interactivity, to model heterogeneous individuals (entities), autonomous behaviors, and the mutual interactions between entities and certain impact factors.
The proposed method therefore eliminates to a large extent the strong assumptions that are used to define the stochastic properties of patient arrivals and services in stochastic modeling methods (e.g., the queuing model and discrete event simulation), and those of fixed relationships between entities that are held by system dynamics methods. The method is also more practical than agent-based modeling (ABM) for discovering the underlying mechanisms for emergent patterns, as AOC provides a general principle for explicitly stating what fundamental behaviors of and interactions between entities should be modeled. To demonstrate the effectiveness of the proposed systematic approach to understanding the dynamics and relevant patterns of wait times in specific healthcare service systems, we conduct a series of studies focusing on the cardiac care services in Ontario, Canada. Based on aggregated data that describe the services from 2004 to 2007, we use the SEM-based analysis method to (1) investigate the direct and moderating effects that specific demand factors, in terms of certain geodemographic profiles, exert on patient arrivals, which indirectly affect wait times; and (2) examine the effects of these factors (e.g., patient arrivals, physician supply, OR capacity, and wait times) on the wait times in subsequent units in a hospital. We present the effectiveness of integrated projection in estimating the regional changes in service utilization and wait times in cardiac surgery services in 2010-2011. We propose an adaptive OR time block allocation strategy and evaluate its performance based on a queuing model derived from the general perioperative practice. Finally, we demonstrate how to use the behavior-based autonomy-oriented modeling method to model and simulate the cardiac care system. We find that patients' hospital selection behavior, hospitals' service adjusting behavior, and their interactions via wait times may account for the emergent tempo-spatial patterns that are observed in the real-world cardiac care system. In summary, this thesis emphasizes the development of a data-driven complex systems modeling approach for understanding wait time dynamics in a healthcare service system. This approach will provide policy makers, researchers, and practitioners with a practically useful method for estimating the changes in wait times in various "what-if" scenarios, and will support the design and evaluation of resource allocation strategies for better wait times management. By addressing the problem of characterizing emergent tempo-spatial wait time patterns in the cardiac care system from a self-organizing perspective, we have provided a potentially effective means for investigating various self-organized patterns in complex healthcare systems. Keywords: Complex Healthcare Service Systems, Wait Times, Data-Driven Complex Systems Modeling, Autonomy-Oriented Computing (AOC), Cardiac Care
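As a back-of-envelope companion to the queuing analysis, the classical M/M/1 result already shows why wait times explode as arrivals approach service capacity (a standard textbook formula, used here only as a stand-in for the thesis's perioperative model):

```python
def mm1_mean_wait(lam, mu):
    """Mean time waiting in queue, W_q = rho / (mu - lam), valid for lam < mu."""
    assert lam < mu, "the queue is only stable when arrivals are below capacity"
    rho = lam / mu                    # server utilization
    return rho / (mu - lam)

for lam in (4.0, 4.5, 4.9):          # arrivals per day against capacity mu = 5/day
    print(f"lambda={lam}: W_q = {mm1_mean_wait(lam, 5.0):.2f} days")
```

The wait grows from 0.8 to 9.8 days as utilization rises from 80% to 98%, which is the nonlinearity adaptive time block allocation tries to manage.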
28

Deicke, Markus. "Virtuelle Absicherung von Steuergeräte-Software mit hardwareabhängigen Komponenten." Doctoral thesis, Universitätsbibliothek Chemnitz, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-230123.

Abstract:
The constantly increasing amount of functions in modern automobiles and the growing degree of cross-linking between electronic control units (ECU) require new methods to master the complexity in the validation and verification process. The virtual validation and verification enables the integration of the software on a PC system, which is independent from the target hardware, to guarantee the required software quality in the early development stages. Furthermore, the software reuse in future microcontrollers can be verified. All this is enabled by the AUTOSAR standard which provides consistent interface descriptions to allow the abstraction of hardware and software. However, the standard contains hardware-dependent components, called complex device drivers (CDD). Those CDDs cannot be directly integrated into a platform for virtual verification, because they require a specific hardware which is not generally available on such a platform. Regardless, CDDs are an essential part of the ECU software and therefore need to be considered in an holistic approach for validation and verification. This thesis describes seven different concepts to include CDDs in the virtual verification process. A method to always choose the optimal solution for all use cases of CDDs in ECU software is developed using an evaluation of the suitably for daily use of all concepts. As a result from this method, the two concepts suited for the most frequent use cases are detailed and developed as prototypes in this thesis. The first concept enables the full simulation of a CDD. This is necessary to allow the integration of the functional software itself without the driver. This way all interfaces can be tested even if the CDD is not available. The complete automation of the generation of the simulation makes the process very efficient. With the second concept a CDD can be entirely integrated into a platform for virtual verification, using an hardware abstraction layer to connect the hardware interfaces to the available hardware of the platform. This way, the driver is able to control real hardware components and can be tested completely. A flexible configuration of the abstraction layer allows the application of the concept for a wide variety of CDDs. In this thesis both concepts are tested and evaluated using genuine projects from series development
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Seegmiller, Jayrin Ella. "Development of a Complex Synthetic Larynx Model and Characterization of the Supraglottal Jet." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4149.

Повний текст джерела
Анотація:
Voice is an important tool for communication. Consequently, voice disorders tend to severely diminish quality of life. Voice research seeks to understand the physics that govern voice production to improve treatment of voice disorders. This thesis develops a method for creating complex synthetic laryngeal models and obtaining flow data within these complex models. The method uses Computed Tomography (CT) scan data to create silicone models of the larynx. Index of refraction matching allows flow field data to be collected within a synthetic complex larynx, which had previously been impossible. A short proof-of-concept of the method is set forth. Details on the development of a mechanically-driven synthetic model are presented. Particle image velocimetry was used to collect flow field data in a complex and a simplified supraglottal model to study the effect of complex geometry on the supraglottal jet. Axis switching and starting and closing vortices were observed. The thesis results are anticipated to aid in better understanding flow structures present during voice production.
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Costa, Raul António Janeiro. "Simulation and optimisation of CLIC’s recombination complex." Master's thesis, Universidade de Aveiro, 2018. http://hdl.handle.net/10773/23656.

Повний текст джерела
Анотація:
Master's in Physics
In this thesis we present the first Placet2 recombination simulations of the drive beam recombination complex (DBRC) design for the compact linear collider (CLIC). We start by presenting a review of the CLIC project and the DBRC’s role and design within it. We then discuss some of the core principles of beam dynamics and how tracking codes like Placet2 implement them. We follow that by presenting the design issues raised by our simulations and our proposed strategy to address them. Key among these is a previously unknown parabolic dependency of the longitudinal position on the momentum (T566), which threatens the efficiency of the power extraction structures. Through iterative optimisation of the design, we eliminated this aberration both in the delay loop and in combiner ring 1. We also found the beam’s horizontal emittance to be significantly over the design budget (150 μm) and attempted to meet that budget, reducing it to 157 μm. In order to obtain this emittance value, an update to combiner ring 2’s injection scheme was necessary. On the vertical plane, which has the same emittance budget, the emittance was kept at 127 μm.
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Idowu, Michael Adewunmi. "Data-driven modelling and optimised reverse engineering of complex dynamical systems in cancer research." Thesis, Abertay University, 2013. https://rke.abertay.ac.uk/en/studentTheses/c025b467-b317-4dbf-9c52-96a6e9d75047.

Повний текст джерела
Анотація:
Biological systems typically generate complex data that encapsulate the dynamics of interactions among measurables over time. To support the formation of insights into time series data from a biological system, there is a requirement to develop new methods that can analyse and translate such complex data into a form that allows trends, patterns, and predictions to be easily viewed, verified and tested. Here, a suite of novel analytical and matrix-based techniques for dynamical systems modelling is developed that is time-efficient and data-driven. These techniques facilitate a range of scientific analyses through novel matrix-based system identification and parameter estimation methods. The inference techniques are fast, optimised, and do not require a priori information to successfully infer networks of interactions or automatically construct data-consistent models from data. Two distinct principal models (Jacobian and power-law) that are data-consistent may be constructed from a single time series data set. A recast technique has also been developed to reconstruct either one of the principal models from the other, providing support for model interoperability and multiple model integration. The thesis demonstrates the effectiveness of a new theoretical framework developed to incorporate a modelling and visualization pipeline able to deal with a wide range of time-series data sets relating to complex biological systems. The integrated framework is able to infer and depict interaction networks implicit in time series data in a matter of seconds and then display the evolution of the network dynamics in response to network perturbations such as drug treatments. Beyond this, there is a broader contribution to the field of biochemical systems theory (BST), evidenced by establishing methods for transforming a constructed Jacobian model into equivalent power-law models, and vice versa. The effectiveness of these new techniques is demonstrated using artificial time series data samples, simulated pseudo-data of biologically plausible models of real biological systems, and real experimental data derived from biological experiments.
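To make the matrix-based idea concrete, here is a minimal sketch (our illustration, not the thesis's optimised algorithm): assuming near-linear dynamics dx/dt ≈ Jx, the Jacobian J can be recovered from a time series by least squares on finite differences:

```python
import numpy as np

def estimate_jacobian(X, dt):
    """Estimate J in dx/dt ~= J x from a time series X (rows = time points)."""
    dXdt = (X[1:] - X[:-1]) / dt          # forward-difference derivatives
    Xprev = X[:-1]                        # states paired with each derivative
    # Solve Xprev @ J.T ~= dXdt in the least-squares sense
    J_T, *_ = np.linalg.lstsq(Xprev, dXdt, rcond=None)
    return J_T.T

# Toy check: data generated from a known 2x2 linear system
J_true = np.array([[-1.0, 0.5], [0.0, -0.5]])
dt, x = 0.01, np.array([1.0, 1.0])
X = [x]
for _ in range(500):
    x = x + dt * J_true @ x               # explicit Euler integration
    X.append(x)
print(estimate_jacobian(np.array(X), dt))  # close to J_true
```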
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Graham, Jeremy A. "PATTERNS IN ENVIRONMENTAL DRIVERS OF WETLAND FUNCTIONING AND SPECIES COMPOSITION IN A COMPLEX PEATLAND." OpenSIUC, 2012. https://opensiuc.lib.siu.edu/theses/1049.

Повний текст джерела
Анотація:
The boreal peatlands that cover much of western Canada are immense reservoirs of organic carbon and nitrogen, serving as sinks for atmospheric carbon, as well as providing habitat for flora and fauna and supporting nutrient cycling. These ecosystems are generally believed to be nitrogen limited. Due to regional increases in industrial activities associated with the Athabasca Oil Sands Region (AOSR), atmospheric deposition of nitrogen is projected to increase, with unknown effects on peatland functioning. The results of this study provide baseline data for a nitrogen fertilization experiment, with an accurate site description of the entire peatland complex serving as a reference for the experiment. This study also examines patterns in production and nitrogen usage along a wet-to-dry gradient. My main question was whether species assemblages could be sorted into communities and how these were related to environmental gradients. In chapters three and four I asked how production and nitrogen usage and storage varied along a moisture gradient. In chapter two, four communities were identified as being independent, with clear indicator species. Differences in abiotic factors among these communities formed clear gradients across the peatland, influencing the distribution of species arrangements in the peatland complex. Sphagnum angustifolium thrived in all four communities and across the entire range of gradients. This species is a foundational species of bogs and poor fens and was studied in more detail in chapters three and four. In chapter three, I found that primary production of S. angustifolium increased from dry to wet along the moisture gradient. Cranked wires used to measure linear growth became less reliable in wetter habitats, missing over 50% of the growth measured by innate time markers. Capitula increased in biomass throughout the course of the growing season, suggesting that after vertical elongation, S. angustifolium begins to accumulate branches and leaves in the capitula to close the growing season. Chapter four, evaluating nitrogen requirements, found that while primary production of S. angustifolium increased from dry to wet, the tissue quality of the growth decreased along this gradient. Despite the lower tissue quality, wet habitats had higher nitrogen requirements to support growth rates. Inputs of atmospheric deposition fulfilled less than 5% of annual N requirements, and nitrogen-saturated capitula at the beginning of the season were found to be an important source of nitrogen for growth, as capitula nitrogen storage declined over the season. Of the total nitrogen assimilated into annual growth, the percentage lost a year later was similar across the moisture gradient; more nitrogen is stored in the wet habitats strictly due to the higher amounts initially assimilated. The results of this study suggest that in drier peatland habitats there is an insufficient supply of water to deliver nitrogen and to support continuous growth during the growing season. Consequently, in wetter habitats production is limited by nitrogen, while in drier habitats it is limited by climate.
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Ashley, Victoria M. "On the development of knowledge driven optimisation methods : application to complex reactor network synthesis." Thesis, University of Surrey, 2004. http://epubs.surrey.ac.uk/843025/.

Повний текст джерела
Анотація:
Existing methods for reactor network synthesis vary from graphical approaches such as the Attainable Region to superstructure-based optimisation using both deterministic and stochastic methods. Complex reactor network applications, consisting of many components and highly non-linear reaction kinetics, tend to face problems with current methods: dimensionality limitations, initialisation problems, convergence difficulties and excessive computational times are some of the shortcomings. In an effort to overcome these difficulties, this project introduces the concept of Knowledge Driven Optimisation, utilising systems knowledge in order to develop rules that help to focus the optimisation search on high-performance regions. The method uses knowledge derived from kinetic information to gain an understanding of the system and devise a set of rules to guide the optimisation search. Data mining techniques are employed to analyse serial and parallel pathways, relating concentration and temperature variables to regions of high performance. Extracted trends are translated into optimal design rules and applied to a customised Tabu Search. Rule violations identify directions for improvement and aid move selection, guiding the superstructure optimisation search towards well-performing structures and achieving more effective knowledge-based decision making than can be realised by a random stochastic search alone. Results show that optimal solutions obtained for a number of examples agree with published literature while achieving faster convergence and reduced computational times compared to standard Tabu Search. To increase rule performance automatically, dynamic rule updates are implemented to tune the rule limits as the optimal search progresses. A hybrid optimisation approach, combining the stochastic rule-based search with deterministic techniques, is developed to promote efficient fine-tuning of the final structure. Application of the methodology to complex systems is demonstrated through a biocatalytic case study, metabolism by Saccharomyces cerevisiae, where the knowledge-driven rule-based approach significantly outperforms random Tabu Search. Preliminary studies into non-isothermal applications trial the use of temperature profiles. Finally, parallel processing and Grid technology are briefly investigated to assess the potential for achieving results in reduced times.
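A minimal sketch of the rule-guided idea (our illustration; the penalty scheme, neighbourhood and objective below are invented, not taken from the thesis) penalises candidate moves that violate extracted design rules inside a Tabu loop:

```python
import random

def rule_guided_tabu(objective, neighbours, rules, x0, iters=200, tabu_len=10, seed=1):
    """Tabu search where candidates violating design rules are penalised.

    `rules` is a list of predicates; each violation adds a penalty, steering
    the search towards regions the extracted knowledge marks as promising.
    """
    rng = random.Random(seed)
    best = current = x0
    tabu = []
    for _ in range(iters):
        candidates = [c for c in neighbours(current, rng) if c not in tabu]
        if not candidates:
            continue
        # score = objective + fixed penalty per violated rule
        current = min(candidates,
                      key=lambda c: objective(c) + 10.0 * sum(not r(c) for r in rules))
        tabu = (tabu + [current])[-tabu_len:]     # bounded tabu list
        if objective(current) < objective(best):
            best = current
    return best

# Toy use: minimise (x - 3)^2 with a rule preferring x >= 0
obj = lambda x: (x - 3.0) ** 2
nb = lambda x, rng: [x + rng.uniform(-1, 1) for _ in range(5)]
print(rule_guided_tabu(obj, nb, [lambda x: x >= 0], x0=-5.0))
```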
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Rottenberg, Sam. "Modèles, méthodes et outils pour les systèmes répartis multiéchelles." Thesis, Evry, Institut national des télécommunications, 2015. http://www.theses.fr/2015TELE0003/document.

Повний текст джерела
Анотація:
Computer systems are becoming more and more complex. Most of them are distributed over several levels of Information and Communication Technology (ICT) infrastructures. These systems are sometimes referred to as multiscale distributed systems. The word “multiscale” may qualify a wide variety of distributed systems according to the viewpoints in which they are characterized, such as the geographic dispersion of the entities, the nature of the hosting devices, the networks they are deployed on, or the users’ organization. For one entity of a multiscale system, the communication technologies, non-functional properties (in terms of persistence or security) or architectures to be favored may vary depending on the relevant multiscale characterization defined for the system and on the scale associated with the entity. Moreover, ad hoc architectures of such complex systems are costly and non-sustainable. In this doctoral thesis, we propose a multiscale characterization framework, called MuSCa. The framework includes a characterization process based on the concepts of viewpoints, dimensions and scales, which brings to the fore the multiscale characteristics of each studied system. These concepts constitute the core of a dedicated metamodel. The proposed framework allows designers of multiscale distributed systems to share a taxonomy for qualifying each system. The result of a characterization is a model from which the framework produces software artifacts that provide scale-awareness to the system’s entities at runtime.
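As a rough illustration of the viewpoint/dimension/scale concepts named in the abstract (the structure below is our assumption, not MuSCa's actual metamodel), one could sketch them as plain data classes:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Scale:
    name: str            # e.g. "building", "city", "country"
    order: int           # position within its dimension, small to large

@dataclass
class Dimension:
    name: str            # e.g. "distance", within the geography viewpoint
    scales: List[Scale] = field(default_factory=list)

@dataclass
class Viewpoint:
    name: str            # e.g. "geography", "device", "network", "user"
    dimensions: List[Dimension] = field(default_factory=list)

# A fragment of a characterisation, as a designer might express it
geo = Viewpoint("geography", [Dimension("distance", [
    Scale("building", 0), Scale("city", 1), Scale("country", 2)])])
print([s.name for s in geo.dimensions[0].scales])
```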
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Deluca, Silberberg Anna. "Complexity in Slowly-Driven Interaction-Dominated Threshold Systems: the Case of Rainfall." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/131289.

Повний текст джерела
Анотація:
Many geophysical phenomena present emergent behaviour, manifested as large-scale statistical regularities such as power-law distributions of the coarse-grained observables of the corresponding systems. In this thesis we investigate the appearance of power-law distributions in geophysical phenomena. We develop a statistical technique for making accurate estimations of the parameters of power-law distributions. The method, which gives an objective criterion to decide the power-law domain of the distribution, is applied to investigate the half-lives of radioactive elements, the seismic moment of earthquakes, the energy of tropical cyclones, the area burnt in forest fires and the waiting time between earthquakes. In addition, the method is applied to investigate the reproducibility of the observation of scale-free rain-event avalanche distributions using data across diverse climates, and to look for signs of universality in the associated fitted exponents. Scaling techniques are also applied in order to observe the collapse of the distributions. This study contributes to a recent array of statistical measures supporting the hypothesis that atmospheric convection and precipitation may be a real-world example of Self-Organised Criticality (SOC, a mechanism able to reproduce the observed power laws). Another expectation of the SOC paradigm is universality, but the fitting method alone is not enough to check this hypothesis. Therefore, a method based on a permutation test is developed in order to determine whether the estimated exponents are statistically compatible. Our permutation tests give clear results: despite the fact that the differences between the exponents are rather small, the universality hypothesis is rejected. However, the rejection of the universality hypothesis in these tests does not mean that one has to rule out the existence of a universal mechanism for atmospheric convection, as uncontrolled systematic errors can be present in the collected data. Finally, we study the consequences of the previous results for the prediction of atmospheric phenomena by analysing the effect of applying detection thresholds to SOC models and rainfall time series. The predictability of extreme events and extreme intensities is studied by means of a decision variable sensitive to the tendency of events to cluster or to repel each other, and the quality of the predictions is evaluated by the receiver operating characteristic method. On the event scale (large scale), times between rainfall events renormalise to a trivial point process, so predictability decreases as the threshold increases; the same behaviour is observed for SOC-model time series once an intensity detection threshold is applied, and the opposite behaviour when it is not. On the intensity scale (short scale), the prediction is not affected by the threshold, as the process (including its corresponding critical exponents) remains mostly unchanged until very high thresholds are reached.
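For readers unfamiliar with the techniques named above, here is a minimal sketch of a continuous power-law maximum-likelihood fit and a label-permutation test for exponent compatibility. It assumes a fixed xmin, unlike the thesis's objective selection of the power-law domain, and is an illustration rather than the method developed there:

```python
import numpy as np

def powerlaw_mle(x, xmin):
    """MLE of alpha for a continuous power law p(x) ~ x^-alpha, x >= xmin."""
    tail = x[x >= xmin]
    return 1.0 + tail.size / np.sum(np.log(tail / xmin))

def permutation_test(a, b, xmin, n_perm=2000, seed=0):
    """P-value for H0: both samples share one exponent, by permuting labels."""
    rng = np.random.default_rng(seed)
    observed = abs(powerlaw_mle(a, xmin) - powerlaw_mle(b, xmin))
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                 # reassign sample labels at random
        diff = abs(powerlaw_mle(pooled[:a.size], xmin)
                   - powerlaw_mle(pooled[a.size:], xmin))
        count += diff >= observed
    return count / n_perm

rng = np.random.default_rng(1)
a = rng.pareto(1.5, 5000) + 1.0            # true exponent alpha = 2.5
b = rng.pareto(1.5, 5000) + 1.0
print(powerlaw_mle(a, 1.0), permutation_test(a, b, 1.0, n_perm=200))
```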
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Quinn, Colin. "A value approach to complex system design utilising a non-rigid solution space." Thesis, Queen's University Belfast, 2017. https://pure.qub.ac.uk/portal/en/theses/a-value-approach-to-complex-system-design-utilising-a-nonrigid-solution-space(f6ca632c-4ab8-4a25-a314-b49f50a318f6).html.

Повний текст джерела
Анотація:
The research presented in this thesis develops an improved methodology for designing complex systems. While traditional methods have been able to create complex systems, their success is usually overshadowed by long delays and expensive overruns. The method developed within this research is known as Value Seeking System Design (VSSD) and builds upon the foundations of the Systems Engineering (SE) and Value Driven Design (VDD) approaches. The creation and implementation of the new design environment is presented, including a method for creating the value model for any complex system. Key conclusions from this work include a need to redefine the process by which stakeholder needs are currently defined and captured, as well as a need to create an improved value model. Defining all stakeholders’ needs as requirements constrains the designer to a rigid solution space, which may not include the “best” solution for the stakeholder. Similarly, not including the social aspects within a value model causes the designer to make poor value trades. To overcome these problems, the VSSD technique incorporates desirements and their associated design desirability functions within the design process to create a non-rigid solution space, while the value model has been redeveloped to easily incorporate the performance, economic and social aspects of a design, allowing a more accurate and balanced value trade-off analysis. Benchmarking the VSSD approach against the current state-of-the-art methods (SE and VDD) highlighted the advantages of adopting a value approach to complex system design compared to traditional requirement-based techniques. Additionally, while all three approaches were capable of designing complex systems, the VSSD approach was demonstrated to be an improved design methodology, as it possessed the benefits inherent in both the SE and VDD approaches without suffering from their limitations.
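A toy sketch of the desirability-function idea (attribute names, ranges and weights below are invented for illustration) shows how a shortfall lowers value instead of rejecting a design outright, in contrast to a hard requirement:

```python
def desirability(x, lo, hi):
    """Map an attribute onto [0, 1]: 0 below `lo` (undesirable),
    1 above `hi` (fully desirable), linear ramp in between."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def design_value(design, desirements, weights):
    """Weighted aggregate of per-attribute desirabilities; unlike a hard
    requirement, a shortfall lowers value rather than discarding the design."""
    score = sum(weights[attr] * desirability(design[attr], lo, hi)
                for attr, (lo, hi) in desirements.items())
    return score / sum(weights.values())

# Hypothetical trade between range and unit cost (cost negated so more is better)
desirements = {"range_km": (3000, 6000), "neg_cost_m": (-80, -40)}
weights = {"range_km": 0.7, "neg_cost_m": 0.3}
print(design_value({"range_km": 5000, "neg_cost_m": -60}, desirements, weights))
```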
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Sušak, Hana 1985. "The Hunt of cancer genes : statistical inference of cancer risk and driver genes using next generation sequencing data." Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/668447.

Повний текст джерела
Анотація:
International cancer sequencing projects have generated comprehensive catalogs of alterations found in tumor genomes, as well as germline variant data for thousands of individuals. In this thesis, we describe two statistical methods exploiting these rich datasets in order to better understand tumor initiation, tumor progression and the contribution of genetic variants to the lifetime risk of developing cancer. The first method, a Bayesian inference model named cDriver, utilizes multiple signatures of positive selection acting on tumor genomes to predict cancer driver genes. Cancer cell fraction is introduced as a novel signature of positive selection on the cellular level, based on the hypothesis that cells obtaining additional advantageous driver mutations will undergo rapid proliferation and clonal expansion. We benchmarked cDriver against state-of-the-art driver prediction methods on three cancer datasets, demonstrating equal or better performance than the best competing tool. The second method, termed REWAS, is a comprehensive framework for rare-variant association studies (RVAS) aiming to improve the identification of cancer predisposition genes. Nonetheless, REWAS is readily applicable to any case-control study of complex diseases. Besides integrating well-established RVAS methods, we developed a novel Bayesian inference RVAS method (BATI) based on Integrated Nested Laplace Approximation (INLA). We demonstrate that BATI outperforms other methods on realistic simulated datasets, especially when meaningful biological context (e.g. functional impact of variants) is available or when risk variants in sum explain low phenotypic variance. Both methods developed during this thesis have the potential to facilitate personalized medicine and oncology through the identification of novel therapeutic targets and of genetic predisposition, enabling prevention and early diagnosis of cancer.
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Ruppert, Jan Gustav. "Functional analysis of heterochromatin protein 1-driven localisation and activity of the chromosomal passenger complex." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33158.

Повний текст джерела
Анотація:
The ultimate goal of mitosis is the equal distribution of chromosomes between the two daughter cells. One of the key players that ensures faithful chromosome segregation is the chromosomal passenger complex (CPC). CPC localisation to mitotic centromeres is complex, involving interactions with Shugoshin and binding to phosphorylated histone H3T3. It was recently reported that Heterochromatin Protein 1 (HP1) has a positive impact on CPC function during mitosis. The interaction between HP1 and the CPC appears to be perturbed in cancer-derived cell lines, resulting in decreased HP1 levels at mitotic centromeres, and may be a potential cause for increased chromosome mis-segregation rates. In this study, I tethered HP1α to centromeres via the DNA-binding domain CENP-B. However, instead of improving the rate of chromosome mis-segregation, HP1α tethering resulted in activity of the spindle assembly checkpoint and destabilisation of kinetochore-microtubule attachments, most likely caused by the robust recruitment of the CPC. Tethered HP1α even traps the CPC at centromeres during mitotic exit, resulting in a catalytically active CPC throughout interphase. However, it was not clear whether endogenous HP1 contributes to CPC localisation and function prior to mitosis. Here I also describe a substantial interaction between endogenous HP1 and the CPC during the G2 stage of the cell cycle. The two isoforms HP1α and HP1γ contribute to the clustering of the CPC into active foci in G2 cells, a process that is independent of CDK1 kinase activity. Furthermore, the H3S10ph focus formation in the G2 phase appears to be independent of H3T3ph and H2AT120ph, the two histone marks that determine CPC localisation in early mitosis. Together, my results indicate that HP1 contributes to CPC concentration and activation at pericentromeric heterochromatin in G2. This novel mode of CPC localisation occurs before the Aurora B-driven methyl/phos switch releases HP1 from chromatin, which possibly enables the H3T3ph- and H2AT120ph-driven localisation of the CPC during mitosis.
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Patel, Mayank Raman. "HARDWARE COMPILER DRIVEN HEURISTIC SEARCH FOR DIGITAL IC TEST SEQUENCES." Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/275246.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Silva, Filipe Ribeiro Ferreira da. "Electron driven reactions in complexes embedded in superfluid helium droplets." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2009. http://hdl.handle.net/10362/5340.

Повний текст джерела
Анотація:
A thesis submitted to the University of Innsbruck for the doctoral degree in Natural Sciences (Physics) and to the New University of Lisbon for the doctoral degree in Physics (Atomic and Molecular Physics)
The research work performed in the course of this thesis at the Nano-Bio Physics Group of the Institute of Ion Physics and Applied Physics, University of Innsbruck, deals exclusively with electron-driven reactions in complexes embedded in helium nanodroplets. Helium nanodroplets provide a special and exotic environment that is not attainable with other techniques. The cold environment of the helium nanodroplets (0.38 K) makes them a perfect tool to study complex systems in their ro-vibrational ground state. Dopants are added to the helium nanodroplets in a pick-up cell, allowing accurate control of the growth of cluster sizes in helium droplets. The research activities described in this thesis cover the interaction of low- and intermediate-energy (0–100 eV) electrons with a wide range of simple and complex molecules in a very cold environment. Electron impact ionisation and free electron attachment to different systems were studied. Different halogenated molecules were used to study the size of solvated cations and anions. Clusters of the rare gas argon were also investigated and compared with argon cluster ions formed upon electron impact of pure neutral argon clusters. Several biomolecules and molecules of biological interest have been studied, including amino acids such as glycine, L-alanine and L-serine embedded in helium nanodroplets. Several features were assigned to helium solvation and fragmentation. In the case of L-serine, a magic octamer cluster S8H+ was observed and identified. Free electron attachment experiments on L-serine show very rich chemistry, observed here for the first time for amino acids embedded in helium nanodroplets. Positively and negatively charged ions from He nanodroplets doped with acetic acid were also investigated. Chemistry triggered by low-energy electrons is discussed and compared with previous studies, especially those on single, gas-phase molecules. Preliminary studies on L-valine show strong indications of peptide bond formation at cold temperatures, triggered by electrons of low energy, close to 0 eV.
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Costa, Rui Américo Ferreira da. "Novel critical phenomena in optimization driven processes on networks." Doctoral thesis, Universidade de Aveiro, 2013. http://hdl.handle.net/10773/12297.

Повний текст джерела
Анотація:
Doctorate in Physics
The work presented in this Ph.D. thesis was developed in the context of complex network theory, from a statistical physics standpoint. We examine two distinct problems in this research field, taking a special interest in their respective critical properties. In both cases, the emergence of criticality is driven by a local optimization dynamics. Firstly, we study a recently introduced class of percolation problems that attracted a significant amount of attention from the scientific community and was quickly followed up by an abundance of other works. Percolation transitions were believed to be continuous until, recently, an 'explosive' percolation problem was reported to undergo a discontinuous transition in [93]. The system's evolution is driven by a metropolis-like algorithm, apparently producing a discontinuous jump in the giant component's size at the percolation threshold. This finding was subsequently supported by a number of other experimental studies [96, 97, 98, 99, 100, 101]. However, in [1] we have proved that the explosive percolation transition is actually continuous. The discontinuity observed in the evolution of the giant component's relative size is explained by the unusual smallness of the corresponding critical exponent, combined with the finiteness of the systems considered in experiments. Therefore, the size of the jump vanishes as the system's size goes to infinity. Additionally, we provide the complete theoretical description of the critical properties for a generalized version of the explosive percolation model [2], as well as a method [3] for a precise calculation of percolation's critical properties from numerical data (useful when exact results are not available). Secondly, we study a network flow optimization model, where the dynamics consists of consecutive mergings and splittings of currents flowing in the network. The current conservation constraint does not impose any particular criterion for the split of current among the channels outgoing from a node, allowing us to introduce an asymmetrical rule, observed in several real systems. We solved analytically the dynamic equations describing this model in the high- and low-current regimes. The solutions found are compared with numerical results for the two regimes, showing an excellent agreement. Surprisingly, in the low-current regime this model exhibits some features usually associated with continuous phase transitions.
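For context, the 'explosive' percolation processes discussed above are typically simulated with an Achlioptas-style product rule over a union-find structure; a minimal sketch (a generic illustration, not the analysis of [1]) follows:

```python
import random

def achlioptas_product_rule(n=100000, seed=0):
    """Track the largest cluster while adding edges by the product rule:
    of two candidate edges, keep the one whose joined-component size
    product is smaller (this delays the percolation transition)."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    largest = 1
    for step in range(n):
        (a, b), (c, d) = [(rng.randrange(n), rng.randrange(n)) for _ in range(2)]
        ra, rb, rc, rd = find(a), find(b), find(c), find(d)
        # choose the candidate edge with the smaller component-size product
        if size[ra] * size[rb] > size[rc] * size[rd]:
            ra, rb = rc, rd
        if ra != rb:                         # merge the two components
            parent[rb] = ra
            size[ra] += size[rb]
            largest = max(largest, size[ra])
        if step % 20000 == 0:
            print(step / n, largest / n)     # edge density vs giant-component share

achlioptas_product_rule()
```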
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Galvagno, Mariano. "Modelling of driven free surface liquid films." Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/16574.

Повний текст джерела
Анотація:
In several types of coating processes a solid substrate is removed at a controlled velocity U from a liquid bath. The shape of the liquid meniscus and the thickness of the coating layer depend on U. These dependencies have to be understood in detail for non-volatile liquids to control the deposition of such a liquid and to lay the basis for the control in more complicated cases (volatile pure liquid, solution with volatile solvent). We study the case of non-volatile liquids employing a precursor film model that describes partial wettability with a Derjaguin (or disjoining) pressure. In particular, we focus on the relation of the deposition of (i) an ultrathin precursor film at small velocities and (ii) a macroscopic film of thickness h ∝ U^(2/3) (corresponding to the classical Landau Levich film). Depending on the plate inclination, four regimes are found for the change from case (i) to (ii). The different regimes and the transitions between them are analysed employing numerical continuation of steady states and saddle-node bifurcations and simulations in time. We discuss the relation of our results to results obtained with a slip model. In connection with evaporative processes, we will study the pinning of a droplet due to a sharp corner. The approach employs an evolution equation for the height profile of an evaporating thin film (small contact angle droplet) on a substrate with a rounded edge, and enables one to predict the dependence of the apparent contact angle on the position of the contact line. The calculations confirm experimental observations, namely that there exists a dynamically produced critical angle for depinning that increases with the evaporation rate. This suggests that one may introduce a simple modification of the Gibbs criterion for pinning that accounts for the non-equilibrium effect of evaporation.
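For orientation, precursor-film models of this kind are usually written as a lubrication-theory evolution equation for the film height h(x, t); a generic form (our sketch, with one common two-term choice of disjoining pressure, not necessarily the exact equation of the thesis) is:

```latex
% Mass balance for the film height h(x,t) on a plate withdrawn at speed U
% (lubrication approximation; coefficients and sign conventions vary):
\partial_t h + \partial_x Q = 0,
\qquad
Q = U\,h \;-\; \frac{h^{3}}{3\eta}\,\partial_x p,
\qquad
p = -\gamma\,\partial_{xx} h \;-\; \Pi(h),
% one common two-term form of the disjoining pressure for a precursor film:
\qquad
\Pi(h) = \frac{A}{h^{3}} - \frac{B}{h^{6}}.
```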
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Jones, Greg Gordon. "The SHOC2 phosphatase complex as a therapeutic target for ERK pathway inhibition in RAS-driven tumors." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10060157/.

Повний текст джерела
Анотація:
Targeted inhibition of the ERK-MAPK pathway, upregulated in the majority of human cancers, has been hindered in the clinic by drug resistance and on-target toxicity. The MRAS-SHOC2-PP1 complex plays a key but underexplored role in RAF-ERK pathway activation by dephosphorylating a critical inhibitory site on RAF kinases. In this body of work we present a preferential requirement for the SHOC2 phosphatase complex, specifically for Receptor Tyrosine Kinase (RTK)-stimulated and anchorage-independent (tumorigenic) growth-stimulated ERK activation. We highlight that this context-dependent signalling bias has functional consequences in RAS-mutant cells, by specifically inhibiting anchorage-independent, but not 2D-adhered, cell growth. Strikingly, we show in vivo that SHOC2 deletion suppresses tumour initiation in KRAS-driven lung cancer models and significantly extends overall survival. Additionally, SHOC2 inhibition selectively sensitizes KRAS- and EGFR-mutant non-small cell lung carcinoma (NSCLC) cells to MEK inhibitors. Mechanistically, we show this is because SHOC2 is required for feedback-induced RAF dimerization, such that combined MEK inhibition and SHOC2 suppression leads to more potent and sustained ERK-pathway repression, driving a BIM-dependent apoptosis. Crucially, systemic SHOC2 ablation in adult mice is relatively well tolerated compared with other, core ERK-pathway signalling nodes. These results present a rationale for the generation of SHOC2-targeted therapies, both as a monotherapy and to widen the therapeutic index of MEK inhibitors.
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Peddersen, Jorgen Computer Science &amp Engineering Faculty of Engineering UNSW. "Run-time energy-driven optimisation of embedded systems: a complete solution." Publisher:University of New South Wales. Computer Science & Engineering, 2008. http://handle.unsw.edu.au/1959.4/43240.

Повний текст джерела
Анотація:
Consumption of power and conservation of energy have become two of the biggest design challenges in the construction of embedded systems. Energy is a resource in limited supply, but demands are increasing. Hence, much research is being performed to reduce power and energy usage or to optimise performance under energy constraints. There are very few solutions that cater for applications where the data input is not easily testable before run-time. These applications require an optimisation procedure that knows the power consumption of the system and is able to dynamically optimise operation to maximise performance while meeting energy constraints. This thesis provides a complete solution to the problem of run-time energy-driven optimisation of application performance: the complete system, from a processor that is able to provide feedback on its power consumption in parallel to execution, to applications that exploit the power feedback to perform dynamic optimisation. A processor that estimates its own power consumption is designed by the addition of small dedicated counters that tally occurrences of power-consuming events which are macro-modelled. The methodology is demonstrated on a standard processor, achieving an average power estimation error of less than 2% while increasing the area of the processor by only 5%. This enables energy-driven optimisation via application adaptation. Modification techniques and low-overhead algorithms are provided to demonstrate how energy feedback can be effectively used to maximise the performance of algorithms within given constraints. Applications' quality is maximised under given energy constraints using less than 0.02% of the execution time. Finally, the dissertation discusses the systems used to demonstrate the methodologies and techniques created throughout the research project. These implementations of the energy-driven optimisation system verify the soundness of the methods and the applicability of the approaches used. This is the first time a complete solution for energy-driven optimisation has been shown, from the creation of the processor to the analysis of software utilising the approach. The methodologies and techniques can be applied to a variety of applications, in fields such as multimedia and networking, where this was never possible before.
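The counter-based estimation idea can be sketched as a linear macro model: per-event energy coefficients multiplied by event tallies. The event types and coefficients below are invented for illustration, not taken from the thesis:

```python
# Hypothetical per-event energy coefficients (nJ), e.g. fitted by regression
# against measured power traces -- not the thesis's actual macro models.
ENERGY_NJ = {"alu_op": 0.11, "cache_miss": 1.9, "branch_taken": 0.25, "mem_write": 0.8}

def estimate_power_mw(counters, window_ms):
    """Convert event tallies from dedicated hardware counters into an
    average power estimate over one sampling window."""
    energy_nj = sum(ENERGY_NJ[event] * count for event, count in counters.items())
    return energy_nj / (window_ms * 1e3)   # nJ per microsecond == mW

sample = {"alu_op": 420000, "cache_miss": 3100, "branch_taken": 95000, "mem_write": 18000}
print(estimate_power_mw(sample, window_ms=1.0))   # ~90 mW for this toy sample
```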
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Johan, Fredrik Raak. "Data-driven analysis of wind power and power system dynamics via Koopman mode decomposition." Kyoto University, 2017. http://hdl.handle.net/2433/227628.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Kaya, Muammer Ozge. "A Complex Event Processing Framework Implementation Using Heterogeneous Devices In Smart Environments." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614152/index.pdf.

Повний текст джерела
Анотація:
Significant developments in microprocessor and sensor technology make wirelessly connected small computing devices widely available; hence they are being used frequently to collect data from the environment. In this study, we construct a framework in order to extract high-level information in an environment containing such pervasive computing devices. In the framework, raw data originating from wireless sensors are collected using an event-driven system and converted to simple events for transmission over a network to a central processing unit. We also utilize a complex event processing approach incorporating temporal constraints, aggregation and sequencing of events in order to define complex events for extracting high-level information from the collected simple events. We develop a prototype using easily accessible hardware and set it up in a classroom within our university. The results demonstrate the feasibility of our approach, its ease of deployment, and the successful application of the complex event processing framework.
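A toy stand-in for such a complex-event rule, combining a sliding time window with aggregation (the event format and threshold are invented for illustration), might look like this:

```python
from collections import deque

class HighTempRule:
    """Fire a complex event when the mean of temperature readings within a
    sliding time window exceeds a threshold (toy stand-in for a CEP rule)."""
    def __init__(self, window_s=60.0, threshold=30.0):
        self.window_s, self.threshold = window_s, threshold
        self.events = deque()                # (timestamp, value) pairs

    def on_event(self, ts, value):
        self.events.append((ts, value))
        while self.events and ts - self.events[0][0] > self.window_s:
            self.events.popleft()            # expire events outside the window
        mean = sum(v for _, v in self.events) / len(self.events)
        if mean > self.threshold:
            return ("HIGH_TEMPERATURE", ts, round(mean, 2))   # complex event

rule = HighTempRule()
for ts, value in [(0, 28.0), (20, 31.5), (45, 33.0), (120, 29.0)]:
    print(rule.on_event(ts, value))          # fires only at ts = 45
```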
Стилі APA, Harvard, Vancouver, ISO та ін.
47

DINIZ, Herbertt Barros Mangueira. "Linguagem específica de domínio para abstração de solução de processamento de eventos complexos." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18030.

Повний текст джерела
Анотація:
Nowadays, a growing scarcity of resources and competition for physical space is increasingly evident, as a result of the ever larger concentration of population in big cities. In this context, the need arises for solutions aligned with the "Smart Cities" initiative. Those solutions seek to centralize monitoring and control in order to support decision making. However, these ICT sources form complex structures and generate a huge volume of data, which presents great challenges and opportunities. One of the main technological tools used in this context is Complex Event Processing (CEP), which can be considered a good solution for dealing with the increasing availability of large volumes of data in real time. CEP engines capture events in a simplified way, using expression languages to define and execute processing rules. However, despite the proven efficiency of these tools, the fact that rules are expressed at a low level restricts their use to specialist users, hindering the creation of solutions. To reduce the complexity of CEP tools, some solutions have adopted a Model-Driven Development (MDD) approach in order to produce an abstraction layer that allows rules to be created without requiring expertise in CEP languages. However, many of these solutions end up being harder to handle than the conventional low-level language approach. This work aims to build a Graphic User Interface (GUI) for the creation of CEP rules, using MDD, in order to make development more intuitive, through a model adapted to the needs of non-specialist users.
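The MDD idea of generating low-level rules from a higher-level model can be sketched as follows; the model schema and the Esper-like EPL output are illustrative assumptions, not the dissertation's actual generator:

```python
def model_to_epl(model):
    """Compile a GUI-built rule model into an Esper-style EPL string.
    Both the model schema and the generated syntax are illustrative only."""
    template = ("select {aggregate}({field}) as result "
                "from {event}.win:time({window} sec) "
                "having {aggregate}({field}) {operator} {value}")
    return template.format(**model)

# A rule as a visual editor might serialise it, shielding the user from EPL
model = {"event": "TemperatureReading", "field": "celsius", "aggregate": "avg",
         "window": 60, "operator": ">", "value": 30}
print(model_to_epl(model))
# -> select avg(celsius) as result from TemperatureReading.win:time(60 sec)
#    having avg(celsius) > 30
```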
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Deicke, Markus. "Virtuelle Absicherung von Steuergeräte-Software mit hardwareabhängigen Komponenten." Universitätsverlag Chemnitz, 2016. https://monarch.qucosa.de/id/qucosa%3A20810.

Повний текст джерела
Анотація:
The constantly increasing number of functions in modern automobiles and the growing degree of cross-linking between electronic control units (ECUs) require new methods to master the complexity of the validation and verification process. Virtual validation and verification enables the integration of the software on a PC system, independent of the target hardware, to guarantee the required software quality in early development stages. Furthermore, software reuse in future microcontrollers can be verified. All this is enabled by the AUTOSAR standard, which provides consistent interface descriptions to allow the abstraction of hardware and software. However, the standard contains hardware-dependent components, called complex device drivers (CDDs). CDDs cannot be directly integrated into a platform for virtual verification, because they require specific hardware that is not generally available on such a platform. Nevertheless, CDDs are an essential part of the ECU software and therefore need to be considered in a holistic approach to validation and verification. This thesis describes seven different concepts for including CDDs in the virtual verification process. A method for choosing the optimal solution for every use case of CDDs in ECU software is developed, based on an evaluation of the practical suitability of all concepts. As a result of this method, the two concepts suited to the most frequent use cases are detailed and developed as prototypes in this thesis. The first concept enables the full simulation of a CDD. This is necessary to allow the integration of the functional software itself without the driver; this way, all interfaces can be tested even if the CDD is not yet available. Complete automation of the generation of the simulation makes the process very efficient. With the second concept, a CDD can be entirely integrated into a platform for virtual verification, using a hardware abstraction layer to connect the hardware interfaces to the available hardware of the platform. This way, the driver is able to control real hardware components and can be tested completely. A flexible configuration of the abstraction layer allows the concept to be applied to a wide variety of CDDs. In this thesis, both concepts are tested and evaluated using genuine projects from series development.
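As an illustration of the second concept, the following C sketch (not from the thesis; the names Hal and Cdd_Init and the register addresses are hypothetical) shows how routing a driver's register accesses through a thin abstraction layer lets the same CDD code run against a simulated back end on the PC platform or against real registers on the target:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical hardware abstraction layer (HAL): the CDD accesses
     * device registers only through these function pointers, so the
     * back end can be swapped without changing the driver code. */
    typedef struct {
        uint32_t (*read_reg)(uint32_t addr);
        void     (*write_reg)(uint32_t addr, uint32_t value);
    } Hal;

    /* Simulated back end: an array stands in for the device registers. */
    static uint32_t sim_regs[256];
    static uint32_t sim_read(uint32_t addr) { return sim_regs[addr % 256]; }
    static void sim_write(uint32_t addr, uint32_t value) { sim_regs[addr % 256] = value; }

    /* The driver itself is back-end-agnostic: on the target, the Hal
     * would instead be initialized with functions that touch real registers. */
    static void Cdd_Init(const Hal *hal) {
        hal->write_reg(0x10, 0x1);   /* enable the device (illustrative) */
        printf("CDD status register: 0x%x\n", (unsigned)hal->read_reg(0x10));
    }

    int main(void) {
        const Hal sim_hal = { sim_read, sim_write };  /* PC-platform wiring */
        Cdd_Init(&sim_hal);
        return 0;
    }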
APA, Harvard, Vancouver, ISO, and other styles
49

Sousa, Vasco Nuno da Silva de. "Model driven development implementation of a control systems user interfaces specification tool." Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/1961.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
50

Su, Yang Ph D. Massachusetts Institute of Technology. "Disassembly of electron transport chain complexes drives macrophage TLR responses by reprogramming metabolism and translation." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127139.

Full text of the source
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, May 2020
Cataloged from the official PDF of the thesis.
Includes bibliographical references.
The metabolic switch from oxidative phosphorylation (OxPhos) to glycolysis is a key feature of the inflammatory response in macrophages, but how this switch occurs in response to inflammatory signals, and how precisely it contributes to macrophage function, remain obscure. Here we show that stimulation of macrophages through Toll-like receptors (TLR) disrupts the assembly of mitochondrial electron transport chain (ETC) complexes I-V, leading to the metabolic switch by inhibiting OxPhos and activating HIF-1α-dependent glycolysis. Disassembly of ETC complexes influences the global metabolic status of macrophages not only by inducing glycolysis but also, to a large extent, by reprogramming cellular translational capacity via mTORC1 and ATF4, leading to an enhanced global translation rate, cell growth, and production of inflammatory cytokines. Inhibition of OxPhos via myeloid-specific knockout of OPA1, which stimulates ETC complex assembly, exacerbates sepsis in mice, while inhibition of mTORC1 reverses this effect. These findings reveal that disassembly of ETC complexes underlies the macrophage metabolic switch and inflammatory responses, and may represent a conserved pathway for reprogramming cellular anabolism and function.
by Yang Su.
APA, Harvard, Vancouver, ISO, and other styles