Dissertations / Theses on the topic 'Chemical sensitivity analysis'

To see the other types of publications on this topic, follow the link: Chemical sensitivity analysis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 26 dissertations / theses for your research on the topic 'Chemical sensitivity analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Khan, Kamil Ahmad. "Sensitivity analysis for nonsmooth dynamic systems." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98156.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 369-377).
Nonsmoothness in dynamic process models can hinder conventional methods for simulation, sensitivity analysis, and optimization, and can be introduced, for example, by transitions in flow regime or thermodynamic phase, or through discrete changes in the operating mode of a process. While dedicated numerical methods exist for nonsmooth problems, these methods require generalized derivative information that can be difficult to furnish. This thesis presents some of the first automatable methods for computing these generalized derivatives. Firstly, Nesterov's lexicographic derivatives are shown to be elements of the plenary hull of Clarke's generalized Jacobian whenever they exist. Lexicographic derivatives thus provide useful local sensitivity information for use in numerical methods for nonsmooth problems. A vector forward mode of automatic differentiation is developed and implemented to evaluate lexicographic derivatives for finite compositions of simple lexicographically smooth functions, including the standard arithmetic operations, trigonometric functions, exp / log, piecewise differentiable functions such as the absolute-value function, and other nonsmooth functions such as the Euclidean norm. This method is accurate, automatable, and computationally inexpensive. Next, given a parametric ordinary differential equation (ODE) with a lexicographically smooth right-hand side function, parametric lexicographic derivatives of a solution trajectory are described in terms of the unique solution of a certain auxiliary ODE. A numerical method is developed and implemented to solve this auxiliary ODE, when the right-hand side function for the original ODE is a composition of absolute-value functions and analytic functions. Computationally tractable sufficient conditions are also presented for differentiability of the original ODE solution with respect to system parameters. 
Sufficient conditions are developed under which local inverse and implicit functions are lexicographically smooth. These conditions are combined with the results above to describe parametric lexicographic derivatives for certain hybrid discrete/continuous systems, including some systems whose discrete mode trajectories change when parameters are perturbed. Lastly, to eliminate a particular source of nonsmoothness, a variant of McCormick's convex relaxation scheme is developed and implemented for use in global optimization methods. This variant produces twice-continuously differentiable convex underestimators for composite functions, while retaining the advantageous computational properties of McCormick's original scheme. Gradients are readily computed for these underestimators using automatic differentiation.
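The vector forward mode described in this abstract can be illustrated with a deliberately tiny sketch. This is not the thesis's implementation: the `LD` class and `ld_abs` function are hypothetical names, and only the lexicographic rule for the absolute value is shown, where the branch of |.| is chosen by the first nonzero entry of the value followed by the ordered directional derivatives.

```python
# Toy vector forward mode for lexicographic directional derivatives
# (hypothetical `LD` API, not the thesis code). Each number carries a
# value and an ordered list of directional derivatives.

class LD:
    def __init__(self, val, dots):
        self.val = val          # function value
        self.dots = list(dots)  # lexicographic directional derivatives

    def __add__(self, other):
        return LD(self.val + other.val,
                  [a + b for a, b in zip(self.dots, other.dots)])

    def __sub__(self, other):
        return LD(self.val - other.val,
                  [a - b for a, b in zip(self.dots, other.dots)])

def ld_abs(u):
    # The branch of |.| is decided lexicographically: the first nonzero
    # entry of (value, dot_1, dot_2, ...) fixes the sign.
    s = 0.0
    for entry in [u.val] + u.dots:
        if entry != 0.0:
            s = 1.0 if entry > 0 else -1.0
            break
    return LD(abs(u.val), [s * d for d in u.dots])

# f(x, y) = |x - y| at the kink x = y = 1, seeded with identity directions.
x = LD(1.0, [1.0, 0.0])
y = LD(1.0, [0.0, 1.0])
f = ld_abs(x - y)   # value 0.0, lexicographic derivative [1.0, -1.0]
```

At the kink the returned derivative [1.0, -1.0] is one element of the Clarke generalized gradient's plenary hull, consistent with the abstract's claim about lexicographic derivatives.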
by Kamil Ahmad Khan.
Ph. D.
2

Guinand, Ernique A. (Ernique Alberto) 1970. "Optimization and network sensitivity analysis for process retrofitting." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8744.

Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2001.
"February 2001."
Includes bibliographical references.
Retrofitting is the redesign of an operating chemical plant to find new configurations and optimal operating conditions. In the chemical industry, 60% of new capital investments in plants and equipment are retrofitting projects, while only 10% goes to building new plants. Investment in retrofitting amounted to $26 billion in 2000. Despite the importance of retrofitting, there are few methodologies for finding improved economic and environmental performance for continuous processes. This work proposes a systematic framework for understanding the retrofitting of continuous chemical processes and develops a new methodology to support decision making in solving this problem. Successful retrofitting solutions derive from a balance of operational experience in the plant and the rigor of mathematical analysis. This balance is accomplished by proposing tools and algorithms that, in the problem formulation, the analysis of the flowsheet, the synthesis of retrofitting options, and the final decision, allow the decision maker to handle the complexity of the problem and focus on the truly critical aspects of the flowsheet. The proposed methodology structures the problem by defining a broad range of retrofitting objectives and alternatives. The initial step is the formulation of retrofitting as an optimization problem. This includes defining retrofitting goals and translating them into objective functions. A parameter optimization of the base case design determines the incentives and constraints for retrofitting. The analysis continues through a network optimization analogy. The representation of the flowsheet as a multicommodity network allows the use of a graph-based algorithm to determine the cycles in the process and apply flow decomposition by techniques developed in this study. Flow decomposition determines the paths and cycles by which commodities (chemicals) flow through the network.
The focus on chemicals and their paths rather than unit operations avoids the distinction of process subsystems, providing an integrated view of the flowsheet. The objective function is evaluated in terms of path and cycle flows. Using graphical and mathematical programming (sensitivity analysis) approaches, the synthesis stage identifies retrofitting opportunities that increase the favorable and limit the unfavorable paths and cycles. Once a set of appropriate retrofitting alternatives is identified, the decision stage proceeds through a systematic construction of the superstructure and the corresponding MINLP model. The procedure takes into account the implicit logic of the retrofit alternatives to reduce the space of decision variables. The methodology is completed with a framework to implement the outer approximation algorithm, taking into account the characteristics of the retrofitting problem. Case studies illustrate the benefits of the different stages of the proposed retrofitting methodology: efficient solution algorithms, systematic ways to analyze and generate alternative plant configurations, and ease in finding optimal designs and investment decisions. The new methodology is compatible with existing flowsheet simulation tools and optimization packages and can easily be applied to a wide range of practical problems.
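The flow-decomposition step this abstract describes can be sketched generically. The function below is a standard path/cycle decomposition of a positive flow, assuming flow conservation; the names and the toy network are illustrative, not the thesis's algorithm.

```python
def decompose_flow(flow, source, sink):
    """Split a conservative s-t flow into path flows and cycle flows."""
    flow = {e: f for e, f in flow.items() if f > 0}
    paths, cycles = [], []
    while flow:
        # Walk forward along positive-flow edges, starting at the source
        # while it still has outflow; afterwards the remaining flow is a
        # circulation and the walk closes a cycle.
        has_src = any(u == source for (u, v) in flow)
        start = source if has_src else next(iter(flow))[0]
        walk, seen = [start], {start}
        while walk[-1] != sink:
            u = walk[-1]
            v = [w for (a, w) in flow if a == u][0]
            if v in seen:                       # closed a cycle
                walk = walk[walk.index(v):] + [v]
                break
            walk.append(v)
            seen.add(v)
        edges = list(zip(walk, walk[1:]))
        amount = min(flow[e] for e in edges)    # bottleneck on the walk
        for e in edges:
            flow[e] -= amount
            if flow[e] == 0:
                del flow[e]
        (cycles if walk[0] == walk[-1] else paths).append((walk, amount))
    return paths, cycles

# one source-to-sink path plus one detached cycle
flow = {('s', 'a'): 2, ('a', 't'): 2, ('b', 'c'): 1, ('c', 'b'): 1}
paths, cycles = decompose_flow(flow, 's', 't')
```

Each iteration removes at least one saturated edge, so the loop terminates; the returned path and cycle flows are exactly the objects the abstract's objective function is evaluated over.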
by Ernique A. Guinand.
Ph.D.
3

Gomez, Jose Alberto Ph D. Massachusetts Institute of Technology. "Simulation, sensitivity analysis, and optimization of bioprocesses using dynamic flux balance analysis." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117325.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 301-312).
Microbial communities are a critical component of natural ecosystems and industrial bioprocesses. In natural ecosystems, these communities can present abrupt and surprising responses to perturbations, which can have important consequences. For example, climate change can drastically influence the composition of microbial communities in the oceans, which in turn affects the entirety of the food chain, and changes in diet can drastically affect the composition of the human gut microbiome, making it stronger or more vulnerable to infection by pathogens. In industrial bioprocesses, engineers work with these communities to obtain desirable products such as biofuels, pharmaceuticals, and alcoholic beverages, or to achieve relevant environmental objectives such as wastewater treatment or carbon capture. Mathematical models of microbial communities are critical for the study of natural ecosystems and for the design and control of bioprocesses. Good mathematical models of microbial communities allow scientists to predict how robust an ecosystem is, how perturbed ecosystems can be remediated, how sensitive an ecosystem is with respect to specific perturbations, and in what ways and how fast it would react to environmental changes. Good mathematical models allow engineers to design better bioprocesses and control them to produce high-quality products that meet tight specifications. Despite the importance of microbial communities, mathematical models describing their behavior remain simplistic and only applicable to very simple and controlled bioprocesses. Therefore, the study of natural ecosystems and the design of complex bioprocesses is very challenging. As a result, the design of bioprocesses remains experiment-based, which is slow, expensive, and labor-intensive. With high-throughput experiments, large datasets are generated, but without reliable mathematical models critical links between the species in the community are often missed.
The design of novel bioprocesses relies on informed guesses by scientists that can only be tested experimentally. The expenses incurred by these experiments can be difficult to justify. Predictive mathematical models of microbial communities can provide insights about the possible outcomes of novel bioprocesses and guide the experimental design, resulting in cheaper and faster bioprocess development. Most mathematical models describing microbial communities do not take into account the internal structure of the microorganisms. In recent years, new knowledge of the internal structures of these microorganisms has been generated using high-throughput DNA sequencing. Flux balance analysis (FBA) is a modeling framework that incorporates this new information into mathematical models of microbial communities. With FBA, growth and exchange flux predictions are made by solving linear programs (LPs) that are constructed based on the metabolic networks of the microorganisms. FBA can be combined with the mathematical models of dynamical biosystems, resulting in dynamic FBA (DFBA) models. DFBA models are difficult to simulate, sensitivity information is challenging to obtain, and reliable strategies to solve optimization problems with DFBA models embedded are lacking. Therefore, the use of DFBA models in science and industry remains very limited. This thesis makes DFBA simulation more accessible to scientists and engineers with DFBAlab, a fast, reliable, and efficient Matlab-based DFBA simulator. This simulator is used by more than 100 academic users to simulate various processes such as chronic wound biofilms, gas fermentation in bubble column bioreactors, and beta-carotene production in microalgae. Also, novel combinations of microbial communities in raceway ponds have been studied. The performance of algal-yeast cocultures and more complex communities for biolipids production has been evaluated, gaining relevant insights that will soon be tested experimentally.
These combinations could enable the production of lipid-rich biomass in locations far away from power plants and other concentrated CO2 sources by utilizing lignocellulosic waste instead. Following reliable DFBA simulation, the mathematical theory required for sensitivity analysis of DFBA models, which happen to be nonsmooth, was developed. Methods to compute generalized derivative information for special compositions of functions, hierarchical LPs, and DFBA models were generated. Significant numerical challenges appeared during the sensitivity computation of DFBA models, some of which were resolved. Despite the challenges, sensitivity information for DFBA models was used to solve for the steady state of a high-fidelity model of a bubble column bioreactor using nonsmooth equation-solving algorithms. Finally, local optimization strategies for different classes of problems with DFBA models embedded were generated. The classes of problems considered include parameter estimation and optimal batch, continuous steady-state, and continuous cyclic steady-state process design. These strategies were illustrated using toy metabolic networks as well as genome-scale metabolic networks. These optimization problems demonstrate the superior performance of optimizers when reliable sensitivity information is used, as opposed to approximate information obtained from finite differences. Future work includes the development of global optimization strategies, as well as increasing the robustness of the computation of sensitivities of DFBA models. Nevertheless, the application of DFBA models of microbial communities for the study of natural ecosystems and bioprocess design and control is closer to reality.
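The DFBA structure described above, an optimization problem embedded in an ODE, can be sketched with a deliberately tiny stand-in. The Michaelis-Menten uptake bound, the 0.5 biomass yield, and the single-flux "LP" (whose optimum is simply the uptake bound) are all assumptions for illustration; this is not DFBAlab or any genome-scale model.

```python
def uptake_bound(S, vmax=10.0, Km=1.0):
    # assumed Michaelis-Menten cap on substrate uptake
    return vmax * S / (Km + S)

def fba_lp(S):
    # Stand-in for the metabolic LP: maximize growth subject to the
    # uptake bound; in this one-flux toy network the optimum sits at
    # the bound, with an assumed biomass yield of 0.5.
    v_uptake = uptake_bound(S)
    v_growth = 0.5 * v_uptake
    return v_growth, v_uptake

def simulate(X0=0.01, S0=5.0, dt=0.001, t_end=2.0):
    # explicit Euler integration of the outer dynamics; at every step
    # the inner "LP" is re-solved for the current substrate level
    X, S = X0, S0
    for _ in range(int(t_end / dt)):
        mu, q = fba_lp(S)
        X += dt * mu * X      # dX/dt = mu * X
        S -= dt * q * X       # dS/dt = -q * X
        S = max(S, 0.0)       # substrate cannot go negative
    return X, S

X_final, S_final = simulate()
```

The nonsmoothness the abstract emphasizes enters exactly here: the optimal basis of the inner LP (and the `max` clamp) can switch along the trajectory, which is why generalized derivatives are needed for sensitivity analysis.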
by Jose Alberto Gomez.
Ph. D.
4

Gou, Tianyi. "Computational Tools for Chemical Data Assimilation with CMAQ." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/31017.

Abstract:
The Community Multiscale Air Quality (CMAQ) system is the Environmental Protection Agency's main modeling tool for atmospheric pollution studies. CMAQ-ADJ, the adjoint model of CMAQ, offers new analysis capabilities such as receptor-oriented sensitivity analysis and chemical data assimilation. This thesis presents the construction, validation, and properties of new adjoint modules in CMAQ, and illustrates their use in sensitivity analyses and data assimilation experiments. The new module for the discrete adjoint of advection is implemented with the aid of the automatic differentiation tool TAMC and is fully validated by comparing the adjoint sensitivities with finite difference values. In addition, adjoint sensitivities with respect to boundary conditions and boundary condition scaling factors are developed and validated in CMAQ. To investigate numerically the impact of the continuous and discrete advection adjoints on data assimilation, various four-dimensional variational (4D-Var) data assimilation experiments are carried out with the 1D advection PDE, and with CMAQ advection using synthetic and real observation data. The results show that the optimization procedure gives better estimates of the reference initial condition and converges faster when using gradients computed by the continuous adjoint approach. This counter-intuitive result is explained using the nonlinearity properties of the piecewise parabolic method (the numerical discretization of advection in CMAQ). Data assimilation experiments are carried out using real observation data. The simulation domain encompasses Texas and the simulation period is August 30 to September 1, 2006. Data assimilation is used to improve both initial and boundary conditions. These experiments further validate the tools developed in this thesis.
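The adjoint-validation step mentioned in the abstract (comparing adjoint sensitivities against finite differences) can be sketched for a 1D periodic upwind advection scheme. This is a generic illustration, not the CMAQ/TAMC code: because this discrete model is linear, the hand-written transpose step reproduces the finite-difference gradient to rounding error.

```python
import math

N, c, nsteps = 20, 0.5, 10   # grid size, Courant number, time steps

def step(u):
    # first-order upwind advection with periodic boundaries (a linear map)
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(N)]

def adjoint_step(lam):
    # exact transpose of the linear forward step
    return [(1 - c) * lam[j] + c * lam[(j + 1) % N] for j in range(N)]

def cost(u0):
    u = u0[:]
    for _ in range(nsteps):
        u = step(u)
    return u[10]                 # observe a single downstream cell

# adjoint gradient of the cost with respect to the full initial condition
lam = [0.0] * N
lam[10] = 1.0
for _ in range(nsteps):
    lam = adjoint_step(lam)

# validation against one-sided finite differences
u0 = [math.sin(2 * math.pi * i / N) for i in range(N)]
J0, eps = cost(u0), 1e-6
fd = []
for k in range(N):
    up = u0[:]
    up[k] += eps
    fd.append((cost(up) - J0) / eps)
```

One adjoint sweep yields the sensitivity of the receptor cell to every initial-condition entry at once, whereas finite differences require one forward run per entry, which is the receptor-oriented advantage the abstract refers to.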
Master of Science
5

Wilkins, Anna Katharina. "Sensitivity analysis of oscillating dynamical systems with applications to the mammalian circadian clock." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/42944.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2008.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 227-234).
The work presented in this thesis consists of two major parts. In Chapter 2, the theory for sensitivity analysis of oscillatory systems is developed and discussed. Several contributions are made, in particular in the precise definition of phase sensitivities and in the generalization of the theory to all types of autonomous oscillators. All methods rely on the solution of a boundary value problem, which identifies the periodic orbit. The choice of initial condition on the limit cycle has important consequences for phase sensitivity analysis, and its influence is quantified and discussed in detail. The results are exact and efficient to compute compared to existing partial methods. The theory is then applied to different models of the mammalian circadian clock system in the following chapters. First, different types of sensitivities in a pair of smaller models are analyzed. The models have slightly different architectures, with one having an additional negative feedback loop compared to the other. The differences in their behavior with respect to phases, the period and amplitude are discussed in the context of their network architecture. It is found that, contrary to previous assumptions in the literature, the additional negative feedback loop makes the model less "flexible" in at least one sense that was studied here. The theory was also applied to larger, more detailed models of the mammalian circadian clock, based on the original model of Forger and Peskin. Between the original model's publication in 2003 and the present time, several key advances were made in understanding the mechanistic detail of the mammalian circadian clock, and at least one additional clock gene was identified. These advances are incorporated in an extended model, which is then studied using sensitivity analysis. Period sensitivity analysis is performed first and it was found that only one negative feedback loop dominates the setting of the period.
This was an interesting one-to-one correlation between one topological feature of the network and a single metric of network performance. This led to the question of whether the network architecture is modular, in the sense that each of the several feedback loops might be responsible for a separate network function. A function of particular interest is the ability to separately track "dawn" and "dusk", which is reported to be present in the circadian clock. The ability of the mammalian circadian clock to modify different relative phases, defined by different molecular events, independently of the period was analyzed. If the model can maintain a perceived day, defined by the time difference between two phases, of different lengths, it can be argued that the model can track dawn and dusk separately. This capability is found in all mammalian clock models that were studied in this work, and furthermore a network-wide effort is needed to do so. Unlike in the case of the period sensitivities, relative phase sensitivities are distributed throughout several feedback loops. Interestingly, a small number of "key parameters" could be identified in the detailed models that consistently play important roles in the setting of period, amplitude and phases. It appears that most circadian clock features are under shared control by local parameters and by the more global "key parameters". Lastly, it is shown that sensitivity analysis, in particular period sensitivity analysis, can be very useful in parameter estimation for oscillatory systems biology models. In an approach termed "feature-based parameter fitting", the model's parameter values are selected based on their impact on the "features" of an oscillation (period, phases, amplitudes) rather than concentration data points. It is discussed how this approach changes the cost function during the parameter estimation optimization, and when it can be beneficial.
A minimal model system from circadian biology, the Goodwin oscillator, is taken as an example. Overall, in this thesis it is shown that the contributions made to the theoretical understanding of sensitivities in oscillatory systems are relevant and useful in trying to answer questions that are currently open in circadian biology. In some cases, the theory could indicate exactly which experiments or detailed mechanistic studies are needed in order to perform meaningful mathematical analysis of the system as a whole. It is shown that, provided the biologically relevant quantities are analyzed, a network-wide understanding of the interplay between network function and topology can be gained and differences in performance between models of different size or topology can be quantified.
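Period sensitivity of the kind used throughout these chapters can be illustrated with finite differences on a toy oscillator whose period is known in closed form (T = 2*pi/omega), rather than with the boundary-value-problem approach the thesis actually develops. All numbers below belong to this assumed toy system only.

```python
import math

def period(omega, dt=1e-3):
    # RK4 on dx/dt = -omega*y, dy/dt = omega*x from (1, 0); the first
    # upward zero crossing of y after t > pi/omega marks one full period.
    def f(s):
        x, y = s
        return (-omega * y, omega * x)
    x, y, t = 1.0, 0.0, 0.0
    while True:
        k1 = f((x, y))
        k2 = f((x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1]))
        k3 = f((x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1]))
        k4 = f((x + dt * k3[0], y + dt * k3[1]))
        prev_y, prev_t = y, t
        x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        t += dt
        if t > math.pi / omega and prev_y < 0.0 <= y:
            # linear interpolation of the crossing time
            return prev_t + dt * (-prev_y) / (y - prev_y)

# centered finite-difference sensitivity of the period to the parameter;
# the exact value for this system is dT/domega = -2*pi/omega**2
h = 0.05
dT_domega = (period(1.0 + h) - period(1.0 - h)) / (2 * h)
```

For a model with many parameters this brute-force approach needs two integrations per parameter; the boundary-value formulation developed in the thesis delivers all period sensitivities exactly and far more cheaply.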
by Anna Katharina Wilkins.
Ph.D.
6

Chen, Lu. "Computational Study of Turbulent Combustion Systems and Global Reactor Networks." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78804.

Abstract:
A numerical study of turbulent combustion systems was pursued to examine different computational modeling techniques, namely computational fluid dynamics (CFD) and chemical reactor network (CRN) methods. Both methods have been studied and analyzed as individual techniques as well as in a coupled approach to pursue better understanding of the mechanisms and interactions between turbulent flow and mixing, ignition behavior, and pollutant formation. A thorough analysis and comparison of both turbulence models and chemistry representation methods was executed, and simulations were compared and validated with experimental works. An extensive study of turbulence modeling methods, and the optimization of modeling techniques including turbulence intensity and computational domain size, has been conducted. The final CFD model has demonstrated good predictive performance for different turbulent bluff-body flames. Analysis of NOx formation and the effects of fuel mixtures indicated that the addition of hydrogen to the fuel and of non-flammable diluents like CO2 and H2O contributes to the reduction of NOx. The second part of the study focused on developing chemical models and methods that include the detailed gaseous reaction mechanism of GRI-Mech 3.0 but require less computational time. A new chemical reactor network has been created based on the CFD results of combustion characteristics and flow fields. The proposed CRN has been validated against the temperature and species emissions for different bluff-body flames and has shown the capability of being applied to general bluff-body systems. Specifically, the rate of production of NOx and the sensitivity analysis based on the CRN results helped to summarize the reduced reaction mechanism, which not only provided a promising method to generate representative reactions from hundreds of species and reactions in a gaseous mechanism but also presented valuable information on the combustion mechanisms and NOx formation.
Finally, the proposed reduced reaction mechanism from the sensitivity analysis was applied to the CFD simulations, which created a fully coupled process between CFD and CRN, and the results from the reduced reaction mechanism have shown good predictions compared with the probability density function method.
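Sensitivity-based mechanism reduction of the kind used here can be sketched on an assumed toy mechanism (A -> B -> C with a slow side path A -> D) instead of GRI-Mech 3.0; the rate constants and the normalized-sensitivity definition below are illustrative choices, not values from the dissertation.

```python
def final_C(k1, k2, k3, dt=1e-4, t_end=1.0):
    # explicit Euler for A -k1-> B -k2-> C with a slow side path A -k3-> D
    A, B, C, D = 1.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        rA, rB, rD = k1 * A, k2 * B, k3 * A
        A += dt * (-rA - rD)
        B += dt * (rA - rB)
        C += dt * rB
        D += dt * rD
    return C

def normalized_sensitivity(i, k=(2.0, 1.0, 0.01), rel=1e-3):
    # S_i = (k_i / C) * dC/dk_i via central finite differences
    up, dn = list(k), list(k)
    up[i] *= 1 + rel
    dn[i] *= 1 - rel
    dC = (final_C(*up) - final_C(*dn)) / (2 * rel * k[i])
    return k[i] * dC / final_C(*k)

S = [normalized_sensitivity(i) for i in range(3)]
# ranking |S_i| indicates which reactions a reduced mechanism must keep
```

The two production reactions dominate the ranking while the slow side path contributes a small negative sensitivity, mirroring how low-sensitivity reactions are dropped when building a reduced mechanism from a CRN.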
Ph. D.
7

Holland, Troy Michael. "A Comprehensive Coal Conversion Model Extended to Oxy-Coal Conditions." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6525.

Abstract:
CFD simulations are valuable tools in evaluating and deploying oxy-fuel and other carbon capture technologies either as retrofit technologies or for new construction. However, accurate predictive simulations require physically realistic submodels with low computational requirements. In particular, comprehensive char oxidation and gasification models have been developed that describe multiple reaction and diffusion processes. This work extends a comprehensive char conversion code (the Carbon Conversion Kinetics or CCK model), which treats surface oxidation and gasification reactions as well as processes such as film diffusion, pore diffusion, ash encapsulation, and annealing. In this work, the CCK model was thoroughly investigated with a global sensitivity analysis. The sensitivity analysis highlighted several submodels in the CCK code, which were updated with more realistic physics or otherwise extended to function in oxy-coal conditions. Improved submodels include a greatly extended annealing model, the swelling model, the mode of burning parameter, and the kinetic model, as well as the addition of the Chemical Percolation Devolatilization (CPD) model. The resultant Carbon Conversion Kinetics for oxy-coal combustion (CCK/oxy) model predictions were compared to oxy-coal data, and further compared to parallel data sets obtained at near conventional conditions.
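Global (variance-based) sensitivity analysis of the kind applied to the CCK model can be sketched with a brute-force first-order Sobol estimator on an assumed linear toy model, whose exact indices are 16/21, 4/21, and 1/21 for uniform inputs. Real analyses use far more efficient sampling schemes; this double loop is only for clarity.

```python
import random
random.seed(0)

def model(x1, x2, x3):
    # assumed additive toy model standing in for the char-conversion code
    return 4 * x1 + 2 * x2 + x3

def first_order_sobol(i, n_outer=500, n_inner=500):
    # Brute-force double-loop estimator: S_i = Var(E[Y | X_i]) / Var(Y).
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = random.random()
        ys = []
        for _ in range(n_inner):
            x = [random.random() for _ in range(3)]
            x[i] = xi              # freeze the factor under study
            ys.append(model(*x))
        cond_means.append(sum(ys) / n_inner)
        all_y.extend(ys)
    mean_y = sum(all_y) / len(all_y)
    var_y = sum((y - mean_y) ** 2 for y in all_y) / len(all_y)
    mean_c = sum(cond_means) / n_outer
    var_c = sum((c - mean_c) ** 2 for c in cond_means) / n_outer
    return var_c / var_y

S = [first_order_sobol(i) for i in range(3)]
```

Ranking the indices identifies which submodels dominate the output variance, which is exactly how the global sensitivity analysis in this work singled out the submodels worth updating.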
8

GOMES, MARCELO da S. "Determinacao de elementos metalicos em sedimentos da Baia do Almirantado, Ilha Rei George, Penisula Antartica." reponame:Repositório Institucional do IPEN, 1999. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10762.

Abstract:
Dissertation (Master's)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
9

Poduri, Shripriya Darshini. "THEORETICAL MODELING AND ANALYSIS OF AMMONIA GAS SENSING PROPERTIES OF VERTICALLY ALIGNED MULTIWALLED CARBON NANOTUBE RESISTIVE SENSORS AND ENHANCING THEIR SENSITIVITY." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_theses/51.

Abstract:
Vertically aligned Multiwalled Carbon Nanotubes (MWCNTs) were grown in the pores of Anodized Aluminum Oxide (AAO) templates and investigated for resistive sensor applications. A high sensitivity of 23% to a low concentration (100 ppm) of ammonia was observed. An equivalent circuit model was developed to understand the current flow path in the resistive sensor. This helped us in achieving high sensitivities through amorphous carbon (a-C) layer thickness tailoring by employing post-growth processing techniques like plasma etching. A simulation model in MATLAB was developed to calculate the device resistance and the change in the sensitivity as a function of device parameters. The steady state response and transient response of the model to the number of ammonia molecules and its adsorption rate were studied. Effects of oxygen plasma, argon plasma and water plasma etch on thinning of the a-C layer were studied. In order to enhance the sensitivity, the top and bottom a-C layers were replaced by a more conductive metal layer. This also helped in understanding the current flow in the device and in the estimation of the resistivity of the a-C layer.
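The equivalent-circuit idea, a CNT conduction path shunted by the a-C layer, can be sketched with assumed resistances: thinning the a-C layer (raising its shunt resistance) increases the fraction of the response contributed by the gas-sensitive CNTs, and hence the sensitivity. All numbers are illustrative, not fitted to the device in the thesis.

```python
def device_resistance(R_cnt, R_ac):
    # CNT conduction path in parallel with the amorphous-carbon shunt
    return R_cnt * R_ac / (R_cnt + R_ac)

def sensitivity(R_ac, R_cnt0=100.0, delta=0.3):
    # ammonia adsorption is assumed to raise the CNT path resistance
    # by 30%; the a-C shunt is assumed insensitive to the gas
    R0 = device_resistance(R_cnt0, R_ac)
    R_gas = device_resistance(R_cnt0 * (1 + delta), R_ac)
    return (R_gas - R0) / R0

thin = sensitivity(R_ac=1000.0)   # thinned a-C layer: weak shunt
thick = sensitivity(R_ac=150.0)   # thick a-C layer: strong shunt
```

Because the insensitive shunt dilutes the resistance change, both sensitivities stay below the intrinsic 30% CNT response, and the weak-shunt (thinned) device comes closer to it, consistent with the abstract's motivation for a-C thinning.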
10

Griffiths, Michael Lee. "Multivariate calibration for ICP-AES." Thesis, University of Plymouth, 2001. http://hdl.handle.net/10026.1/1942.

Abstract:
The analysis of metals is now a major application area for ICP-AES; however, the technique suffers from both spectral and non-spectral interferences. This thesis details the application of univariate and multivariate calibration methods for the prediction of Pt, Pd, and Rh in acid-digested and of Au, Ag and Pd in fusion-digested autocatalyst samples. Of all the univariate calibration methods investigated, matrix matching proved the most accurate method, with relative root mean square errors (RRMSEs) for Pt, Pd and Rh of 2.4, 3.7, and 2.4 % for a series of synthetic test solutions, and 12.0, 2.4, and 8.0 % for autocatalyst samples. In comparison, the multivariate calibration method (PLS1) yielded average relative errors for Pt, Pd, and Rh of 5.8, 3.0, and 3.5 % in the test solutions, and 32.0, 7.5, and 75.0 % in the autocatalyst samples. A variable selection procedure has been developed enabling multivariate models to be built using large parts of the atomic emission spectrum. The first stage identified and removed wavelengths whose PLS regression coefficients were equal to zero. The second stage ranked the remaining wavelengths according to the ratio of their PLS regression coefficient to its estimated standard error. The algorithms were applied to the emission spectra for the determination of Pt, Pd and Rh in a synthetic matrix. For independent test samples variable selection gave RRMSEs of 5.3, 2.5 and 1.7 % for Pt, Pd and Rh respectively, compared with 8.3, 7.0 and 3.1 % when using integrated atomic emission lines. Variable selection was then applied for the prediction of Au, Ag and Pd in independent test fusion digests. This resulted in RRMSEs of 74.2, 8.8 and 12.2 % for Au, Ag and Pd respectively, which were comparable to those obtained using a more traditional univariate calibration approach. A preliminary study has shown that calibration drift can be corrected using Piecewise Direct Standardisation (PDS).
The application of PDS to synthetic test samples analysed 10 days apart resulted in RRMSEs of 4.14, 3.03 and 1.88%, compared to 73.04, 44.39 and 28.06 % without correction, for Pt, Pd, and Rh respectively.
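The two-stage selection described above, discard zero-coefficient wavelengths and rank the remainder by the ratio of regression coefficient to its standard error, can be sketched with simple univariate regressions on synthetic "spectra". This is a stand-in for the thesis's PLS-based procedure; the channel coefficients, noise level, and sample counts are assumed.

```python
import random
random.seed(1)

n_samples, n_channels = 30, 8
# assumed synthetic spectra: only channels 0, 1, and 6 carry signal
true_coef = [5.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0]

conc = [random.uniform(0.0, 1.0) for _ in range(n_samples)]
X = [[true_coef[j] * c + random.gauss(0.0, 0.05) for j in range(n_channels)]
     for c in conc]

def slope_over_stderr(j):
    # Univariate regression of concentration on channel-j intensity;
    # the selection score is |slope| / stderr(slope), as in stage two.
    xj = [row[j] for row in X]
    mx, my = sum(xj) / n_samples, sum(conc) / n_samples
    sxx = sum((v - mx) ** 2 for v in xj)
    sxy = sum((v - mx) * (c - my) for v, c in zip(xj, conc))
    b = sxy / sxx
    resid = [c - my - b * (v - mx) for v, c in zip(xj, conc)]
    s2 = sum(r * r for r in resid) / (n_samples - 2)
    se = (s2 / sxx) ** 0.5
    return abs(b) / se if se > 0 else float('inf')

ranked = sorted(range(n_channels), key=slope_over_stderr, reverse=True)
selected = ranked[:3]   # keep the highest-scoring wavelengths
```

Pure-noise channels score like small t-statistics while informative channels score far higher, so the ranking recovers the signal-carrying wavelengths, the same intuition behind the coefficient-to-standard-error ratio in the thesis.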
11

Singh, Kumaresh. "Efficient Computational Tools for Variational Data Assimilation and Information Content Estimation." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/39125.

Abstract:
The overall goals of this dissertation are to advance the field of chemical data assimilation, and to develop efficient computational tools that allow the atmospheric science community to benefit from state-of-the-art assimilation methodologies. Data assimilation is the procedure of combining data from observations with model predictions to obtain a more accurate representation of the state of the atmosphere. As models become more complex, determining the relationships between pollutants and their sources and sinks becomes computationally more challenging. The construction of an adjoint model (capable of efficiently computing sensitivities of a few model outputs with respect to many input parameters) is a difficult, labor intensive, and error prone task. This work develops adjoint systems for two of the most widely used chemical transport models: Harvard's GEOS-Chem global model and the Environmental Protection Agency's CMAQ regional air quality model. Both GEOS-Chem and CMAQ adjoint models are now used by the atmospheric science community to perform sensitivity analysis and data assimilation studies. Despite the continuous increase in capabilities, models remain imperfect and models alone cannot provide accurate long term forecasts. Observations of the atmospheric composition are now routinely taken from sondes, ground stations, aircraft, and satellites. This work develops three and four dimensional variational data assimilation capabilities for GEOS-Chem and CMAQ which allow estimation of chemical states that best fit the observed reality. Most data assimilation systems to date use diagonal approximations of the background covariance matrix, which ignore error correlations and may lead to inaccurate estimates. This dissertation develops computationally efficient representations of covariance matrices that capture spatial error correlations in data assimilation. Not all observations used in data assimilation are of equal importance.
Erroneous and redundant observations not only affect the quality of an estimate but also add unnecessary computational expense to the assimilation system. This work proposes information-theoretic techniques to quantify the information content of the observations used in assimilation. The four-dimensional variational approach to data assimilation provides accurate estimates but requires the construction of an adjoint and considerable computational resources. This work studies versions of the four-dimensional variational method (quasi 4D-Var) that use approximate gradients and are less expensive to develop and run. Variational and Kalman filter approaches are both used in data assimilation, but their relative merits and disadvantages in the context of chemical data assimilation have not been assessed. This work provides a careful comparison on a chemical assimilation problem with real data sets. The assimilation experiments performed here demonstrate for the first time the benefit of using satellite data to improve estimates of tropospheric ozone.
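The variational machinery summarized above can be illustrated with a toy example. The sketch below assumes a small invented linear model M, an identity observation operator H, and made-up noise levels (none of which come from GEOS-Chem or CMAQ); it evaluates a 4D-Var cost function with a forward run, obtains its gradient with respect to the initial state by an adjoint (transpose-model) run backward in time, and recovers a perturbed initial state by gradient descent.

```python
import numpy as np

# Toy 4D-Var sketch: linear model x_{k+1} = M x_k, observations y_k = H x_k + noise.
# Cost J(x0) = 0.5 * sum_k ||H x_k - y_k||^2 / r  (background term omitted for brevity).
rng = np.random.default_rng(0)
n = 3                                   # state dimension (toy)
M = np.array([[0.9, 0.1, 0.0],          # invented linear "transport/chemistry" operator
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.9]])
H = np.eye(n)                           # observe the full state
r = 0.1                                 # observation error variance
x_true = np.array([1.0, 2.0, 0.5])

obs = []                                # synthetic observations over 5 model steps
x = x_true.copy()
for _ in range(5):
    x = M @ x
    obs.append(H @ x + 0.01 * rng.standard_normal(n))

def cost_and_gradient(x0):
    """Forward run accumulates the cost; adjoint (reverse) run accumulates the gradient."""
    states, x, J = [], x0.copy(), 0.0
    for y in obs:
        x = M @ x
        states.append(x.copy())
        J += 0.5 * np.sum((H @ x - y) ** 2) / r
    lam = np.zeros(n)                   # adjoint variable
    for x_k, y in zip(reversed(states), reversed(obs)):
        lam = M.T @ (lam + H.T @ (H @ x_k - y) / r)   # one adjoint step backward
    return J, lam

x0 = x_true + 0.5                       # perturbed first guess
for _ in range(500):                    # plain gradient descent on J
    J, g = cost_and_gradient(x0)
    x0 -= 0.01 * g
print("recovered initial state:", np.round(x0, 3))
```

A real chemical-transport adjoint replaces `M.T` with the discrete adjoint of a large nonlinear model, but the backward accumulation pattern is the same.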
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
12

Waddick, Caitlin Janson. "Healthy residential developments: reducing pollutant exposures for vulnerable populations with multiple chemical sensitivities." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37270.

Full text
Abstract:
Many serious illnesses are linked to everyday exposures to toxic chemicals. In the U.S., most chemical exposure comes from common consumer products such as pesticides, fragranced products, cleaning supplies, and building materials--products so widely used that people consider them "safe." As the links between everyday toxic exposures and potential health effects become better understood, evidence increasingly shows that reducing exposures can create a healthier society. Although some individuals may choose to build a healthy home and maintain a healthy household, they are still exposed to pollutants at their residences through the actions of others, such as pesticides used by neighbors, businesses, and governments. They need healthy residential developments in environmentally healthy communities. This research investigates "healthy residential developments," defined as properties that aim to reduce pollutant exposures to the extent required by vulnerable populations, which for this research are individuals with multiple chemical sensitivities (MCS). Through a case study approach, this research investigates two exemplars of healthy residential developments and explains how and why they form and continue. It also examines their implementation methods and their implications for planning and policy. Primary data collection methods included in-person interviews, telephone interviews, and site visits. Research strategies included the analysis of interview data and categorical aggregation using thematic categories within and across cases. The categories focused on factors in the formation and continuation of the two healthy residential developments. 
Findings include the challenges people disabled by MCS face in finding safe housing; the importance of planning in addressing these challenges; the role of individuals, funding, and zoning in the formation of healthy residential developments; the role of funding, safe maintenance, and property management in their continuation; and the need for affordable and safe housing for vulnerable populations. Future research can address the need to develop methods to create and sustain healthy residential developments, to understand and reduce the sources of exposure that initiate and trigger chemical sensitivity, and to investigate experiences and implementation strategies in other countries.
APA, Harvard, Vancouver, ISO, and other styles
13

Adotey, Bless. "MATHEMATICAL MODELING OF CLOSTRIDIUM THERMOCELLUM’S METABOLIC RESPONSES TO ENVIRONMENTAL PERTURBATION." UKnowledge, 2011. http://uknowledge.uky.edu/bae_etds/1.

Full text
Abstract:
Clostridium thermocellum is a thermophilic anaerobe capable of producing ethanol directly from lignocellulosic compounds; however, the organism suffers from low ethanol tolerance and low ethanol yields. In vivo mathematical modeling studies based on steady-state traditional metabolic flux analysis, metabolic control analysis, and transient and steady-state flux spectrum analysis (FSA) were conducted on C. thermocellum's central metabolism. The models were developed in MATLAB® (R2008b, Version 7.7.0.471) based on known stoichiometry from the C. thermocellum pathway and known physical constraints. Metabolic flux analysis (MFA) and metabolic control analysis (MCA) of wild-type (WT) and ethanol-adapted (EA) cells grown on cellobiose showed that, at lower-than-optimum exogenous ethanol levels, ethanol-to-acetate (E/A) ratios increased by approximately 29% in WT cells and 7% in EA cells. Sensitivity analyses of the MFA and MCA models indicated that the effects of variability in experimental data on model predictions were minimal (within ±5% differences in predictions when the experimental data varied by up to ±20%). Steady-state FSA model predictions showed that an optimum hydrogen flux of ~5 mM/hr, at pressures of 7 MPa or above, inhibits ferredoxin hydrogenase, which causes NAD re-oxidation in the system, increasing ethanol yields to about 3.5 mol ethanol/mol cellobiose.
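The steady-state flux-balance idea behind MFA can be sketched on a made-up toy network (this is not C. thermocellum's actual stoichiometry, and all flux values are invented): internal metabolite balances S·v = 0 are combined with measured exchange fluxes and solved by least squares.

```python
import numpy as np

# Toy MFA sketch: internal metabolite balances S @ v = 0 at steady state,
# with some exchange fluxes measured. Network and numbers are illustrative only.
# Metabolites: A, B; reactions: r1: ->A, r2: A->B, r3: B->ethanol, r4: B->acetate
S = np.array([
    [1, -1,  0,  0],   # A balance
    [0,  1, -1, -1],   # B balance
])

# Suppose the uptake flux r1 and the acetate flux r4 are measured:
measured = {0: 10.0, 3: 3.0}   # flux index -> value (mM/hr, hypothetical)

# Build the constraint system: steady-state rows plus measurement rows
rows, rhs = [S[0], S[1]], [0.0, 0.0]
for idx, val in measured.items():
    e = np.zeros(S.shape[1]); e[idx] = 1.0
    rows.append(e); rhs.append(val)
A, b = np.array(rows), np.array(rhs)

v, *_ = np.linalg.lstsq(A, b, rcond=None)
print(dict(zip(["r1", "r2", "r3", "r4"], v.round(3))))
# An ethanol/acetate ratio follows directly from the balanced fluxes:
print("E/A ratio:", v[2] / v[3])
```

Here the system is exactly determined, so least squares returns the unique balanced flux distribution; with redundant measurements the same call gives the best-fit fluxes.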
APA, Harvard, Vancouver, ISO, and other styles
14

Ghosh, Saikat. "Model Development and Validation of Pesticide Volatilization from Soil and Crop Surfaces Post Spraying during Agricultural Practices." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1588610082125279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Song, Shin Miin. "Comprehensive two-dimensional gas chromatography (GC×GC) for drug analysis." RMIT University. Applied Sciences, 2006. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080627.114511.

Full text
Abstract:
Separation technologies occupy a central role in the analytical methods used for drug analysis today. As the emphasis in contemporary drug analysis shifts towards ultra-trace concentrations, the contribution from unwanted matrix interferences takes on greater significance. In order to single out a trace substance with confidence from a rapidly expanding list of drug compounds (and their metabolites) in real, complex specimens, analytical technologies must evolve to keep up with such trends. Today, the task of unambiguous identification in forensic toxicology still relies heavily upon chromatographic methods based on mass spectrometric detection, in particular GC-MS in electron ionisation (EI) mode. Although the combined informing power of (EI) GC-MS has served faithfully in a myriad of drug application studies to date, we may ask whether (EI) GC-MS will remain competitive in meeting the impending needs of ultra-trace drug analysis in the future. To what extent can sample clean-up strategies be relied upon in ultra-trace analysis without risking the loss of important analytes of interest? The increasing use of tandem mass spectrometry with one-dimensional (1D) chromatographic techniques (e.g. GC-MS/MS), at its simplest, concedes that single-column chromatographic analysis with mass spectrometry alone is not sufficient to provide unambiguous confirmation of the identity of any given peak, particularly when there is peak overlap. Where the mass spectra of individual overlapping peaks are highly similar, confounding interpretation of their identities may arise. By introducing an additional resolution element in the chromatographic domain of a 1D chromatographic system, the informing power of the analytical system can be effectively raised by the boost in resolving power from two chromatographic elements. 
Thus this thesis sets out to address the analytical challenges of modern drug analysis through the application of high-resolution comprehensive two-dimensional gas chromatography (GC×GC) to a series of representative drug studies of relevance to the forensic sciences.
APA, Harvard, Vancouver, ISO, and other styles
16

Parrish, Douglas K. "Application of solid phase microextraction with gas chromatography-mass spectrometry as a rapid, reliable, and safe method for field sampling and analysis of chemical warfare agent precursors /." Download the dissertation in PDF, 2005. http://www.lrc.usuhs.mil/dissertations/pdf/Parrish2005.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Хамза, Омар Адел Хамза. "Вибір параметрів силової установки із системою утилізації попутного нафтового газу." Thesis, НТУ "ХПІ", 2017. http://repository.kpi.kharkov.ua/handle/KhPI-Press/29868.

Full text
Abstract:
Thesis for the degree of Candidate of Technical Sciences in specialty 05.05.03 – Engines and Power Plants. – National Technical University "Kharkiv Polytechnic Institute". – Kharkiv, 2017. The thesis is devoted to the choice of the scheme and parameters of a power plant for the utilization of associated petroleum gas. The work analyzes the possibility of using various power plants for the utilization of associated petroleum gas. Schemes of power-generating units based on gas turbine and gas piston internal combustion engines have been developed to produce electricity by utilizing associated petroleum gas at oil-producing and oil-refining enterprises. An energy-exergy method was used to evaluate the effectiveness of the proposed schemes. An economic analysis of the feasibility of constructing power-generating capacities that would consume associated petroleum gas was carried out, with a sensitivity analysis for parameters such as changes in the electricity price and the impact of high ambient temperatures. When the ambient temperature changes from +15 to +45 °C, the amount of energy produced decreases by 26% for Project A and by 10.9% for Project B. Despite the higher cost of Project B ($2,843,009.55) compared with Project A ($1,964,434.69), the payback period is 6 years and 1 month for Project A and 3 years and 8 months for Project B. The expediency of using a piston internal combustion engine as part of the power-generating unit is substantiated.
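The payback arithmetic behind the comparison above can be sketched as follows. The capital costs are the figures quoted in the abstract, but the annual net revenues are back-calculated assumptions chosen only to reproduce the quoted payback periods, and the linear derating of revenue by the stated hot-weather energy losses is likewise an illustrative assumption, not the thesis's economic model.

```python
# Payback-sensitivity sketch using the project costs quoted in the abstract.
# Annual revenues and the linear derating model are illustrative assumptions.

def payback_years(capital_cost, annual_net_revenue):
    return capital_cost / annual_net_revenue

def derated_revenue(base_revenue, energy_loss_fraction):
    """Revenue falls in proportion to the energy lost at high ambient temperature."""
    return base_revenue * (1.0 - energy_loss_fraction)

projects = {
    # name: (capital cost from abstract, assumed annual revenue, energy loss at +45 degC)
    "A": (1_964_434.69, 320_000.0, 0.26),
    "B": (2_843_009.55, 775_000.0, 0.109),
}

for name, (cost, revenue, loss) in projects.items():
    base = payback_years(cost, revenue)
    hot = payback_years(cost, derated_revenue(revenue, loss))
    print(f"Project {name}: payback {base:.1f} y at +15 degC, {hot:.1f} y at +45 degC")
```

With these assumed revenues the base paybacks land near the quoted 6 years 1 month (Project A) and 3 years 8 months (Project B), and the hot-climate derating lengthens both.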
APA, Harvard, Vancouver, ISO, and other styles
18

Hamzah, Omar Adel Hamzah. "Parameter selection the powerplant with recovery system Off-gas in the refinery." Thesis, NTU "KhPI", 2017. http://repository.kpi.kharkov.ua/handle/KhPI-Press/29869.

Full text
Abstract:
Dissertation for the degree of Candidate of Technical Sciences in specialty 05.05.03 "Engines and Power Plants" (14 – Electrical Engineering). – National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2017. The thesis is devoted to a topical problem: the selection of the scheme and parameters of a power plant for the utilization of associated petroleum gas. The problem of associated gas flaring is continually raised at international conferences on environmental protection held under the auspices of the UN and the World Bank. In particular, at the World Climate Conference in Paris (COP21) in 2015, a global initiative to eradicate the practice of flaring associated gas in the oil industry was put forward. Worldwide, it has been supported by 45 oil companies, governments, and other parties, through whose efforts CO2 emissions can be reduced by 100 million tons per year. The adopted "Zero Routine Flaring by 2030" program provides for ending the practice of flaring associated gas by 2030. This initiative was also supported by Iraq, which in 2015 ranked second in the world in the flaring of associated gas. Associated petroleum gas constitutes 2% of the product yield of refineries in Iraq. Given the number of refineries and their capacities, flaring wastes a large amount of energy daily and significantly pollutes the surrounding territory, not only with chemical emissions but also with the heat released during combustion of the gas. The work uses a comprehensive approach to selecting the scheme and parameters of a power plant for the utilization of associated gas. Possible utilization options are considered; two power plants are examined, the first based on a gas turbine engine and the second based on a gas turbine engine operating jointly with a piston engine. 
The question is examined in terms of the exergy-anergy balance of the installation and the attainment of the best technical and economic performance, taking into account the climatic characteristics of Iraq. The physical and chemical composition of the associated petroleum gas at the refinery in Iraq is characterized; in particular, the methane number is determined by the Caterpillar method. The methane number of the gaseous fuel affects the choice of the piston power plant. The temperature characteristics of the region significantly influence the choice of the utilization scheme; to account for them, the average temperature for the region is determined. Thermal calculations made it possible to analyze the impact of ambient temperature on the performance of the power plants and to conduct a feasibility study for selecting the best scheme. The exergy-anergy balances carried out for the proposed schemes confirmed a significant reduction in thermal pollution and identified the most attractive scheme from this perspective. Economic calculations determined the payback periods of the proposed projects and demonstrated the economic feasibility of their construction; the most economically attractive project was identified. The sensitivity of the economic risks to changes in the electricity price and in ambient temperature was analyzed, and similar sensitivity calculations were performed for both power plant options. Based on this analysis, and commissioned by the Iraqi side, the basis of a business project for power-generating capacities built on gas-utilization units was developed. The results of the study will make it possible not only to obtain the necessary electric energy, which can be used both at the enterprise and beyond, but also to improve the state of the environment in accordance with international agreements. The results of the research will be used in the construction of new units at the refineries in Iraq, according to a letter from the Ministry of Industry and Minerals.
APA, Harvard, Vancouver, ISO, and other styles
19

Hanning-Lee, Mark Adrian. "A study of atom and radical kinetics." Thesis, University of Oxford, 1990. http://ora.ox.ac.uk/objects/uuid:89cabe5d-7cc8-43b3-8c2a-686563ff1b3f.

Full text
Abstract:
This thesis describes the measurement of rate constants for gas phase reactions as a function of temperature (285 ≤ T/K ≤ 850) and pressure (48 ≤ P/Torr ≤ 700). One or both reactants were monitored directly in real time, using time-resolved resonance fluorescence (for atoms) and u.v. absorption (for radicals). Reactants were produced by exciplex laser flash photolysis. The technique was used to measure rate constants to high precision for the following reactions under the stated conditions:
• H + O2 + He → HO2 + He and H + O2 → OH + O, for 800 ≤ T/K ≤ 850 and 100 ≤ P/Torr ≤ 259. A time-resolved study was performed at conditions close to criticality in the H2–O2 system. The competition between the two reactions affected the behaviour of the system after photolysis, and the rate constants were inferred from this behaviour.
• H + C2H4 + He ⇌ C2H5 + He (T = 800 K, 97 ≤ P/Torr ≤ 600). The reactions were well into the fall-off region at all conditions studied. At 800 K, the system was studied under equilibrating conditions. The study provided values of the forward and reverse rate constants at high temperatures and enabled a test of a new theory of reversible unimolecular reactions. The controversial standard enthalpy of formation of ethyl, ΔH°f,298(C2H5), was determined to be 120.2 ± 0.8 kJ mol⁻¹. Master Equation calculations showed that reversible and irreversible treatments of an equilibrating system should yield the same value for both thermal rate constants.
• H + C3H5 + He → C3H6 + He (T = 291 K, 98 ≤ P/Torr ≤ 600) and O + C3H5 → products (286 ≤ T/K ≤ 500, 48 ≤ P/Torr ≤ 348). Both reactions were pressure-independent, and the latter was also independent of temperature with a value of (2.0 ± 0.2) × 10⁻¹⁰ cm³ molecule⁻¹ s⁻¹.
• H + C2H2 + He ⇌ C2H3 + He (298 ≤ T/K ≤ 845, 50 ≤ P/Torr ≤ 600). At 845 K, both reactions were in the fall-off region; rate constants were used to determine the standard enthalpy of formation of vinyl, ΔH°f,298(C2H3), as 293 ± 7 kJ mol⁻¹. The value of this quantity has until recently been very controversial.
• H + CH4 ⇌ CH3 + H2. The standard enthalpy of formation of methyl, ΔH°f,298(CH3), was determined by re-analysing existing kinetic data at T = 825 K and 875 K. A value of 144.7 ± 1.1 kJ mol⁻¹ was determined.
Preliminary models were examined to describe the loss of reactants from the observation region by diffusion and pump-out. Such models, including diffusion and drift, should prove useful in describing the loss of reactive species in many slow-flow systems, enabling more accurate rate constants to be determined.
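A much-simplified illustration of extracting thermochemistry from equilibrating kinetics: the equilibrium constant K = k_forward/k_reverse measured at two temperatures yields a reaction enthalpy through the van 't Hoff relation. The numbers below are invented for the sketch and do not reproduce the thesis's Master Equation or third-law analyses.

```python
import math

# Second-law (van 't Hoff) route from equilibrium constants to a reaction
# enthalpy. All numerical values here are invented for illustration.
R = 8.314  # gas constant, J mol^-1 K^-1

def delta_H_vant_hoff(K1, T1, K2, T2):
    """Van 't Hoff: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1), assuming dH is constant."""
    return -R * math.log(K2 / K1) / (1.0 / T2 - 1.0 / T1)

# Hypothetical K = k_forward / k_reverse for an association reaction at two temperatures;
# K falls as T rises, so the reaction comes out exothermic.
K_800, K_850 = 2.4e3, 9.5e2
dH = delta_H_vant_hoff(K_800, 800.0, K_850, 850.0)
print(f"reaction enthalpy ~ {dH / 1000:.1f} kJ/mol")
```

Combining such a reaction enthalpy with known enthalpies of the other species is what lets an equilibrium study pin down a radical's enthalpy of formation.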
APA, Harvard, Vancouver, ISO, and other styles
20

Skinner, Michael A. "Hapsite® gas chromatograph-mass spectrometer (GC/MS) variability assessment /." Download the thesis in PDF, 2005. http://www.lrc.usuhs.mil/dissertations/pdf/Skinner2005.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

SANTOPOLO, LUISA. "Chromate resistance and biofilm development in Pseudomonas alcaliphila 34: molecular bases." Doctoral thesis, 2013. http://hdl.handle.net/2158/803671.

Full text
Abstract:
Hexavalent chromium [Cr(VI)] is considered an environmental priority pollutant, both for its high danger to human health and for its wide diffusion. In contrast, the trivalent form of chromium [Cr(III)] is much less toxic and is insoluble. Hence, the basic process for chromium detoxification is the transformation of Cr(VI) to Cr(III). Bioremediation, which exploits the ability of microorganisms to reduce Cr(VI) to Cr(III), is a considerable alternative to traditional physico-chemical technologies, which are uneconomical, especially for the treatment of large contaminated sites, and can in turn create environmental problems. A notable contribution to the development of an efficient bioremediation approach can be provided by the use of microbial biofilms. Biofilms, the form commonly taken by microorganisms in the environment, are able to withstand environmental stress (nutrient limitation, sudden pH changes, toxic compounds, etc.) and to detoxify contaminants more efficiently than microorganisms grown under planktonic conditions. Pseudomonas alcaliphila 34 is a chromate hyper-resistant, biofilm-producing bacterium, previously characterized in terms of hundreds of biochemical attributes and its chromate-reducing capability in the presence of different carbon/energy sources, and proposed for bioremediation processes. In this work Pseudomonas alcaliphila 34 was investigated in order to make a significant contribution to the development of an efficient biological system for the remediation of chromate-contaminated soils. The general aim of the project was reached through the achievement of the following intermediate objectives: i. The development of a procedure, combining the Calgary Biofilm Device (MBEC device) and Phenotype MicroArray (PM) technology, for a wide-scale analysis of the susceptibility of biofilms and planktonic cultures to toxic chemicals. ii. The sequencing, assembly, and annotation of the P. alcaliphila 34 genome. iii. The investigation, by transcriptome analysis, of the molecular bases implicated in chromate resistance and biofilm development in the P. alcaliphila 34 strain. The development of an integrated system combining two high-throughput technologies, the MBEC device and PM, made it possible to obtain an extensive characterization of the susceptibility of P. alcaliphila 34 biofilms and planktonic cultures to toxic compounds. The common assumption that biofilms are more tolerant than planktonic cells was refuted: stationary-phase cultures were often more tolerant than biofilms in the presence of the majority of the chemicals used. Therefore, any conclusion regarding the resistance of biofilms and planktonic cultures should take into account the growth phase of the planktonic cultures and the nature of the chemicals. Annotation of the P. alcaliphila 34 genome allowed the identification of 4,983 protein-coding sequences and 61 tRNAs. Genome analysis indicated that P. alcaliphila 34 possesses a putative chrBACF operon that is involved in the bacterium's high chromate resistance. Mercury and arsenic resistance operons and many genes encoding putative multidrug-resistance efflux systems were also identified in the genome. Temporal genomic profiling of P. alcaliphila 34 planktonic cultures exposed to acute chromate stress highlighted the overexpression of genes involved in sulfur metabolism, as well as the activation of the oxidative stress response system and of mechanisms related to DNA repair. The chromate shock response of P. alcaliphila 34 was also characterized by enhanced expression of genetic pathways related to iron acquisition and metabolism, and by down-regulation of copper metabolism, suggesting a correlation between chromate exposure and the pathways of these two metals. The analysis of differentially expressed genes related to the early developing P. alcaliphila 34 biofilm revealed the induction of the pathway involved in flagellar motility and the down-regulation of a type IV pilus metabolic system. Interestingly, the transcriptomic analysis of the P. alcaliphila 34 response to an acute chromate challenge revealed pathways involved in biofilm formation, indicating that biofilm formation may be a survival strategy for dealing with chromate. In conclusion, the data obtained have shown that: • the MBEC device/PM approach is a reliable, repeatable, accurate, and quick method for evaluating the effect of toxic chemicals (i.e., antibiotics, biocides, heavy metals) on the metabolic activity of microbial biofilms; • the acquired know-how indicated that, to plan bioremediation processes, it is necessary to determine at what stage of growth the microorganism shows the best fitness in the presence of a given pollutant; • the transcriptomic analysis provided insights into the molecular mechanisms related to biofilm development and chromate resistance. Furthermore, the information obtained may be used to design a biological system for the bioremediation of chromate-contaminated soils using biofilms of the highly Cr(VI)-resistant P. alcaliphila 34 strain.
APA, Harvard, Vancouver, ISO, and other styles
22

Niranjan, S. C. "Sensitivity analysis of a mechanistic growth model of Escherichia coli." Thesis, 1991. http://hdl.handle.net/1911/16521.

Full text
Abstract:
A detailed mechanistic model, with a large number of metabolites and parameters, describing the growth of a single cell of the bacterium Escherichia coli is simulated, and the resulting predictions are compared with available reported experimental observations. The effects of small time-invariant perturbations in each of the parameters on final model predictions of metabolite concentrations and doubling times are systematically investigated using the theory of sensitivity analysis. This is quantified by defining (linear) sensitivity coefficients Λ, which are subsequently used to identify sensitive parameters in the model description. Growth rate effects due to parametric perturbations are studied by comparing the total phase shift obtained with that for a nominal system description. The prevailing influence of kinetic and polymerization rate constants, compared to that of saturation constants, is established. The results obtained compare successfully with available experimental data. Implications for Metabolic Control Analysis are also discussed.
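The normalized sensitivity coefficients described above can be sketched numerically: Λ_ij ≈ (p_j / y_i) · ∂y_i/∂p_j, estimated here by finite differences on an arbitrary toy function (not the E. coli model; all parameter names and values are invented).

```python
import numpy as np

# Finite-difference sketch of (linear) normalized sensitivity coefficients.
# The "model" below is a toy stand-in, not the mechanistic E. coli model.

def model(p):
    # Hypothetical outputs (e.g. a metabolite level and a doubling time) from parameters p
    k1, k2, Ks = p
    y1 = k1 * k2 / (Ks + k2)        # toy saturation kinetics
    y2 = 1.0 / (k1 + 0.1 * Ks)      # toy doubling time
    return np.array([y1, y2])

def sensitivity_matrix(p, rel_step=1e-6):
    """Lambda[i, j] ~ (p_j / y_i) * dy_i/dp_j via one-sided finite differences."""
    p = np.asarray(p, dtype=float)
    y0 = model(p)
    Lam = np.zeros((y0.size, p.size))
    for j in range(p.size):
        dp = rel_step * p[j]
        pp = p.copy(); pp[j] += dp
        Lam[:, j] = (model(pp) - y0) / dp * (p[j] / y0)
    return Lam

Lam = sensitivity_matrix([2.0, 0.5, 0.3])
print(np.round(Lam, 3))
```

Because the coefficients are dimensionless, rows and columns can be compared directly, which is what makes ranking parameters by sensitivity meaningful; note that y1 is linear in k1, so Λ for that pair comes out as 1.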
APA, Harvard, Vancouver, ISO, and other styles
23

Rao, Sirigudi Rahul. "Biomass to ethanol process simulation, validation and sensitivity analysis of a gasifier and a bioreactor /." 2005. http://digital.library.okstate.edu/etd/umi-okstate-1527.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Pavlovich, James Gilbert. "Ion pairing of nucleotides with surfactants for enhanced sensitivity in liquid matrix assisted secondary ion mass spectrometry." Thesis, 1993. http://hdl.handle.net/1957/36760.

Full text
Abstract:
In particle-induced desorption-ionization mass spectrometry, the strength of an analyte's signal under a given set of bombardment conditions is usually considered to be representative of the analyte's relative surface activity. This rationale is generally used to explain differences in the technique's sensitivity between and within various classes of compounds. In liquid matrix assisted secondary ion mass spectrometry (SIMS), sensitivity enhancement of ionic analytes by pairing with surface-active counterions has been demonstrated by several groups. This technique has been utilized in this work to achieve a 10,000-fold enhancement in the signal for ATP on a double-focusing magnetic sector instrument and to detect femtomole quantities of nucleoside monophosphates on a time-of-flight instrument. The analyte's signal, however, depends on both the analyte's bulk concentration and that of the surfactant. Additionally, the surfactant concentration that produces the maximum analyte signal changes with the analyte concentration. In this study, this phenomenon has been modeled in terms of conventional solution equilibria and surface chemical principles. It is assumed that the initial surface composition and the bulk concentration are the boundary conditions of a steady state established by the competing processes of surface sputtering and surface replenishment from the bulk during analysis. Calculated surface excesses correlate well with observed relative ion intensities, suggesting that equilibrium conditions are approached in the sample matrices despite the outwardly dynamic nature of the sputtering processes.
Graduation date: 1994
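The competition described in the abstract can be caricatured with a Langmuir-type sketch: an analyte-surfactant ion pair and free surfactant compete for surface sites, so the analyte's surface coverage (and hence its SIMS signal) peaks at an intermediate surfactant concentration. This is not the thesis's equilibrium model; all equilibrium constants and concentrations below are invented.

```python
import numpy as np

def surface_fraction_pair(c_analyte, c_surf):
    """Toy competitive-Langmuir coverage of the analyte-surfactant ion pair."""
    K_pair, K_ads_pair, K_ads_surf = 50.0, 200.0, 30.0   # invented constants
    # Ion pairing saturates once most of the analyte is paired (toy mass action):
    c_pair = K_pair * c_analyte * c_surf / (1.0 + K_pair * c_surf)
    num = K_ads_pair * c_pair
    # Free surfactant competes with the ion pair for surface sites:
    return num / (1.0 + num + K_ads_surf * c_surf)

c_surf = np.logspace(-4, 0, 200)       # surfactant bulk concentration (M, toy)
theta = surface_fraction_pair(1e-3, c_surf)
best = c_surf[np.argmax(theta)]
print(f"analyte surface coverage peaks near {best:.3g} M surfactant")
```

The interior maximum mirrors the observation that the optimum surfactant concentration shifts with analyte concentration: changing `c_analyte` moves the peak.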
APA, Harvard, Vancouver, ISO, and other styles
25

Gower, Stephanie Karen. "A Computer-Based Decision Tool for Prioritizing the Reduction of Airborne Chemical Emissions from Canadian Oil Refineries Using Estimated Health Impacts." Thesis, 2007. http://hdl.handle.net/10012/2758.

Full text
Abstract:
Petroleum refineries emit a variety of airborne substances which may be harmful to human health. HEIDI II (Health Effects Indicators Decision Index II) is a computer-based decision analysis tool which assesses airborne emissions from Canada's oil refineries for reduction, based on an ordinal ranking of estimated health impacts. The model was designed by a project team within NERAM (Network for Environmental Risk Assessment and Management) and assembled with significant stakeholder consultation. HEIDI II is publicly available as a deterministic Excel-based tool which ranks 31 air pollutants based on predicted disease incidence or estimated DALYs (disability-adjusted life years). The model includes calculations to account for average annual emissions, ambient concentrations, stack height, meteorology/dispersion, photodegradation, and the population distribution around each refinery. Different formulations of continuous dose-response functions were applied to nonthreshold-acting air toxics, threshold-acting air toxics, and nonthreshold-acting CACs (criteria air contaminants). An updated probabilistic version of HEIDI II was developed in Matlab code to account for parameter uncertainty and identify key leverage variables. Sensitivity analyses indicate that parameter uncertainty in the model variables for annual emissions and for concentration-response/toxicological slopes has the greatest leverage on predicted health impacts. Scenario analyses suggest that the geographic distribution of population density around a refinery site is an important predictor of total health impact. Several ranking metrics (predicted case incidence, simple DALY, and complex DALY) and ordinal ranking approaches (deterministic model, average from Monte Carlo simulation, test of stochastic dominance) were used to identify priority substances for reduction; the results were similar in each case. 
The predicted impacts of primary and secondary particulate matter (PM) consistently outweighed those of the air toxics. Nickel, PAH (polycyclic aromatic hydrocarbons), BTEX (benzene, toluene, ethylbenzene and xylene), sulphuric acid, and vanadium were consistently identified as priority air toxics at refineries where their emissions were reported. For many substances, the difference in rank order is indeterminate when parametric uncertainty and variability are considered.
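The probabilistic ranking approach the abstract describes (sample uncertain parameters such as annual emissions and dose-response slopes, propagate them through an impact calculation, and rank substances by the Monte Carlo average) can be sketched roughly as follows. This is an illustrative toy, not the HEIDI II model: the substance names, parameter values, dispersion factor, and lognormal uncertainty assumptions are all invented for the example.

```python
import random

def impact(emissions_tpy, slope, population, dispersion=1e-6):
    """Toy impact score: emissions x dispersion x dose-response slope x exposed population."""
    return emissions_tpy * dispersion * slope * population

def monte_carlo_rank(substances, n_draws=5000, seed=42):
    """Rank substances by mean impact over Monte Carlo draws of uncertain parameters."""
    rng = random.Random(seed)
    totals = {name: 0.0 for name in substances}
    for _ in range(n_draws):
        for name, p in substances.items():
            # Multiplicative (lognormal) uncertainty on emissions and slope -- illustrative only
            e = p["emissions"] * rng.lognormvariate(0.0, p["emissions_gsd"])
            s = p["slope"] * rng.lognormvariate(0.0, p["slope_gsd"])
            totals[name] += impact(e, s, p["population"])
    means = {name: t / n_draws for name, t in totals.items()}
    return sorted(means, key=means.get, reverse=True)

# Hypothetical inputs: high-volume/low-potency vs. low-volume/high-potency substances
substances = {
    "benzene": {"emissions": 120.0, "emissions_gsd": 0.5,
                "slope": 2.0e-5, "slope_gsd": 0.8, "population": 300_000},
    "nickel":  {"emissions": 4.0, "emissions_gsd": 0.5,
                "slope": 9.0e-4, "slope_gsd": 0.8, "population": 300_000},
    "toluene": {"emissions": 200.0, "emissions_gsd": 0.5,
                "slope": 1.0e-6, "slope_gsd": 0.8, "population": 300_000},
}

ranking = monte_carlo_rank(substances)
print(ranking)
```

Comparing this stochastic ranking against the deterministic one (point estimates only) is one way to see whether parameter uncertainty changes the priority order, which is the kind of indeterminacy the abstract reports for many substances.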
APA, Harvard, Vancouver, ISO, and other styles
26

McCanna, David. "Development of Sensitive In Vitro Assays to Assess the Ocular Toxicity Potential of Chemicals and Ophthalmic Products." Thesis, 2009. http://hdl.handle.net/10012/4338.

Full text
Abstract:
The utilization of in vitro tests within a tiered testing strategy for detection of mild ocular irritants can reduce the use of animals for testing, provide mechanistic data on toxic effects, and reduce the uncertainty associated with dose selection for clinical trials. The first section of this thesis describes how in vitro methods can be used to improve the prediction of the toxicity of chemicals and ophthalmic products. The proper utilization of in vitro methods can accurately predict toxic threshold levels and reduce animal use in product development. Sections two, three and four describe the development of new sensitive in vitro methods for predicting ocular toxicity. Maintaining the barrier function of the cornea is critical for preventing the penetration of infectious microorganisms and irritating chemicals into the eye. Chapter 2 describes the development of a method for assessing the effects of chemicals on tight junctions using human corneal epithelial and canine kidney epithelial cell lines. Chapter 3 describes a method that uses a primary organ culture to assess the toxic effects of single and multiple instillations. The ScanTox system was shown to be an ideal system for monitoring toxic effects over time, as multiple readings can be taken of treated bovine lenses using the nondestructive method of assessing lens optical quality. Toxic effects were confirmed using the viability dye alamarBlue. Chapter 4 describes the development of sensitive in vitro assays for detecting ocular toxicity by measuring the effects of chemicals on the mitochondrial integrity of bovine cornea, bovine lens epithelium and corneal epithelial cells, using fluorescent dyes. The goal of this research was to develop an in vitro test battery that can be used to accurately predict the ocular toxicity of new chemicals and ophthalmic formulations.
By comparing the toxicity seen in vivo in animals and humans with the toxicity response in these new in vitro methods, it was demonstrated that these in vitro methods can be utilized in a tiered testing strategy in the development of new chemicals and ophthalmic formulations.
APA, Harvard, Vancouver, ISO, and other styles
