
Dissertations / Theses on the topic 'Modeling of electronic processes'



Consult the top 50 dissertations / theses for your research on the topic 'Modeling of electronic processes.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Gagliardi, Alessio. "Theoretical modeling and simulation of electron-phonon scattering processes in molecular electronic devices." [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=98556282X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Qian, Zhiguang. "Computer experiments: design, modeling and integration." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11480.

Full text
Abstract:
The use of computer modeling is fast increasing in almost every scientific, engineering and business arena. This dissertation investigates some challenging issues in design, modeling and analysis of computer experiments, and consists of four major parts. In the first part, a new approach is developed to combine data from approximate and detailed simulations to build a surrogate model based on stochastic models. In the second part, we propose Bayesian hierarchical Gaussian process models to integrate data from different types of experiments. The third part concerns the development of latent variable models for computer experiments with multivariate response, with application to data center temperature modeling. The last chapter is devoted to the development of nested space-filling designs for multiple experiments with different levels of accuracy.
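The first part's strategy of fusing approximate and detailed simulation runs can be sketched as a simple two-fidelity surrogate in which the detailed response is approximated as a scaled cheap response plus a linear discrepancy term. The simulator functions and the linear form of the correction below are illustrative assumptions, not the dissertation's actual stochastic models:

```python
import numpy as np

# Hypothetical cheap (approximate) and expensive (detailed) simulators.
def approx_sim(x):
    return np.sin(x)                      # coarse physics

def detailed_sim(x):
    return 1.2 * np.sin(x) + 0.3 * x      # refined physics

x_cheap = np.linspace(0.0, 3.0, 50)       # many cheap runs
x_exp = np.linspace(0.0, 3.0, 6)          # few expensive runs

# Fit the scale rho and a linear trend for the discrepancy delta(x)
# from the handful of points where both fidelities were evaluated.
A = np.column_stack([approx_sim(x_exp), np.ones_like(x_exp), x_exp])
coef, *_ = np.linalg.lstsq(A, detailed_sim(x_exp), rcond=None)
rho, d0, d1 = coef

def surrogate(x):
    # Cheap model corrected toward the detailed one.
    return rho * approx_sim(x) + d0 + d1 * x

err = np.max(np.abs(surrogate(x_cheap) - detailed_sim(x_cheap)))
print(f"max surrogate error: {err:.2e}")
```

Because the toy discrepancy is exactly linear, the least-squares fit recovers it; in practice the discrepancy would be modeled stochastically, e.g. with a Gaussian process.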
3

Ganesan, Admanathan. "Modeling of distributed layouts for dynamic period cases." Thesis, Wichita State University, 2006. http://hdl.handle.net/10057/1482.

Full text
Abstract:
In this work, a systematic methodology to construct distributed layouts has been developed. Previous research in this field suggests distributed layouts as an alternative to process layouts, but so far there has been no systematic methodology for developing them. Earlier works concentrate on evaluating different production schedules for randomly distributed resources throughout the plant floor. In contrast to former approaches, in this work distributed layouts are developed based on actual production and routing data. Taking into account the exact capacity requirements of machines, a methodical approach to distributing resources, rather than random assignment, is considered. The need for developing process layouts is analyzed and justified using product similarity and cell utilization. Process layouts are developed only when the given production data meets process layout requirements. The efficiency of the proposed approach relative to the traditional process layout strategy has been evaluated in terms of material handling cost for both single- and multi-period settings. In a multi-period setting, the impact of demand disturbances on both process and distributed layouts is studied using case studies. From the case study results, it is concluded that distributed layouts constructed using the proposed approach perform considerably better than the traditional process layout approach. Distributed layouts proved to be efficient and robust for both single- and multi-period cases. Even large fluctuations in the demand level of products had little impact on distributed layouts, whereas process layouts suffered tremendous losses in terms of material handling cost. For all cases, the improvement in efficiency of distributed layouts over process layouts was more than 30%.
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Industrial and Manufacturing Engineering.
4

Hontz, Eric Richard. "Electronic processes in organic optoelectronics : insights gained through modeling and magnetic field effects." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98794.

Full text
Abstract:
Thesis: Ph. D. in Physical Chemistry, Massachusetts Institute of Technology, Department of Chemistry, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 185-232).
Organic photovoltaics (OPVs) and organic light-emitting diodes (OLEDs) are organic optoelectronics offering a number of unique benefits that may play an important role in the future of clean energy generation and efficient energy consumption. In this thesis, we explore key electronic processes in OPVs and OLEDs, with a major focus on quantum-mechanical kinetic modeling of magnetic field effects (MFEs) that probe underlying subprocesses. Certain organics are capable of dividing excited states in a process termed singlet fission, which can increase the maximum theoretical efficiency of an OPV by nearly one-third. The MFEs on photocurrent measurements from our collaborators are combined with theoretical models to determine optimal device architectures for singlet fission OPVs, allowing us to exceed the conventional limit of one electron per photon. We also use MFEs to determine the spin of the charge transfer states most efficient at generating photocurrent and gain microscopic insight into the mechanism of their diffusion, offering new design principles for the engineering of donor-acceptor interfaces in OPVs. Thermally activated delayed fluorescence (TADF) is becoming an increasingly important OLED technology that extracts light from non-emissive triplet states via reverse intersystem crossing (RISC) to the bright singlet state. We use MFEs to reveal a rather surprising finding: in TADF materials composed of donor-acceptor blends, the electron-hole distance fluctuates as a function of time, resulting in spontaneous cycling between states that are advantageous to fluorescence at one moment and advantageous to RISC at another. Combined with additional topics in the fields of metal-organic frameworks and reaction pathfinding methods, the work in this thesis provides insight into how to achieve optimal performance in OPV and OLED devices, which may serve an important role in the future of our energy landscape.
by Eric Richard Hontz.
Ph. D. in Physical Chemistry
5

Cho, Hyun Cheol. "Dynamic Bayesian networks for online stochastic modeling." PhD diss., University of Nevada, Reno, 2006. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3221394.

Full text
6

Tóth, G. (Géza). "Computer modeling supported fabrication processes for electronics applications." Doctoral thesis, University of Oulu, 2007. http://urn.fi/urn:isbn:9789514284717.

Full text
Abstract:
The main objective of this thesis is to study unique cases for computer-assisted finite element modeling (FEM) of thermal, mechanical and thermo-mechanical problems related to silicon and carbon. Computational modeling contributed to solving scientific problems either by validating experimental results obtained earlier or by predicting the behavior of a particular system. In the model generation phase, emphasis is placed on simplifying a physical problem without losing validity or important details. As a consequence of the reasonably reduced number of variables and degrees of freedom of the elements in our models, the simulations could be performed using a commercial FEM software package, ANSYS®. To test the capabilities of the method, (i) a steady-state finite element thermal analysis has been accomplished and verified by experiments for the case of laser-assisted heating of different materials. (ii) Mechanisms (Dember and Seebeck effects) responsible for the reduction of gold ions and deposition of metallic gold on p-type semiconductors from liquid precursors have been investigated by computing the surface temperature profiles of silicon wafers exposed to laser irradiation. (iii) The temperature field in a multi-component system caused by laser illumination was modeled to determine the heat affected zone in the case of laser soldering of flip-chips on transparent printed circuit board assemblies. (iv) The origin of the experimentally observed residual strain in thermally oxidized porous silicon structures was revealed by computing the strain fields in silicon-silicon oxide porous materials, considering both intrinsic and thermal stress components. (v) Finally, we demonstrated that Joule heat generated on a silicon chip can be removed efficiently using micro-fin structures made from aligned carbon nanotubes.
Computational fluid dynamics and thermal-electric finite element models were developed to study the steady-state laminar coolant flow and also the temperature distribution for the chips. The presented novel results have potential in silicon and carbon nanotube based technologies, including deeper understanding of the processes and problems in manufacturing electronic devices.
7

Shantaram, Sandeep. "Explicit finite element modeling in conjunction with digital image correlation based life prediction of lead-free electronics under shock-impact." Auburn, Ala., 2009. http://hdl.handle.net/10415/1894.

Full text
8

Egorova, Dassia. "Modeling of ultrafast electron transfer processes: multi-level Redfield theory and beyond." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=967134420.

Full text
9

Wang, Hong. "Numerical modelling of the tilt casting processes of titanium aluminides." Thesis, University of Greenwich, 2008. http://gala.gre.ac.uk/6336/.

Full text
Abstract:
This research has investigated the modelling and optimisation of the tilt casting process for Titanium Aluminides (TiAl). The study was carried out in parallel with experimental research undertaken at the IRC at the University of Birmingham, which proposes to use tilt casting inside a vacuum chamber and to combine the tilt casting process with Induction Skull Melting (ISM). A novel investment casting process is thus being developed that is suitable for casting gamma TiAl. Gamma TiAl alloys have attractive properties, including low density, high specific yield strength, high specific stiffness, good oxidation resistance and good creep resistance at high temperature [Clemens 2000][Appel et al. 2000]. A worldwide research effort has been made to develop gamma TiAl because it offers great potential for reducing the weight of high-performance components, including power-generation gas turbine engines. Titanium alloys are very reactive in the molten state, so they are melted in an ISM crucible to avoid crucible contamination. Producing a long blade, up to 40 cm, remains a major challenge due to the low superheat provided by the ISM furnace widely used to melt the alloys. Here, computational simulation is important to predict casting defects and to help optimise the experimental process. Computational modelling of the casting process involves a range of interacting physical phenomena such as heat transfer, free surface fluid flow and solidification. A number of free surface modelling techniques are applied to simulate the interface between the molten metal entering the mould in the filling phase and the gas escaping. The CFD code PHYSICA, developed at the University of Greenwich, is used to simulate these physical phenomena and the fluid flow both within the rotating mould cavity/crucible assembly and in the porous mould wall (including vents).
Modelling the mould with a finite volume method is cumbersome, so an alternative 3D/1D coupled transient heat transfer model has been developed in this study. It is based on the fact that mould filling for titanium aluminide (TiAl) takes only a few seconds and the thermal conductivity of the mould material is very low, so heat can be assumed to transfer mainly in the direction perpendicular to the mould wall (1D). The 1D transient heat transfer model is governed by the 1D heat conduction equation in the mould part, where the coordinates of each defined cell centre are calculated rather than meshed. The coupling method between the 1D and 3D models is presented. The model is then validated using two simple geometries, which describe two similar states in mould filling, as test cases. It has been applied to model short thin and long blades, especially to obtain accurate thermal boundaries, and comparisons with experiments have been made. In presenting the results, the factors affecting the quality of the casting during mould filling are discussed. This thesis also presents a novel Counter Diffusion Method, developed with suggestions from my supervisors as a corrective mechanism to counter numerical diffusion. This is a novel method to discretise the free surface equation fully implicitly in a fast, efficient way without numerical diffusion. The method was validated against the classical collapsing column experiment, and the results are in good agreement. The method has then been used to model a long thin blade for TiAl. A huge reduction in computational time is seen when the geometry is complex and a massive number of mesh cells is generated, which greatly speeds up the simulations. Solidification is modelled during the cooling that follows the filling stage.
Gap formation between metal and mould is covered, and the effects of the gap and gap size are presented by applying the model to a long twisted turbine blade.
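The 1D mould-wall sub-model described in this abstract is governed by the standard 1D heat conduction equation, ∂T/∂t = α ∂²T/∂x². A minimal explicit finite-difference sketch of such a sub-model follows; the material properties, wall thickness and boundary conditions are assumed values for illustration, not the thesis's data:

```python
import numpy as np

alpha = 5e-7      # thermal diffusivity of a low-conductivity mould, m^2/s (assumed)
L = 0.01          # mould wall thickness, m (assumed)
n = 51
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha          # satisfies the explicit stability limit dt <= dx^2/(2*alpha)

T = np.full(n, 300.0)             # initial mould temperature, K
T_metal = 1800.0                  # molten TiAl side, held fixed during the short fill (assumed)

t = 0.0
while t < 5.0:                    # mould filling lasts only a few seconds
    T[0] = T_metal                # metal/mould interface
    T[-1] = T[-2]                 # insulated outer face
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    t += dt

print(f"temperature 2 mm into the wall after 5 s: {T[10]:.0f} K")
```

With these values the thermal penetration depth after 5 s is under 2 mm, which is why the thesis can treat conduction through the wall as one-dimensional during filling.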
10

Hwang, Jung Yoon. "Spatial stochastic processes for yield and reliability management with applications to nano electronics." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1500.

Full text
Abstract:
This study uses the spatial features of defects on wafers to examine the detection and control of process variation in semiconductor fabrication. It applies spatial stochastic processes to semiconductor yield modeling and to an extrinsic reliability estimation model. New yield models of integrated circuits based on the spatial point process are established. The defect density, which varies according to location on the wafer, is modeled by a spatial nonhomogeneous Poisson process. In order to capture the variations in defect patterns between wafers, a random coefficient model and model-based clustering are applied. Model-based clustering is also applied to fabrication process control for detecting defect clusters that are generated by assignable causes. An extrinsic reliability model using defect data and a statistical defect growth model are developed based on the new yield model.
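Under a Poisson defect model of the kind used in such yield studies, the probability that a chip contains no fatal defect is exp(-λ·A), so a location-dependent intensity λ(x, y) directly produces a location-dependent yield. A toy sketch follows; the radial intensity function is an assumption for illustration, not the dissertation's fitted model:

```python
import numpy as np

# Assumed defect intensity (defects per cm^2) that grows toward the wafer edge,
# a pattern a spatial nonhomogeneous Poisson process is meant to capture.
def intensity(x, y):
    r = np.hypot(x, y)            # distance from wafer centre, cm
    return 0.1 + 0.05 * r

chip_area = 1.0                   # cm^2

# Expected per-chip yield at a few radial positions: P(no defect) = exp(-lambda * A).
for r in (0.0, 5.0, 9.0):
    lam = intensity(r, 0.0)
    y = np.exp(-lam * chip_area)
    print(f"r = {r:4.1f} cm  ->  expected yield {y:.3f}")
```

The monotone drop in yield toward the edge is exactly the kind of spatial signature that distinguishes a nonhomogeneous model from the classical uniform-density yield formula.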
11

Brauner, J. Steven. "Impacts of Sequential Microbial Electron Accepting Processes on Natural Attenuation of Selected Petroleum Hydrocarbons in the Subsurface Environment." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/26339.

Full text
Abstract:
Regulatory acceptance of monitored natural attenuation (MNA) requires demonstration that natural processes, such as sorption and biodegradation, attenuate specific contaminants of concern on a time scale that is comparable to other remediation options while concurrently preventing contaminant migration to site-specific points of contact. Two of the tools used to demonstrate the efficacy of MNA, microcosm experiments and numerical fate and transport modeling, were examined in this study. In the first phase of this work, laboratory microcosm studies were initiated as part of an overall MNA site assessment to determine whether a native microbial consortium collected with a soil sample from a petroleum-hydrocarbon-contaminated site was capable of biodegrading specific polynuclear aromatic hydrocarbon (PAH) compounds. Results indicated that selected PAH compounds were biodegraded under simulated natural conditions using oxygen and sulfate as electron acceptors. In the second phase of this study, a numerical experiment was conducted using the three-dimensional, multiple substrate, multiple electron acceptor fate and transport model SEAM3D (Waddill and Widdowson, 1997) to evaluate the impact of including iron(III)-reducing conditions during numerical simulations of natural attenuation. Results for this phase of the study indicated that the mass of hydrocarbon simulated as biodegraded by the iron(III)-reducing population was significantly larger than that biodegraded under aerobic conditions. The final component of research used the SEAM3D model to interpret field observations recorded during a natural attenuation experiment where the fate and transport of selected hydrocarbon contaminants (BTEX and naphthalene) were tracked through an extremely heterogeneous, but well-instrumented test aquifer.
Results from the calibrated model for the NATS experiment indicated that the majority of the contaminant remained in the non-aqueous phase during the first year of the experiment, and that aerobic biodegradation was the dominant natural attenuation process. Model results were particularly sensitive to the rate of contaminant release and the starting mass of electron acceptor.
Ph. D.
12

Othman, Hisham H. A. "A novel reduced-complexity approach to hidden Markov modeling of two-dimensional processes with application to face recognition." Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6162.

Full text
Abstract:
The 2-D Hidden Markov Model (HMM) is an extension of the traditional 1-D HMM, which has shown distinctive efficiency in modeling 1-D signals. Unlike 1-D HMMs, 2-D HMMs are known for their prohibitively high complexity. This encouraged many researchers to work on alternatives such as the Pseudo 2-D HMM and the Embedded HMM for 2-D recognition applications to avoid the complexity problem. Those applications include, but are not limited to, face recognition, optical character recognition, face detection, image retrieval, and object recognition. The hidden-layer complexity of a typical second-order 2-D HMM is normally on the order of O(N^3), where "hidden layer" refers to the computation of the state transition probabilities and N is the number of states in the model. In this thesis, a low-complexity, high-performance 2-D Hidden Markov Model (HMM) is proposed and applied to the problem of face recognition. The proposed model is a true 2-D HMM. The complexity of the hidden layer is brought down to the order of O(2N^2) using a basic assumption of conditional independence between vertical and horizontal state transitions. This assumption allows replacing the 3-D state transition matrix with two 2-D transition matrices. HMM complexity is always addressed in the literature from the hidden-layer perspective, yet the complexity of the observation layer is not trivial. The mixtures of the proposed model are tied for lower observation-layer complexity. The performance and the complexity of the proposed model with tied mixtures are investigated as applied to the problem of face recognition. The proposed face recognition system achieves recognition rates up to 100% on the AT&T facial database with complexity that is comparable to that of a 1-D HMM.
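The complexity reduction described in this abstract can be illustrated directly: under conditional independence of vertical and horizontal transitions, the N×N×N transition tensor is replaced by two N×N matrices whose normalized product gives the joint transition distribution. The random matrices below are placeholders, not a trained face-recognition model:

```python
import numpy as np

N = 8                                  # number of hidden states
rng = np.random.default_rng(1)

def stochastic(shape):
    # Random matrix with rows normalized to valid probability distributions.
    m = rng.random(shape)
    return m / m.sum(axis=-1, keepdims=True)

P_v = stochastic((N, N))               # P(state | state above)
P_h = stochastic((N, N))               # P(state | state to the left)

def joint_transition(up, left):
    # Conditional independence: combine vertical and horizontal factors,
    # then renormalize so the result is a valid distribution over states.
    p = P_v[up] * P_h[left]
    return p / p.sum()

# Storage drops from O(N^3) for the full 3-D tensor to O(2N^2).
print("full tensor entries:", N**3, " factorized entries:", 2 * N**2)
print("row sums to 1:", np.isclose(joint_transition(0, 3).sum(), 1.0))
```

For N = 8 this is 512 versus 128 stored transition parameters, and the gap widens cubically versus quadratically as N grows.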
13

Pei, Yuqing. "Computer simulation of fundamental processes in high voltage circuit breakers based on an automated modelling platform." Thesis, University of Liverpool, 2014. http://livrepository.liverpool.ac.uk/2007107/.

Full text
Abstract:
Auto-expansion circuit breakers utilize the arc's energy to generate the flow conditions required for current interruption. The operation of this type of circuit breaker is extremely complex, and its interruption capability depends on the whole arcing history as well as a number of geometric factors. On the other hand, circuit breaker development based on testing is extremely expensive and time consuming. The accumulated understanding of the underlying physical processes now enables arc models to be used as a tool for the optimum design of switchgear products such as high voltage circuit breakers. For academic research, there is often a need to study the performance of a newly developed arc model by inspecting the distribution of relevant physical quantities during a simulation and their sensitivity to model parameters in an efficient and convenient way. However, the effective use of computer simulation by design engineers has been hindered by the complexity encountered in model implementation. This thesis presents the development and structure of an automated simulation tool, the Integrated Simulation and Evaluation Environment (ISEE), for the arcing process in gas-blast circuit breakers. The functionalities of ISEE are identified and developed based on experience in real product design, and include visual creation and definition of components, automatic setup of arc models based on a commercial CFD software package as the equation solver, simulation task management, and visualization of computational results in “real-time” mode. This is the first automated simulation platform in the community of switching arc simulation. Using ISEE as the simulation tool, different designs of auto-expansion circuit breakers have been investigated to reveal the fundamental characteristics of the arcing process under different test duties.
Before attempting to investigate the capability of an auto-expansion circuit breaker, the fundamental issue of determining the turbulence parameter of the Prandtl mixing length model is addressed. Previous studies of turbulent arcs were mostly concerned with simple converging-diverging nozzles; there has been little work on real circuit breaker nozzles. In order to calibrate the range of the turbulence parameter, real arcing conditions, including interrupting currents, contact travels, and transient recovery voltages of two commercial circuit breakers with rated voltages of 145 kV and 245 kV, have been used together with the geometry of the circuit breakers. The effect of nozzle ablation has been considered. Altogether, six cases have been used for three circuit breakers, with each pair of cases corresponding to a success and a failure in the thermal recovery process. It has been found that a single parameter of 0.35 is applicable to all three circuit breakers with an auxiliary nozzle and a main nozzle of converge-flat throat-diverge shape. It must be noted that this value is obtained with the definition of thermal radius introduced in Chapter 3 and the assumption that the parameter changes linearly with the interrupting current from 0.05 at 15 kA to 0.35 at current zero. Using the calibrated turbulence model, a computational study of the thermal interruption performance of a 145 kV, 60 Hz auto-expansion circuit breaker with different arc durations has been carried out in Chapter 4. The relation between pressure peak and current peak in the auto-expansion circuit breaker is discussed. It has been found that a larger average mass flux in the main nozzle indicates a better interruption environment, enabling the circuit breaker to withstand a larger rate of rise of recovery voltage after current zero.
Another important finding is that the auxiliary nozzle plays an important role in an auto-expansion circuit breaker both in the high current phase and during the current zero period. Therefore, the proper design and use of an auxiliary nozzle is a key factor in enhancing the thermal interruption capability of high voltage auto-expansion circuit breakers. In Chapter 5 of the thesis, the transient pressure variation in auto-expansion circuit breakers is studied. The pressure variation has an extremely complex pattern, and the pressure changes in different ways depending on the location in the arcing chamber. It is shown, for the first time, that the time lag between the current peak and the pressure peak in the expansion volume can be explained by an energy flow rate balance method: flow reversal occurs when the enthalpy exhaustion rate from the contact space equals the electrical power input. Following the flow reversal, a high enthalpy flow rate from the expansion volume into the contact gap first occurs for a short while (1 ms), followed by a high mass flow rate of relatively cool gas at less than 2000 K. This high mass flow rate causes a surplus in mass flow rate into the contact gap and results in the last temporary pressure peak in the contact space before the pressure and flow field finally settle down for arc quenching at current zero. The pressure change under different conditions, i.e. different arc durations, different current levels and different lengths of the heating channel, has also been studied in detail. In summary, the present research leads to original findings in three aspects of the operation of auto-expansion circuit breakers: the calibration of the turbulence parameter for the Prandtl mixing length model, interruption performance with different arc durations, and the transient pressure variation in the arcing process.
The results are expected to provide useful information for the optimum design of auto-expansion circuit breakers.
14

Senalp, Erdem Turker. "Cascade Modeling Of Nonlinear Systems." PhD thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/3/12608578/index.pdf.

Full text
Abstract:
Modeling of nonlinear systems based on special Hammerstein forms has been considered. In Hammerstein system modeling, a static nonlinearity is connected to a dynamic linearity in cascade form. The fundamental contributions of this work are: 1) the introduction of Bezier curve nonlinearity representations; 2) the introduction of B-Spline curve nonlinearity representations instead of polynomials in cascade modeling. As a result, local control in nonlinear system modeling is achieved, so unexpected variations of the output can be modeled more closely. As an important demonstration case, a model is developed and named the Middle East Technical University Neural Networks and Cascade Model (METU-NN-C). Application examples are chosen by considering Near-Earth space processes, which are important for navigation, telecommunication and many other technical applications. It is demonstrated that the models developed based on the contributions of this work are especially more accurate under disturbed conditions, which are quantified by considering Space Weather parameters. Examples include forecasting and mapping of Total Electron Content (TEC); estimation of the joint angle of a simple forced pendulum; estimation of the joint angles of a spring-loaded inverted double pendulum with a forced table; identification of a Van der Pol oscillator; and identification of speakers. The operational performance results of the International Reference Ionosphere (IRI-2001), METU Neural Networks (METU-NN) and METU-NN-C models are compared qualitatively and quantitatively. As a numerical example, in forecasting TEC using the METU-NN-C with Bezier curves in the nonlinearity representation, the average absolute error is 1.11 TECu. The new cascade models are shown to be promising for system designers and operators.
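The Hammerstein cascade described in this abstract, a static curve nonlinearity followed by a linear dynamic block, can be sketched as follows. The cubic Bezier control points and the FIR coefficients are illustrative assumptions, not the METU-NN-C model's parameters:

```python
import numpy as np

# Static nonlinearity: a cubic Bezier curve y(u) evaluated by de Casteljau
# recursion, giving the local control attributed to Bezier representations.
# Control points are illustrative assumptions.
ctrl = np.array([0.0, 0.8, -0.2, 1.0])

def bezier(u):
    pts = ctrl.copy()
    for _ in range(len(pts) - 1):
        pts = (1 - u) * pts[:-1] + u * pts[1:]
    return pts[0]

# Dynamic linearity: a simple FIR filter following the nonlinearity in cascade.
fir = np.array([0.5, 0.3, 0.2])

def hammerstein(u_seq):
    v = np.array([bezier(u) for u in u_seq])   # static stage
    return np.convolve(v, fir)[: len(v)]       # linear dynamic stage

u = np.linspace(0.0, 1.0, 10)
print(hammerstein(u))
```

Moving a single control point reshapes the nonlinearity only locally, which is the practical advantage of Bezier and B-spline representations over global polynomials in this cascade.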
15

Lade, Nancy. "Sequential Electron Acceptor Model of Intrinsic Bioremediation at a BTEX Contaminated LUST Site in Laurel Bay, South Carolina." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/35009.

Full text
Abstract:
Contaminant transport modeling is being used more often at petroleum-hydrocarbon-contaminated sites to aid engineers in evaluating the feasibility of natural attenuation as a remediation alternative in groundwater systems. In this research, a three-dimensional sequential electron acceptor computer model, SEAM3D, developed by Waddill and Widdowson (1997), was used to simulate contaminant transport at a leaking underground storage tank site in Beaufort, South Carolina. Gasoline containing benzene, toluene, ethylbenzene, and xylene (BTEX) as well as methyl tertiary butyl ether (MTBE) leaked into the subsurface at the site late in 1990, and monitoring of the water table elevations and contaminant concentrations began in 1993. Using the field data, a groundwater flow model for the Laurel Bay site was developed and calibrated with MODFLOW in GMS (Groundwater Modeling System) v2.1. MODFLOW was coupled with the SEAM3D contaminant transport model, and the available concentration levels were used to calibrate, verify, and validate the site model. The results indicated that SEAM3D simulated complex, interconnected processes, including biodegradation and the transport of multiple hydrocarbon compounds, electron acceptors, and end products, over time and space at a specific petroleum-hydrocarbon-contaminated site. Once the model was calibrated and verified, the model output was used to study the changes in contaminant mass distribution, contaminant mass loss, and mass loss rates for each terminal electron accepting process (TEAP) over time. It was found that the natural attenuation capacity of the aquifer was insufficient to stabilize the plume and prevent it from reaching the defined point of contact (POC). Contamination was shown to have reached the POC by 1994, just four years into the simulation. Results indicated that, despite oxygen limitation within the BTEX plume, aerobic biodegradation was responsible for the greatest amount of mass loss, close to 70%, relative to the sum of the anaerobic processes after 20 years.
Master of Science
16

Du, Rong. "Secure electronic tendering." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16606/1/Rong_Du_Thesis.pdf.

Full text
Abstract:
Tendering is a method for entering into a sales contract. Numerous electronic tendering systems have been established with the intent of improving the efficiency of the tendering process. Although providing adequate security services is a desired feature in an e-tendering system, current e-tendering systems are usually designed with little consideration of security and legal compliance. This research focuses on designing secure protocols for e-tendering systems. It involves developing methodologies for establishing security requirements, constructing security protocols and using formal methods in protocol security verification. The implication is that the approach may prove suitable for developing secure protocols in other electronic business domains. In-depth investigations are conducted into a range of issues relating to establishing generic security requirements for e-tendering systems. The outcomes are presented in the form of basic and advanced security requirements for the e-tendering process. This analysis shows that advanced security services are required to secure e-tender negotiation integrity and the submission process. Two generic issues discovered in the course of this research, functional difference and functional limitations, are fundamental in constructing secure protocols for tender negotiation and submission processes. Functional difference identification derives advanced security requirements, while functional limitation assessment defines how the logic of generic security mechanisms should be constructed. These principles form a proactive analysis applied prior to the construction of security protocols. Security protocols have been successfully constructed using generic cryptographic security mechanisms: a secure e-tender negotiation integrity protocol suite, and secure e-tender submission protocols. Their security has been verified progressively during the design. 
Verification results show that the protocols are secure against common threat scenarios. The primary contribution of this stage is the set of procedures developed for complex e-business protocol analysis using formal methods. The research shows that proactive analysis has made this formal security verification possible and practical for complex protocols. These outcomes have raised awareness of security issues in e-tendering. The security solutions proposed in protocol form are the first in e-tendering with verifiable security against common threat scenarios, and they are also practical to implement. The procedures developed for securing the e-tendering process are generic and can be applied to other business domains. The study has made improvements in establishing adequate security for a business process, applying proactive analysis prior to secure protocol construction, and verifying the security of complex e-business protocols using tool-aided formal methods.
APA, Harvard, Vancouver, ISO, and other styles
18

Bonner, J. K. “Kirk”, and Carl de Silveira. "ISO 9001 Registration for the Electronic Hardware Fabrication Process at the Jet Propulsion Laboratory." International Foundation for Telemetering, 1998. http://hdl.handle.net/10150/609236.

Full text
Abstract:
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California
More and more companies and organizations are recognizing the benefits to be gained by achieving ISO 9000 registration. An effort is underway at JPL to become ISO 9001 registered. To facilitate this activity, the entire laboratory has been divided into processes, each one having a designated process owner. This paper concentrates more specifically on one of these processes, namely, the Packaging and Fabrication of Electronic Hardware (PAFEH), and the effort being undertaken to ensure that this process will successfully pass registration. A comprehensive approach is being utilized by the Electronic Packaging and Fabrication Section to bring this about.
APA, Harvard, Vancouver, ISO, and other styles
19

Jolly, Mariette. "Study of many-body dynamics in ion-ion/atom collisions : a joint experimental and theoretical investigation." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS243.

Full text
Abstract:
Studying ion-atom and ion-ion collisions allows us to understand the probabilities of electronic processes such as capture, ionization, and/or excitation by controlling the number of electrons initially bound to each collision partner. By progressing from the study of a three-body system (the two nuclei with a single electron) to more complex systems involving additional electrons, we can examine the effects on the overall electron dynamics and consequently on the cross sections of elementary processes. In the theoretical section of the thesis, cross-section calculations are performed for ion-atom and ion-ion systems containing up to two electrons. Carbon, hydrogen, nitrogen, helium, and their respective ions have been chosen as collision partners. This is achieved using a semi-classical non-perturbative approach: the relative motion of the partners is treated classically, while the electron dynamics is described quantally. The collisions are simulated using a “collision solver” program, from which cross sections for the different processes can be extracted. The experimental part of the thesis is dedicated to the technical development, rigorous testing, and thorough characterization of various instruments critical for conducting precise ion-atom and ion-ion collision experiments. Two ion sources and their respective beamlines were set up to perform ion-ion collisions, ensuring that a large range of experimental conditions and collision systems can be explored. A collision chamber, a gaseous jet, an ion spectrometer and its associated detector, as well as an X-ray detection system were developed and characterized to perform the preliminary experiments. Overall, this thesis combines new theoretical calculations for ion-atom and ion-ion collisions with experimental advancements towards a set-up capable of exploring a wide range of collision systems. This dual approach is very beneficial for enhancing the understanding of electron dynamics in ion-matter collisions.
This knowledge is essential for both fundamental research and practical applications in various scientific and technological fields, such as astrophysical plasmas, inertial confinement fusion research, and hadron therapy.
APA, Harvard, Vancouver, ISO, and other styles
20

Balla, Tobias. "Modelling of microelectronic processes and materials." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/348865/.

Full text
Abstract:
Organic electronics promises the creation of electronic components on flexible materials at low temperatures, by fast techniques and more environmentally friendly processes. The research followed two directions. The first part focused on the manufacturing technique nanoimprint lithography (NIL). A comprehensive review was undertaken and process capabilities were compared for trends. It was seen that small feature sizes (< 50 nm) have not been replicated over areas greater than 4 mm², while aspect ratios greater than 10 have not been achieved. A questionnaire addressing market opportunities suggested that NIL is likely to compete for the production of devices that currently use electron beam lithography and laser writing processes and that are seeking to change their business strategy from a differentiation basis to one of cost reduction. NIL must also prove to customers that it is an economical investment. However, improvements in stamp creation, analysis techniques, and overlay alignment need to be addressed to win a larger share of the microfabrication market. It was apparent that physical limits exist to which imprints can be produced and that an analytical model could predict these. A model was created to describe the de-embossing step and to explore how the various material properties and process variables interact. It showed a very strong dependence of the achievable aspect ratio on the pattern area ratio and the interfacial shear stress; that de-embossing using fluorinated coatings and current standard polymers is unlikely to fail for post radii on the order of 100 nm due to adhesion; and that large area ratios and aspect ratios are more easily achieved by maintaining the polymer/stamp Young’s moduli ratio (RE) in the range 0.003 to 5. The second part of the research looked at the formation of crescent singularities in thin sheet materials, which affects the production of polymer-electronic-based devices produced by the sponsoring company.
The author compared an analytical model by Cerda and Mahadevan for the formation of developable cones (d-cones) to a finite element (FE) model and showed that explicit elements could mimic the formation of a d-cone. Different elements were analysed for their suitability, and the Belytschko-Lin-Tsay (BT) element was chosen based on its speed, robustness, and similarity to the analytical results. An adapted three-point bend test set-up was conceived that would enable specific attributes to be varied independently, to understand their effect on d-cone formation in thin sheets. Digital image correlation (DIC) was used to calculate the displacements and strains. The same set-up was modelled using an FE model with the chosen BT element. The DIC results showed a variation in strain with plunger displacement before the visual appearance of a developable cone, and that it occurred between 0.1 and 0.4 % in-plane strain. The FE data showed a similar trend to the DIC results, showing a change in strain once a d-cone began to form. Improvements and suggestions were then made advising how to make the DIC and FE models more accurate.
APA, Harvard, Vancouver, ISO, and other styles
21

Jayaweera, Prasad M. "A Unified Framework for e-Commerce Systems Development : Business Process Pattern Perspective." Doctoral thesis, Kista : Dept. of Computer and Systems Sciences [Institutionen för Data- och Systemvetenskap], Univ./KTH, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

McCann, Scott R. "Experimental and theoretical assessment of thin glass panels as interposers for microelectronic packages." Thesis, Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51889.

Full text
Abstract:
As the microelectronic industry moves toward stacking of dies to achieve greater performance and a smaller footprint, there are several reliability concerns when assembling the stacked dies on current organic substrates. These concerns include excessive warpage, interconnect cracking, and die cracking, among others. Silicon interposers are being developed to assemble the stacked dies, and the silicon interposers are then assembled on organic substrates. Although such an approach could address stacked-die-to-interposer reliability concerns, there are still reliability concerns between the silicon interposer and the organic substrate. This work examines the use of a diced glass panel as an interposer, as glass provides a coefficient of thermal expansion intermediate between silicon and organics, good mechanical rigidity, large-area panel processing for low cost, planarity, and better electrical properties. However, glass is brittle and low in thermal conductivity, and there is very little work in the existing literature examining glass as a potential interposer material. Starting with a 150 x 150 mm glass panel with a thickness of 100 µm, this work has built alternating layers of dielectric and copper on both sides of the panel. The panels have gone through typical cleanroom processes such as lithography, electroplating, etc. Upon fabrication, the panels are diced into individual substrates of 25 x 25 mm, and a 10 x 10 mm flip chip with a solder bump pitch of 75 µm is then reflow-attached to the glass substrate, followed by underfill dispensing and curing. The warpage of the flip-chip assembly is measured. In parallel to the experiments, numerical models have been developed. These models account for the viscoplastic behavior of the solder. The models also mimic material addition and etching through an element “birth-and-death” approach. The warpage from the models has been compared against experimental measurements for glass substrates with flip-chip assembly.
It is seen that the glass substrates provide significantly lower warpage compared to organic substrates, and thus could be a potential candidate for future 3D systems.
APA, Harvard, Vancouver, ISO, and other styles
23

Horton, Mark Edward. "Electronic processes in polyacetylene." Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303280.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Mukherjee, Prithwiraj. "Modeling complex decision processes." Thesis, Cergy-Pontoise, Ecole supérieure des sciences économiques et commerciales, 2014. http://www.theses.fr/2014ESEC0007.

Full text
Abstract:
This thesis contains three essays dealing with the modeling of complex decision processes in marketing. Each of these deals with a different aspect of complex decision making, either at the individual or at the network level. Essays 1 and 2 in this dissertation are studies using agent-based models. Essay 1 is an extension of Goldenberg, Libai, and Muller (2010), who use an agent-based model to demonstrate that, contrary to intuition, products with network externalities tend to diffuse more slowly than those without (the "chilling" effect). In their study, they use a simple two-dimensional Moore neighborhood as the underlying network substrate depicting the market for new product adoption. In keeping with other studies demonstrating that network structure affects diffusion dynamics, I adapt their simulations for real-world network data and find that while larger networks and networks with higher average degree tend to offset this chilling effect, clustering could enhance it. I also demonstrate that, for the same high-level parameters, a cumulation of many local micro-level conditions could end up speeding diffusion with network externalities, actually making it faster than without them. Essay 2 deals with the controversy surrounding multilevel marketing (MLM) schemes and questions of their profitability to their freelance sales force. Building on the sparse literature in this field, I build an agent-based model of the growth of an MLM scheme on a social network. Unlike extant work, which neglects the role of recruits' business expenses in the decision to join, I include them and show that they have non-trivial effects on the proliferation of MLM schemes. In essay 3, I build a new model of preferences based on the notion of anchoring. This vector-based model is based on Lancaster's (1966) multiattribute utility model, but allows the weights to be shaped by context.
Context-dependent models are important in studying consumer choices, for example in explaining new product adoptions, new product takeoff, and market dynamics. Context-dependent choice models can be used in conjoint analyses to provide calibrated input data to instantiate agent-based models that simulate new product growth. Thus, Essay 3 is a small but important piece in the overall jigsaw puzzle of complex decision processes. The proposed modeling approach can be used to simulate individual decision processes with what-if scenarios regarding the options available to a single consumer, and thus be used to build an agent-based simulation of an entire market.
APA, Harvard, Vancouver, ISO, and other styles
25

Taoutioui, Abdelmalek. "Étude théorique de la dynamique électronique au cours de collisions entre des ions et des cibles atomiques hautement excitées." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS323.

Full text
Abstract:
Understanding the electronic processes taking place during atomic collisions is of great interest for the modeling of phenomena observed in complex systems such as atmospheric, astrophysical, and biological environments. These processes manifest at the microscopic level but play a determining role in the evolution and the physicochemical properties of these macroscopic systems. From a fundamental point of view, collisions are also privileged "laboratories" for the study of N-body quantum systems. Ion-atom collisions are of crucial importance in several fields, such as nuclear fusion by magnetic confinement: an understanding of the electronic processes is necessary for the modeling of this type of plasma. This thesis is situated in this context and is dedicated to the theoretical study of the electronic processes that occur during collisions between ions and highly excited atomic targets. In this work, we focused on the modeling of electron capture and the computation of cross sections for the proton-hydrogen system. We adopted two non-perturbative theoretical approaches: the classical CTMC method and a semi-classical SCAOCC method. We calculated the total and partial capture cross sections for targets initially excited up to the n = 7 shell. Comparisons between the results of these methods and the few available theoretical data are made and represent the originality of this work. Empirical scaling laws for the total cross sections are also described.
APA, Harvard, Vancouver, ISO, and other styles
26

Nielssen, Johan. "Information modeling of manufacturing processes." Doctoral thesis, KTH, Production Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3628.

Full text
Abstract:

The innovation process is an important process for our prime motor of welfare, manufacturing. During this process, the prerequisites for manufacturing are set. To set the best possible prerequisites, considerations about products, manufacturing processes, and manufacturing resources must be made concurrently, which also means involving several different disciplines in a collaborative effort.

As a consequence of involving different disciplines, the communication of engineering information may be hindered. The reason is that different disciplines use different terminology for the same concept and sometimes have the same terminology for different concepts. This may result in difficulties understanding each other, which may, in turn, result in unnecessary loss of quality and productivity.

The main objective of this thesis is to identify information concepts (i.e. information requirements) for process planning in a concurrent engineering environment, and to formally define the corresponding terminology. The work is based on case studies at Volvo Car Corporation, involving management of weld spot and location system information, and at ABB Body-in-White, involving tender preparation information.

The results are presented in the thesis in terms of an information model, the Product-Process-Resource (PPR) information model, and two corroborated hypotheses. The PPR information model defines the identified information requirements in the scope of the thesis, whereas the hypotheses concern how, e.g., modularization can be used in information modeling.

The PPR information model provides the base for an information platform in a concurrent engineering environment.

The PPR information model enables model-based documentation and, thus, traceability of the evolution of the product, process, and manufacturing resource designs, and their interrelations.

Keywords: Information Modeling, Process Planning, Concurrent Engineering, Information Management

APA, Harvard, Vancouver, ISO, and other styles
27

Vedin, Jörgen. "Numerical modeling of auroral processes." Doctoral thesis, Umeå University, Physics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1117.

Full text
Abstract:

One of the most conspicuous problems in space physics over the last decades has been to describe theoretically how the large parallel electric fields on auroral field lines can be generated. There is strong observational evidence of such electric fields, and stationary theory supports the need for electric fields accelerating electrons into the ionosphere, where they generate auroras. However, dynamic models have not been able to reproduce these electric fields. This thesis sheds some light on this incompatibility and shows that the missing ingredient in previous dynamic models is a correct description of the electron temperature. As the electrons accelerate towards the ionosphere, their velocity along the magnetic field line will increase. In the converging magnetic field lines, the mirror force will convert much of the parallel velocity into perpendicular velocity. The result of the acceleration and mirroring will be a velocity distribution with a significantly higher temperature in the auroral acceleration region than above. The enhanced temperature corresponds to strong electron pressure gradients that balance the parallel electric fields. Thus, in regions with electron acceleration along converging magnetic field lines, the electron temperature increase is a fundamental process and must be included in any model that aims to describe the build-up of parallel electric fields. The development of such a model has been hampered by the difficulty of describing the temperature variation. This thesis shows that a local equation of state cannot be used; instead, the electron temperature variations must be described as a nonlocal response to the state of the auroral flux tube. The nonlocal response can be accomplished by the particle-fluid model presented in this thesis. This new dynamic model is a combination of a fluid model and a Particle-In-Cell (PIC) model and results in large parallel electric fields consistent with in-situ observations.

APA, Harvard, Vancouver, ISO, and other styles
28

Vedin, Jörgen. "Numerical modeling of auroral processes /." Umeå : Dept. of Physics, Umeå Univ, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1117.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Sharma, Sandeep Ph D. Massachusetts Institute of Technology. "Predictive modeling of combustion processes." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54583.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 161-169).
Recently, there has been an increasing interest in improving the efficiency and lowering the emissions from operating combustors, e.g. internal combustion (IC) engines and gas turbines. Different fuels, additives etc. are used in these combustors to try to find the optimal operating conditions and fuel combination which gives the best results. This process is ad-hoc and costly, and the expertise gained on one system cannot easily be transfered to other situations. To improve this process a more fundamental understanding of chemistry and physical processes is required. The fundamental constants like rate coefficients of elementary reactions are readily transferable enabling us to use results from one set of experiments or calculations in a different situation. In our group we have taken this approach and developed the software Reaction Mechanism Generator (RMG), which generates chemical mechanism for oxidation and pyrolysis of a given fuel under a set of user-defined physical conditions. RMG uses group additivity values to generate thermochemistry of molecules and has a database of rate coefficients of elementary reactions. These two sets of data are used to generate chemical kinetic mechanism in a systematic manner. The reaction mechanisms generated by RMG are purely predictive and elementary rate coefficient from any reliable source can be added to RMG database to improve the quality of its predictions. The goal of my thesis was two fold, first to extend the capabilities and database of RMG and to release it as an open source software for the chemical kinetic community to use.
(cont.) The second was to take a practical system of interest and use RMG to generate its chemical mechanism, thereby demonstrating the utility of RMG in generating predictive chemical mechanisms for practical situations. As part of the second step, our hope was to generate new chemical insights into soot formation processes, which are of great interest. The three most important contributions of the thesis are listed below. 1. My work with RMG has resulted in order-of-magnitude improvements in the CPU and memory usage of RMG and has added many useful features, like accurate sensitivity analysis for better interpreting the final mechanism. I have also worked on extending the database of RMG by adding thermochemistry of ringed species that cannot be treated adequately by group additivity. Kinetic rate rules for intramolecular-H-migration reactions in OOQOOH molecules were also added to the RMG database; these are important in predicting the low-temperature oxidation of alkanes. 2. Recently there have been considerable advances in the methodology for rate coefficient calculations for loose transition states, i.e. transition states that are not saddle points. This type of transition state is encountered often in radical-radical reactions. In addition to these advances, there has been significant progress in the accurate calculation of pressure-dependent rate coefficients for complicated potential energy surfaces with multiple wells and multiple product channels. The method is based on the master equation formulation of the problem. These detailed equations are then appropriately coarse-grained to calculate the phenomenological rate coefficients.
(cont.) I have used these state-of-the-art techniques to calculate the rate coefficients for the formation of various aromatic species like benzene and styrene. The rate coefficients predicted by these methods were tested under certain conditions and are in good agreement with experimental data. 3. Finally, to model a two-dimensional diffusion flame, we have developed a solver that is able to solve a complicated set of highly coupled differential equations in an efficient manner to give accurate results. The solver, in conjunction with chemistry developed using the techniques mentioned in the last two points, is used to solve the mole fraction profiles in the diffusion flame. The results of the simulations are compared to the experimental measurements, and this process gives us insight into soot formation in diffusion flames.
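The group-additivity idea used by RMG can be sketched in a few lines: a molecule's enthalpy of formation is estimated as the sum of contributions from its constituent groups. This is only an illustration of the principle; the group labels and values below are placeholders, not entries from RMG's actual database.

```python
# Benson-style group additivity: estimate a molecule's enthalpy of
# formation as the sum of contributions from its constituent groups.
# Group labels and values here are illustrative, not RMG's database.
GROUP_VALUES = {            # kJ/mol (hypothetical entries)
    "C-(C)(H)3": -42.2,     # primary carbon group
    "C-(C)2(H)2": -20.6,    # secondary carbon group
}

def enthalpy_of_formation(groups):
    """Sum group contributions for a molecule given as {group: count}."""
    return sum(GROUP_VALUES[g] * n for g, n in groups.items())

# n-butane: 2 primary + 2 secondary carbon groups
butane = {"C-(C)(H)3": 2, "C-(C)2(H)2": 2}
print(round(enthalpy_of_formation(butane), 1))  # -125.6
```

Real group-additivity schemes add corrections (ring strain, gauche interactions, symmetry) that this sketch omits, which is why the abstract notes that ringed species need separate treatment.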
by Sandeep Sharma.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
30

Andrade, Restrepo Martín. "Mathematical modeling and evolutionary processes." Thesis, Sorbonne Paris Cité, 2019. http://www.theses.fr/2019USPCC021.

Full text
Abstract:
The research presented in this thesis concerns different topics in the field of biomathematics. I address diverse questions arising in biology (and related to complex systems) with mathematical and numerical methods. These questions are: (i) Are passive processes enough to justify the asymmetric distribution of damaged proteins during and after yeast cytokinesis? (ii) What processes are behind the complex patterns of expansion of amyloid beta in the brains of patients with Alzheimer's disease? (iii) What is behind the clustering vs. cline-like dichotomy in models of evolution along environmental gradients? (iv) How does this dichotomy affect the spatial dynamics of invasions and range expansions? (v) How does multi-stability manifest in these models? These questions are approached (at different scales, some fully and some partially) with different theoretical methods. Results are expected to shed light on the biological processes analyzed and to motivate further experimental and empirical work which can help resolve lingering uncertainties.
APA, Harvard, Vancouver, ISO, and other styles
31

Lytsenko, M., Тетяна Олександрівна Маринич, Татьяна Александровна Маринич, and Tetiana Oleksandrivna Marynych. "Econometric modeling of nonstationary processes." Thesis, Karazin National University, 2015. http://essuir.sumdu.edu.ua/handle/123456789/68631.

Full text
Abstract:
Econometric research was conducted on the causality, cointegration relations, and appropriate simulation methods for nonstationary time series. VAR and VEC models were found to be the most appropriate tools for reliable prediction and scenario analysis of macro-financial data under unstable economic conditions. These econometric techniques were applied to financial indicators of the Ukrainian economy.
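As a minimal illustration of the VAR idea behind this abstract (the synthetic data and parameters are my own, not the thesis's, and the full VEC/cointegration machinery is omitted), a first-order vector autoregression x_t = A x_{t-1} + e_t can be estimated by ordinary least squares:

```python
import numpy as np

# Sketch of VAR(1) estimation by least squares on synthetic data.
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.1],
                   [0.2, 0.4]])   # stable coefficient matrix (assumed)
T = 5000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.standard_normal(2)

# OLS: regress x_t on x_{t-1}; lstsq solves X @ coef = Y, so coef = A.T
X, Y = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(np.round(A_hat, 2))  # close to A_true
```

For real macro-financial data one would first test for unit roots and cointegration and, if present, move to a VEC representation rather than a VAR in levels.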
APA, Harvard, Vancouver, ISO, and other styles
32

Sharma, Chetan M. Eng Massachusetts Institute of Technology. "Automatic modeling of machining processes." Thesis, Massachusetts Institute of Technology, 2021. https://hdl.handle.net/1721.1/130833.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 47-48).
3-axis CNC milling is a ubiquitous manufacturing method in industry due to its versatility and precision. The fundamental parameters that dictate cutting performance ("speeds, feeds, and engagement") must be manually set by the machine programmer; proper operation therefore relies heavily on operator skill. In this thesis, an intelligent CNC controller is presented that uses low-cost sensors to fit an analytical model of cutting forces. The analytical nature of this model allows for favorable convergence characteristics and low computational costs. The model is used to optimize cutting feeds with respect to process constraints for future movements; as more data are collected, the model is continuously reinforced. This intelligent controller therefore abstracts away some of the complexities of machining and makes the process more approachable.
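The model-fitting step described above can be sketched as follows. The Kienzle-type law F = k·b·h^(1-m) and all parameter values are assumptions for illustration, not necessarily the analytical model used in the thesis; the point is that such laws can be fit from noisy force samples by linear least squares in log space.

```python
import numpy as np

# Fit an assumed Kienzle-type cutting-force law F = k * b * h**(1 - m)
# from noisy synthetic "sensor" samples. Parameter values are hypothetical.
rng = np.random.default_rng(1)
k_true, m_true, b = 1800.0, 0.25, 2.0       # N/mm^2, exponent, width (mm)
h = rng.uniform(0.05, 0.3, 200)             # sampled chip thicknesses (mm)
F = k_true * b * h ** (1 - m_true) * (1 + 0.01 * rng.standard_normal(200))

# log F = log(k*b) + (1 - m) * log h  ->  ordinary least squares
X = np.column_stack([np.ones_like(h), np.log(h)])
coef, *_ = np.linalg.lstsq(X, np.log(F), rcond=None)
k_hat = np.exp(coef[0]) / b
m_hat = 1 - coef[1]
print(round(k_hat), round(m_hat, 3))  # recovers roughly 1800 and 0.25
```

Once fitted, such a model can be inverted to choose the largest feed that keeps predicted force under a process constraint, which is the optimization step the abstract refers to.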
by Chetan Sharma.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
33

Szymkiewicz, Paul M. "Towards modeling of retrofit processes." Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53891.

Full text
Abstract:
Energy retrofits can be executed by a building owner with or without the supervision of a third-party agent. We define process models to capture third-party energy retrofit inspection activities, and refine, augment, and generalize those models to then examine the impact of third-party retrofit inspections. Buildings included in the study vary considerably in type, and so do retrofit programs applied to those buildings.
APA, Harvard, Vancouver, ISO, and other styles
34

Hubler, David K. "Modeling Electrochemical Water Treatment Processes." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/265367.

Full text
Abstract:
Several electrochemical processes are modeled at the process level and at atomic scales. Processes are presented for acid generation and ion exchange media regeneration, along with corresponding process models. Transport and reaction processes in individual ion exchange beads are also modeled. Acids of mild strength (pH ≈ 1-2) are generated from electrolyte solutions and their strength is effectively modeled as a function of time. The regeneration of ion exchange media is also modeled, in close agreement with measurements, and the process model is reconciled with a model for solute flux from an individual ion exchange bead. Together, the models show that the "gentle" regeneration process is controlled by the plating rate. Processes interior to the particle are controlled by diffusion, but all processes are faster than the characteristic time for plating. In a separate process, an electrochemical method is used to produce hypochlorite for disinfection. The process generates perchlorate as a toxic byproduct. Density functional theory is used to construct an atomic-scale model of the mechanism for producing perchlorate, as well as of the aging of the boron-doped diamond anode used in the process. The mechanism shows that the boron-doped diamond surface plays an important role in chemisorbing and stabilizing radicals of oxychlorine anions, allowing the radicals to live long enough to react and form higher ions like perchlorate. Wear mechanisms that occur on the anode are shown to oxidize and etch the surface, changing its chemical functionality over time. As the surface ages, the overpotential for water oxidation decreases, decreasing the efficiency of the electrode.
APA, Harvard, Vancouver, ISO, and other styles
35

Bressan, Fernando. "Multiphysics modeling for electroheat processes." Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3424122.

Full text
Abstract:
Microwave heating is a process widely used in industrial processes, scientific applications, and medical treatments, as well as in household appliances. The advantage of this technology lies mainly in the speed of the heating process, an important factor in meeting the needs of the market. In fact, the heat sources are localized inside the load under heating, thus reducing process time. However, because of the frequencies involved and, sometimes, the costs of the devices involved in such processes, the quality (uniformity) of microwave heating is generally significantly compromised. The progress of semiconductor technologies, along with the high quality and efficiency demanded by consumers, seems to be the key driver of innovation in this sector. Multiphysics numerical modeling, coupled with increasingly efficient optimization techniques, leads to an accurate design of the heating process, as well as of the corresponding devices. The aim of the present work is the development of numerical multiphysics models simulating microwave heating processes for household applications (microwave ovens) and medical devices (hyperthermia treatments). The experimental validations confirm the reliability of the proposed methods.
APA, Harvard, Vancouver, ISO, and other styles
36

Doi, Kentaro. "Theoretical studies on electronic processes in electronic and optical materials." 京都大学 (Kyoto University), 2007. http://hdl.handle.net/2433/136332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Jones, R. E. "Electronic processes in electroluminescent device structures." Thesis, Durham University, 1986. http://etheses.dur.ac.uk/7031/.

Full text
Abstract:
Electronic processes in two different electroluminescent device structures, the forward-biased metal/thick insulator/semiconductor (MIS) diode and the high-field metal/insulator/metal (MIM) panel, are investigated. Models are produced to explain the behaviour of two particular MIS systems which have been studied experimentally. One of these systems is the Au/cadmium stearate/n-GaP structure, where the insulator is deposited using Langmuir-Blodgett (LB) technology. The other is the Au/i-ZnS/n-ZnS structure. In the MIS devices, electroluminescence occurs as a result of the recombination of electrons and holes in the semiconductor, and so it is necessary to have an efficient minority carrier (hole) injection mechanism. Attention is paid to the impact excitation of the electron gas in the metal by the electrons injected from the semiconductor, because this has been proposed by other workers as a process for producing holes in the metal that are energetically capable of entering the semiconductor valence band, provided they can traverse the insulator. The characteristics of the LB film devices are found to be best described by assuming the minority carrier injection to be limited by the hole transport through the insulator. Hopping between interface states on the successive LB layers is proposed as the transport mechanism. However, the device incorporating a II-VI semi-insulator is shown to be more characteristic of hole transport in the insulator valence band and a minority carrier injection which is limited by the supply of holes from the metal. In high-field MIM panels the mechanism of electroluminescence is quite different, with impurity centres being impact excited or impact ionised by injected electrons and subsequently luminescing. Such devices driven by a dc signal are susceptible to the formation of high-current filaments which burn out and result in device failure.
A model is developed which predicts that there is a voltage range over which the device can exist in either a low-current state or one of two higher-current states, and the resultant instability is expected to be destructive. Current-voltage characteristics are produced using this model and their general features are found to be relatively insensitive to material and device parameters. In order to understand the evolution of the electrical state of the MIM device after switch-on, a time-dependent theory of system behaviour is also developed. This is particularly important as the devices are usually driven by a pulsed signal. For a homogeneous system the current is found to converge to the lower current state of the steady-state characteristic.
APA, Harvard, Vancouver, ISO, and other styles
38

Montoliu, Álvaro Carles. "Study, Modelling and Implementation of the Level Set Method Used in Micromachining Processes." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/58609.

Full text
Abstract:
[EN] The main topic of the present thesis is the improvement of fabrication process simulation by means of the Level Set (LS) method. The LS is a mathematical approach used for evolving fronts according to a motion defined by certain laws. The main advantage of this method is that the front is embedded inside a higher-dimensional function, such that updating this function instead of the front itself enables trivial handling of complex situations like the splitting or coalescing of multiple fronts. In particular, this document is focused on wet and dry etching processes, which are widely used in the micromachining of Micro-Electro-Mechanical Systems (MEMS). A MEMS is a system formed by mechanical elements, sensors, actuators, and electronics. These devices have gained a lot of popularity in recent decades and are employed in several industry fields such as automotive security, motion sensors, and smartphones. Wet etching consists in selectively removing substrate material (e.g. silicon or quartz) with a liquid solution in order to form a certain structure. This is a complex process, since the result of a particular experiment depends on many factors, such as the crystallographic structure of the material, the etchant solution, or its temperature. Similarly, dry etching processes are used for removing substrate material; however, gaseous substances are employed in the etching stage. In both cases, a simulator capable of accurately predicting the result of a certain experiment would imply a significant reduction of design time and costs. A few LS-based wet etching simulators exist, but they have many limitations and have never been validated against real experiments. On the other hand, atomistic models are currently considered the most advanced simulators. Nevertheless, atomistic simulators present some drawbacks, like requiring a prior calibration process in order to use the experimental data.
Additionally, a lot of effort must be invested to create an atomistic model for simulating the etching of substrate materials with different atomistic structures. Furthermore, the final result is always formed by unconnected atoms, which makes proper visualization and understanding of complex structures difficult; thus, an additional visualization technique must usually be employed. For their part, dry etching simulators usually employ an explicit representation technique to evolve the surface being etched according to etching models. This strategy can produce unrealistic results, especially in complex situations like the interaction of multiple surfaces. Although some models that use implicit representation have been published, they have never been directly compared with real experiments, and the computational performance of their implementations has not been properly analysed. These limitations are addressed in the various chapters of the present thesis, producing the following contributions: - An efficient LS implementation in order to improve the visual representation of atomistic wet etching simulators. This implementation produces continuous surfaces from atomistic results. - Definition of a new LS-based model which can directly use experimental data of many etchant solutions (such as KOH, TMAH, NH4HF2, and IPA and Triton additives) to simulate wet etching processes of various substrate materials (e.g. silicon and quartz). - Validation of the developed wet etching simulator by comparing it to experimental and atomistic simulator results. - Implementation of an LS-based tool which evolves the surface being etched according to dry etching models in order to enable the simulation of complex processes. This implementation is also validated experimentally. - Acceleration of the developed wet and dry etching simulators by using Graphics Processing Units (GPUs).
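The core Level Set idea described in the abstract, evolving a front implicitly through a higher-dimensional function, can be sketched with a circular front moving at uniform normal speed under a first-order Godunov upwind scheme. This is a toy illustration of the method, not the thesis's calibrated etch-rate models.

```python
import numpy as np

# Level-set sketch: solve phi_t + V * |grad phi| = 0 for a circle
# expanding at uniform speed V. The front is the zero level set of phi.
n, L = 201, 2.0
xs = np.linspace(-L, L, n)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
phi = np.hypot(X, Y) - 0.5          # signed distance to a circle, r0 = 0.5

V, dt, steps = 1.0, 0.5 * dx, 50    # CFL-stable step; final time t = 0.5
for _ in range(steps):
    dxm = (phi - np.roll(phi, 1, 1)) / dx   # backward differences
    dxp = (np.roll(phi, -1, 1) - phi) / dx  # forward differences
    dym = (phi - np.roll(phi, 1, 0)) / dx
    dyp = (np.roll(phi, -1, 0) - phi) / dx
    grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                   np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    phi = phi - dt * V * grad       # move the front along its normal

# recover the front radius from the area enclosed by the zero level set
r_est = np.sqrt(np.count_nonzero(phi < 0) * dx**2 / np.pi)
print(round(r_est, 2))  # radius grew from 0.5 to about 1.0
```

Topology changes (splitting, merging) need no special handling in this representation, which is the advantage the abstract highlights over explicit front tracking.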
Montoliu Álvaro, C. (2015). Study, Modelling and Implementation of the Level Set Method Used in Micromachining Processes [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/58609
TESIS
APA, Harvard, Vancouver, ISO, and other styles
39

Su, Jiann-Cherng. "Residual stress modeling in machining processes." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14030.

Full text
Abstract:
Thesis (Ph.D)--Mechanical Engineering, Georgia Institute of Technology, 2007.
Committee Chair: Liang, Steven Y.; Committee Member: Garmestani, Hamid; Committee Member: Huang, Yong; Committee Member: Melkote, Shreyes N.; Committee Member: Neu, Richard W. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
40

Dong, Wen S. M. Massachusetts Institute of Technology. "Influence modeling of complex stochastic processes." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37386.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006.
Includes bibliographical references (leaves 75-76).
A complex stochastic process involving human behaviors or human group behaviors is computationally hard to model with a hidden Markov process. This is because the state space of such behaviors is often a Cartesian product of a large number of constituent probability spaces, and is exponentially large. A sample from such a stochastic process is normally composed of a large collection of heterogeneous constituent samples. How to combine those heterogeneous constituent samples in a consistent and stable way is another difficulty for hidden Markov process modeling. A latent structure influence process models human behaviors and human group behaviors by emulating the work of a team of experts. In such a team, each expert concentrates on one constituent probability space, investigates one type of constituent sample, and/or employs one type of technique. An expert improves his work by considering the results from the other experts, instead of their raw data. Compared with the hidden Markov process, the latent structure influence process is more expressive, more robust to outliers, and less likely to overfit. It can be used to study the interaction of over 100 persons and get good results.
(cont.) This thesis is organized in the following way. Chapter 0 reviews the notation and the background concepts necessary to develop this thesis. Chapter 1 describes the intuition behind the latent structure influence process and the situations where it outperforms the other dynamic models. In Chapter 2, we give inference algorithms based on two different interpretations of the influence model. Chapter 3 applies the influence algorithms to various toy data sets and real-world data sets. We hope our demonstrations of the influence modeling could serve as templates for the readers to develop other applications. In Chapter 4, we conclude with the rationale and other considerations for influence modeling.
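The "team of experts" intuition can be sketched as a toy influence process: each site's next-state distribution is a convex combination, given by an influence matrix, of the Markov predictions made from every site's current state. All parameter values here are illustrative, not taken from the thesis.

```python
import numpy as np

# Toy influence process: two binary sites. Site 1 evolves on its own;
# site 2's next-state distribution is driven 80% by site 1's prediction.
A = np.array([[0.9, 0.1],       # per-site Markov transition matrix
              [0.1, 0.9]])
R = np.array([[1.0, 0.0],       # influence matrix: rows sum to 1
              [0.8, 0.2]])

rng = np.random.default_rng(0)
state = np.array([0, 1])
agree, T = 0, 20000
for _ in range(T):
    probs = R @ A[state]        # mix each site's prediction by influence
    state = np.array([rng.choice(2, p=p) for p in probs])
    agree += state[0] == state[1]
print(round(agree / T, 2))      # coupling makes the sites correlate
```

With zero influence (R the identity) the two sites would agree only about half the time; the off-diagonal influence weight is what couples them, which is the structure the inference algorithms in Chapter 2 estimate.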
by Wen Dong.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
41

Mehrabi, M. Reza. "Modeling transport processes in directional solidification." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/11999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Arruda, Guilherme Ferraz de. "Modeling spreading processes in complex networks." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-20072018-160836/.

Full text
Abstract:
Mathematical modeling of spreading processes has been widely studied in the literature and has seen a boom in the past few years. It is a fundamental task for understanding and predicting real spreading processes over a population, which are subject to many structural and dynamical constraints. Aiming at a better understanding of these processes, we focused on two tasks: the modeling and the analysis of both dynamical and structural aspects of these processes. Initially, we proposed a new and general model that unifies epidemic and rumor spreading. In addition, regarding the analysis of these processes, we extended the classical formalism to multilayer networks, where the theory was lacking. Interestingly, this study opened up new challenges concerning the understanding of multilayer networks, more specifically regarding their spectral properties. In this thesis, we analyzed such processes on both single-layer and multilayer networks. Throughout our analysis, we followed three complementary approaches: (i) analytical, (ii) numerical, and (iii) simulations, mainly Monte Carlo simulations. Our main results are: (i) a new unifying model, enabling us to model and understand spreading processes on large systems; (ii) characterization of new phenomena on multilayer networks, such as layer-wise localization and the barrier effect; and (iii) a spectral analysis of multilayer systems, suggesting a universal parameter and proposing a new analytical tool for its analysis. Our contributions enable further research on the modeling of spreading processes, emphasizing the importance of considering the complete multilayer structure instead of any coarse-graining. Additionally, they can be directly applied to the prediction and modeling of real processes. Thus, aside from its theoretical interest and mathematical implications, this work also has important social impact.
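The thesis above develops a unified rumor-epidemic model analyzed partly through Monte Carlo simulations. As a much simpler illustration of that simulation approach (not the authors' model), a discrete-time SIS epidemic on a toy single-layer contact network can be sketched as follows; all names and parameter values here are illustrative:

```python
import random

def sis_step(adj, infected, beta, mu, rng):
    """One discrete-time SIS update: each infected node infects each
    susceptible neighbor with probability beta, then recovers with
    probability mu."""
    new_infected = set(infected)
    for u in infected:
        for v in adj[u]:
            if v not in infected and rng.random() < beta:
                new_infected.add(v)
    for u in infected:  # only previously infected nodes may recover
        if rng.random() < mu:
            new_infected.discard(u)
    return new_infected

def simulate(adj, beta=0.2, mu=0.1, steps=200, seed=1):
    """Run the chain from a single seed node; return the final
    fraction of infected nodes."""
    rng = random.Random(seed)
    infected = {0}
    for _ in range(steps):
        infected = sis_step(adj, infected, beta, mu, rng)
        if not infected:  # absorbing state: epidemic died out
            break
    return len(infected) / len(adj)

# toy ring lattice standing in for a real contact network
n = 50
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(simulate(adj))
```

A multilayer analysis of the kind studied in the thesis would instead couple several such adjacency structures and track the dynamics across layers; this sketch only shows the single-layer Monte Carlo core.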
APA, Harvard, Vancouver, ISO, and other styles
43

ABU, HAMMAD AYMAN ABDALLAH. "SIMULATION MODELING OF MANUFACTURED HOUSING PROCESSES." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1015617645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Welf, Erik Steven. "Integrative modeling of cell adhesion processes." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 285 p, 2009. http://proquest.umi.com/pqdweb?did=1833641671&sid=4&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph.D.)--University of Delaware, 2009.
Principal faculty advisors: Babatunde Ogunnaike, Dept. of Chemical Engineering, and Ulhas P. Naik, Dept. of Biological Sciences. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
45

Celebi, Kemal. "Optical modeling of organic electronic devices." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45431.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Physics, 2008.
Includes bibliographical references (p. 51-53).
Organic materials, with their superior photoluminescence and absorbance properties, have revolutionized display and solar-energy-conversion technologies. Because of their large transition dipoles, the localization of excited states, or excitons, in organic materials necessitates optical models that extend beyond classical far-field methods. In this thesis we propose an extended near-field calculation method using dyadic Green's functions and demonstrate the application of both our extended model and traditional far-field models to different types of devices, such as surface plasmon detectors, cavity organic light-emitting devices, and organic photovoltaics with external antennas.
by Kemal Celebi.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
46

Rajagopalan, Dharmashankar. "Opto-Electronic Processes in SrS:Cu ACTFEL Devices." UKnowledge, 2006. http://uknowledge.uky.edu/gradschool_theses/273.

Full text
Abstract:
A.c. thin film electroluminescent (ACTFEL) devices are of scientific interest due to their applications in large-area, flat panel displays. Of particular interest to the research community is the mechanism of electron transport and luminance in these devices. Toward this end, a physical model and a mathematical model for SrS:Cu ACTFEL devices were developed and published earlier by our group. The purpose of this thesis is to obtain a qualitative and quantitative match between experiment and theory. A brief summary of the model can be found here [1]. Effects of variation in drive parameters in experimental steady-state measurements, and analysis of VIL (voltage-current-luminance) plots for different simulated device and drive parameters, are performed. The effects of voltage amplitude, activator concentration, interface energy levels, and critical field for dipole collapse were studied. The plots matched qualitatively in that all major experimental features were reproduced in the simulated waveforms. The measured and simulated peak currents are 72.5 mA/cm² and 66.42 mA/cm² for VA = 123 V. Experimental and theoretical charge transferred per pulse were 2.75 C/cm² and 2.26 C/cm². Peak experimental and simulated luminance values for VA = 123 V were 531 cd/m² and 49150 cd/m². Total experimental and simulated luminance values for the VA = 123 V case were 6.2 cd/m² and 561.2 cd/m², respectively. The large difference is attributed to loss factors such as optical losses (due to total internal reflection), scattering of electrons by impurities in the bulk phosphor layer, and concentration quenching; these have not yet been incorporated in the model.
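The abstract above compares measured and simulated charge transferred per pulse. A generic way to obtain such a figure from a sampled current-density waveform (not the authors' code; the pulse shape and values below are purely illustrative) is trapezoidal integration of current density over time:

```python
def charge_per_pulse(t, j):
    """Trapezoidal integral of a sampled current density j(t) [A/cm^2]
    over time t [s], giving transferred charge per unit area [C/cm^2]."""
    q = 0.0
    for k in range(1, len(t)):
        q += 0.5 * (j[k] + j[k - 1]) * (t[k] - t[k - 1])
    return q

# toy triangular current pulse: rises to 66 mA/cm^2 and falls back over 10 us
t = [0.0, 5e-6, 10e-6]
j = [0.0, 66e-3, 0.0]
print(charge_per_pulse(t, j))
```

For a full a.c. drive waveform one would integrate only over the conduction portion of each half-cycle; the sketch shows just the numerical step.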
APA, Harvard, Vancouver, ISO, and other styles
47

Toliautas, Stepas. "Electronic excitation processes of photoactive organic molecules." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20140929_100526-37294.

Full text
Abstract:
The evolution of an electronic excitation is a general process that underlies many natural and artificial phenomena, such as photosynthesis in plants and bacteria, the biological mechanism of vision, and the operating principles of optomechanical and optoelectronic devices. This process is modeled theoretically by solving the time-dependent Schroedinger equation. However, such a treatment is too computationally expensive for practical molecular systems, so either the structural models of the systems or the solution procedure itself must be simplified to obtain the desired results. The main goal of the research presented in this dissertation was to study processes caused by electronic excitation in photoactive molecules using computational electronic-structure methods (i.e., solving the simpler time-independent Schroedinger equation) and to construct potential energy surface models describing energy relaxation in the investigated molecules. It is shown that investigations performed with the same procedure explain different phenomena in various compounds, such as: proton transfer in a polar solvent, performed by a functional group of the bacteriorhodopsin protein; the optomechanical cycle of an indolo-benzoxazine compound; the efficient phosphorescence of a silicon-based organic polymer; and the optical properties of an organometallic emitter compound with additional charge-carrier groups.
APA, Harvard, Vancouver, ISO, and other styles
48

Burdis, Mark Samuel. "Electronic conduction processes low-dimensional semiconductor systems." Thesis, University of Exeter, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Enfati, Niugat Abdel Hafiz. "Optical and electronic quantum processes in microstructures." Thesis, University of Essex, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Meyer, Andreas, Sergey Smirnov, and Mathias Weske. "Data in business processes." Universität Potsdam, 2011. http://opus.kobv.de/ubp/volltexte/2011/5304/.

Full text
Abstract:
Process and data are equally important for business process management. Process data is especially relevant in the context of automated business processes, process controlling, and the representation of organizations' core assets. There are many process modeling languages, and their data modeling capabilities and level of data awareness vary significantly from one language to another. This paper evaluates several process modeling languages with respect to the role of data. To find a common ground for comparison, we develop a framework that systematically organizes process- and data-related aspects of the modeling languages, elaborating on the data aspects. With the framework in place, we compare twelve process modeling languages against it. We generalize the results of the comparison and identify clusters of languages that are similar with respect to data awareness.
APA, Harvard, Vancouver, ISO, and other styles