Dissertations / Theses on the topic 'Systems and processes engineering'

Consult the top 50 dissertations / theses for your research on the topic 'Systems and processes engineering.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Heng, Jiin Shyang. "On systems engineering processes in system-of-systems acquisition." Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5689.

Full text
Abstract:
Approved for public release; distribution is unlimited.
Results show that a low-risk SoS acquisition could continue with the current SE process, as the benefits derived from an extensive front-end SE process are limited. Conversely, a high-risk SoS acquisition should adopt the SoS SE process proposed herein to enhance the acquisition program's chance of success. It is high-risk SoS acquisitions such as the US Army's Future Combat Systems, the US Coast Guard's Deepwater System, the Joint Tactical Radio System (JTRS), and Homeland Security's SBInet that would likely benefit from the proposed SoS SE process.
APA, Harvard, Vancouver, ISO, and other styles
2

Ball, Linden John. "Cognitive processes in engineering design." Thesis, University of Plymouth, 1990. http://hdl.handle.net/10026.1/674.

Full text
Abstract:
The central aim of the current research programme was to gain an understanding of the cognitive processes involved in engineering design. Since little previous empirical research has investigated this domain, two major exploratory studies were undertaken here. Study One monitored seven final-year students tackling extended design projects. Diary and interview data were used to construct detailed design behaviour graphs that decomposed activities into structured representations reflecting the goals and subgoals that were pursued. Study Two involved individual observation (using video) of six professional engineers "thinking aloud" as they tackled a small-scale design problem in a laboratory setting. A taxonomic scheme was developed to classify all verbal protocol units and other observable behaviours. In interpreting the data, extensive use was made of theoretical concepts (e.g., schemas and mental models) deriving from current research on human problem solving and thinking. Evidence indicated that the engineers studied had many similar methods of working, which could be described at a high level of abstraction in terms of a common "design schema". A central aspect of this schema was a problem-reduction strategy used to break down complex design problems into more manageable subproblems. The data additionally revealed certain differences in design strategy between engineers' solution modelling activities and also showed up tendencies toward error and suboptimal performance. In this latter respect, a particularly common tendency was for designers to "satisfice", that is, to focus exclusively on initial solution concepts rather than comparing alternatives with the aim of optimising choices. The general implications of the present findings are discussed in relation to both the training of design skills and the development of intelligent computer systems to aid or automate the design process.
A final, smaller-scale experimental study is also reported, which investigated the possibility of improving design processes via subtle interventions aimed at imposing greater structure on design behaviours.
APA, Harvard, Vancouver, ISO, and other styles
3

Johnson, Kipp M. "Tailoring systems engineering processes for rapid space acquisitions." Thesis, Monterey, California. Naval Postgraduate School, 2010. http://hdl.handle.net/10945/5203.

Full text
Abstract:
Approved for public release; distribution is unlimited
The Self-Awareness Space Situational Awareness (SASSA) program is a congressionally initiated technology demonstration program run by the Air Force Space and Missile Systems Center (SMC), Los Angeles Air Force Base. Initiated in October 2008, SASSA is investigating the feasibility of a highly flexible and adaptable satellite payload system for detecting satellite threats, both natural and man-made. The SASSA program was given cost and schedule limitations with a mandate to deliver hardware for demonstration in 24 months, considered a "rapid acquisition" by Air Force and SMC standards. This study provides an assessment of how the SASSA program tailored systems engineering processes to implement a "rapid space acquisition." Acquisition and engineering standards define a roadmap for military procurements to produce the most effective product at the most reasonable cost. Refinement of these standards over time is critical to the continued ability of acquisition systems to evolve a current and effective military. This study reviews the SASSA concept and technology demonstration, surveys standard systems engineering guidance, catalogues the systems engineering processes that were tailored, and assesses the effectiveness of this tailoring. The study provides observation and assessment of real-world results, successful and unsuccessful, for the purpose of capturing and documenting lessons learned toward successfully accomplishing rapid space acquisitions.
APA, Harvard, Vancouver, ISO, and other styles
4

Begin, Michael P. "Systems Engineering Processes for the Acquisition of Prognostic and Health Management Systems." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/17323.

Full text
Abstract:
Prognostic and Health Management (PHM) systems often experience delayed fielding and lengthened maturation cycles due to their relative immaturity and the fact that they are regarded as non-flight-critical systems. The national fiscal crisis and rising debt of the U.S. have each placed increased scrutiny on military systems acquisition and procurement practices. The Defense Department is pushing for greater emphasis on fundamental systems engineering practices earlier in the acquisition phase, with the expectation of fewer schedule slips and budget overruns. The acquisition of PHM systems could also benefit from increased systems engineering rigor early in their development. A 2007 DoD directive states that PHM systems be implemented into current weapon systems, equipment, and materiel sustainment programs where technically feasible and beneficial. This research examines the definition of PHM requirements and a method for developing a solution-neutral architecture for PHM systems. The thesis also identifies software development practices and acquisition processes for military propulsion PHM systems. The conclusion of this research is that the Defense Department can deliver the warfighter a capable PHM system on time and within budget through the establishment of better procurement and systems engineering practices.
APA, Harvard, Vancouver, ISO, and other styles
5

Kazeem, Mukaila. "Developing a Profitable Photography Business Based on System Engineering Principles & Processes." Digital Commons at Loyola Marymount University and Loyola Law School, 2010. https://digitalcommons.lmu.edu/etd/414.

Full text
Abstract:
Systems engineering is a robust approach to the design, creation, and operation of systems. In simple terms, the approach consists of identification and quantification of system goals, creation of alternative system design concepts, performance of design trades, selection and implementation of the best design, verification that the design is properly built and integrated, and post-implementation assessment of how well the system meets (or met) the goals. The purpose of this document is to use the processes and guidelines found in Systems Engineering to make Mukaila the Photographer a lean and profitable small business. The photography industry is filled with hundreds of photographers. The main reason that many photographic businesses fail or don't reach profitability is that they fail to have a proper business and marketing plan. According to IBISWorld market research, the photography industry made approximately $9 billion in revenue. At the same time, the number of photographic opportunities more than doubles every year. Ranging from headshots for actors to school portraits for high school seniors, the potential for growth of a small photography business is incredible. To take advantage of this growth in the market, the company has developed an extensive marketing plan. The key to succeeding where other photographers have failed is to have very aggressive pricing. The company has laid out very specific requirements, and solutions for every requirement are provided throughout the rest of this document. This begins with performing trade studies and continues with laying out the systems architecture, followed by eliminating waste with value stream maps, and ends with risk management analysis. By the end of this document, it will also support the thesis that Systems Engineering can be used not only for complex engineering projects but can also be applied effectively to non-technical fields.
APA, Harvard, Vancouver, ISO, and other styles
6

Abdimomunova, Leyla (Leyla M. ). "Organizational assessment processes for enterprise transformation." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62764.

Full text
Abstract:
Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 97-99).
Enterprise transformation is a dynamic process that builds upon and affects organizational processes. Organizational assessment plays a critical role in the planning and execution of enterprise transformation. It allows assessment of an enterprise's current capabilities as well as identification and prioritization of the improvements needed to drive the enterprise transformation process. Despite the benefits that organizational assessment has to offer, many organizations fail to exploit them due to an unfavorable organizational culture, unsatisfactory assessment processes, or a mismatch between the assessment tool and the broader transformation approach. This thesis focuses mainly on a model of organizational assessment and how it can be improved to better support enterprise transformation. We argue that the assessment process spans beyond performing the assessment itself. For the assessment to provide the expected benefit, organizations must first of all create an environment ensuring a clear understanding of the role assessment plays in the enterprise transformation process. To this end, they must promote open and frequent discussion about the current state of the enterprise and future goals. The assessment process must be carefully planned to ensure it runs effectively and efficiently and that assessment results are accurate and reliable. Assessment results must be analyzed and turned into specific recommendations and action plans. At the same time, the assessment process itself must be evaluated and adjusted, if necessary, for the next assessment cycle. Based on a literature review and case studies of five large aerospace companies, we recommend a five-phase assessment process model that includes mechanisms to change organizational behavior through pre-assessment phases. It also allows for adjustment of the assessment process itself based on the results and experience of participants so that it better suits the organization's needs and practices.
by Leyla Abdimomunova.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
7

Lam, Rosaly. "Integrating ISSE and SE Processes in Information System Development." Digital Commons at Loyola Marymount University and Loyola Law School, 2007. https://digitalcommons.lmu.edu/etd/411.

Full text
Abstract:
Over the last few decades, the computer has become a powerful modern convenience. Its uses range from simple word processing to sophisticated programming. With the explosion of Internet usage, it has become advantageous for organizations to share information or allow outsiders to access their data. Unfortunately, crime follows. Conspiracies, data theft, security fraud, corporate espionage, etc. are on the rise. Research has shown that information systems without any security protection are extremely vulnerable. My personal work experience has also shown that it is expensive to build security into an information system after it has been deployed. In this report, the reader is introduced to the concept of Information Assurance (IA), the role of Systems Engineering (SE) in information system development, the importance of combining SE with Information Systems Security Engineering (ISSE), and finally, the suggested parallel SE and ISSE activities during the development of an information system.
APA, Harvard, Vancouver, ISO, and other styles
8

Clegg, Ben. "A systems approach to reengineering business processes towards concurrent engineering principles." Thesis, De Montfort University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Oswald, W. Andrew (William Andrew). "Understanding technology development processes : theory & practice." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/90699.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 75-77).
Technology development is hard for management to understand and hard for practitioners to explain; however, it is an essential component of innovation. While there are standard and predictable processes for product development, many of these techniques don't apply well to technology development. Are there common processes for technology development that can make it predictable, or is it unpredictable like basic research and invention? In this thesis, after building a foundation by looking at product development processes, I survey some of the literature on technology development processes and compare it to a handful of case studies from a variety of industries. I then summarize the observations from the cases and build a generic model for technology development that can be used to provide insights into how to monitor and manage technology projects. One of the observations from the product development literature is that looping and iteration are problematic for establishing accurate schedules, which becomes one of the fundamental disconnects between management and engineering. Technologists rely heavily on iteration as a tool for gaining knowledge, and combined with other risks, technology development may appear "out of control". To mitigate these risks, technologists have developed a variety of approaches, including building a series of prototypes of increasing fidelity and using them as a form of communication, simultaneously developing multiple technologies as a hedge against failure, or predicting and developing technologies they think will be needed outside of formal channels. Finally, I use my model to provide some insights as to how management can understand technology development projects. This gives technologists and non-technical managers a common ground for communication.
by W. Andrew Oswald.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
10

Ajmera, Sameer K. (Sameer Kumar) 1975. "Microchemical systems for kinetic studies of catalytic processes." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/16821.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2002.
Includes bibliographical references.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Silicon microfabrication techniques and scale-up by replication have for decades fueled spectacular advances in the electronics industry. More recently, with the rise of microfluidics, microfabrication has enabled the development of microchemical systems for a variety of chemical and biological applications. This work focuses on the development of these systems for improved gas-phase heterogeneous catalysis research. The catalyst development process often requires fundamental information such as reaction rate constants, activation energies, and reaction mechanisms to gauge and understand catalyst performance. To this end, we have examined the ability of microreactors with a variety of geometries to efficiently obtain accurate kinetic information. This work primarily focuses on microfabricated packed-bed reactors that utilize standard catalyst particles and briefly explores the use of membrane-based reactors to obtain kinetic information. Initial studies with microfabricated packed beds led to the development of a microfabricated silicon reactor that incorporates a novel cross-flow design with a short-pass, multiple flow-channel geometry to reduce the gradients that often confound kinetics in macroscale reactors. The cross-flow geometry minimizes pressure drop through the particle bed and incorporates a passive flow distribution system composed of an array of shallow flow channels. Combined experiments and modeling confirm the even distribution of flow across the wide catalyst bed, with a pressure drop approximately 1600 times smaller than typical microfabricated packed-bed configurations.
Coupled with the inherent heat and mass transfer advantages at the sub-millimeter length scale achievable through microfabrication, the cross-flow microreactor has been shown to operate in near-gradientless conditions and is an advantageous design for catalyst testing. The ability of microfabricated packed beds to obtain accurate catalytic information has been demonstrated through experiments with phosgene generation over activated carbon, and CO oxidation and acetylene hydrogenation over a variety of noble metals on alumina. The advantages of using microreactors for catalyst testing are quantitatively highlighted throughout this work.
by Sameer K. Ajmera.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
11

Rana, Farhan 1971. "Electron tunneling processes in Si/SiO₂ systems." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10766.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Rupani, Sidharth. "Standardization of product development processes in multi-project organizations." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/91082.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Engineering Systems Division, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 120-126).
An important question for a large company with multiple product development projects is how standard or varied the sets of activities it uses to conceive, design, and commercialize products should be across the organization. To help address this question, this project comprises three research activities to improve understanding of the influence of standardization of product development processes on performance. Previous research indicates that process standardization has many positive (improved efficiency, knowledge transfer, decision making, and resource allocation) and negative (reduced creativity, innovation, adaptation and learning, and employee satisfaction) performance effects. Even focusing on specific performance outcomes, the influence of process standardization is contested. The first phase was a set of theory-building case studies at five large companies that develop electromechanical assembled products. One important lesson from the case studies was that to appropriately evaluate the impact of standardization on performance, it is essential to disaggregate the process into its individual 'dimensions' (activities, deliverables, tools, etc.), because standardization on different dimensions of the process impacts performance outcomes quite differently. Another lesson was that companies differ in their process standardization approach because of differences in their portfolio characteristics and in their strategic priorities across performance outcomes. Based on the importance of focusing on individual process dimensions, a broad and systematic literature study was conducted with the aim of better capturing the current state of knowledge. This literature study resulted in a framework to characterize the problem space; a comprehensive set of relevant project characteristics, process dimensions, and performance outcomes; and a summary of the established links, contested links, and unexplored links between these elements.
Focusing on one set of contested links from the literature, the final research activity was a detailed empirical study at one company. The goal was to study the effect of variation in project-level product development processes, operating under the guidance of an established process standard, on project performance. The purpose-assembled data set includes measures of project characteristics, process dimensions, and project performance outcomes for 15 projects. Statistical analyses were performed to examine the relationships between process variation and project performance outcomes. Where possible, the statistical analyses were supported and enriched with available qualitative data. The results indicated that, at this company, process variation in the form of both customization and deviation was associated with negative net outcomes. Customization (in the form of combining project reviews) was associated with reduced development time and development cost, but also with lower quality, likely because of reduced testing. On net, in dollar terms, combining reviews was associated with negative outcomes. Specific deviations (in the form of waived deliverables) were also associated with negative performance consequences. Results also supported the lessons from Phase 1. Variation on different process dimensions was associated with different performance outcomes. Disaggregation was important, with many insights lost when deviations were aggregated. This project enhanced our understanding of the performance impacts of product development process standardization. The case studies highlighted the importance of disaggregating to individual process dimensions to correctly evaluate the effects of standardization. The systematic literature study resulted in a framework for organizational decision making about process standardization and a summary of the current state of knowledge - elements, established links, contested links, and unexplored links. 
The detailed empirical study at one company examined one set of contested links - between process standardization and project performance - and found that process variation in the form of both customization and deviation was associated with net negative effects on project performance.
by Sidharth Rupani.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
13

Rocha, Andrea M. "Computational Discovery of Phenotype Related Biochemical Processes for Engineering." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3315.

Full text
Abstract:
Application of bioengineering technologies for enhanced biological hydrogen production is a promising approach that may play a vital role in sustainable energy. Due to the ability of several naturally occurring microorganisms to generate hydrogen through varying metabolic processes, biological hydrogen has become an attractive alternative energy and fuel source. One area of particular interest is the production of biological hydrogen in organically-rich engineered systems, such as those associated with waste treatment. Despite the potential for high energy yields, hydrogen yields generated by bacteria in waste systems are often limited due to a focus on microbial utilization of organic material towards cellular growth rather than production of biogas. To address this concern and to improve upon current technological applications, metabolic engineering approaches may be applied to known hydrogen producing organisms. However, to successfully modify metabolic pathways, full understanding of metabolic networks involved in expression of microbial traits in hydrogen producing organisms is necessary. Because microbial communities associated with hydrogen production are capable of exhibiting a number of phenotypes, attempts to apply metabolic engineering concepts have been restricted due to limited information regarding complex metabolic processes and regulatory networks involved in expression of microbial traits associated with biohydrogen production. To bridge this gap, this dissertation focuses on identification of phenotype-related biochemical processes within sets of phenotype-expressing organisms. Specifically, through co-development and application of evolutionary genome-scale phenotype-centric comparative network analysis tools, metabolic and cellular components related to three phenotypes (i.e., dark fermentative, hydrogen production and acid tolerance) were identified. 
The computational tools employed for the systematic elucidation of key phenotype-related genes and subsystems consisted of two complementary methods. The first method, the Network Instance-Based Biased Subgraph Search (NIBBS) algorithm, identified phenotype-related metabolic genes and subsystems through comparative analysis of multiple genome-scale metabolic networks. The second method was the multiple alignment of metabolic pathways for identification of conserved metabolic subsystems in small sets of phenotype-expressing microorganisms. For both methodologies, key metabolic genes and subsystems that are likely to be related to hydrogen production and acid tolerance were identified, and hypotheses regarding their role in phenotype expression were generated. In addition, analysis of hydrogen-producing enzymes identified by NIBBS revealed the potential interplay, or cross-talk, between metabolic pathways. To identify phenotype-related subnetworks, three complementary approaches were applied to individual, and sets of, phenotype-expressing microorganisms. In the first method, the Dense ENriched Subgraph Enumeration (DENSE) algorithm, partial "prior knowledge" about the proteins involved in phenotype-related processes is utilized to identify dense, enriched sets of known phenotype-related proteins in Clostridium acetobutylicum. The second approach utilized a bi-clustering algorithm to identify phenotype-related functional association modules associated with metabolic controls of phenotype-related pathways. Last, through comparison of hundreds of genome-scale networks of functionally associated proteins, the α,β-motifs approach was applied to identify phenotype-related subsystems. Application of these methodologies for identification of subnetworks resulted in detection of regulatory proteins, transporters, and signaling proteins predicted to be related to phenotype expression.
Through analysis of protein interactions, clues to the functional roles and associations of previously uncharacterized proteins were identified (DENSE), and hypotheses regarding potentially important acid-tolerance mechanisms were generated (α,β-motifs). Similar to the NIBBS algorithm, analysis of functional modules predicted by the bi-clustering algorithm suggests cross-talk is occurring between pathways associated with hydrogen production. The ability of these phenotype-centric comparative network analysis tools to identify both known and potentially new biochemical processes is important for providing further understanding and insights into metabolic networks and system controls involved in the expression of microbial traits. In particular, identification of phenotype-related metabolic components through a systems approach provides the underlying foundation for the development of improved bioengineering technologies and experimental design for enhanced biological hydrogen production.
APA, Harvard, Vancouver, ISO, and other styles
14

Shen, Gwo-Chyau. "Adaptive inferential control for chemical processes /." The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487329662147068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Rojas, Gomez Victor Daniel. "Organizational processes analysis of product development in the automotive industry." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107366.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, System Design and Management Program, Engineering and Management Program, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 74-75).
This thesis provides an analysis of specific process phases associated with the vehicle component development process at Ford Motor Company. I use the Organizational Process as the foundation to explore opportunities to improve the existing process. As with any other organization, Ford Motor Company has areas of opportunity in the organizational arena. Being on the verge of the next automotive revolution, the organization needs to analyze whether or not it is in the right position to develop the cars of the future. With more than 100 years of history, the company faces legacy challenges that permeate the culture of today's organization. A history formed around cult figures, together with the scars left by turning the company around to avoid bankruptcy, could inhibit Ford from keeping pace in a demanding and changing industry. In Ford's current organization, product development engineers play a key role in engineering and developing the vehicles that people will drive in the years to come. The challenge of simultaneously developing trucks, high-performance cars, and autonomous, electric, and hybrid vehicles, while keeping up with innovation, requires engineers to be on top of their competencies. It also requires an organizational environment that supports them. A comprehensive analysis of the process of developing automotive components is presented using the three-lenses framework. This methodology reveals performance challenges in three categories or lenses: strategic design, cultural, and political. The organizational process analysis presents a desired state and the paths to achieve that change. It is shown that inefficiencies in the engineering process create higher costs in rework, which could impair the ability to compete with technology companies looking to disrupt the industry.
by Victor Daniel Rojas Gomez.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
16

Al-Duri, Bushra Abdul-Aziz Abdul-Karim. "Mass transfer processes in single and multicomponent batch adsorption systems." Thesis, Queen's University Belfast, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.258225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Yuan, Heyang. "Bioelectrochemical Systems: Microbiology, Catalysts, Processes and Applications." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/79910.

Full text
Abstract:
The treatment of water and wastewater is energy intensive, and there is an urgent need to develop new approaches to address the water-energy challenges. Bioelectrochemical systems (BES) are energy-efficient technologies that can treat wastewater and simultaneously achieve multiple functions such as energy generation, hydrogen production and/or desalination. The objectives of this dissertation are to understand the fundamental microbiology of BES, develop cost-effective cathode catalysts, optimize the process engineering and identify the application niches. It has been shown in Chapter 2 that electrochemically active bacteria can take advantage of shuttle-mediated EET and create optimal anode salinities for their dominance. A novel statistical model has been developed based on the taxonomic data to understand and predict functional dynamics and current production. In Chapters 3, 4 and 5, three cathode catalysts (i.e., N- and S-co-doped porous carbon nanosheets, N-doped bamboo-like CNTs and MoS2 coated on CNTs) have been synthesized and shown to effectively catalyze the oxygen reduction reaction or the hydrogen evolution reaction in BES. Chapters 6, 7 and 8 have demonstrated how BES can be combined with forward osmosis to enhance desalination or achieve self-powered hydrogen production. Mathematical models have been developed to predict the performance of the integrated systems. In Chapter 9, BES have been used as a research platform to understand the fate and removal of antibiotic resistance genes under anaerobic conditions. The studies in this dissertation have collectively demonstrated that BES may hold great promise for energy-efficient water and wastewater treatment.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
18

Wang, Chunguang S. M. Massachusetts Institute of Technology. "Enterprise architecture processes : comparing EA and CLIOS in the Veterans Health Administration." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/76512.

Full text
Abstract:
Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 92-94).
There are numerous frameworks for abstracting an enterprise complex system into a model for purposes of analysis and design. Examples of such frameworks include the Complex, Large-scale, Interconnected, Open, Socio-technical System (CLIOS) process for handling enterprise system architecture, the Enterprise Architecture eight views (EA) for diagnosing and improving overall enterprise performance, and the Enterprise Strategic Analysis for Transformation (ESAT). In addition to helping identify and manage complexity, emergent behavior and the requirements of many stakeholders, all of these frameworks help identify enterprise-wide processes, bringing value-added flow between enterprises and their stakeholders. This thesis evaluates the applicability of integrating these frameworks into a hybrid process in ongoing programs and determines whether a standard process can be generated through an integrative, interdisciplinary approach using the above models and frameworks. The Enterprise Architecture eight views framework as developed at MIT is designed to create enterprise-level transformations in large, complex socio-technical enterprises. In the past 15 years of research at LAI, these enterprise developments have been applied and validated in the government and in other industries including aerospace, transportation, healthcare, defense acquisition and logistics. The CLIOS process, also developed at MIT, is designed to work with Complex, Large-scale, Interconnected, Open, Socio-technical systems, creating strategies for stakeholders to reach goals through enterprise development. This process has been used heavily in transportation systems, energy distribution, and regional strategic transportation planning. This thesis applies both of these frameworks to the case of the Veterans Affairs health care enterprise to evaluate their effectiveness.
Based on insights from self-assessments and the organization's strategy, a transformation plan will be generated for the Veterans Affairs organization's current state and preferred future state. These outcomes will help to identify the strengths of the merged methodology.
by Chunguang Wang.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
19

Lotz, Marco. "Modelling of process systems with Genetic Programming /." Thesis, Link to the online version, 2006. http://hdl.handle.net/10019/570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Daberkow, Debora Daniela. "A formulation of metamodel implementation processes for complex systems design." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/12478.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ezolino, Juan Stefano. "Design for automation in manufacturing systems and processes." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104311.

Full text
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2016. In conjunction with the Leaders for Global Operations Program at MIT.
Thesis: S.M. in Engineering Systems, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2016. In conjunction with the Leaders for Global Operations Program at MIT.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 88-89).
The Widget industry has changed significantly over the last 20 years. Although Company A benefited from its historically strong market position for a long time, the market share of widgets has, at this point, been evenly divided between Company A and Company B. There is therefore market pressure for Company A to reassess the way it does business to be more competitive. Automation initiatives in the Widget industry have historically been slow to be implemented, and there has been hesitation to change the way widgets and their parts are designed and manufactured due to the complexity of the widget product. But in order to compete in a more competitive global market, companies must question many of the established assumptions regarding their products in order to achieve efficiency gains and improve safety standards in their production systems. The ultimate goal of the project was to align the design, manufacturing, and business processes with new technology capabilities and the goals of the company. By doing this, the cost of producing a widget would be decreased, while increasing in-process quality and repeatability. This thesis focuses on ways in which to show the value of improving the design of a widget to enable more efficient production systems, while ensuring the risk of injury to the mechanics is continuously lowered through increased process control and standardization. In order to understand what it means for engineers across the company to design parts and assemblies with automated manufacturing processes in mind, a list of high-level technical design principles needed to be developed. A group of 17 design and production engineers was assembled for a workshop, representing all of the widget programs, R&D, Product Development, Fabrication, Engineering Operations, Manufacturing Operations, and IT.
Through two days of activities, a list of ten principles was developed that could be applied to any widget part or assembly that was intended to be manufactured through automation. After the Design for Automation (DfA) principles were established and agreed-upon, it was necessary to find ways to effectively implement new tools and methodologies into the established design process.
by Juan Stefano Ezolino.
M.B.A.
S.M. in Engineering Systems
APA, Harvard, Vancouver, ISO, and other styles
22

Conradie, Alex van Eck. "Neurocontroller development for nonlinear processes utilising evolutionary reinforcement learning." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51841.

Full text
Abstract:
Thesis (MEng)--University of Stellenbosch, 2000.
ENGLISH ABSTRACT: The growth in intelligent control has primarily been a reaction to the realisation that nonlinear control theory has been unable to provide practical solutions to present day control challenges. Consequently the chemical industry may be cited for numerous instances of overdesign, which result from attempts to avoid operation near or within complex (often more economically viable) operating regimes. Within these complex operating regimes robust control system performance may prove difficult to achieve using conventional (algorithmic) control methodologies. Biological neuronal control mechanisms demonstrate a remarkable ability to make accurate generalisations from sparse environmental information. Neural networks, with their ability to learn and their inherent massive parallel processing ability, introduce numerous opportunities for developing superior control structures for complex nonlinear systems. To facilitate neural network learning, reinforcement learning techniques provide a framework which allows for learning from direct interactions with a dynamic environment. Its promise as a means of automating the knowledge acquisition process is beguiling, as it provides a means of developing control strategies from cause and effect (reward and punishment) interaction information, without needing to specify how the goal is to be achieved. This study aims to establish evolutionary reinforcement learning as a powerful tool for developing robust neurocontrollers for application in highly nonlinear process systems. A novel evolutionary algorithm, Symbiotic, Adaptive Neuro-Evolution (SANE), is utilised to facilitate neurocontroller development. This study also aims to introduce SANE as a means of integrating the process design and process control development functions, to obtain a single comprehensive calculation step for maximum economic benefit.
This approach thus provides a tool with which to limit the occurrence of overdesign in the process industry. To investigate the feasibility of evolutionary reinforcement learning in achieving these aims, the SANE algorithm is implemented in an event-driven software environment (developed in Delphi 4.0), which may be applied for both simulation and real world control problems. Four highly nonlinear reactor arrangements are considered in simulation studies. As a real world application, a novel batch distillation pilot plant, a Multi-Effect Batch Distillation (MEBAD) column, was constructed and commissioned. The neurocontrollers developed using SANE in the complex simulation studies, were found to exhibit excellent robustness and generalisation capabilities. In comparison with model predictive control implementations, the neurocontrollers proved far less sensitive to model parameter uncertainties, removing the need for model mismatch compensation to eliminate steady state off-set. The SANE algorithm also proved highly effective in discovering the operating region of greatest economic return, while simultaneously developing a neurocontroller for this optimal operating point. SANE, however, demonstrated limited success in learning an effective control policy for the MEBAD pilot plant (poor generalisation), possibly due to limiting the algorithm's search to a too small region of the state space and the disruptive effects of sensor noise on the evaluation process. For industrial applications, starting the evolutionary process from a random initial genetic algorithm population may prove too costly in terms of time and financial considerations. Pretraining the genetic algorithm population on approximate simulation models of the real process, may result in an acceptable search duration for the optimal control policy. 
The application of this neurocontrol development approach from a plantwide perspective should also have significant benefits, as individual controller interactions are so doing implicitly eliminated.
AFRIKAANSE OPSOMMING: Die huidige groei in intelligente beheerstelsels is primêr 'n reaksie op die besef dat nie-liniêre beheerstelsel teorie nie instaat is daartoe om praktiese oplossings te bied vir huidige beheer kwelkwessies nie. Gevolglik kan talle insidente van oorontwerp in die chemiese nywerhede aangevoer word, wat voortvloei uit 'n poging om bedryf in of naby komplekse bedryfsgebiede (dikwels meer ekonomies vatbaar) te vermy. Die ontwikkeling van robuuste beheerstelsels, met konvensionele (algoritmiese) beheertegnieke, in die komplekse bedryfsgebiede mag problematies wees. Biologiese neurobeheermeganismes vertoon 'n merkwaardige vermoë om te veralgemeen vanaf yl omgewingsdata. Neurale netwerke, met hulle vermoë om te leer en hulle inherente parallelle verwerkingsvermoë, bied talle geleenthede vir die ontwikkeling van meer doeltreffende beheerstelsels vir gebruik in komplekse nie-liniêre sisteme. Versterkingsleer bied 'n raamwerk waarbinne 'n neurale netwerk leer deur direkte interaksie met 'n dinamiese omgewing. Versterkingsleer hou belofte in vir die inwin van kennis, deur die ontwikkeling van beheerstrategieë vanaf aksie en reaksie (loon en straf) interaksies - sonder om te spesifiseer hoe die taak voltooi moet word. Hierdie studie beoog om evolusionêre versterkingsleer as 'n kragtige strategie vir die ontwikkeling van robuuste neurobeheerders in nie-liniêre prosesomgewings, te vestig. 'n Nuwe evolusionêre algoritme, Simbiotiese, Aanpasbare, Neuro-Evolusie (SANE), word aangewend vir die ontwikkeling van die neurobeheerders. Hierdie studie beoog ook die daarstelling van SANE as 'n weg om prosesontwerp en prosesbeheer ontwikkeling vir maksimale ekonomiese uitkering, te integreer. Hierdie benadering bied dus 'n strategie waardeur die insidente van oorontwerp beperk kan word.
Om die haalbaarheid van hierdie doelwitte, deur die gebruik van evolusionêre versterkingsleer te ondersoek, is die SANE algoritme aangewend in 'n Windows omgewing (ontwikkel in Delphi 4.0). Die Delphi programmatuur geniet toepassing in beide die simulasie en werklike beheer probleme. Vier nie-liniêre reaktore ontwerpe is oorweeg in die simulasie studies. As 'n werklike beheer toepassing, is 'n nuwe enkelladingsdistillasie kolom, 'n Multi-Effek Enkelladingskolom (MEBAD) gebou en in bedryf gestel. Die neurobeheerders vir die komplekse simulasie studies, wat deur SANE ontwikkel is, het uitstekende robuustheid en veralgemeningsvermoë ten toon gestel. In vergelyking met model voorspellingsbeheer implementasies, is gevind dat die neurobeheerders heelwat minder sensitief is vir model parameter onsekerheid. Die noodsaak na modelonsekerheid kompensasie om gestadigde toestand afset te elimineer, word gevolglik verwyder. Die SANE algoritme is ook hoogs effektief vir die soek na die mees ekonomiese bedryfstoestand, terwyl 'n effektiewe neurobeheerder gelyktydig vir hierdie ekonomiese optimumgebied ontwikkel word. SANE het egter beperkte sukses in die leer van 'n effektiewe beheerstrategie vanaf die MEBAD toetsaanleg getoon (swak veralgemening). Die swak veralgemening kan toegeskryf word aan 'n te klein bedryfsgebied waarin die algoritme moes soek en die negatiewe effek van sensor geraas op die evaluasie proses. Vir industriële applikasies blyk dit dat die uitvoer van die evolusionêre proses vanaf 'n wisselkeurige begintoestand nie koste effektief is in terme van tyd en finansies nie. Deur die genetiese algoritme populasie vooraf op 'n benaderde model op te lei, kan die soek tydperk na 'n optimale beheerstrategie aansienlik verkort word. Die aanwending van die neurobeheer ontwikkelingstrategie vanuit 'n aanlegwye oogpunt mag aanleiding gee tot aansienlike voordele, aangesien individuele beheerder interaksies sodoende implisiet uitgeskakel word.
APA, Harvard, Vancouver, ISO, and other styles
23

Sravana, Kumar Karnati. "Diagnostic knowledge-based systems for batch chemical processes: hypothesis queuing and evaluation /." The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487858106117232.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ward, Eric D. (Eric Daniel). "A socio-technical systems analysis of change processes in the design of flagship interplanetary missions." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107291.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, System Design and Management Program, Engineering and Management Program, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 99-100).
In the engineering of complex systems, changes to flight hardware or software after initial release can have large impacts on project implementation. Even a comparatively small change on an assembly or subsystem can cascade into a significant amount of rework if it propagates through the system. This can happen when a change affects the interfaces with another subsystem, or if it alters the emergent behavior of the system in a significant way, and is especially critical when subsequent work has already been performed utilizing the initial version. These changes can be driven by new or modified requirements leading to changes in scope, design deficiencies discovered during analysis or test, failures during test, and other such mechanisms. In complex system development, changes are managed through engineering change requests (ECRs) that are communicated to affected elements. While the tracking of changes is critical for the ongoing engineering of a complex project, the ECRs can also reveal trends on the system level that could assist with the management of current and future projects. In an effort to identify systematic trends, this research has analyzed ECRs from two different JPL-led space mission projects to classify the change activity and assess change propagation. It employs time analysis of ECR initiation throughout the lifecycle, correlates ECR generators with ECR absorbers, and considers the distribution of ECRs across subsystems. The analyzed projects are the planetary rover mission, Mars Science Laboratory (MSL), and the Earth-orbiting mission, Soil Moisture Active Passive (SMAP). This analysis has shown that there is some consistency across these projects with regard to which subsystems generate or absorb change. The relationship of the ECR-Subsystem networks identifies subsystems that are absorbers of change and others that are generators of change.
For the flight systems, the strongest absorbers of change were found to be avionics and the mechanical structure for the spacecraft bus, and the strongest generators of change were concentrated in the payloads. When this attribute is recognized, project management can attempt to close ECR networks by looking for ways to leverage absorbers and avoid multipliers. Alternatively, in cases where changes to a subsystem are undesirable, knowing whether it is an absorber can greatly assist with expectations and planning. This analysis identified some significant differences between the two projects as well. While SMAP followed a relatively well-behaved blossom profile across the project, MSL had an avalanche of change leading to the drastic action of re-baselining the launch date. While the official reasoning for the slip of the launch date is based in technical difficulties, the avalanche profile implies that a snowballing of change may have had a significant impact as well. Furthermore, the complexity metrics applied show that MSL has a more complex nature than SMAP, with 269 ECRs in 65 Parent-Child clusters for MSL, compared to 166 ECRs in 53 clusters for SMAP. The Process Complexity metric confirms this, quantitatively measuring the complexity of MSL at 492, compared to 367 for SMAP. These tools and metrics confirm the intuition that MSL, as a planetary rover, is a more complex space mission than SMAP, an Earth orbiter.
by Eric D. Ward.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
25

Chen, Yan (Yan Henry) 1976. "Integrating Radio Frequency Identification (RFID) data with Electronic Data Interchange (EDI) business processes." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33326.

Full text
Abstract:
Thesis (M. Eng. in Logistics)--Massachusetts Institute of Technology, Engineering Systems Division, 2005.
Includes bibliographical references (leaves 42-46).
Radio Frequency Identification (RFID) technology, an important component in the enterprise IT infrastructure, must be integrated into the legacy IT system. This thesis studies how RFID technology can be integrated into the existing Electronic Data Interchange (EDI) infrastructure, particularly how RFID can be used in the current EDI exchange process to accelerate the receiving process. After a detailed review of the current receiving process and the relevant data specifications, the author finds it possible to replace the current manual receiving process with an RFID-enabled automatic receiving and reconciliation process. A middleware is proposed to implement this approach.
by Yan Chen.
M.Eng. in Logistics
APA, Harvard, Vancouver, ISO, and other styles
26

Benoist, Tristan. "Open quantum systems and quantum stochastic processes." Thesis, Paris, Ecole normale supérieure, 2014. http://www.theses.fr/2014ENSU0006/document.

Full text
Abstract:
De nombreux phénomènes de physique quantique ne peuvent être compris que par l'analyse des systèmes ouverts. Un appareil de mesure, par exemple, est un système macroscopique en contact avec un système quantique. Ainsi, tout modèle d'expérience doit prendre en compte les dynamiques propres aux systèmes ouverts. Ces dynamiques peuvent être complexes : l'interaction du système avec son environnement peut modifier ses propriétés, l'interaction peut créer des effets de mémoire dans l'évolution du système... Ces dynamiques sont particulièrement importantes dans l'étude des expériences d'optique quantique. Nous sommes aujourd'hui capables de manipuler individuellement des particules. Pour cela la compréhension et le contrôle de l'influence de l'environnement sont cruciaux. Dans cette thèse nous étudions d'un point de vue théorique quelques procédures communément utilisées en optique quantique. Avant la présentation de nos résultats, nous introduisons et motivons l'utilisation de la description markovienne des systèmes quantiques ouverts. Nous présentons à la fois les équations maîtresses et le calcul stochastique quantique. Nous introduisons ensuite la notion de trajectoire quantique pour la description des mesures indirectes continues. C'est dans ce contexte que l'on présente les résultats obtenus au cours de cette thèse. Dans un premier temps, nous étudions la convergence des mesures non destructives. Nous montrons qu'elles reproduisent la réduction du paquet d'onde du système mesuré. Nous montrons que cette convergence est exponentielle avec un taux fixe. Nous bornons le temps moyen de convergence. Dans ce cadre, en utilisant les techniques de changement de mesure par martingale, nous obtenons la limite continue des trajectoires quantiques discrètes. Dans un second temps, nous étudions l'influence de l'enregistrement des résultats de mesure sur la préparation d'état par ingénierie de réservoir.
Nous montrons que l'enregistrement des résultats de mesure n'a pas d'influence sur la convergence proprement dite. Cependant, nous trouvons que l'enregistrement des résultats de mesure modifie le comportement du système avant la convergence. Nous retrouvons une convergence exponentielle avec un taux équivalent au taux sans enregistrement. Mais nous trouvons aussi un nouveau taux de convergence correspondant à une stabilité asymptotique. Ce dernier taux est interprété comme une mesure non destructive ajoutée. Ainsi l'état du système ne converge qu'après un temps aléatoire. À partir de ce temps la convergence peut être bien plus rapide. Nous obtenons aussi une borne sur le temps moyen de convergence.
Many quantum physics phenomena can only be understood in the context of open system analysis. For example, a measurement apparatus is a macroscopic system in contact with a quantum system. Therefore any experiment model needs to take into account open system behaviors. These behaviors can be complex: the interaction of the system with its environment might modify its properties, the interaction may induce memory effects in the system evolution... These dynamics are particularly important when studying quantum optics experiments. We are now able to manipulate individual particles. Understanding and controlling the environment's influence is therefore crucial. In this thesis we investigate at a theoretical level some commonly used quantum optics procedures. Before the presentation of our results, we introduce and motivate the Markovian approach to open quantum systems. We present both the usual master equation and quantum stochastic calculus. We then introduce the notion of quantum trajectory for the description of continuous indirect measurements. It is in this context that we present the results obtained during this thesis. First, we study the convergence of non-demolition measurements. We show that they reproduce the system wave function collapse. We show that this convergence is exponential with a fixed rate. We bound the mean convergence time. In this context, we obtain the continuous time limit of discrete quantum trajectories using martingale change of measure techniques. Second, we investigate the influence of measurement outcome recording on state preparation using reservoir engineering techniques. We show that measurement outcome recording does not influence the convergence itself. Nevertheless, we find that measurement outcome recording modifies the system behavior before the convergence. We recover an exponential convergence with a rate equivalent to the rate without measurement outcome recording.
But we also find a new convergence rate corresponding to an asymptotic stability. This last rate is interpreted as an added non-demolition measurement. Hence, the system state converges only after a random time. From this time on, the convergence can be much faster. We also find a bound on the mean convergence time.
APA, Harvard, Vancouver, ISO, and other styles
27

Efatmaneshnik, Mahmoud Mechanical & Manufacturing Engineering Faculty of Engineering UNSW. "Towards immunization of complex engineered systems: products, processes and organizations." Publisher: University of New South Wales. Mechanical & Manufacturing Engineering, 2009. http://handle.unsw.edu.au/1959.4/43358.

Full text
Abstract:
Engineering complex systems and New Product Development (NPD) are major challenges for contemporary engineering design and must be studied at three levels: Products, Processes and Organizations (PPO). The science of complexity indicates that complex systems share a common characteristic: they are robust yet fragile. Complex and large scale systems are robust in the face of many uncertainties and variations; however, they can collapse when facing certain conditions. This is because complex systems embody many subtle, intricate and nonlinear interactions. If formal modelling exercises with available computational approaches are not able to assist designers to arrive at accurate predictions, then how can we immunize our large scale and complex systems against sudden catastrophic collapse? This thesis is an investigation into complex product design. We tackle the issue first by introducing a template and/or design methodology for complex product design. This template is an integrated product design scheme which embodies and combines elements of both design theory and organization theory; in particular, distributed (spatial and temporal) problem solving and adaptive team formation are brought together. This design methodology harnesses emergence and innovation through the incorporation of a massive amount of numerical simulations which determine the problem structure as well as the solution space characteristics. Within the context of this design methodology three design methods based on measures of complexity are presented. Complexity measures generally reflect holistic structural characteristics of systems. At the levels of PPO, correspondingly, the Immunity Index (global modal robustness) as an objective function for solutions, the real complexity of decompositions, and the cognitive complexity of a design system are introduced. These three measures are helpful in immunizing the complex PPO from chaos and catastrophic failure.
In the end, a conceptual decision support system (DSS) for complex NPD based on the presented design template and the complexity measures is introduced. This support system (IMMUNE) is represented by a Multi Agent Blackboard System, and has the dual characteristic of supporting distributed problem-solving environments while reflecting the centralized viewpoint of process monitoring. In other words, IMMUNE advocates autonomous problem-solving (design) agents, which is the necessary attribute of innovative design organizations and/or innovation networks; and at the same time it promotes the coherence in the design system that is usually seen in centralized systems.
APA, Harvard, Vancouver, ISO, and other styles
28

Ummethala, Upendra V. "Control of heat conduction in manufacturing processes : a distributed parameter systems approach." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/44894.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Obrigkeit, Darren Donald 1974. "Numerical solution of multicomponent population balance systems with applications to particulate processes." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/31099.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2001.
"June 2001."
Includes bibliographical references.
Population balances describe a wide variety of processes in the chemical industry and environment ranging from crystallization to atmospheric aerosols, yet the dynamics of these processes are poorly understood. A number of different mechanisms, including growth, nucleation, coagulation, and fragmentation typically drive the dynamics of population balance systems. Measurement methods are not capable of collecting data at resolutions which can explain the interactions of these processes. In order to better understand particle formation mechanisms, numerical solutions could be employed; however, current numerical solutions are generally restricted to either a limited selection of growth laws or a limited solution range. This lack of modeling ability precludes the accurate and/or fast solution of the entire class of problems involving simultaneous nucleation and growth. Using insights into the numerical stability limits of the governing equations for growth, it is possible to develop new methods which reduce solution times while expanding the solution range to include many orders of magnitude in particle size. Rigorous derivation of the representations and governing equations is presented for both single and multi-component population balance systems involving growth, coagulation, fragmentation, and nucleation sources. A survey of the representations used in numerical implementations is followed by an analysis of model complexity as new components are added. The numerical implementation of a split composition distribution method for multicomponent systems is presented, and the solution is verified against analytical results. Numerical stability requirements under varying growth rate laws are used to develop new scaling methods which enable the description of particles over many orders of magnitude in size.
Numerous examples are presented to illustrate the utility of these methods and to familiarize the reader with the development and manipulations of the representations, governing equations, and numerical implementations of population balance systems.
by Darren Donald Obrigkeit.
Ph.D.
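The stability insight the abstract describes concerns the time-step limits of the growth term dn/dt = -d(G*n)/dL. A minimal sectional (finite-volume) sketch with a CFL-limited time step is shown below; the function name, the size-independent growth rate, and the upwind discretization are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def grow_population(n, L_edges, G, t_end):
    """Advance a sectional number density n (one value per size cell)
    under pure growth, dn/dt = -d(G*n)/dL, with a first-order upwind
    scheme. G is a constant, size-independent growth rate (illustrative
    simplification); the time step obeys the CFL limit dt <= dL/G."""
    dL = np.diff(L_edges)                  # cell widths
    n = np.asarray(n, dtype=float).copy()
    dt = 0.9 * dL.min() / G                # CFL-limited step, safety factor 0.9
    n_steps = int(np.ceil(t_end / dt))
    dt = t_end / n_steps                   # equal steps, still inside the limit
    for _ in range(n_steps):
        flux = G * n                       # upwind flux leaving each cell
        inflow = np.concatenate(([0.0], flux[:-1]))
        n -= dt / dL * (flux - inflow)
    return n
```

With a constant growth rate the scheme simply advects the distribution toward larger sizes at speed G; exceeding the CFL limit makes the explicit update unstable, which is the kind of constraint the thesis's scaling methods are designed around.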
APA, Harvard, Vancouver, ISO, and other styles
30

Hoehn, William Kenneth. "An integrated decision approach : combining the demand-revealing, quality function deployment, and elements of the systems engineering processes /." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-03302010-020103/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Ramanan, Baheerathan. "Quantifying mass transport processes in environmental systems using magnetic resonance imaging (MRI)." Thesis, University of Glasgow, 2011. http://theses.gla.ac.uk/2974/.

Full text
Abstract:
Understanding the transport behaviour of pollutants is key to enhancing remediation strategies and to informing predictive models of pollutant behaviour in environmental and engineered systems. This work investigates magnetic resonance imaging (MRI) as a methodology for imaging heavy metal, molecular and nanoparticle transport in two different saturated porous systems: biofilms and saturated porous geologic media (gravel). While most renowned for its use in medicine, MRI enables us to image the transport of heavy metals, macro-molecules and nanoparticles inside biofilms and porous columns in real time. This is achieved using either paramagnetic ions (e.g. Cu2+), molecules labelled with paramagnetic ions (e.g. Gd3+), or superparamagnetic nanoparticles (e.g. nanomagnetite). The presence of these tracers causes a concentration-dependent shortening of relaxation times (T1 or T2) of the surrounding 1H nuclei and thus creates noticeable changes in the MRI signal. Critically, this enables the transport of (super)paramagnetic ions, molecules or nanoparticles through the biofilm or porous geological media to be imaged. Moreover, the actual concentrations of molecules can be quantified, as changes in relaxation rates have a linear relationship with the concentration of the tracer molecules. Hence, MRI can be used not only to track but also to quantify the transport of (super)paramagnetic molecules inside biofilms and saturated porous columns. The key advantages of MRI over other techniques are its ability to image inside systems opaque to other methods and its ability to collect data non-invasively, so the system is unperturbed by the analysis. In this study, the transport of Gd-DTPA, a commonly used MRI contrast agent, was successfully imaged through phototrophic biofilms of 10 and 2.5 mm thicknesses. To improve spatial resolution for the 2.5 mm biofilm, a bespoke 5 mm diameter RF coil was constructed. 
The comparison of spatially distributed, time-varying concentrations of Gd-DTPA inside the biofilms with diffusion models illustrated that transport was via both diffusion and advection. This work illustrated the potential of using paramagnetically labelled molecules to quantify molecular pollutant transport and fate in biofilms. MRI was also used to image heavy metal transport in artificial biofilms (composed of agar and bacteria) to test the suitability of an existing adsorption-diffusion model to represent heavy metal transport and fate in biofilms. While the diffusion coefficients and adsorption constants estimated were appropriate, discrepancies between the model and the data illustrate that models may need to be developed further to incorporate factors such as concentration-dependent diffusion or cell lysis. Finally, the ability to image inside opaque systems was further exploited to image nanoparticle transport inside a coarse-grained packed column. This was undertaken to illustrate the potential for MRI to image nanoparticle pollutant transport in systems relevant to river beds and sustainable urban drainage systems (SUDS). MRI was successfully used to image the nanoparticle transport, with significant transport inhibition observed for positively charged nanoparticles compared to negatively charged nanoparticles due to permanent attachment.
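The quantification step the abstract relies on is the linear relaxivity relation 1/T1 = 1/T1_0 + r1*C, which can be inverted for concentration. A minimal sketch follows; the function name and any numeric relaxivity values are illustrative assumptions, not calibration data from the thesis:

```python
def tracer_concentration(T1_obs, T1_background, r1):
    """Estimate tracer concentration from a measured longitudinal
    relaxation time T1_obs (s), given the tracer-free background T1_0
    and the agent's relaxivity r1 (per second per unit concentration).
    Uses the linear relation 1/T1 = 1/T1_0 + r1*C, i.e.
    C = (1/T1 - 1/T1_0) / r1."""
    return (1.0 / T1_obs - 1.0 / T1_background) / r1
```

Because the relation is linear in concentration, a single calibration of r1 lets a T1 map acquired by MRI be converted voxel-by-voxel into a concentration map.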
APA, Harvard, Vancouver, ISO, and other styles
32

Prisby, Craig K. (Craig Kanoa) 1971. "Coordinating the multi-retailer, single supplier procurement processes for a seasonal product with supply contracts." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/28577.

Full text
Abstract:
Thesis (M. Eng. in Logistics)--Massachusetts Institute of Technology, Engineering Systems Division, 2003.
Includes bibliographical references (p. 24).
Supply contracts are used to maximize profits in a supply chain by coordinating order quantities between the suppliers and retailers. In traditional supply contracts, retailers use a newsvendor approach to maximize their profits, while the supplier's profits increase linearly as a function of the number of units supplied to retailers. Initially, retailers assume risk in the supply chain because they are facing an unknown demand, and the suppliers assume no risk. This thesis looks at an example from the garment industry where retailers order to replenish stock after a small assortment buy is placed at the start of the finite selling season. The suppliers must place production orders for the entire selling season before the selling season begins. It is clear that the retailers assume little risk in this model, while the supplier faces significant risk, especially if its forecasting methods are not accurate. The levels of risk that each assumes in this model are reversed when compared to the traditional supply contract model. A method is developed that coordinates the retailer ordering with the supplier's production schedule. It is shown that coordinating the supply chain's ordering will lead to higher profits than the current, uncoordinated model.
by Craig K. Prisby.
M. Eng. in Logistics
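The newsvendor approach mentioned in the abstract balances underage and overage costs at the critical fractile of the demand distribution. A minimal sketch, assuming normally distributed demand (the parameter names and any numbers are illustrative, not from the thesis):

```python
from statistics import NormalDist

def newsvendor_quantity(mu, sigma, price, cost, salvage=0.0):
    """Classic newsvendor order quantity for normally distributed demand
    with mean mu and standard deviation sigma. Underage cost cu is the
    margin lost on a missed sale, overage cost co is the loss on an
    unsold unit; the optimal order is the critical-fractile quantile."""
    cu = price - cost            # cost of ordering one unit too few
    co = cost - salvage          # cost of ordering one unit too many
    critical_fractile = cu / (cu + co)
    return mu + sigma * NormalDist().inv_cdf(critical_fractile)
```

When underage and overage costs are equal the fractile is 0.5 and the order equals mean demand; a higher margin pushes the order above the mean, which is the risk trade-off the thesis's coordination scheme reallocates between retailer and supplier.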
APA, Harvard, Vancouver, ISO, and other styles
33

Herzog, Erik. "An approach to systems engineering tool data representation and exchange." Doctoral thesis, Linköping : Univ, 2004. http://www.ep.liu.se/diss/science_technology/08/67/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Brataas, Gunnar. "Performance engineering method for workflow systems : an integrated view of human and computerised work processes." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 1996. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1411.

Full text
Abstract:

A method for designing workflow systems which satisfy performance requirements is proposed in this thesis. Integration of human and computerised performance is particularly useful for workflow systems where human and computerised processes are intertwined. The proposed framework encompasses human and computerised resources.

Even though systematic performance engineering is not common practice in information system development, current best practice shows that performance engineering of software is feasible, e.g. the SPE method by Connie U. Smith. Contemporary approaches to performance engineering focus on dynamic models of resource contention, e.g. queueing networks and Petri nets. Two difficulties arise for large-scale information systems. The first difficulty is to estimate appropriate parameters which capture the properties of the software and the organisation. The second difficulty is to maintain an overview of a complex model, which is essential both to guide the choice of parameters and to ensure that the performance engineering process is an integral part of the wider system development process.

The proposed method is based on the static performance modelling method Structure and Performance (SP) developed by Peter H. Hughes. SP provides a suitable bridge between contemporary CASE tools and traditional dynamic approaches to performance evaluation, in particular because it addresses the problems of parameterisation and overview identified above.

The method is explored and illustrated with two case studies. The Blood Bank Case Study comprised performance engineering of a transaction-oriented information system, showing the practical feasibility of integrating the method with CASE tools. The Gas Sales Telex Administration Case Study for Statoil looked at performance engineering of a workflow system for telex handling, and consisted of performance modelling of human activity in interaction with a Lotus Notes computer platform.

The latter case study demonstrated the feasibility of the framework.
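The queueing-network models the abstract cites as the contemporary dynamic approach can be sketched minimally as an open product-form network with one M/M/1 station per resource. This is a generic illustration of that modelling style, not the SP method itself; resource names and demands are hypothetical:

```python
def open_network_response_time(arrival_rate, demands):
    """Mean response time of an open product-form queueing network with
    one M/M/1 station per resource. `demands` maps a resource name to
    its total service demand per job (seconds). Uses the utilization
    law U = lambda * D and the M/M/1 residence time D / (1 - U)."""
    total = 0.0
    for resource, D in demands.items():
        U = arrival_rate * D               # utilization law
        if U >= 1.0:
            raise ValueError(f"{resource} saturated (U={U:.2f})")
        total += D / (1.0 - U)             # M/M/1 residence time
    return total
```

In a workflow setting the same bookkeeping can include human "servers" alongside computerised ones, which is the integrated human-plus-machine view the thesis argues for.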

APA, Harvard, Vancouver, ISO, and other styles
35

Gan, Jyeh J. "Decision support systems for tool reuse and EOL processes." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39488.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; in conjunction with the Leaders for Manufacturing Program at MIT, 2007.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 68-69).
Intel® is a manufacturing company that concentrates on the fabrication of computer chips. Over the years, and continuing into the future, Intel® has gone through multiple generations of manufacturing technology, driven by new fabrication techniques and increased wafer sizes. These advances have resulted in significant opportunities for cost reduction, which include the reuse of semiconductor equipment within Intel factories and the sale of used semiconductor equipment. To ensure assets are transferred in a safe and timely manner, Intel developed a 6D program (Decontamination, Decommission, Demolition, Demolition-System, Delivery, and Deployment) to standardize the EOL (End of Life) process of transferring a tool from the factory to its final destination in reuse, sale, parts harvesting, donation or scrap. Like other multi-national companies, Intel® has decentralized manufacturing processes over multiple worldwide sites; most if not all the fabrication, sort, and assembly tool information is archived in multiple repositories/systems. In addition to the scattering of knowledge, the tool-related information appears not to be comprehensive, with data fields that do not match across the multiple systems.
(cont.) As a result, significant time is consumed to ensure the comprehensiveness and the accuracy of the required data across the multiple sites. Thus a comprehensive map of information infrastructure based on the 6D process is necessary to understand and enhance efficiencies in the knowledge flow process. Detailed mapping of databases and their meta-data will help identify the thoroughness, accuracy, redundancy, and inefficiency in the tool-related information systems as they relate to 6D. A prototype of a "one-stop-site" was developed and key Knowledge Management recommendations were proposed to enhance efficiency by further reducing costs, time, and resources.
by Jyeh J. Gan.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
36

Abu-Madi, Mahmoud A. "A computer model for heat exchange processes in mobile air-conditioning systems." Thesis, University of Brighton, 1998. https://research.brighton.ac.uk/en/studentTheses/3b31883a-c908-4435-a66b-eef044b014da.

Full text
Abstract:
The last few years have seen a rapid growth in the number of cars equipped with air-conditioning systems. The space available to fit the system is limited and the under-bonnet environment is hostile. Moreover, the depletion of stratospheric ozone has led to legislation on the phasing out of the chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs). These substances are used as refrigerants in most refrigeration, heat pump and air-conditioning systems in service today. The aim of this research project was to study existing air-conditioning systems used in automotive applications and to develop a model that simulates the components of these systems. This provides a better understanding of the effect of using different refrigerants on the system and its performance. Experimental studies of the performance of the different heat exchanger geometries used provided inputs to the model developed. Simulation models of automotive air-conditioning condensers and evaporators were developed and used to compare the performance of these heat exchangers using CFC and HCFC refrigerants and the non-ozone-depleting replacements. Thermodynamic properties of the new refrigerants were derived from the equation of state. The evaporator was simulated taking into consideration the mass transfer associated with the heat transfer in humid conditions. Two types of compact heat exchangers were modelled: round tube with plane fin and plate tube with corrugated fin. These cover most automotive, domestic and industrial applications. The basic performance data of various geometries were determined experimentally. An existing thermal wind tunnel, re-instrumented and modified to improve accuracy at low air velocities, was used in this study. A new data logger linked to a personal computer was used with newly written software to collect and analyse the test data. The results for all geometries tested were correlated and presented in non-dimensional form. 
The test data were used to determine the effect of various geometrical parameters on the performance for an optimisation of condenser and evaporator designs. The model developed is being used by industrial collaborators for the design of heat exchangers in automotive air-conditioning systems.
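Condenser and evaporator models of the kind described above typically rest on standard compact heat exchanger relations. A hedged sketch using the common epsilon-NTU approximation for crossflow with both fluids unmixed (a generic textbook correlation, not the author's actual model; function names are illustrative):

```python
import math

def crossflow_effectiveness(NTU, Cr):
    """Effectiveness of a crossflow heat exchanger with both fluids
    unmixed, via the standard epsilon-NTU approximation. Cr is the
    capacity-rate ratio Cmin/Cmax; Cr = 0 corresponds to a condensing
    or evaporating side (phase change at constant temperature)."""
    if Cr == 0.0:
        return 1.0 - math.exp(-NTU)
    return 1.0 - math.exp((1.0 / Cr) * NTU**0.22
                          * (math.exp(-Cr * NTU**0.78) - 1.0))

def heat_duty(NTU, Cr, C_min, T_hot_in, T_cold_in):
    """Heat transferred (W) from effectiveness, the minimum capacity
    rate C_min (W/K), and the inlet temperature difference (K)."""
    return crossflow_effectiveness(NTU, Cr) * C_min * (T_hot_in - T_cold_in)
```

The experimentally correlated, non-dimensional performance data the abstract mentions would feed into NTU (via UA) in a relation of this shape.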
APA, Harvard, Vancouver, ISO, and other styles
37

Kanumury, Rajesh. "Integrating business and engineering processes in manufacturing environment using AI concepts." Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1179423333.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ingram, Mary Ann. "Estimation for linear systems driven by point processes with state dependent rates." Diss., Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/14887.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Al-Meer, Mariam A. "Reducing heart failure admissions through improved care systems and processes." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111872.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, in conjunction with the Leaders for Global Operations Program at MIT, 2017.
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, in conjunction with the Leaders for Global Operations Program at MIT, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 121-124).
Heart failure (HF) is a complex chronic condition that can result from any cardiac disorder that impairs the ventricle's ability to fill with or eject blood. The American Heart Association predicts that there will be about 10 million HF patients in the US by 2037, with total hospitalization costs exceeding $70 billion. This represents a considerable burden to hospitals nationwide, including the Massachusetts General Hospital (MGH) -- a leading medical center that has long grappled with patient overcrowding and capacity constraints. This thesis presents an extensive mapping of the HF care pathway at MGH, followed by the results of a detailed retrospective analysis of the general behavior of HF patients admitted to MGH. Here, we notice that the majority of HF admissions originate as self-referrals via the Emergency Department (ED) and take place on weekdays, between the hours of 9am and 6pm. Moreover, we find that about 57% of hospitalized HF patients often have no scheduled follow-up appointments with their providers in the two weeks leading up to their admissions and, similarly, about 43% have no scheduled appointments in the eight weeks post hospital discharge. These represent two critical time periods in the events of acute heart failure decompensation. In an effort to prioritize targeted outpatient care, we propose a predictive model which aims to identify patients at greatest risk of a first hospital admission following encounters with their primary care providers and/or cardiologists in any given year. We perform logit-linear regressions on multiple prior first admissions and use predictors that, among others, include clinical risk factors, socioeconomic features and histories of prior medications. 
Some of the model's most significant predictors, as identified by the Akaike information criterion (AIC), include patient's age, marital status, ability to speak English, estimated average income, previous administration of loop diuretics, and the total number of medications prescribed or administered. To assess the quality of our predictions, we turn to the receiver operating characteristic (ROC) and its resulting average area under the curve (AUC) of 0.712. As the team continues to focus on developing interventions that offer better care to HF patients, the value of our model lies in its ability to prioritize patient needs for outpatient care and monitoring, and to guide the allocation of limited care resources.
by Mariam A. Al-Meer.
S.M.
M.B.A.
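The AUC of 0.712 reported above summarizes the logit model's discrimination: it is the probability that a randomly chosen admitted patient receives a higher predicted risk than a randomly chosen non-admitted one. A minimal sketch of that computation (the Mann-Whitney formulation; function name and data are illustrative, not the thesis's code or results):

```python
def auc(labels, scores):
    """Area under the ROC curve for binary labels (1 = event, 0 = no
    event) and real-valued risk scores, computed as the fraction of
    positive/negative pairs where the positive outscores the negative;
    ties count one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, so 0.712 indicates moderate but useful discrimination for prioritizing outpatient follow-up.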
APA, Harvard, Vancouver, ISO, and other styles
40

Al-Tayyar, Mohammad H. (Mohammad Haytham). "Corporate entrepreneurship and new business development : analysis of organizational frameworks, systematic processes and entrepreneurial attributes in established organizations." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90706.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 113-122).
Entrepreneurship is a distinctively individual concept. The individual entrepreneur works on his or her own to create a new business. Employees, on the other hand, function within the boundaries of the company. Employees that behave entrepreneurially collectively create the phenomenon of corporate entrepreneurship. In this thesis, we study the most common and overarching traits, characteristics and attributes of individual entrepreneurs. We analyze the most prevalent traits and examine how companies can be structured to foster strong, sustainable corporate entrepreneurial ecosystems. The research also evaluates different corporate entrepreneurial models, types and frameworks through analyzing existing processes for creating corporate entrepreneurship and new business development. We explore concepts such as corporate venturing, corporate new business development, intrapreneurship, joint venturing, alliances, entrepreneurial human resource management, entrepreneurial organizational designs and business model innovation strategies. Specific companies that exemplified particular corporate entrepreneurship processes were analyzed, such as DuPont, 3M, IBM and Degussa AG. The concept of corporate entrepreneurship is instrumental in creating growth for companies but can also be a source of risk; the example of Samsung Motors illustrates some of the negative impacts of corporate diversification. The research considers sustainable approaches for successfully implementing corporate entrepreneurship and new business development, with a focus on the human interactions between the employee and the company.
by Mohammad H. Al-Tayyar.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
41

Smith, Zachary R. "Designing and implementing auxiliary operational processes." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44301.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division; in conjunction with the Leaders for Manufacturing Program at MIT, 2008.
Includes bibliographical references (p. 83-84).
Amazon.com, one of the largest and most profitable online retailers, has been experiencing such dramatic growth rates that it must continually update and modify its fulfillment process in order to meet customer demand for its products. As the volume of customer orders increases, management at the different fulfillment centers must determine the optimal way to increase the throughput through their facility. Many times the answer lies in improving the primary process, but occasionally it makes better sense if an auxiliary process is built or expanded to meet the increased demand. This thesis analyzes the decision criteria necessary to determine when an auxiliary process should be designed in addition to an established primary process. The author's internship project will be presented as an example of how to implement such a secondary method. The six-month LFM project focused on increasing the Fernley, Nevada fulfillment center's capacity by making improvements to its manual sortation/packaging. This process, nicknamed BIGS, was originally built to offload large and troublesome orders from the primary, automated process path. The unique labor-intensive procedures used in this process held several advantages that justified its existence and the investments necessary to expand its capacity.
by Zachary R. Smith.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
42

Mäkäräinen, Minna. "Software change management processes in the development of embedded software /." Espoo [Finland] : Technical Research Centre of Finland, 2000. http://www.vtt.fi/inf/pdf/publications/2000/P416.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Leung, Wai-man Wanthy. "Evolutionary optimisation of industrial systems /." Hong Kong : University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B2132668X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Manyuchi, Musaida Mercy. "Measurement and behavior of the overall volumetric oxygen transfer coefficient in aerated agitated alkane based multiphase systems." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/5329.

Full text
Abstract:
Thesis (MScEng (Process Engineering))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: Hydrocarbons provide excellent feedstocks for bioconversion processes to produce value-added products using various micro-organisms. However, hydrocarbon-based aerobic bioprocesses may exhibit transport problems where the bioconversion is limited by oxygen supply rather than reaction kinetics. Consequently, the overall volumetric oxygen transfer coefficient (KLa) becomes critical in designing, operating and scaling up these processes. In view of the importance of KLa in hydrocarbon-based processes, this work evaluated KLa measurement methodologies as well as quantification of KLa behavior in aerated agitated alkane-solid-aqueous dispersions. A widely used KLa measurement methodology, the gassing out procedure (GOP), was improved. This improvement was done to account for the dissolved oxygen (DO) transfer resistances associated with the probe. These resistances result in a lag in DO response during KLa measurement. The DO probe response lag time was incorporated into the GOP, resulting in the GOP (lag) methodology. The GOP (lag) compared well with the pressure step procedure (PSP), as documented in literature, which also incorporated the probe response lag time. Using the GOP (lag), KLa was quantified in alkane-solid-aqueous dispersions, using either inert compounds (corn flour and CaCO3) or inactive yeast cells as solids to represent the micro-organisms in a hydrocarbon bioprocess. Influences of agitation, alkane concentration, solids loading and solids particle sizes and their interactions on KLa behavior in these systems were quantified. In the application of an accurate KLa measurement methodology, the DO probe response lag time was investigated. 
Factors affecting the lag, which included process conditions such as agitation (600-1200rpm), alkane concentration (2.5-20% (v/v)), alkane chain length (n-C10-13 and n-C14-20), inert solids loading (1-10g/L) and solids particle sizes (3-14μm) as well as probe characteristics such as membrane age and electrolyte age (5-day usage), were investigated. Kp, the oxygen transfer coefficient of the probe, was determined experimentally as the inverse of the time taken for the DO to reach 63.2% of saturation after a step change in DO concentration. Kp dependence on these factors was defined using 2² factorial design experiments. Kp decreased on increased membrane age with an effect double that of Kp decrease due to electrolyte age. Additionally, increased alkane concentration decreased Kp with an effect 7 times higher compared to that of Kp decrease due to increased alkane chain length. This was in accordance with Pareto chart quantification. KLa was then calculated, using the GOP (lag), according to equation [1], which incorporates the influence of Kp. Equation [1] is derived from the simultaneous solution of the models which describe the response of the system and of the probe to a step change in DO. C/C* = 1 - [Kp*e^(-KLa*t) - KLa*e^(-Kp*t)] / (Kp - KLa)   [1] The KLa values documented in literature from the PSP and KLa calculated by the GOP (lag) showed only a 1.6% difference. However, KLa values calculated by the GOP (lag) were more accurate than KLa calculated by the GOP, with up to >40% error observed in the latter according to t-test analyses. These results demonstrated that incorporating Kp markedly improved KLa accuracy. Consequently, the GOP (lag) was chosen as the preferred KLa measurement methodology. KLa was determined in n-C14-20-inert solid-aqueous dispersions. Experiments were conducted in a stirred tank reactor with a 5L working volume at constant aeration of 0.8vvm, 22ºC and 101.3kPa. 
KLa behavior across a range of agitations (600-1200rpm), alkane concentrations (2.5-20% (v/v)), inert solids loadings (1-10g/L) and solids particle sizes (3-14μm) was defined using a 2⁴ factorial design experiment. In these dispersions, KLa increased significantly on increased agitation with an effect 5 times higher compared to that of KLa increase due to interaction of increased alkane concentration and inert solids loading. Additionally, KLa decreased significantly on increased alkane concentration with an effect 4 times higher compared to both that of increased solids particle sizes and the interaction of increased agitation and solids particle size. In n-C14-20-yeast-aqueous dispersions, KLa was determined under narrowed process conditions better representing typical bioprocess conditions. KLa behavior across a range of agitations (600-900rpm), alkane concentrations (2.5-11.25% (v/v)) and yeast loadings (1-5.5g/L) using a 5μm yeast cell was defined using a 2³ factorial design experiment. In these dispersions, KLa increased significantly on increased agitation. Additionally, KLa decreased significantly on increased yeast loading with an effect 1.2 times higher compared to that of KLa decrease due to interaction of increased alkane concentration and yeast loading. In this study, the importance of Kp for accurate KLa measurement in alkane-based systems has been quantified and an accurate and less complex methodology for its measurement applied. Further, KLa behavior in aerated alkane-solid-aqueous dispersions was quantified, demonstrating KLa enhancement on increased agitation and KLa depression on increased alkane concentration, solids loading and solids particle sizes.
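The probe-lag model of equation [1] above, together with the 63.2% rule for Kp, can be sketched directly; the model is the standard first-order probe-response solution, and the function names are illustrative, not from the thesis:

```python
import math

def probe_response(t, KLa, Kp):
    """Normalized DO probe reading C/C* after a step change, per the
    probe-lag model of equation [1]: the liquid DO rises with transfer
    coefficient KLa while the probe responds first-order with its own
    coefficient Kp (requires Kp != KLa)."""
    return 1.0 - (Kp * math.exp(-KLa * t)
                  - KLa * math.exp(-Kp * t)) / (Kp - KLa)

def Kp_from_response_time(t63):
    """Probe oxygen transfer coefficient: the inverse of the time the
    probe takes to reach 63.2% of saturation after a step change
    (the first-order time constant)."""
    return 1.0 / t63
```

When Kp is much larger than KLa the reading approaches the undisturbed 1 - e^(-KLa*t) curve, which is why ignoring the probe lag (plain GOP) is tolerable for fast probes but can, as the abstract reports, introduce large KLa errors otherwise.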
AFRIKAANSE OPSOMMING: Koolwaterstowwe dien as uitstekende voervoorraad vir ´n verskeidenheid van mikroorganismes wat aangewend word in biologiese omsettingsprosesse ter vervaardiging van waardetoevoegende produkte. Hierdie biologiese omsettingsprosesse word egter vertraag weens die gebrek aan suurstoftoevoer onder aerobiese toestande. Die tempo van omsetting word dus beheer deur die volumetriese suurstofoordragkoeffisiënt (KLa) eerder as die toepaslike reaksiekinetika. Die bepaling van ´n akkurate KLa word dus krities tydens die ontwerp en opskalering van hierdie prosesse. Met dit in gedagte het hierdie studie die huidige metodes om KLa te bepaal geëvalueer en die gedrag van KLa in goed vermengde en belugde waterige alkaanmengsels met inerte vastestowwe, soos gisselle, in suspensie ondersoek. ´n Deesdae populêre metode om KLa te bepaal, die sogenaamde gasvrylatingsprosedure (GOP) is in hierdie studie verbeter. Die verbetering berus op die ontwikkeling van ´n prosedure om die suurstofoordragsweerstand van die pobe wat die hoeveelheid opgeloste suurstof (DO) meet, in berekening te bring. Hierdie weerstand veroorsaak ´n vertragin in the responstyd van die probe. Die verbeterde metode, GOP (lag), vergelyk goed met die gepubliseerde resultate van die drukstaptegniek (PSP) wat ook die responstyd in ag neem. GOP (lag) is ingespan om KLa te gekwantifiseer vir waterige alkaan-vastestof suspensies. Inerte componente soos mieliemeel, kalsiumkarbonaat en onaktiewe gisselle het gedien as die vastestof in suspensie verteenwoordigend van die mikroörganismes in ´n koolwaterstof bio-proses. Die invloed van vermengingstempo, alkaan konsentrasie, vastestof konsentrasie en partikelgrootte asook die interaksie van al die bogenoemde op KLa is kwatitatief bepaal in hierdie studie. Faktore wat die responstyd van die DO probe beïnvloed is ondersoek. 
Hierdie faktore is onder meer vermengingstempo (600-1200opm), alkaankonsentrasie (2.5-20% (v/v)), alkaankettinglengte (n-C10-13 en n-C14-20), vastestofkonsentrasie (1-10g/L) en partikelgrootte (3-14 μm). Faktore wat die eienskappe van die probe beïnvoed, naamlik membraan-en elektrolietouderdom (5 dae verbruik), is ook ondersoek. Kp, die suurstofoordragskoeffisiënt, is bepaal deur ´n inkrementele verandering in die suurstofkonsentrasie van die mengsel te maak en die tyd vir 63.2% versadiging van die probelesing te noteer. Die genoteerde tyd is die response tyd van die probe en Kp, die inverse van hierdie tyd. Die afhanklikheid van Kp op die bogenoemde faktore is ondersoek in ´n 22 faktorieël ontwerpte reeks eksperimente. Kp toon ´n afname met ´n toename in membraanouderdom. Hierdie afname is dubbel in grootte as dit vergelyk word met die afname relatief tot die toename in elektrolietouderdom. Verder toon Kp ´n afname met ´n toename in alkaankonsentrasie. Hierdie afname is 7 keer groter relatief tot die afname gesien met die toename in alkaan kettinglengte. Hierdie is in goeie ooreenstemming met Pareto kaarte as kwantifiseringsmetode. KLa is bereken met die inagname van Kp volgens vergelyking [1]: 1 1 * L p p p K at K t L p p La C K e K ae C K K = -  - - -  -   [1] Vergelyking [1] is afgelei vanaf die gelyktydige oplossing van die bestaande modelle wat die responstyd van die pobe vir ´n stapverandering in DO bereken. Die KLa waardes van die PSP metode uit literatuur verskil in die orde van 1.6% van dié bereken deur vergelyking [2]. Hierdie verskil is weglaatbaar. Die KLa waardes verkry uit die GOP metode wat nie Kp in berekening bring nie, verskil met meer as 40% van die huidige, verbeterde metode volgens die statistiese t-test analiese. Dit bewys dat die inagname van Kp ´n merkwaardige verbetering in die akuraatheid van KLa teweeg bring. GOP (lag) kry dus voorkeur vir die berekening van KLa verder aan in hierdie studie. 
KLa is bereken vir n-C14-20-water mengsels met inerte vastestofsuspensies. Die eksperimente is uitgevoer in ´n 5L geroerde reaktor met ´n konstante belugting van 0.8vvm (volume lug per volume supensie per minuut), 22ºC en 101.3kPa. Die gedrag van KLa met betrekking tot vermengingstempo (600-1200opm), alkaankonsentrasie (2.5-20% (v/v)), vastestofkonsentrasie (1-10g/L) en partikelgrootte (3-14μm) is ondersoek in ´n 24 faktorieël ontwerpte reeks eksperimente. Verder is die invloed van vloeistofviskositeit en oppervlakspanning op KLa ondersoek in ´n 23 faktorieël ontwerpte reeks eksperimente. KLa het ´n beduidende toename getoon met ´n toename in vermengingstempo. Hierdie toename was 5 keer groter as die toename relatief tot die interaksie van alkaan-en vastestofkonsentrasie. KLa het ook beduidend afgeneem met ´n toename in alkaankonsentrasie. Die toename was 4 keer groter as die toename relatief tot die toename in partikelgrootte en die interaksie van vermengingstempo en partikelgrootte. In n-C14-20-water mengsels met gisselsuspensies is KLa bepaal onder kondisies verteenwoordigend van tipiesie biologiese omsettingsprosesse. Die gedrag van KLa met betrekking tot vermengingstempo (600-900opm), alkaankonsentrasie (2.5-11.25% (v/v)) en giskonsentrasie (1-5.5g/L) met ´n partikelgroote van 5μm is ondersoek in ´n 23 faktorieël ontwerpte reeks eksperimente. Hierdie eksperimente het ´n beduidende toename in KLa met ´n toename in vermengingstempo getoon sowel as ´n beduidende afname met ´n toename in giskonsentrasie. Hierdie afname is in die orde van 1.2 keer groter in vergelyking met die interaksie van alkeen- en giskonsentrasie. Hierdie studie bring die kritieke rol wat Kp speel in die akkurate bepaling van KLa in waterige alkaansisteme met inerte vastestofsuspensies na vore. 
It further proposes a methodology for the accurate measurement and quantification of both Kp and KLa under aerobic conditions with respect to agitation rate, alkane concentration, solids concentration and particle size.
APA, Harvard, Vancouver, ISO, and other styles
45

Xu, Donghai. "Phase behaviour modelling of hydrocarbon systems for compositional reservoir simulation of gas injection processes." Thesis, Heriot-Watt University, 1990. http://hdl.handle.net/10399/886.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Thiel, Gregory P. "Desalination systems for the treatment of hypersaline produced water from unconventional oil and gas processes." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/107078.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2015.
Numbering for pages 3-4 duplicated. Cataloged from PDF version of thesis.
Includes bibliographical references (pages 183-195).
The decline of conventional reserves has led to a boom in the use of hydraulic fracturing to recover oil and gas in North America. Among the most significant challenges associated with hydraulic fracturing is water resource management, as large quantities of water are both consumed and produced by the process. The management of produced water, the stream of water associated with a producing well, is particularly challenging as it can be hypersaline, with salinities as high as nine times seawater. Typical disposal strategies for produced water, such as deep well injection, can be infeasible in many unconventional resource settings as a result of regulatory, environmental, and/or economic barriers. Consequently, on-site treatment and reuse, a part of which is desalination, has emerged as a strategy in many unconventional formations. However, although desalination systems are well understood in oceanographic and brackish groundwater contexts, their performance and design at significantly higher salinities is less well explored. In this thesis, this gap is addressed from the perspective of two major themes: energy consumption and scale formation, as these can be two of the most significant costs associated with operating high-salinity produced water desalination systems. Samples of produced water were obtained from three major formations, the Marcellus in Pennsylvania, the Permian in Texas, and the Maritimes in Nova Scotia, and abstracted to design-case samples for each location. A thermodynamic framework for analyzing high salinity desalination systems was developed, and traditional and emerging desalination technologies were modeled to assess the energetic performance of treating these high-salinity waters. A novel thermodynamic parameter, known as the equipartition factor, was developed and applied to several high-salinity desalination systems to understand the limits of energy efficiency under reasonable economic constraints.
For emerging systems, novel hybridizations were analyzed which show the potential for improved performance. A model for predicting scale formation was developed and used to benchmark current pre-treatment practices. An improved pretreatment process was proposed that has the potential to cut chemical costs significantly. Ultimately, the results of the thesis show that traditional seawater desalination rules of thumb do not apply: minimum and actual energy requirements of hypersaline desalination systems exceed their seawater counterparts by an order of magnitude, evaporative desalination systems are more efficient at high salinities than at lower salinities, the scale-defined operating envelope can differ from formation to formation, and optimized, targeted pretreatment strategies have the potential to greatly reduce the cost of treatment. It is hoped that the results of this thesis will better inform future high-salinity desalination system development as well as current industrial practice.
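The order-of-magnitude claim about minimum energy can be illustrated with a deliberately crude back-of-the-envelope estimate (an ideal-solution van 't Hoff sketch, not the thesis's thermodynamic framework): at vanishing recovery, the least work of separation per unit volume of pure product equals the feed osmotic pressure, which in this approximation scales linearly with salinity.

```python
# Rough least-work estimate via the van 't Hoff osmotic pressure pi = i*c*R*T.
# Ideal-solution, NaCl-equivalent assumptions; values are illustrative only.
R = 8.314        # gas constant, J/(mol K)
T = 298.15       # temperature, K
I_VANT_HOFF = 2  # ions per NaCl formula unit

def least_work_kwh_per_m3(molarity_mol_per_L):
    """Least work of separation at vanishing recovery, kWh per m^3 of product."""
    pi_pa = I_VANT_HOFF * molarity_mol_per_L * 1000.0 * R * T  # Pa = J/m^3
    return pi_pa / 3.6e6                                       # J/m^3 -> kWh/m^3

seawater = least_work_kwh_per_m3(0.6)         # ~0.6 M NaCl-equivalent seawater
hypersaline = least_work_kwh_per_m3(9 * 0.6)  # produced water at ~9x seawater salinity

print(seawater, hypersaline)
```

Because van 't Hoff underestimates osmotic pressure at high ionic strength, the true gap between hypersaline and seawater least work is even larger than this linear sketch suggests, consistent with the order-of-magnitude difference reported in the abstract.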
by Gregory P. Thiel.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
47

Östlin, Johan. "On Remanufacturing Systems : Analysing and Managing Material Flows and Remanufacturing Processes." Doctoral thesis, Linköpings universitet, Monteringsteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11932.

Full text
Abstract:
The aim of remanufacturing is to retrieve a product’s inherent value when the product no longer fulfils the user’s desired needs. By taking advantage of this inherent value through different product recovery alternatives, there is potential for both economically and environmentally advantageous recovery of products. Remanufacturing is a complex business due to the high degree of uncertainty in the production process, mainly caused by two factors: the quantity and the quality of returned products. These factors have implications both for the external processes, e.g. coordinating the input of returned products with the demand for remanufactured products, and for the internal processes that coordinate the operations within the factory walls. This additional complexity needs to be considered when organising the remanufacturing system. The objective of this dissertation is to explore how remanufacturing companies can become more competitive through analysing and managing material flows and remanufacturing processes. The first issue discussed in this dissertation is the drivers that make companies interested in remanufacturing products in the first place. The conclusion is that the general drivers are profit, company policy and environmental concerns. In a general sense, the profit motivation is the most prevalent business driver, but there are still situations where this motivation is secondary to policy and environmental drivers. Secondly, the need to balance the supply of returned products with the demand for remanufactured products shows that the possible remanufacturing volumes for a product depend on the shape of the supply and demand distributions. By using a product life cycle perspective, the supply and demand situations can be foreseen, and support is given on possible strategies in these different supply and demand situations. Thirdly, how used products are gathered from customers is categorised by seven different customer relationship types.
These types all have different effects on the remanufacturing system, and the characteristics of these relationships are discussed in detail. When considering the remanufacturing process within the factory walls, a generic remanufacturing process was developed that divides the remanufacturing process into five different phases: pre-disassembly, disassembly, reprocessing, reassembly and the post-assembly phase. These phases are separated by three key decision points in the process that also have a major impact on the material planning of the process. For remanufacturing material planning and production planning, applying lean principles can be difficult. One foundation for implementing lean principles in new production is the existence of standardised processes that are stable and predictable. In the remanufacturing system, the possibility of realising a predictable process is limited by the “normal” variations in the quantity and quality of the returned cores. Even though lean principles can be problematic to implement in the remanufacturing environment, this dissertation proposes a number of solutions that can be used to make the remanufacturing process leaner.
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Qiang. "Process modeling of innovative design using systems engineering." Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD007/document.

Full text
Abstract:
Following the DRM methodology, we develop process models to describe and effectively manage innovative design. First, we present a descriptive model of innovative design. This model reflects the fundamental processes that are useful for understanding the different dimensions and stages involved in innovative design. It also makes it possible to locate innovation opportunities within this process, and focuses on the internal and external factors that influence success. Second, we carry out an empirical study to investigate how control and flexibility can be balanced to manage uncertainty in innovative design. After identifying the project practices that address these uncertainties in terms of control and flexibility, case studies are analysed. This example shows that control and flexibility can coexist. Building on the managerial insights from this empirical study, we develop a procedural process model and an activity-based adaptive model. The former proposes a conceptual framework for balancing innovation and control through process structuring at the project level and the integration of flexible practices at the operational level. The latter model regards innovative design as a complex adaptive system, and accordingly proposes a design method that progressively constructs the process architecture of innovative design. Finally, both models are verified by analysing a number of processes and by running simulations within three innovative design projects.
We develop a series of process models to comprehensively describe and effectively manage innovative design in order to achieve an adequate balance between innovation and control, following the design research methodology (DRM). Firstly, we introduce a descriptive model of innovative design. This model reflects the actual process and pattern of innovative design, locates innovation opportunities in the process and supports a systematic perspective whose focus is the external and internal factors affecting the success of innovative design. Secondly, we perform an empirical study to investigate how control and flexibility can be balanced to manage uncertainty in innovative design. After identifying project practices that cope with these uncertainties in terms of control and flexibility, a case-study sample based on five innovative design projects from an automotive company is analyzed and shows that control and flexibility can coexist. Based on the managerial insights of the empirical study, we develop the procedural process model and the activity-based adaptive model of innovative design. The former provides the conceptual framework to balance innovation and control through process structuration at the project level and the integration of flexible practices at the operation level. The latter model considers innovative design as a complex adaptive system, and thereby proposes a method of process design that dynamically constructs the process architecture of innovative design. Finally, the two models are verified by supporting a number of process analyses and simulations within a series of innovative design projects.
APA, Harvard, Vancouver, ISO, and other styles
49

Viller, Stephen Alexandre. "Human factors in requirements engineering : a method for improving requirements processes for the development of dependable systems." Thesis, Lancaster University, 1999. http://eprints.lancs.ac.uk/11686/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Holtz, Heath M. (Heath Mikal). "Re-sourcing manufacturing processes in metal forming operations." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34859.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division; in conjunction with the Leaders for Manufacturing Program at MIT, 2005.
Includes bibliographical references (p. 75-76).
Deciding which activities to conduct in-house and which to outsource has become increasingly important due to its implications for a company's supply chain and overall business model. A number of factors can lead a company to outsource manufacturing processes. As a result of this outsourcing, the supply chain can become very complex and overwhelming to manage. This thesis analyzes this situation from the perspective of one manufacturer, American Axle and Manufacturing, Inc. (AAM). AAM's Metal Formed Products (MFP) Division currently faces a number of challenges: rising steel prices, fixed labor costs and declining sales. All these factors have significantly impacted profitability, forcing senior management to take a comprehensive look at the division and consider developing a plan to improve divisional operations. As part of this plan, the MFP Division's senior management asked for a thorough look into all of the manufacturing processes performed by the division, both internally and by outside suppliers. In addition to identifying the processes and suppliers, senior management sought to highlight opportunities for improving the process flow through the re-sourcing of manufacturing processes. This project develops a framework to analyze and evaluate these re-sourcing decisions. The framework employs a five-step approach and incorporates a number of diverse analytical tools. Process flow mapping provided a tool to visually highlight the best opportunities to re-source. In addition to a visual representation, process flow mapping also provided the data to financially evaluate alternatives. Strategic and market factors were identified in order to target and prioritize re-sourcing efforts.
This framework provides a structure for sourcing decisions that balances financial and strategic concerns. The project concluded with a $2M investment to re-source heat treating to AAM facilities.
by Heath M. Holtz.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
