Dissertations on the topic "Systems and processes engineering"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Explore the top 50 dissertations for research on the topic "Systems and processes engineering".
Next to each source in the list of references you will find an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the metadata.
Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.
Heng, Jiin Shyang. "On systems engineering processes in system-of-systems acquisition." Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5689.
Full text of source
Results show that a low-risk SoS acquisition could continue with the current SE process, as the benefits derived from an extensive front-end SE process are limited. Conversely, a high-risk SoS acquisition should adopt the SoS SE process proposed herein to enhance the SoS acquisition program's chance of success. It is high-risk SoS acquisitions such as the US Army's Future Combat System, the US Coast Guard's Deepwater System, the Joint Tactical Radio System (JTRS), and Homeland Security's SBInet that would likely benefit from the proposed SoS SE process.
Ball, Linden John. "Cognitive processes in engineering design." Thesis, University of Plymouth, 1990. http://hdl.handle.net/10026.1/674.
Full text of source
Johnson, Kipp M. "Tailoring systems engineering processes for rapid space acquisitions." Thesis, Monterey, California. Naval Postgraduate School, 2010. http://hdl.handle.net/10945/5203.
Full text of source
The Self-Awareness Space Situational Awareness (SASSA) program is a congressionally initiated technology demonstration program run by the Air Force Space and Missile Systems Center (SMC), Los Angeles Air Force Base. Initiated in October 2008, SASSA is investigating the feasibility of a highly flexible and adaptable satellite payload system for detecting satellite threats, both natural and man-made. The SASSA program was given cost and schedule limitations with a mandate to deliver hardware for demonstration in 24 months, considered a "rapid acquisition" by AF and SMC standards. This study provides an assessment of how the SASSA program tailored systems engineering processes to implement a "rapid space acquisition." Acquisition and engineering standards define a roadmap for military procurements to produce the most effective product at the most reasonable cost. Refinement of these standards over time is critical to the continued ability of acquisition systems to evolve a current and effective military. This study reviews the SASSA concept and technology demonstration, surveys standard systems engineering guidance, catalogues the systems engineering processes tailored, and assesses the effectiveness of this tailoring. It provides observation and assessment of real-world results, successful and unsuccessful, for the purpose of capturing and documenting lessons learned toward successfully accomplishing rapid space acquisitions.
Begin, Michael P. "Systems Engineering Processes for the Acquisition of Prognostic and Health Management Systems." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/17323.
Full text of source
Kazeem, Mukaila. "Developing a Profitable Photography Business Based on System Engineering Principles & Processes." Digital Commons at Loyola Marymount University and Loyola Law School, 2010. https://digitalcommons.lmu.edu/etd/414.
Повний текст джерелаAbdimomunova, Leyla (Leyla M. ). "Organizational assessment processes for enterprise transformation." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62764.
Full text of source
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 97-99).
Enterprise transformation is a dynamic process that builds upon and affects organizational processes. Organizational assessment plays a critical role in the planning and execution of enterprise transformation. It allows assessment of an enterprise's current capabilities as well as identification and prioritization of the improvements needed to drive the enterprise transformation process. Despite the benefits that organizational assessment has to offer, many organizations fail to exploit them due to unfavorable organizational culture, unsatisfactory assessment processes, or a mismatch between the assessment tool and the broader transformation approach. This thesis focuses mainly on a model of organizational assessment and how it can be improved to better support enterprise transformation. We argue that the assessment process extends beyond performing the assessment itself. For the assessment to provide the expected benefit, organizations must first of all create an environment ensuring a clear understanding of the role assessment plays in the enterprise transformation process. To this end, they must promote open and frequent discussion about the current state of the enterprise and its future goals. The assessment process must be carefully planned to ensure that it runs effectively and efficiently and that assessment results are accurate and reliable. Assessment results must be analyzed and turned into specific recommendations and action plans. At the same time, the assessment process itself must be evaluated and adjusted, if necessary, for the next assessment cycle. Based on a literature review and case studies of five large aerospace companies, we recommend a five-phase assessment process model that includes mechanisms to change organizational behavior through pre-assessment phases. It also allows for adjustment of the assessment process itself, based on the results and the experience of participants, so that it better suits the organization's needs and practices.
by Leyla Abdimomunova.
S.M. in Engineering and Management
Lam, Rosaly. "Integrating ISSE and SE Processes in Information System Development." Digital Commons at Loyola Marymount University and Loyola Law School, 2007. https://digitalcommons.lmu.edu/etd/411.
Full text of source
Clegg, Ben. "A systems approach to reengineering business processes towards concurrent engineering principles." Thesis, De Montfort University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391643.
Full text of source
Oswald, W. Andrew (William Andrew). "Understanding technology development processes : theory & practice." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/90699.
Full text of source
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 75-77).
Technology development is hard for management to understand and hard for practitioners to explain, yet it is an essential component of innovation. While there are standard and predictable processes for product development, many of these techniques don't apply well to technology development. Are there common processes for technology development that can make it predictable, or is it unpredictable like basic research and invention? In this thesis, after building a foundation by looking at product development processes, I survey some of the literature on technology development processes and compare it to a handful of case studies from a variety of industries. I then summarize the observations from the cases and build a generic model for technology development that can be used to provide insights into how to monitor and manage technology projects. One observation from the product development literature is that looping and iteration are problematic for establishing accurate schedules, which becomes one of the fundamental disconnects between management and engineering. Technologists rely heavily on iteration as a tool for gaining knowledge, and combined with other risks, technology development may appear "out of control". To mitigate these risks, technologists have developed a variety of approaches, including building a series of prototypes of increasing fidelity and using them as a form of communication; simultaneously developing multiple technologies as a hedge against failure; or predicting and developing technologies they think will be needed, outside of formal channels. Finally, I use my model to provide some insights as to how management can understand technology development projects. This gives technologists and non-technical managers a common ground for communication.
by W. Andrew Oswald.
S.M. in Engineering and Management
Ajmera, Sameer K. (Sameer Kumar) 1975. "Microchemical systems for kinetic studies of catalytic processes." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/16821.
Full text of source
Includes bibliographical references.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Silicon microfabrication techniques and scale-up by replication have for decades fueled spectacular advances in the electronics industry. More recently, with the rise of microfluidics, microfabrication has enabled the development of microchemical systems for a variety of chemical and biological applications. This work focuses on the development of these systems for improved gas-phase heterogeneous catalysis research. The catalyst development process often requires fundamental information such as reaction rate constants, activation energies, and reaction mechanisms to gauge and understand catalyst performance. To this end, we have examined the ability of microreactors with a variety of geometries to efficiently obtain accurate kinetic information. This work primarily focuses on microfabricated packed-bed reactors that utilize standard catalyst particles, and briefly explores the use of membrane-based reactors to obtain kinetic information. Initial studies with microfabricated packed beds led to the development of a microfabricated silicon reactor that incorporates a novel cross-flow design with a short-pass, multiple-flow-channel geometry to reduce the gradients that often confound kinetics in macroscale reactors. The cross-flow geometry minimizes pressure drop through the particle bed and incorporates a passive flow distribution system composed of an array of shallow flow channels. Combined experiments and modeling confirm the even distribution of flow across the wide catalyst bed, with a pressure drop approximately 1600 times smaller than in typical microfabricated packed-bed configurations.
Coupled with the inherent heat and mass transfer advantages at the sub-millimeter length scales achievable through microfabrication, the cross-flow microreactor has been shown to operate in near-gradientless conditions and is an advantageous design for catalyst testing. The ability of microfabricated packed beds to obtain accurate catalytic information has been demonstrated through experiments with phosgene generation over activated carbon, and CO oxidation and acetylene hydrogenation over a variety of noble metals on alumina. The advantages of using microreactors for catalyst testing are quantitatively highlighted throughout this work.
by Sameer K. Ajmera.
Ph.D.
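The pressure-drop comparison in the abstract above can be illustrated with the standard Ergun equation for flow through a packed bed. The bed geometries, particle size, and flow rate below are invented for illustration and are not taken from the thesis; only the qualitative conclusion carries over, namely that a wide, short bed passing the same feed sees a far smaller pressure drop than a long, narrow one.

```python
# Illustrative only: Ergun-equation pressure drop for two hypothetical
# packed-bed geometries at the same volumetric flow. All numbers are
# invented; the thesis's actual reactor dimensions are not reproduced here.

def ergun_dp(length_m, area_m2, flow_m3s, dp_m=50e-6, eps=0.4,
             mu=1.8e-5, rho=1.2):
    """Ergun equation: pressure drop (Pa) across a packed bed.

    length_m: bed length in the flow direction; area_m2: cross-section;
    dp_m: particle diameter; eps: void fraction;
    mu: gas viscosity (Pa*s); rho: gas density (kg/m^3).
    """
    u = flow_m3s / area_m2  # superficial velocity
    viscous = 150 * mu * (1 - eps) ** 2 * u / (eps ** 3 * dp_m ** 2)
    inertial = 1.75 * (1 - eps) * rho * u ** 2 / (eps ** 3 * dp_m)
    return (viscous + inertial) * length_m

flow = 1e-7  # m^3/s, same feed through both beds

# Long, narrow channel: 10 mm bed, 0.5 mm x 0.5 mm cross-section.
dp_long = ergun_dp(length_m=10e-3, area_m2=0.25e-6, flow_m3s=flow)
# Wide, short cross-flow bed: 0.4 mm deep, 10 mm x 0.5 mm cross-section.
dp_cross = ergun_dp(length_m=0.4e-3, area_m2=5e-6, flow_m3s=flow)

print(f"pressure-drop ratio (long/cross-flow): {dp_long / dp_cross:.0f}x")
```

Because the wide bed both shortens the flow path and lowers the superficial velocity, the two effects multiply, which is why the ratio is so large.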
Rana, Farhan 1971. "Electron tunneling processes in Si/SiO₂ systems." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10766.
Full text of source
Rupani, Sidharth. "Standardization of product development processes in multi-project organizations." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/91082.
Full text of source
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 120-126).
An important question for a large company with multiple product development projects is how standard or varied the sets of activities it uses to conceive, design, and commercialize products should be across the organization. To help address this question, this project comprises three research activities to improve understanding of the influence of standardization of product development processes on performance. Previous research indicates that process standardization has many positive (improved efficiency, knowledge transfer, decision making, and resource allocation) and negative (reduced creativity, innovation, adaptation and learning, and employee satisfaction) performance effects. Even for specific performance outcomes, the influence of process standardization is contested. The first phase was a set of theory-building case studies at five large companies that develop electromechanical assembled products. One important lesson from the case studies was that, to appropriately evaluate the impact of standardization on performance, it is essential to disaggregate the process into its individual 'dimensions' (activities, deliverables, tools, etc.), because standardization on different dimensions of the process affects performance outcomes quite differently. Another lesson was that companies differ in their process standardization approach because of differences in their portfolio characteristics and in their strategic priorities across performance outcomes. Based on the importance of focusing on individual process dimensions, a broad and systematic literature study was conducted with the aim of better capturing the current state of knowledge. This literature study resulted in a framework to characterize the problem space; a comprehensive set of relevant project characteristics, process dimensions, and performance outcomes; and a summary of the established links, contested links, and unexplored links between these elements.
Focusing on one set of contested links from the literature, the final research activity was a detailed empirical study at one company. The goal was to study the effect of variation in project-level product development processes, operating under the guidance of an established process standard, on project performance. The purpose-assembled data set includes measures of project characteristics, process dimensions, and project performance outcomes for 15 projects. Statistical analyses were performed to examine the relationships between process variation and project performance outcomes. Where possible, the statistical analyses were supported and enriched with available qualitative data. The results indicated that, at this company, process variation in the form of both customization and deviation was associated with negative net outcomes. Customization (in the form of combining project reviews) was associated with reduced development time and development cost, but also with lower quality, likely because of reduced testing. On net, in dollar terms, combining reviews was associated with negative outcomes. Specific deviations (in the form of waived deliverables) were also associated with negative performance consequences. Results also supported the lessons from Phase 1. Variation on different process dimensions was associated with different performance outcomes. Disaggregation was important, with many insights lost when deviations were aggregated. This project enhanced our understanding of the performance impacts of product development process standardization. The case studies highlighted the importance of disaggregating to individual process dimensions to correctly evaluate the effects of standardization. The systematic literature study resulted in a framework for organizational decision making about process standardization and a summary of the current state of knowledge - elements, established links, contested links, and unexplored links. 
The detailed empirical study at one company examined one set of contested links - between process standardization and project performance - and found that process variation in the form of both customization and deviation was associated with net negative effects on project performance.
by Sidharth Rupani.
Ph. D.
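The statistical step described in the abstract above, relating per-project measures of process variation to performance outcomes, can be sketched with a simple correlation computation. The variables and data below are invented for illustration; the thesis's actual 15-project data set is not reproduced here.

```python
# Hypothetical sketch: correlating a process-variation measure (e.g.,
# waived deliverables per project) with a performance outcome (e.g.,
# post-release defect count). All data are invented for illustration.
from statistics import mean, stdev


def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))


# One row per project: (waived_deliverables, post_release_defects)
projects = [
    (0, 4), (1, 5), (1, 6), (2, 7), (2, 9),
    (3, 9), (3, 11), (4, 12), (5, 14), (5, 13),
]
waived = [p[0] for p in projects]
defects = [p[1] for p in projects]

print(f"r = {pearson_r(waived, defects):.2f}")
```

A strongly positive r on such data would point in the same direction as the thesis's finding that deviations (waived deliverables) were associated with negative performance consequences; the real study additionally controlled for project characteristics and used qualitative data, which this sketch omits.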
Rocha, Andrea M. "Computational Discovery of Phenotype Related Biochemical Processes for Engineering." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3315.
Full text of source
Shen, Gwo-Chyau. "Adaptive inferential control for chemical processes." The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487329662147068.
Full text of source
Rojas Gomez, Victor Daniel. "Organizational processes analysis of product development in the automotive industry." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107366.
Full text of source
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 74-75).
This thesis provides an analysis of specific process phases associated with the vehicle components development process at Ford Motor Company. I use organizational process analysis as the foundation to explore opportunities to improve the existing process. As with any other organization, Ford Motor Company has areas of opportunity in the organizational arena. Being on the verge of the next automotive revolution, the organization needs to analyze whether or not it is in the right position to develop the cars of the future. With more than 100 years of history, the company faces legacy challenges that permeate the culture of today's organization. Its formation around cult figures, and the scars left by turning the company around to avoid bankruptcy, could inhibit Ford from keeping pace in a demanding and changing industry. In Ford's current organization, the product development engineers play a key role in engineering and developing the vehicles that people will drive in the years to come. The challenge of simultaneously developing trucks, high-performance cars, and autonomous, electric, and hybrid vehicles, while keeping up with innovation, requires engineers to be on top of their competencies. It also requires an organizational environment that supports them. A comprehensive analysis of the process of developing automotive components is presented using the three lenses framework. This methodology reveals performance challenges in three categories or lenses: strategic design, cultural, and political. The organizational process analysis presents a desired state and the paths to achieve that change. The analysis shows that inefficiencies in the engineering process create higher rework costs, which could impair the ability to compete with technology companies looking to disrupt the industry.
by Victor Daniel Rojas Gomez.
S.M. in Engineering and Management
Al-Duri, Bushra Abdul-Aziz Abdul-Karim. "Mass transfer processes in single and multicomponent batch adsorption systems." Thesis, Queen's University Belfast, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.258225.
Full text of source
Yuan, Heyang. "Bioelectrochemical Systems: Microbiology, Catalysts, Processes and Applications." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/79910.
Full text of source
Ph. D.
Wang, Chunguang S. M. Massachusetts Institute of Technology. "Enterprise architecture processes : comparing EA and CLIOS in the Veterans Health Administration." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/76512.
Full text of source
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 92-94).
There are numerous frameworks for abstracting an enterprise complex system into a model for purposes of analysis and design. Examples of such frameworks include the Complex, Large-scale, Interconnected, Open, Socio-technical System (CLIOS) process for handling enterprise system architecture, the Enterprise Architecture eight views (EA) for diagnosing and improving overall enterprise performance, and the Enterprise Strategic Analysis for Transformation (ESAT). In addition to helping identify and manage complexity, emergent behavior, and the requirements of many stakeholders, all of these frameworks help identify enterprise-wide processes, bringing value-added flow between enterprises and their stakeholders. This thesis evaluates the applicability of integrating these frameworks into a hybrid process in ongoing programs, and asks whether a standard process can be generated through an integrative, interdisciplinary approach using the above models and frameworks. The Enterprise Architecture eight-views framework, as developed at MIT, is designed to create enterprise-level transformations in large, complex socio-technical enterprises. In the past 15 years of research at LAI, these enterprise developments have been applied and validated in government and in other industries, including aerospace, transportation, health care, defense acquisition, and logistics. The CLIOS process, also developed at MIT, is designed to work with complex, large-scale, interconnected, open, socio-technical systems, creating strategies for stakeholders to reach goals through enterprise development. This process has been used heavily in transportation systems, energy distribution, and regional strategic transportation planning. This thesis applies both of these frameworks to the case of the Veterans Affairs health care enterprise to evaluate their effectiveness.
Based on insights from self-assessments and the organization's strategy, a transformation plan will be generated for the Veterans Affairs organization's current state and preferred future state. These outcomes will help to identify the strengths of the merged methodology.
by Chunguang Wang.
S.M. in Engineering and Management
Lotz, Marco. "Modelling of process systems with Genetic Programming." Thesis, 2006. http://hdl.handle.net/10019/570.
Full text of source
Daberkow, Debora Daniela. "A formulation of metamodel implementation processes for complex systems design." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/12478.
Full text of source
Ezolino, Juan Stefano. "Design for automation in manufacturing systems and processes." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104311.
Full text of source
Thesis: S.M. in Engineering Systems, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2016. In conjunction with the Leaders for Global Operations Program at MIT.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 88-89).
The Widget industry has changed significantly over the last 20 years. Although Company A benefited from its historically strong market position for a long time, the market share of widgets has, at this point, been evenly divided between Company A and Company B. There is therefore market pressure for Company A to reassess the way it does business in order to be more competitive. Automation initiatives in the Widget industry have historically been slow to be implemented, and there has been hesitation to change the way widgets and their parts are designed and manufactured, owing to the complexity of the widget product. But in order to work in a more competitive global market, companies must question many of the established assumptions regarding their products in order to achieve efficiency gains and improve safety standards in their production systems. The ultimate goal of the project was to align the design, manufacturing, and business processes with new technology capabilities and the goals of the company. By doing this, the cost of producing a widget would be decreased, while in-process quality and repeatability would be increased. This thesis focuses on ways to show the value of improving the design of a widget to enable more efficient production systems, while ensuring that the risk of injury to mechanics is continuously lowered through increased process control and standardization. In order to understand what it means for engineers across the company to design parts and assemblies with automated manufacturing processes in mind, a list of high-level technical design principles needed to be developed. A group of 17 design and production engineers was assembled for a workshop, representing all of the widget programs, R&D, Product Development, Fabrication, Engineering Operations, Manufacturing Operations, and IT.
Through two days of activities, a list of ten principles was developed that could be applied to any widget part or assembly that was intended to be manufactured through automation. After the Design for Automation (DfA) principles were established and agreed-upon, it was necessary to find ways to effectively implement new tools and methodologies into the established design process.
by Juan Stefano Ezolino.
M.B.A.
S.M. in Engineering Systems
Conradie, Alex van Eck. "Neurocontroller development for nonlinear processes utilising evolutionary reinforcement learning." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51841.
Full text of source
ENGLISH ABSTRACT: The growth in intelligent control has primarily been a reaction to the realisation that nonlinear control theory has been unable to provide practical solutions to present-day control challenges. Consequently, the chemical industry may be cited for numerous instances of overdesign, which result from attempts to avoid operation near or within complex (often more economically viable) operating regimes. Within these complex operating regimes, robust control system performance may prove difficult to achieve using conventional (algorithmic) control methodologies. Biological neuronal control mechanisms demonstrate a remarkable ability to make accurate generalisations from sparse environmental information. Neural networks, with their ability to learn and their inherent massively parallel processing ability, introduce numerous opportunities for developing superior control structures for complex nonlinear systems. To facilitate neural network learning, reinforcement learning techniques provide a framework which allows for learning from direct interactions with a dynamic environment. Its promise as a means of automating the knowledge acquisition process is beguiling, as it provides a means of developing control strategies from cause-and-effect (reward and punishment) interaction information, without needing to specify how the goal is to be achieved. This study aims to establish evolutionary reinforcement learning as a powerful tool for developing robust neurocontrollers for application in highly nonlinear process systems. A novel evolutionary algorithm, Symbiotic, Adaptive Neuro-Evolution (SANE), is utilised to facilitate neurocontroller development. This study also aims to introduce SANE as a means of integrating the process design and process control development functions, to obtain a single comprehensive calculation step for maximum economic benefit.
This approach thus provides a tool with which to limit the occurrence of overdesign in the process industry. To investigate the feasibility of evolutionary reinforcement learning in achieving these aims, the SANE algorithm is implemented in an event-driven software environment (developed in Delphi 4.0), which may be applied to both simulation and real-world control problems. Four highly nonlinear reactor arrangements are considered in simulation studies. As a real-world application, a novel batch distillation pilot plant, a Multi-Effect Batch Distillation (MEBAD) column, was constructed and commissioned. The neurocontrollers developed using SANE in the complex simulation studies were found to exhibit excellent robustness and generalisation capabilities. In comparison with model predictive control implementations, the neurocontrollers proved far less sensitive to model parameter uncertainties, removing the need for model-mismatch compensation to eliminate steady-state offset. The SANE algorithm also proved highly effective in discovering the operating region of greatest economic return, while simultaneously developing a neurocontroller for this optimal operating point. SANE, however, demonstrated limited success in learning an effective control policy for the MEBAD pilot plant (poor generalisation), possibly due to limiting the algorithm's search to too small a region of the state space and to the disruptive effects of sensor noise on the evaluation process. For industrial applications, starting the evolutionary process from a random initial genetic algorithm population may prove too costly in terms of time and finances. Pre-training the genetic algorithm population on approximate simulation models of the real process may result in an acceptable search duration for the optimal control policy.
The application of this neurocontrol development approach from a plantwide perspective should also have significant benefits, as individual controller interactions are thereby implicitly eliminated.
AFRIKAANSE OPSOMMING: (Afrikaans version of the English abstract above.)
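The evolutionary-reinforcement-learning loop described in the abstract above can be sketched generically. The sketch below is not the SANE algorithm itself (SANE evolves a population of individual neurons that are recombined into networks); it is a minimal whole-network neuroevolution loop on a toy first-order setpoint-tracking plant, with every name and parameter invented for illustration.

```python
# Minimal evolutionary-RL sketch (not the thesis's SANE implementation):
# evolve the weights of a tiny neural controller so that it drives a
# first-order plant x' = -x + u toward a setpoint. Parameters are invented.
import math
import random

random.seed(0)

N_HID = 4                 # hidden neurons in the toy controller
GENOME = 3 * N_HID + 1    # (w_in, bias, w_out) per hidden unit + output bias

def control(genome, e):
    """Tiny one-hidden-layer tanh controller mapping error e to action u."""
    u = genome[-1]
    for j in range(N_HID):
        w_in, b, w_out = genome[3 * j], genome[3 * j + 1], genome[3 * j + 2]
        u += w_out * math.tanh(w_in * e + b)
    return u

def fitness(genome, setpoint=1.0, dt=0.05, steps=100):
    """Negative accumulated squared tracking error (higher is better)."""
    x, cost = 0.0, 0.0
    for _ in range(steps):
        e = setpoint - x
        u = control(genome, e)
        x += dt * (-x + u)        # Euler step of the plant x' = -x + u
        cost += e * e
    return -cost

def evolve(pop_size=30, gens=40, sigma=0.3):
    """Rank-based evolution: keep an elite, refill with mutated copies."""
    pop = [[random.gauss(0, 1) for _ in range(GENOME)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]
        pop = elite + [
            [w + random.gauss(0, sigma) for w in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

best = evolve()
print(f"tracking cost of best controller: {-fitness(best):.2f}")
```

The reward-and-punishment idea from the abstract appears here as the episode fitness: no one tells the controller how to act, only how well an episode went. SANE's distinctive contribution, evolving neurons symbiotically rather than whole networks, is deliberately not reproduced in this sketch.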
Sravana, Kumar Karnati. "Diagnostic knowledge-based systems for batch chemical processes: hypothesis queuing and evaluation /." The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487858106117232.
Ward, Eric D. (Eric Daniel). "A socio-technical systems analysis of change processes in the design of flagship interplanetary missions." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107291.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 99-100).
In the engineering of complex systems, changes to flight hardware or software after initial release can have large impacts on project implementation. Even a comparatively small change on an assembly or subsystem can cascade into a significant amount of rework if it propagates through the system. This can happen when a change affects the interfaces with another subsystem, or if it alters the emergent behavior of the system in a significant way, and is especially critical when subsequent work has already been performed utilizing the initial version. These changes can be driven by new or modified requirements leading to changes in scope, design deficiencies discovered during analysis or test, failures during test, and other such mechanisms. In complex system development, changes are managed through engineering change requests (ECRs) that are communicated to affected elements. While the tracking of changes is critical for the ongoing engineering of a complex project, the ECRs can also reveal trends on the system level that could assist with the management of current and future projects. In an effort to identify systematic trends, this research has analyzed ECRs from two different JPL-led space mission projects to classify the change activity and assess change propagation. It employs time analysis of ECR initiation throughout the lifecycle, correlates ECR generators with ECR absorbers, and considers the distribution of ECRs across subsystems. The analyzed projects are the planetary rover mission, Mars Science Laboratory (MSL), and the Earth-orbiting mission, Soil Moisture Active Passive (SMAP). This analysis has shown that there is some consistency across these projects with regard to which subsystems generate or absorb change. The ECR-subsystem network relationships identify subsystems that are absorbers of change and others that are generators of change.
For the flight systems, the strongest absorbers of change were found to be avionics and the mechanical structure for the spacecraft bus, and the strongest generators of change were concentrated in the payloads. When this attribute is recognized, project management can attempt to close ECR networks by looking for ways to leverage absorbers and avoid multipliers. Alternatively, in cases where changes to a subsystem are undesirable, knowing whether it is an absorber can greatly assist with expectations and planning. This analysis identified some significant differences between the two projects as well. While SMAP followed a relatively well-behaved blossom profile across the project, MSL had an avalanche of change leading to the drastic action of re-baselining the launch date. While the official reasoning for the slip of the launch date is based in technical difficulties, the avalanche profile implies that a snowballing of change may have had a significant impact as well. Furthermore, the complexity metrics applied show that MSL has a more complex nature than SMAP, with 269 ECRs in 65 Parent-Child clusters, compared to 166 ECRs in 53 clusters for SMAP. The Process Complexity metric confirms this, quantitatively measuring the complexity of MSL at 492, compared to 367 for SMAP. These tools and metrics confirm the intuition that MSL, as a planetary rover, is a more complex space mission than SMAP, an Earth orbiter.
by Eric D. Ward.
S.M. in Engineering and Management
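The generator/absorber view of the ECR network described in the abstract above can be computed directly from an edge list of change-propagation links. A minimal sketch; the subsystem names and links below are hypothetical, not the MSL/SMAP records:

```python
from collections import Counter

def change_roles(ecr_edges):
    """Net change score per subsystem from directed ECR propagation links.

    Each edge is (generating subsystem, absorbing subsystem). A positive
    score marks a net generator of change, a negative score a net absorber.
    """
    generated = Counter(src for src, _ in ecr_edges)
    absorbed = Counter(dst for _, dst in ecr_edges)
    return {s: generated[s] - absorbed[s]
            for s in set(generated) | set(absorbed)}

# Hypothetical ECR links (not the MSL/SMAP data)
edges = [
    ("payload", "avionics"),
    ("payload", "structure"),
    ("payload", "avionics"),
    ("thermal", "structure"),
    ("avionics", "structure"),
]
roles = change_roles(edges)
```

With this toy data the payload scores as a net generator and the structure as a net absorber, mirroring the pattern the thesis reports for the flight systems.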
Chen, Yan (Yan Henry) 1976. "Integrating Radio Frequency Identification (RFID) data with Electronic Data Interchange (EDI) business processes." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33326.
Includes bibliographical references (leaves 42-46).
Radio Frequency Identification (RFID) technology, an important component in the enterprise IT infrastructure, must be integrated with legacy IT systems. This thesis studies how RFID technology can be integrated into the existing Electronic Data Interchange (EDI) infrastructure, particularly how RFID can be used in the current EDI exchange process to accelerate receiving. After a detailed review of the current receiving process and the relevant data specifications, the author finds it possible to replace the current manual receiving process with an RFID-enabled automatic receiving and reconciliation process. Middleware is proposed to implement this approach.
by Yan Chen.
M.Eng. in Logistics
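The RFID-enabled receiving step proposed above amounts to reconciling dock-door tag reads against the quantities expected from the EDI Advance Ship Notice (ASN, transaction set 856). A minimal sketch with hypothetical SKUs:

```python
from collections import Counter

def reconcile(asn_lines, rfid_reads):
    """Compare ASN-expected quantities with RFID case reads at the dock door.

    asn_lines: {sku: expected quantity} from the EDI 856 Advance Ship Notice.
    rfid_reads: iterable of SKUs decoded from tag reads. Returns per-SKU
    discrepancies (negative = short shipment, positive = unexpected stock).
    """
    received = Counter(rfid_reads)
    skus = set(asn_lines) | set(received)
    return {sku: received[sku] - asn_lines.get(sku, 0)
            for sku in skus
            if received[sku] != asn_lines.get(sku, 0)}

# Hypothetical shipment: SKU-2 missing, SKU-3 not on the ASN
asn = {"SKU-1": 2, "SKU-2": 1}
discrepancies = reconcile(asn, ["SKU-1", "SKU-1", "SKU-3"])
```

In the middleware the thesis proposes, a matching result like this would trigger automatic receipt confirmation or an exception for manual review.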
Benoist, Tristan. "Open quantum systems and quantum stochastic processes." Thesis, Paris, Ecole normale supérieure, 2014. http://www.theses.fr/2014ENSU0006/document.
Many quantum physics phenomena can only be understood in the context of open system analysis. For example, a measurement apparatus is a macroscopic system in contact with a quantum system, so any experimental model needs to take open system behaviors into account. These behaviors can be complex: the interaction of the system with its environment might modify its properties, or the interaction may induce memory effects in the system evolution. These dynamics are particularly important when studying quantum optics experiments. We are now able to manipulate individual particles, so understanding and controlling the influence of the environment is crucial. In this thesis we investigate, at a theoretical level, some commonly used quantum optics procedures. Before presenting our results, we introduce and motivate the Markovian approach to open quantum systems. We present both the usual master equation and quantum stochastic calculus. We then introduce the notion of quantum trajectory for the description of continuous indirect measurements. It is in this context that we present the results obtained during this thesis. First, we study the convergence of non-demolition measurements. We show that they reproduce the collapse of the system wave function, that this convergence is exponential with a fixed rate, and we bound the mean convergence time. In this context, we obtain the continuous-time limit of discrete quantum trajectories using martingale change-of-measure techniques. Second, we investigate the influence of measurement outcome recording on state preparation using reservoir engineering techniques. We show that measurement outcome recording does not influence the convergence itself. Nevertheless, we find that it modifies the system behavior before convergence. We recover an exponential convergence with a rate equivalent to the rate without measurement outcome recording.
But we also find a new convergence rate corresponding to an asymptotic stability. This last rate is interpreted as an added non-demolition measurement. Hence, the system state converges only after a random time, after which the convergence can be much faster. We also find a bound on the mean convergence time.
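The non-demolition convergence result summarised above can be illustrated numerically: under repeated indirect measurement, the posterior probability of a pointer state is a bounded martingale and collapses exponentially fast to 0 or 1. A toy two-level sketch (the outcome statistics are hypothetical):

```python
import random

def qnd_trajectory(prior, likelihood, steps, rng):
    """Repeated indirect non-demolition measurement of a two-level system.

    likelihood[s][y] = P(outcome y | pointer state s). The posterior q for
    state 0 is a bounded martingale, so it converges to 0 or 1 (collapse);
    the log-likelihood ratio drifts linearly, i.e. at an exponential rate.
    """
    true_state = 0 if rng.random() < prior else 1
    q = prior
    for _ in range(steps):
        y = 0 if rng.random() < likelihood[true_state][0] else 1
        num = q * likelihood[0][y]                     # Bayesian update
        q = num / (num + (1.0 - q) * likelihood[1][y])
    return true_state, q

rng = random.Random(0)
lik = [[0.7, 0.3], [0.4, 0.6]]   # hypothetical, state-distinguishing statistics
state, q = qnd_trajectory(0.5, lik, steps=2000, rng=rng)
```

After many repetitions the posterior is pinned at the basis state actually occupied, reproducing the wave-function collapse the thesis analyses in continuous time.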
Efatmaneshnik, Mahmoud (Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW). "Towards immunization of complex engineered systems: products, processes and organizations." Publisher: University of New South Wales, Mechanical & Manufacturing Engineering, 2009. http://handle.unsw.edu.au/1959.4/43358.
Ummethala, Upendra V. "Control of heat conduction in manufacturing processes : a distributed parameter systems approach." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/44894.
Obrigkeit, Darren Donald 1974. "Numerical solution of multicomponent population balance systems with applications to particulate processes." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/31099.
"June 2001."
Includes bibliographical references.
Population balances describe a wide variety of processes in the chemical industry and environment, ranging from crystallization to atmospheric aerosols, yet the dynamics of these processes are poorly understood. A number of different mechanisms, including growth, nucleation, coagulation, and fragmentation, typically drive the dynamics of population balance systems. Measurement methods are not capable of collecting data at resolutions which can explain the interactions of these processes. In order to better understand particle formation mechanisms, numerical solutions could be employed; however, current numerical solutions are generally restricted to either a limited selection of growth laws or a limited solution range. This lack of modeling ability precludes the accurate and/or fast solution of the entire class of problems involving simultaneous nucleation and growth. Using insights into the numerical stability limits of the governing equations for growth, it is possible to develop new methods which reduce solution times while expanding the solution range to include many orders of magnitude in particle size. Rigorous derivation of the representations and governing equations is presented for both single and multi-component population balance systems involving growth, coagulation, fragmentation, and nucleation sources. A survey of the representations used in numerical implementations is followed by an analysis of model complexity as new components are added. The numerical implementation of a split composition distribution method for multicomponent systems is presented, and the solution is verified against analytical results. Numerical stability requirements under varying growth rate laws are used to develop new scaling methods which enable the description of particles over many orders of magnitude in size.
Numerous examples are presented to illustrate the utility of these methods and to familiarize the reader with the development and manipulations of the representations, governing equations, and numerical implementations of population balance systems.
by Darren Donald Obrigkeit.
Ph.D.
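The simultaneous nucleation-and-growth problem described in the abstract above can be sketched with the simplest discretization: a first-order upwind scheme for size-independent growth with a nucleation source in the smallest bin. This is a minimal illustration of the stability-limited approach, not the thesis's split composition distribution method, and all parameters are hypothetical:

```python
def advance_pbe(n, dx, dt, growth_rate, nucleation_rate, steps):
    """March dn/dt + G * dn/dx = B0 at the smallest size, by upwinding.

    Nuclei enter the first size bin at rate B0 and are convected toward
    larger sizes at growth rate G; the CFL condition bounds the time step.
    """
    cfl = growth_rate * dt / dx
    assert cfl <= 1.0, "upwind scheme unstable: reduce dt"
    n = list(n)
    for _ in range(steps):
        new = [0.0] * len(n)
        new[0] = n[0] - cfl * n[0] + nucleation_rate * dt  # nucleation source
        for i in range(1, len(n)):
            new[i] = n[i] - cfl * (n[i] - n[i - 1])        # upwind convection
        n = new
    return n

# 40 size bins, starting from an empty distribution
n = advance_pbe([0.0] * 40, dx=1.0, dt=0.5, growth_rate=1.0,
                nucleation_rate=2.0, steps=30)
```

Because no particles have yet reached the largest bin, the total number in the domain equals exactly the number nucleated, which is a useful conservation check on any such scheme.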
Hoehn, William Kenneth. "An integrated decision approach : combining the demand-revealing, quality function deployment, and elements of the systems engineering processes /." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-03302010-020103/.
Ramanan, Baheerathan. "Quantifying mass transport processes in environmental systems using magnetic resonance imaging (MRI)." Thesis, University of Glasgow, 2011. http://theses.gla.ac.uk/2974/.
Prisby, Craig K. (Craig Kanoa) 1971. "Coordinating the multi-retailer, single supplier procurement processes for a seasonal product with supply contracts." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/28577.
Includes bibliographical references (p. 24).
Supply contracts are used to maximize profits in a supply chain by coordinating order quantities between the suppliers and retailers. In traditional supply contracts, retailers use a newsvendor approach to maximize their profits, while the supplier's profits increase linearly as a function of the number of units supplied to retailers. Initially, retailers assume risk in the supply chain because they are facing an unknown demand, and the suppliers assume no risk. This thesis looks at an example from the garment industry where retailers order to replenish stock after a small assortment buy is placed at the start of the finite selling season. The suppliers must place production orders for the entire selling season before the selling season begins. It is clear that the retailers assume little risk in this model, while the supplier faces significant risk, especially if its forecasting methods are not accurate. The levels of risk that each party assumes in this model are reversed when compared to the traditional supply contract model. A method is developed that coordinates the retailer ordering with the supplier's production schedule. It is shown that coordinating the supply chain's ordering will lead to higher profits than the current, uncoordinated model.
by Craig K. Prisby.
M.Eng. in Logistics
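The newsvendor quantity the retailers use in traditional contracts is the classic critical-ratio solution q* = F^-1(cu/(cu+co)), with underage cost cu = price - cost and overage cost co = cost - salvage. A sketch for normally distributed demand (the numbers are hypothetical, not from the thesis):

```python
from statistics import NormalDist

def newsvendor_quantity(price, cost, salvage, mean, stdev):
    """Classic single-period newsvendor order quantity q* = F^-1(cu/(cu+co))
    for normally distributed demand (illustrative; the thesis coordinates
    ordering with the supplier's schedule beyond this single-period model)."""
    cu = price - cost        # underage cost: margin lost per unmet demand unit
    co = cost - salvage      # overage cost: loss per leftover unit
    critical_ratio = cu / (cu + co)
    return NormalDist(mean, stdev).inv_cdf(critical_ratio)

# Hypothetical garment: symmetric under/overage costs put q* at mean demand
q = newsvendor_quantity(price=10.0, cost=6.0, salvage=2.0, mean=100.0, stdev=20.0)
```

Raising the margin (higher price) raises the critical ratio and pushes the order quantity above mean demand, which is the lever coordinated contracts manipulate.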
Herzog, Erik. "An approach to systems engineering tool data representation and exchange." Doctoral thesis, Linköping : Univ, 2004. http://www.ep.liu.se/diss/science_technology/08/67/index.html.
Brataas, Gunnar. "Performance engineering method for workflow systems : an integrated view of human and computerised work processes." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 1996. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1411.
A method for designing workflow systems which satisfy performance requirements is proposed in this thesis. Integration of human and computerised performance is particularly useful for workflow systems where human and computerised processes are intertwined. The proposed framework encompasses human and computerised resources.
Even though systematic performance engineering is not common practice in information system development, current best practice shows that performance engineering of software is feasible, e.g. the SPE method by Connie U. Smith. Contemporary approaches to performance engineering focus on dynamic models of resource contention, e.g. queueing networks and Petri nets. Two difficulties arise for large-scale information systems. The first difficulty is to estimate appropriate parameters which capture the properties of the software and the organisation. The second difficulty is to maintain an overview of a complex model, which is essential both to guide the choice of parameters and to ensure that the performance engineering process is an integral part of the wider system development process.
The proposed method is based on the static performance modelling method Structure and Performance (SP) developed by Peter H. Hughes. SP provides a suitable bridge between contemporary CASE tools and traditional dynamic approaches to performance evaluation, in particular because it addresses the problems of parameterisation and overview identified above.
The method is explored and illustrated with two case studies. The Blood Bank Case Study comprised performance engineering of a transaction-oriented information system, showing the practical feasibility of integrating the method with CASE tools. The Gas Sales Telex Administration Case Study for Statoil looked at performance engineering of a workflow system for telex handling, and consisted of performance modelling of human activity in interaction with a Lotus Notes computer platform.
The latter case study demonstrated the feasibility of the framework.
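The contention-based dynamic models that SP bridges to can be illustrated with the simplest open queueing station, an M/M/1 server. These are generic textbook formulas, not the SP method itself, and the workload numbers are hypothetical:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics for an open M/M/1 station (textbook formulas)."""
    rho = arrival_rate / service_rate
    assert rho < 1.0, "unstable: offered load exceeds capacity"
    return {
        "utilization": rho,
        "response_time": 1.0 / (service_rate - arrival_rate),  # W = 1/(mu - lambda)
        "n_in_system": rho / (1.0 - rho),                      # L = rho / (1 - rho)
    }

# Hypothetical workflow step: 2 telexes/min arrive, a clerk handles 4/min
m = mm1_metrics(2.0, 4.0)
```

Treating a human worker as a service station in this way is exactly what makes an integrated human-plus-computer performance model possible, provided the service demands can be estimated, which is the parameterisation difficulty the thesis highlights.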
Gan, Jyeh J. "Decision support systems for tool reuse and EOL processes." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39488.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 68-69).
Intel® is a manufacturing company that concentrates on the fabrication of computer chips. Over the years, Intel® has gone through multiple, increasingly advanced generations of manufacturing technology, driven by new fabrication techniques and increased wafer sizes. These advances have resulted in significant opportunities for cost reduction, including reuse of semiconductor equipment within Intel factories and sale of used semiconductor equipment. To ensure assets are transferred in a safe and timely manner, Intel developed a 6D program (Decontamination, Decommission, Demolition, Demolition-System, Delivery, and Deployment) to standardize the EOL (End of Life) process of transferring a tool from the factory to its final destination in reuse, sale, parts harvesting, donation or scrap. Like other multi-national companies, Intel® has decentralized manufacturing processes over multiple worldwide sites; most if not all fabrication, sort, and assembly tool information is archived in multiple repositories/systems. In addition to this scattering of knowledge, the tool-related information is not comprehensive, with data fields not matching across multiple systems.
As a result, significant time is consumed to ensure the comprehensiveness and the accuracy of the required data across the multiple sites. Thus a comprehensive map of the information infrastructure based on the 6D process is necessary to understand and enhance efficiencies in the knowledge flow process. Detailed mapping of databases and their meta-data will help identify the thoroughness, accuracy, redundancy, and inefficiency in the tool-related information systems as they relate to 6D. A prototype of a "one-stop-site" was developed and key Knowledge Management recommendations were proposed to enhance efficiency by further reducing costs, time, and resources.
by Jyeh J. Gan.
S.M.
M.B.A.
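The meta-data mapping problem described above — the same tool described inconsistently across repositories — can be sketched as a field-by-field comparison of per-repository records. Repository and field names below are hypothetical:

```python
def field_mismatches(records):
    """Report fields whose values disagree across repositories for one tool.

    records: {repository_name: {field: value}}. Returns {field: {repo: value}}
    for every field on which at least two repositories disagree.
    """
    merged = {}
    for repo, rec in records.items():
        for field, value in rec.items():
            merged.setdefault(field, {})[repo] = value
    return {field: vals for field, vals in merged.items()
            if len(set(vals.values())) > 1}

# Hypothetical repositories describing the same tool ID
records = {
    "fab_db":   {"owner": "FAB7", "status": "idle"},
    "asset_db": {"owner": "FAB7", "status": "decontamination"},
}
mismatches = field_mismatches(records)
```

Run across every tool ID, a report like this surfaces exactly the redundancy and inconsistency a "one-stop-site" would need to resolve.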
Abu-Madi, Mahmoud A. "A computer model for heat exchange processes in mobile air-conditioning systems." Thesis, University of Brighton, 1998. https://research.brighton.ac.uk/en/studentTheses/3b31883a-c908-4435-a66b-eef044b014da.
Kanumury, Rajesh. "Integrating business and engineering processes in manufacturing environment using AI concepts." Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1179423333.
Ingram, Mary Ann. "Estimation for linear systems driven by point processes with state dependent rates." Diss., Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/14887.
Al-Meer, Mariam A. "Reducing heart failure admissions through improved care systems and processes." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111872.
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, in conjunction with the Leaders for Global Operations Program at MIT, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 121-124).
Heart failure (HF) is a complex chronic condition that can result from any cardiac disorder that impairs the ventricle's ability to fill with or eject blood. The American Heart Association predicts that there will be about 10 million HF patients in the US by 2037, with total hospitalization costs exceeding $70 billion. This represents a considerable burden to hospitals nationwide, including the Massachusetts General Hospital (MGH) -- a leading medical center that has long grappled with patient overcrowding and capacity constraints. This thesis presents an extensive mapping of the HF care pathway at MGH, followed by the results of a detailed retrospective analysis of the general behavior of HF patients admitted to MGH. Here, we notice that the majority of HF admissions originate as self-referrals via the Emergency Department (ED) and take place on weekdays, between the hours of 9am and 6pm. Moreover, we find that about 57% of hospitalized HF patients often have no scheduled follow-up appointments with their providers in the two weeks leading up to their admissions and, similarly, about 43% have no scheduled appointments in the eight weeks post hospital discharge. These represent two critical time periods in the events of acute heart failure decompensation. In an effort to prioritize targeted outpatient care, we propose a predictive model which aims to identify patients at greatest risk of a first hospital admission following encounters with their primary care providers and/or cardiologists in any given year. We perform logit-linear regressions on multiple prior first admissions and use predictors that, among others, include clinical risk factors, socioeconomic features and histories of prior medications. 
Some of the model's most significant predictors, as identified by the Akaike information criterion (AIC), include patient's age, marital status, ability to speak English, estimated average income, previous administration of loop diuretics, and the total number of medications prescribed or administered. To assess the quality of our predictions, we turn to the receiver operating characteristic (ROC) and its resulting average area under the curve (AUC) of 0.712. As the team continues to focus on developing interventions that offer better care to HF patients, the value of our model lies in its ability to prioritize patient needs for outpatient care and monitoring, and to guide the allocation of limited care resources.
by Mariam A. Al-Meer.
S.M.
M.B.A.
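A logit-linear admission-risk model of the kind described above can be sketched end to end with synthetic data: fit by gradient descent, then score discrimination with the rank-based AUC. The features, coefficients and cohort below are hypothetical, not the MGH data:

```python
import math
import random

def fit_logit(features, labels, lr=0.1, epochs=500):
    """Minimal logistic regression by stochastic gradient descent
    (illustrative only; the thesis used logit-linear models selected by AIC)."""
    w = [0.0] * (len(features[0]) + 1)               # bias + weights
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = y - p                                # log-likelihood gradient
            w[0] += lr * g
            for i, xi in enumerate(x):
                w[i + 1] += lr * g * xi
    return w

def predict(w, x):
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic cohort (hypothetical): risk rises with age and medication count
rng = random.Random(0)
raw = [(rng.uniform(40.0, 90.0), rng.randint(0, 15)) for _ in range(300)]
labels = [1 if 0.05 * a + 0.2 * m - 5.0 + rng.gauss(0.0, 1.0) > 0 else 0
          for a, m in raw]
X = [[a / 90.0, m / 15.0] for a, m in raw]           # crude normalisation
w = fit_logit(X, labels)
scores = [predict(w, x) for x in X]
model_auc = auc(scores, labels)
```

The resulting AUC plays the same role as the 0.712 the thesis reports: a single number summarising how well the risk scores rank admitted above non-admitted patients.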
Al-Tayyar, Mohammad H. (Mohammad Haytham). "Corporate entrepreneurship and new business development : analysis of organizational frameworks, systematic processes and entrepreneurial attributes in established organizations." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90706.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 113-122).
Entrepreneurship is a distinctively individual concept. The individual entrepreneur works on his or her own to create a new business. Employees, on the other hand, function within the boundaries of the company; employees who behave entrepreneurially collectively create the phenomenon of corporate entrepreneurship. In this thesis, we study the most common and overarching traits, characteristics and attributes of individual entrepreneurs, and analyze how companies can be structured to foster strong, sustainable corporate entrepreneurial ecosystems. The research also evaluates different corporate entrepreneurial models, types and frameworks by analyzing existing processes for creating corporate entrepreneurship and new business development. We explore concepts such as corporate venturing, corporate new business development, intrapreneurship, joint venturing, alliances, entrepreneurial human resource management, entrepreneurial organizational designs and business model innovation strategies. Specific companies that exemplify particular corporate entrepreneurship processes are analyzed, such as DuPont, 3M, IBM and Degussa AG. The concept of corporate entrepreneurship is instrumental in creating growth for companies but could also be a source of risk; the example of Samsung Motors illustrates some of the negative impacts of corporate diversification. The research considers sustainable approaches for successfully implementing corporate entrepreneurship and new business development, with a focus on the human interactions between the employee and the company.
by Mohammad H. Al-Tayyar.
S.M. in Engineering and Management
Smith, Zachary R. "Designing and implementing auxiliary operational processes." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44301.
Includes bibliographical references (p. 83-84).
Amazon.com, one of the largest and most profitable online retailers, has been experiencing such dramatic growth rates that it must continually update and modify its fulfillment process in order to meet customer demand for its products. As the volume of customer orders increases, management at the different fulfillment centers must determine the optimal way to increase throughput through their facilities. Many times the answer lies in improving the primary process, but occasionally it makes better sense to build or expand an auxiliary process to meet the increased demand. This thesis analyzes the decision criteria necessary to determine when an auxiliary process should be designed in addition to an established primary process. The author's internship project is presented as an example of how to implement such a secondary method. The six-month LFM project focused on increasing the Fernley, Nevada fulfillment center's capacity by making improvements to its manual sortation/packaging process. This process, nicknamed BIGS, was originally built to offload large and troublesome orders from the primary, automated process path. The unique labor-intensive procedures used in this process held several advantages that justified its existence and the investments necessary to expand its capacity.
by Zachary R. Smith.
S.M.
M.B.A.
Mäkäräinen, Minna. "Software change management processes in the development of embedded software /." Espoo [Finland] : Technical Research Centre of Finland, 2000. http://www.vtt.fi/inf/pdf/publications/2000/P416.pdf.
Leung, Wai-man Wanthy. "Evolutionary optimisation of industrial systems /." Hong Kong : University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B2132668X.
Manyuchi, Musaida Mercy. "Measurement and behavior of the overall volumetric oxygen transfer coefficient in aerated agitated alkane based multiphase systems." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/5329.
ENGLISH ABSTRACT: Hydrocarbons provide excellent feedstocks for bioconversion processes to produce value-added products using various micro-organisms. However, hydrocarbon-based aerobic bioprocesses may exhibit transport problems where the bioconversion is limited by oxygen supply rather than reaction kinetics. Consequently, the overall volumetric oxygen transfer coefficient (KLa) becomes critical in designing, operating and scaling up these processes. In view of the importance of KLa in hydrocarbon-based processes, this work evaluated KLa measurement methodologies and quantified KLa behavior in aerated, agitated alkane-solid-aqueous dispersions. A widely used KLa measurement methodology, the gassing-out procedure (GOP), was improved. This improvement was made to account for the dissolved oxygen (DO) transfer resistances associated with the probe, which result in a lag in the DO response during KLa measurement. The DO probe response lag time was incorporated into the GOP, resulting in the GOP (lag) methodology. The GOP (lag) compared well with the pressure step procedure (PSP), as documented in the literature, which also incorporates the probe response lag time. Using the GOP (lag), KLa was quantified in alkane-solid-aqueous dispersions, using either inert compounds (corn flour and CaCO3) or inactive yeast cells as solids to represent the micro-organisms in a hydrocarbon bioprocess. The influences of agitation, alkane concentration, solids loading and solids particle size, and their interactions, on KLa behavior in these systems were quantified. In the application of an accurate KLa measurement methodology, the DO probe response lag time was investigated.
Factors affecting the lag were investigated, including process conditions such as agitation (600-1200 rpm), alkane concentration (2.5-20% (v/v)), alkane chain length (n-C10-13 and n-C14-20), inert solids loading (1-10 g/L) and solids particle size (3-14 μm), as well as probe characteristics such as membrane age and electrolyte age (5 days' usage). Kp, the oxygen transfer coefficient of the probe, was determined experimentally as the inverse of the time taken for the DO to reach 63.2% of saturation after a step change in DO concentration. Kp dependence on these factors was defined using 2^2 factorial design experiments. Kp decreased with increased membrane age, with an effect double that of the Kp decrease due to electrolyte age. Additionally, increased alkane concentration decreased Kp, with an effect 7 times greater than that of the Kp decrease due to increased alkane chain length, in accordance with Pareto chart quantification. KLa was then calculated, using the GOP (lag), according to equation [1], which incorporates the influence of Kp and is derived from the simultaneous solution of the models which describe the response of the system and of the probe to a step change in DO: Cp/C* = 1 - (Kp·e^(-KLa·t) - KLa·e^(-Kp·t))/(Kp - KLa) [1]. The KLa values documented in the literature from the PSP and the KLa values calculated by the GOP (lag) showed only a 1.6% difference. However, KLa values calculated by the GOP (lag) were more accurate than those calculated by the GOP, with up to >40% error observed in the latter according to t-test analyses. These results demonstrated that incorporating Kp markedly improved KLa accuracy. Consequently, the GOP (lag) was chosen as the preferred KLa measurement methodology. KLa was determined in n-C14-20-inert solid-aqueous dispersions. Experiments were conducted in a stirred tank reactor with a 5 L working volume at constant aeration of 0.8 vvm, 22ºC and 101.3 kPa.
KLa behavior across a range of agitations (600-1200 rpm), alkane concentrations (2.5-20% (v/v)), inert solids loadings (1-10 g/L) and solids particle sizes (3-14 μm) was defined using a 2^4 factorial design experiment. In these dispersions, KLa increased significantly with increased agitation, with an effect 5 times greater than the KLa increase due to the interaction of increased alkane concentration and inert solids loading. Additionally, KLa decreased significantly with increased alkane concentration, with an effect 4 times greater than both that of increased solids particle size and that of the interaction of increased agitation and solids particle size. In n-C14-20-yeast-aqueous dispersions, KLa was determined under narrowed process conditions better representing typical bioprocess conditions. KLa behavior across a range of agitations (600-900 rpm), alkane concentrations (2.5-11.25% (v/v)) and yeast loadings (1-5.5 g/L), using 5 μm yeast cells, was defined using a 2^3 factorial design experiment. In these dispersions, KLa increased significantly with increased agitation. Additionally, KLa decreased significantly with increased yeast loading, with an effect 1.2 times greater than that of the KLa decrease due to the interaction of increased alkane concentration and yeast loading. In this study, the importance of Kp for accurate KLa measurement in alkane-based systems has been quantified and an accurate, less complex methodology for its measurement applied. Further, KLa behavior in aerated alkane-solid-aqueous dispersions was quantified, demonstrating KLa enhancement with increased agitation and KLa depression with increased alkane concentration, solids loading and solids particle size.
AFRIKAANSE OPSOMMING: Koolwaterstowwe dien as uitstekende voervoorraad vir ´n verskeidenheid van mikroorganismes wat aangewend word in biologiese omsettingsprosesse ter vervaardiging van waardetoevoegende produkte. Hierdie biologiese omsettingsprosesse word egter vertraag weens die gebrek aan suurstoftoevoer onder aerobiese toestande. Die tempo van omsetting word dus beheer deur die volumetriese suurstofoordragkoeffisiënt (KLa) eerder as die toepaslike reaksiekinetika. Die bepaling van ´n akkurate KLa word dus krities tydens die ontwerp en opskalering van hierdie prosesse. Met dit in gedagte het hierdie studie die huidige metodes om KLa te bepaal geëvalueer en die gedrag van KLa in goed vermengde en belugde waterige alkaanmengsels met inerte vastestowwe, soos gisselle, in suspensie ondersoek. ´n Deesdae populêre metode om KLa te bepaal, die sogenaamde gasvrylatingsprosedure (GOP) is in hierdie studie verbeter. Die verbetering berus op die ontwikkeling van ´n prosedure om die suurstofoordragsweerstand van die pobe wat die hoeveelheid opgeloste suurstof (DO) meet, in berekening te bring. Hierdie weerstand veroorsaak ´n vertragin in the responstyd van die probe. Die verbeterde metode, GOP (lag), vergelyk goed met die gepubliseerde resultate van die drukstaptegniek (PSP) wat ook die responstyd in ag neem. GOP (lag) is ingespan om KLa te gekwantifiseer vir waterige alkaan-vastestof suspensies. Inerte componente soos mieliemeel, kalsiumkarbonaat en onaktiewe gisselle het gedien as die vastestof in suspensie verteenwoordigend van die mikroörganismes in ´n koolwaterstof bio-proses. Die invloed van vermengingstempo, alkaan konsentrasie, vastestof konsentrasie en partikelgrootte asook die interaksie van al die bogenoemde op KLa is kwatitatief bepaal in hierdie studie. Faktore wat die responstyd van die DO probe beïnvloed is ondersoek. 
These factors included agitation rate (600-1200 rpm), alkane concentration (2.5-20% (v/v)), alkane chain length (n-C10-13 and n-C14-20), solids concentration (1-10 g/L) and particle size (3-14 μm). Factors influencing the properties of the probe, namely membrane and electrolyte age (5 days of use), were also investigated. Kp, the probe oxygen transfer coefficient, was determined by making an incremental change in the oxygen concentration of the dispersion and noting the time for the probe reading to reach 63.2% of saturation. This time is the probe response time, and Kp is its inverse. The dependence of Kp on the above factors was investigated in a 2^2 factorial design experiment. Kp decreased with increasing membrane age, a decrease twice as large as that associated with increasing electrolyte age. Further, Kp decreased with increasing alkane concentration, a decrease 7 times larger than that seen with increasing alkane chain length. These results were in good agreement with Pareto charts used as a quantification method. KLa was calculated taking Kp into account according to equation [1]:

Cp/C* = 1 - (Kp·e^(-KLa·t) - KLa·e^(-Kp·t)) / (Kp - KLa)    [1]

where Cp is the probe reading and C* the DO saturation concentration. Equation [1] was derived from the simultaneous solution of the existing models describing the probe response to a step change in DO. The KLa values of the PSP method from the literature differ by on the order of 1.6% from those calculated by equation [1]; this difference is negligible. The KLa values obtained from the GOP method that does not account for Kp differ by more than 40% from the current, improved method according to a statistical t-test analysis. This shows that accounting for Kp brings a marked improvement in the accuracy of KLa. GOP (lag) was therefore preferred for the calculation of KLa in the remainder of this study.
KLa was determined for n-C14-20-aqueous dispersions with suspended inert solids. The experiments were conducted in a 5 L stirred reactor at a constant aeration of 0.8 vvm (volume of air per volume of dispersion per minute), 22°C and 101.3 kPa. KLa behavior with respect to agitation rate (600-1200 rpm), alkane concentration (2.5-20% (v/v)), solids concentration (1-10 g/L) and particle size (3-14 μm) was investigated in a 2^4 factorial design experiment. Further, the influence of liquid viscosity and surface tension on KLa was investigated in a 2^3 factorial design experiment. KLa increased significantly with increasing agitation rate, an increase 5 times larger than that associated with the interaction of alkane and solids concentration. KLa also decreased significantly with increasing alkane concentration, a decrease 4 times larger than that associated with increasing particle size and with the interaction of agitation rate and particle size. In n-C14-20-aqueous dispersions with suspended yeast cells, KLa was determined under conditions representative of typical bioconversion processes. KLa behavior with respect to agitation rate (600-900 rpm), alkane concentration (2.5-11.25% (v/v)) and yeast concentration (1-5.5 g/L) at a particle size of 5 μm was investigated in a 2^3 factorial design experiment. These experiments showed a significant increase in KLa with increasing agitation rate, as well as a significant decrease with increasing yeast concentration; this decrease was on the order of 1.2 times larger than that of the interaction of alkane and yeast concentration. This study highlights the critical role that Kp plays in the accurate determination of KLa in aqueous alkane systems with suspended inert solids.
It further proposes a methodology for the accurate measurement and quantification of both Kp and KLa under aerobic conditions with respect to agitation rate, alkane concentration, solids concentration and particle size.
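The gassing-out model with probe lag (equation [1] above) treats the liquid film and the DO probe as two first-order systems in series. The sketch below is a hypothetical illustration, not the study's data or code: it generates a synthetic probe trace from assumed KLa and Kp values, then recovers KLa by a grid-search least-squares fit, showing how a fit that ignores the probe lag biases KLa low.

```python
import math

def probe_response(t, kla, kp):
    """Normalized DO probe reading Cp/C* for a gassing-out step (equation [1])."""
    return 1.0 - (kp * math.exp(-kla * t) - kla * math.exp(-kp * t)) / (kp - kla)

# Illustrative values, not the dissertation's data: a sluggish probe
# (Kp = 0.05 1/s, i.e. a ~20 s response time) and a true KLa of 0.02 1/s.
kla_true, kp = 0.02, 0.05
times = [5.0 * i for i in range(1, 61)]               # 5 s to 300 s
trace = [probe_response(t, kla_true, kp) for t in times]

def sse(model):
    """Sum of squared errors of a candidate model against the synthetic trace."""
    return sum((model(t) - y) ** 2 for t, y in zip(times, trace))

# Fit with the lag model, scanning KLa on a fine grid (kept below Kp).
grid = [0.0005 * i for i in range(1, 90)]
kla_lag = min(grid, key=lambda k: sse(lambda t, k=k: probe_response(t, k, kp)))

# Fit ignoring the probe: plain first-order model Cp/C* = 1 - exp(-KLa*t).
kla_naive = min(grid, key=lambda k: sse(lambda t, k=k: 1 - math.exp(-k * t)))

print(kla_lag, kla_naive)  # the naive fit lands below the true KLa
```

With these assumed values the lag-aware fit recovers the true KLa while the naive fit underestimates it substantially, which is the qualitative pattern the abstract reports for GOP without the Kp correction.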
Xu, Donghai. "Phase behaviour modelling of hydrocarbon systems for compositional reservoir simulation of gas injection processes." Thesis, Heriot-Watt University, 1990. http://hdl.handle.net/10399/886.
Повний текст джерела
Thiel, Gregory P. "Desalination systems for the treatment of hypersaline produced water from unconventional oil and gas processes." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/107078.
Повний текст джерела
Numbering for pages 3-4 duplicated. Cataloged from PDF version of thesis.
Includes bibliographical references (pages 183-195).
The decline of conventional reserves has led to a boom in the use of hydraulic fracturing to recover oil and gas in North America. Among the most significant challenges associated with hydraulic fracturing is water resource management, as large quantities of water are both consumed and produced by the process. The management of produced water, the stream of water associated with a producing well, is particularly challenging because it can be hypersaline, with salinities as high as nine times that of seawater. Typical disposal strategies for produced water, such as deep well injection, can be infeasible in many unconventional resource settings as a result of regulatory, environmental, and/or economic barriers. Consequently, on-site treatment and reuse, a part of which is desalination, has emerged as a strategy in many unconventional formations. However, although desalination systems are well understood in oceanographic and brackish groundwater contexts, their performance and design at significantly higher salinities are less well explored. In this thesis, this gap is addressed from the perspective of two major themes, energy consumption and scale formation, as these can be two of the most significant costs associated with operating high-salinity produced water desalination systems. Samples of produced water were obtained from three major formations, the Marcellus in Pennsylvania, the Permian in Texas, and the Maritimes in Nova Scotia, and abstracted to design-case samples for each location. A thermodynamic framework for analyzing high-salinity desalination systems was developed, and traditional and emerging desalination technologies were modeled to assess the energetic performance of treating these high-salinity waters. A novel thermodynamic parameter, known as the equipartition factor, was developed and applied to several high-salinity desalination systems to understand the limits of energy efficiency under reasonable economic constraints.
For emerging systems, novel hybridizations were analyzed that show the potential for improved performance. A model for predicting scale formation was developed and used to benchmark current pretreatment practices. An improved pretreatment process was proposed that has the potential to cut chemical costs significantly. Ultimately, the results of the thesis show that traditional seawater desalination rules of thumb do not apply: minimum and actual energy requirements of hypersaline desalination systems exceed their seawater counterparts by an order of magnitude; evaporative desalination systems are more efficient at high salinities than at lower salinities; the scale-defined operating envelope can differ from formation to formation; and optimized, targeted pretreatment strategies have the potential to greatly reduce the cost of treatment. It is hoped that the results of this thesis will better inform future high-salinity desalination system development as well as current industrial practice.
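The order-of-magnitude energy claim can be illustrated with a textbook estimate. The sketch below uses the ideal van't Hoff osmotic pressure of an NaCl solution to approximate the least work of desalination at vanishing recovery; all numbers are illustrative simplifications rather than the thesis's models, and real hypersaline brines are strongly non-ideal, so actual figures run higher.

```python
# Least work of separation at vanishing recovery is approximately the
# osmotic pressure of the feed, per unit volume of product water.
# Ideal van't Hoff estimate for NaCl; illustrative only.
R = 8.314       # molar gas constant, J/(mol K)
T = 298.15      # temperature, K
M_NACL = 58.44  # molar mass of NaCl, g/mol

def least_work_kwh_per_m3(salinity_g_per_L):
    """van't Hoff osmotic pressure pi = i*c*R*T, converted to kWh/m^3."""
    c = salinity_g_per_L / M_NACL * 1000.0  # molar concentration, mol/m^3
    pi = 2 * c * R * T                      # osmotic pressure, Pa (i = 2 for NaCl)
    return pi / 3.6e6                       # 1 kWh = 3.6e6 J

seawater = least_work_kwh_per_m3(35.0)        # roughly 0.8 kWh/m^3
hypersaline = least_work_kwh_per_m3(9 * 35.0) # nine-times-seawater salinity
print(seawater, hypersaline, hypersaline / seawater)
```

In this linear ideal-solution model the minimum work scales directly with salinity, so a nine-times-seawater brine already requires roughly nine times the seawater minimum; non-ideality and finite recovery push the real gap toward the order of magnitude cited in the abstract.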
by Gregory P. Thiel.
Ph. D.
Östlin, Johan. "On Remanufacturing Systems : Analysing and Managing Material Flows and Remanufacturing Processes." Doctoral thesis, Linköpings universitet, Monteringsteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11932.
Повний текст джерела
Zhang, Qiang. "Process modeling of innovative design using systems engineering." Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD007/document.
Повний текст джерела
We develop a series of process models to comprehensively describe and effectively manage innovative design, following the design research methodology (DRM), in order to achieve an adequate balance between innovation and control. First, we introduce a descriptive model of innovative design. This model reflects the actual process and pattern of innovative design, locates innovation opportunities within the process, and supports a systematic perspective focused on the external and internal factors affecting the success of innovative design. Second, we perform an empirical study to investigate how control and flexibility can be balanced to manage uncertainty in innovative design. After identifying project practices that cope with these uncertainties in terms of control and flexibility, a case-study sample of five innovative design projects from an automotive company is analyzed, showing that control and flexibility can coexist. Based on the managerial insights of the empirical study, we develop a procedural process model and an activity-based adaptive model of innovative design. The former provides a conceptual framework for balancing innovation and control through process structuration at the project level and the integration of flexible practices at the operation level. The latter treats innovative design as a complex adaptive system, and accordingly proposes a process design method that dynamically constructs the process architecture of innovative design. Finally, the two models are verified by supporting a number of process analyses and simulations within a series of innovative design projects.
Viller, Stephen Alexandre. "Human factors in requirements engineering : a method for improving requirements processes for the development of dependable systems." Thesis, Lancaster University, 1999. http://eprints.lancs.ac.uk/11686/.
Повний текст джерела
Holtz, Heath M. (Heath Mikal). "Re-sourcing manufacturing processes in metal forming operations." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34859.
Повний текст джерела
Includes bibliographical references (p. 75-76).
Deciding which activities to conduct in-house and which to outsource has become increasingly important because of its implications for a company's supply chain and overall business model. A number of factors can lead a company to outsource manufacturing processes, and as a result of this outsourcing, the supply chain can become complex and overwhelming to manage. This thesis analyzes the situation from the perspective of one manufacturer, American Axle and Manufacturing, Inc. (AAM). AAM's Metal Formed Products (MFP) Division currently faces a number of challenges: rising steel prices, fixed labor costs, and declining sales. All these factors have significantly impacted profitability, forcing senior management to take a comprehensive look at the division and develop a plan to improve divisional operations. As part of this plan, MFP Division's senior management asked for a thorough review of all manufacturing processes performed by the division, both internally and by outside suppliers. In addition to identifying the processes and suppliers, senior management sought to highlight opportunities for improving the process flow through the re-sourcing of manufacturing processes. This project develops a framework to analyze and evaluate these re-sourcing decisions. The framework employs a five-step approach and incorporates a number of diverse analytical tools. Process flow mapping provided a way to visually highlight the best opportunities to re-source; it also provided the data needed to evaluate alternatives financially. Strategic and market factors were identified in order to target and prioritize re-sourcing efforts.
This framework provides a structure for sourcing decisions that balances financial and strategic concerns. The project concluded with a $2M investment to re-source heat treating to AAM facilities.
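On the financial side, a re-sourcing decision of this kind typically reduces to a discounted cash-flow comparison. The sketch below uses hypothetical figures: only the $2M investment comes from the text, while the yearly costs, discount rate, and horizon are invented for illustration. It compares keeping a process outsourced against re-sourcing it, by net present value.

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows; cashflows[0] occurs at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical figures, not AAM's: outsourcing costs $1.2M/yr; re-sourcing
# requires a $2M up-front investment but drops the yearly cost to $0.7M.
rate, years = 0.10, 7
outsource = [0.0] + [-1.2e6] * years
resource = [-2.0e6] + [-0.7e6] * years

npv_out = npv(rate, outsource)
npv_in = npv(rate, resource)
better_to_resource = npv_in > npv_out  # the less negative NPV wins
print(npv_out, npv_in, better_to_resource)
```

A pure NPV comparison like this captures only the financial side; as the abstract notes, strategic and market factors still have to be weighed alongside it before committing to a re-sourcing move.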
by Heath M. Holtz.
S.M.
M.B.A.