Dissertations / Theses on the topic 'Operating models'


Consult the top 50 dissertations / theses for your research on the topic 'Operating models.'


1

Gupta, Budhaditya. "Essays on External Context and Operating Models." Thesis, Harvard University, 2016. http://nrs.harvard.edu/urn-3:HUL.InstRepos:32744399.

Full text
Abstract:
Effective operating models based on carefully selected resources, processes and logic allow organizations to develop the right products and services and deliver them to customers. However, there has been little investigation of how organizations design and manage their operating models when they enter new contexts due to changes in regulation, competition, markets, technology, location and/or a combination of these factors. This dissertation examines the relationship between an organization’s external context and its operating model by carefully examining the choice of operating resources, processes and logic as organizations enter new contexts. The dissertation specifically focuses on one developing country, India, and adopts an inductive approach to study the design of operating models in response to significant changes in location and/or market in three different empirical settings within the healthcare industry. The first study, conducted jointly with Stefan Thomke, explores the influence of the institutional context on the R&D processes. Inductive field work, focused on medical device development at a newly established R&D center of a US MNC in India, suggests that institutional flexibility in emerging markets (such as India) might allow for high fidelity experimentation and testing during early stages of product development. This, in turn, has implications for R&D search performance and the locus of innovation and entrepreneurship. The second study, a joint project with Rob Huckman and Tarun Khanna, identifies the development of an operating model based on the practice of shifting less complex surgical tasks from senior surgeons to skilled junior surgeons as fundamental in enabling Narayana Health (NH) to provide high-quality, low-cost cardiac surgery care to the indigent population in India. Our analysis of surgical outcome data suggests that the task shifting based model – while costing significantly less – does not negatively affect clinical outcomes. 
Further, we highlight the location-specific contextual factors that allow for such a model in India. The third study, conducted with Tarun Khanna, focuses on NH’s design of a low-cost, high-quality tertiary care hospital in the Cayman Islands. The prior experience of developing different hospital models in response to the heterogeneous market in India allowed NH to develop a deep understanding of the environmental context and a diverse set of knowledge and practices. This understanding of, and experience in, diverse contexts informed the Cayman project, and NH was able to selectively borrow and recombine elements from their different models in India while setting up the Cayman hospital. Building on these findings, we develop a process model that highlights how recombination of elements developed to address heterogeneity of context in the home country can allow an organization to develop an effective operating model in the host country. Collectively, the studies illustrate how the external context shapes an organization’s ability to design, implement and transfer operating models. At the same time, they emphasize that organizations can successfully develop an operating model to accommodate any significant context change by approaching the design effort as a fundamentally new design problem. This latter approach is in contrast to the often discussed replication-adaptation balancing approach that emphasizes marginal adaptation of prior established model(s). Further, by uncovering the importance of the local context in selecting and adopting specific operational resources and processes in healthcare settings in an emerging market, the studies contribute rich insights to the new but growing streams of literature related to healthcare management in resource-constrained settings, innovation and entrepreneurship in emerging markets, and the transfer of innovations from developing to developed markets.
In conclusion, by focusing on the relation among (1) the external context of an organization, (2) the design of operating logic, resources and processes and (3) organizational performance, this dissertation contributes to research in operations strategy, organization theory and the management of innovation and entrepreneurship.
APA, Harvard, Vancouver, ISO, and other styles
2

Al-Ojaimi, Abdulkarim. "Evidence based models for evaluating operating room performance." Thesis, Cardiff University, 2012. http://orca.cf.ac.uk/47338/.

Full text
Abstract:
The operating room (OR) within a hospital environment is one of the most expensive functional areas, yet the use of the OR also provides hospitals with an essential source of income. However, at present, there are variations in how the performance of ORs is evaluated, since there is no clear and full explanation of the concept and methods used for evaluation. The overall aim of this thesis is to develop an evidence-based Operating Room Assessment Framework (ORAF) to evaluate operating room performance, with clear and complete guidelines that can be used by operating room managers, directors or any other medical professionals to evaluate operating room performance and determine OR planning and scheduling efficiency, OR workload and OR utilization. The resulting Operating Room Assessment Framework will assist targeted healthcare professionals in their quest to evaluate, monitor and improve overall operating room efficiency. The OR management systems of eight tertiary and teaching hospitals in three countries (Japan, Canada and Saudi Arabia) were examined from 2010 to 2012, covering more than 98,500 procedures. The Operating Room Assessment Framework (ORAF) involves three important elements of operating room performance, namely: OR scheduling level, the type of OR workload, and OR utilization. These elements can be read together to reach the end result, which includes three types of scheduling levels: under scheduling, ideal scheduling and over scheduling; five types of OR workload: OR total workload (the gross workload), OR actual workload, over workload, unnecessary workload and unexpected workload; and three types of OR utilization: underutilization, ideal utilization, and 100% utilization with over workload. Through the validation process in different hospital contexts, the ORAF has proven its ability to perform satisfactorily, with accuracy, in line with the research’s objectives.
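The framework's three-way classification lends itself to a direct computation. A minimal sketch, assuming hypothetical inputs (booked, in-use and staffed minutes per OR-day) and an illustrative 5% tolerance band; the thesis's exact thresholds are not reproduced here:

```python
def classify_or_day(scheduled_min, actual_min, available_min, tolerance=0.05):
    """Classify one OR-day along two ORAF-style axes.

    scheduled_min: minutes booked on the schedule
    actual_min:    minutes the OR was actually in use
    available_min: staffed minutes available that day
    The 5% tolerance band is an illustrative assumption, not from the thesis.
    """
    sched_ratio = scheduled_min / available_min
    util_ratio = actual_min / available_min

    if sched_ratio < 1 - tolerance:
        scheduling = "under scheduling"
    elif sched_ratio > 1 + tolerance:
        scheduling = "over scheduling"
    else:
        scheduling = "ideal scheduling"

    if util_ratio < 1 - tolerance:
        utilization = "underutilization"
    elif util_ratio <= 1:
        utilization = "ideal utilization"
    else:
        utilization = "100% utilization with over workload"
    return scheduling, utilization
```

For example, an OR-day with 520 booked minutes against 480 staffed minutes is flagged as over scheduling under these assumed thresholds.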
APA, Harvard, Vancouver, ISO, and other styles
3

Corbett, Philip. "Accuracy of logistic models and receiver operating characteristic curves." Thesis, University of Warwick, 2001. http://wrap.warwick.ac.uk/4114/.

Full text
Abstract:
The accuracy of prediction is a commonly studied topic in modern statistics. The performance of a predictor is becoming increasingly important as real-life decisions are made on the basis of prediction. In this thesis we investigate the prediction accuracy of logistic models from two different approaches. Logistic regression is often used to discriminate between two groups or populations based on a number of covariates. The receiver operating characteristic (ROC) curve is a commonly used tool (especially in medical statistics) to assess the performance of such a score or test. By using the same data to fit the logistic regression and calculate the ROC curve we overestimate the performance that the score would give if validated on a sample of future cases. This overestimation is studied and we propose a correction for the ROC curve and the area under the curve. The methods are illustrated by way of two medical examples and a simulation study, and we show that the overestimation can be quite substantial for small sample sizes. The idea of shrinkage pertains to the notion that by including some prior information about the data under study we can improve prediction. Until now, the study of shrinkage has almost exclusively been concentrated on continuous measurements. We propose a methodology to study shrinkage for logistic regression modelling of categorical data with a binary response. Categorical data with a large number of levels is often grouped for modelling purposes, which discards useful information about the data. By using this information we can apply Bayesian methods to update model parameters and show through examples and simulations that in some circumstances the updated estimates are better predictors than those of the original model.
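The ROC performance measure at the centre of this thesis can be computed directly from score-label pairs. A minimal sketch of the area under the ROC curve via the Mann-Whitney statistic (ties counted as one half); the optimism correction proposed in the thesis is not reproduced here:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Equals the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case (ties count as 1/2).
    labels are 1 for positive cases and 0 for negative cases.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

Because this statistic is optimistically biased when computed on the data used to fit the score, comparing it against its value on a held-out sample exhibits the overestimation the thesis studies.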
APA, Harvard, Vancouver, ISO, and other styles
4

Ting, Chung-wu. "Estimating operating and support cost models for U.S. Naval ships." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA277755.

Full text
Abstract:
Thesis (M.S. in Management)--Naval Postgraduate School, December 1993.
Thesis advisor(s): Katsuaki Terasawa, Gregory G. Hildebrandt, Dan C. Boger. Bibliography: p. 73. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
5

Wu, Ming-Cheng. "Estimating operating and support models for U.S. Air Force Aircraft." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA376488.

Full text
Abstract:
Thesis (M.S. in Management)--Naval Postgraduate School, March 2000.
Thesis advisor(s): Gregory G. Hildebrandt, Shu S. Liao. Includes bibliographical references (p. 57-59). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
6

Feng, Ming-Fa. "Fault diagnosis and prediction in reciprocating air compressors by quantifying operating parameters." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/39786.

Full text
Abstract:
This research introduces a new method of diagnosing the internal condition of a reciprocating air compressor. Using only measured load torques and shaft dynamics, pressures, temperatures, flow rates, leakages, and heat transfer conditions are quantified to within 5%. The load torque acting on the rotor of the machine is shown to be a function of the dynamics (instantaneous position, velocity, and acceleration) of the driving shaft, the kinematic construction, and the internal condition of the machine. If the load torque, the kinematic construction of the machine, and the dynamics of the rotor are known, then the condition of the machine can be assessed. A theoretical model is developed to describe the physical behavior of the slider-crank mechanism and the shaft system. Solution techniques, which are based on the machine construction, crankshaft dynamics, and load torque measurements, are presented to determine the machine parameters. A personal computer based system used to measure the quantities necessary to solve for the machine parameters and the quantities used to compare with calculations is also documented. The solution algorithm for multi-stage compressors is verified by decoupling the load torque contributed by each cylinder. Pressure data for a four-stage two-cylinder high pressure air compressor (HPAC) is used. Also, the mathematical model is proven feasible by using measured angular velocity of the crankshaft and direct measurements of the load torque of a single stage, single cylinder air compressor to solve for the machine parameters. With this unintrusive and nondestructive method of quantifying the operating parameters, the cylinder pressures, operating temperatures, heat transfer conditions, leakage, and power consumption of a reciprocating air compressor can be evaluated.
Ph. D.
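The kinematic dependence described above starts from the slider-crank geometry. A hedged sketch of the standard piston kinematics (crank radius r, rod length l are hypothetical symbols; derivatives are taken with respect to crank angle, i.e. per unit crank speed), not the author's full torque model:

```python
import math

def piston_kinematics(theta, r, l):
    """Slider-crank piston position x(theta) = r*cos(theta) + sqrt(l^2 - r^2*sin^2(theta)),
    plus its first and second derivatives with respect to crank angle theta.
    r: crank radius, l: connecting-rod length (l > r assumed)."""
    s, c = math.sin(theta), math.cos(theta)
    root = math.sqrt(l * l - r * r * s * s)
    x = r * c + root
    # dx/dtheta, obtained by differentiating x once
    dx = -r * s - (r * r * s * c) / root
    # d2x/dtheta2, obtained by differentiating dx once more
    d2x = (-r * c
           - (r * r * (c * c - s * s)) / root
           - (r ** 4 * s * s * c * c) / root ** 3)
    return x, dx, d2x
```

Combined with measured crank angle, velocity and acceleration, expressions of this form are what tie the load torque to the machine's kinematic construction.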
APA, Harvard, Vancouver, ISO, and other styles
7

Lu, Ming. "System Dynamics Model for Testing and Evaluating Automatic Headway Control Models for Trucks Operating on Rural Highways." Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-01292008-113749/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rojas, Jose Angel. "Relationship between the sludge settling characteristics and the parameters of the activated sludge system." ScholarWorks@UNO, 2004. http://louisdl.louislibraries.org/u?/NOD,171.

Full text
Abstract:
Thesis (M.S.)--University of New Orleans, 2004.
Title from electronic submission form. "A thesis ... in partial fulfillment of the requirements for the degree of Master of Science in the Department of Environmental Engineering."--Thesis t.p. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Jun. "Operating Speed Models for Low Speed Urban Environments based on In-Vehicle GPS." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10508.

Full text
Abstract:
Low speed urban streets are designed to provide both access and mobility, and accommodate multiple road users, such as bicyclists and pedestrians. However, speeds on these facilities often exceed the intended operating speeds as well as their design speeds. Several studies have indicated that the design speed concept, as implemented in the roadway design process in the United States, does not guarantee a consistent alignment that promotes uniform operating speeds less than design speeds. To overcome these apparent shortfalls of the design speed approach, a promising design approach is a performance-based design procedure with the incorporation of operating speeds. Under this procedure, the geometric parameters of the roadways are selected based on their influences on the desired operating speeds. However, this approach requires a clear understanding of the relationships between operating speeds and various road environments. Although numerous studies have developed operating speed models, most of these previous studies have concentrated on high speed rural two-lane highways. In contrast, highway designers and planners have very little information regarding the influence of low speed urban street environments on drivers' speeds. This dissertation investigated the relationship between drivers' speed choices and their associated low speed urban roadway environments by analyzing second-by-second in-vehicle GPS data from over 200 randomly selected vehicles in the Atlanta, Georgia area. The author developed operating speed models for low speed urban street segments based on roadway alignment, cross-section characteristics, roadside features, and adjacent land uses. The author found the number of lanes per direction of travel had the most significant influence on drivers' speeds on urban streets. 
Other significant variables include on-street parking, sidewalk presence, roadside object density and offset, T-intersection and driveway density, raised curb, and adjacent land use. The results of this research effort can help highway designers and planners better understand expected operating speeds when they design and evaluate low speed urban roadways.
APA, Harvard, Vancouver, ISO, and other styles
10

Zheng, Liying. "Operating Room Version of Safety Attitudes Questionnaire – An Analysis Using Structural Equation Models." Thesis, Uppsala universitet, Statistiska institutionen, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-175920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Wang, Jun. "Operating speed models for low speed urban environments based on in-vehicle GPS." Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-03292006-111130/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Geara, Tony Gebrael. "Improved Models for User Cost Analysis." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1212144622.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Gong, Huafeng. "OPERATING SPEED PREDICTION MODELS FOR HORIZONTAL CURVES ON RURAL FOUR-LANE NON-FREEWAY HIGHWAYS." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_diss/562.

Full text
Abstract:
One of the significant weaknesses of the design speed concept is that it uses the design speed of the most restrictive geometric element as the design speed of the entire road. This leads to potential inconsistencies among successive sections of a road. Previous studies documented that a uniform design speed does not guarantee consistency on rural two-lane facilities. It is therefore reasonable to assume that similar inconsistencies could be found on rural four-lane non-freeway highways. The operating speed-based method is popularly used in other countries for examining design consistency. Numerous studies have been completed on rural two-lane highways for predicting operating speeds. However, little is known for rural four-lane non-freeway highways. This study aims to develop operating speed prediction models for horizontal curves on rural four-lane non-freeway highways using 74 horizontal curves. The data analysis showed that the operating speeds in each direction of travel had no statistical differences. However, the operating speeds on inside and outside lanes were significantly different. On each of the two lanes, the operating speeds at the beginning, middle, and ending points of the curve were statistically the same. The relationships between operating speed and design speed for inside and outside lanes were different. For the inside lane, the operating speed was statistically equal to the design speed. By contrast, for the outside lane, the operating speed was significantly lower than the design speed. However, the relationships between operating speed and posted speed limit for both inside and outside lanes were similar. It was found that the operating speed was higher than the posted speed limit. Two models were developed for predicting operating speed, since the operating speeds on inside and outside lanes were different.
For the inside lane, the significant factors are: shoulder type, median type, pavement type, approaching section grade, and curve length. For the outside lane, the factors included shoulder type, median type, approaching section grade, curve length, curve radius, and presence of an approaching curve. These factors indicate that the curve itself mainly influences drivers' speed choice.
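Prediction models of this kind are typically fitted by ordinary least squares. A generic sketch, with hypothetical predictor columns standing in for the thesis's variables (e.g. curve radius and curve length); the coefficients in the example below are illustrative, not the reported models:

```python
def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination with partial pivoting.
    X rows are observations; an intercept column of ones is prepended."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    # Build X'X and X'y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        s = sum(xtx[i][j] * beta[j] for j in range(i + 1, k))
        beta[i] = (xty[i] - s) / xtx[i][i]
    return beta
```

Fitting, say, an 85th-percentile curve speed on two geometric predictors would call `ols_fit` with those columns and the observed speeds.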
APA, Harvard, Vancouver, ISO, and other styles
14

Eusuff, M. Muzaffar. "Optimisation of an operating policy for variable speed pumps using genetic algorithms." Title page, contents and abstract only, 1995. http://web4.library.adelaide.edu.au/theses/09ENS/09ense91.pdf.

Full text
Abstract:
Undertaken in conjunction with JUMP (Joint Universities Masters Programme in Hydrology and Water Resources). Bibliography: leaves 76-83. Establishes a methodology using genetic algorithms to find the optimum operating policy for variable speed pumps in a water supply network over a period of 24 hours.
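The catalogue note gives only the outline of the method. As an illustrative sketch of a genetic algorithm applied to a 24-hour binary pump schedule: the tariff, demand, penalty and GA settings below are all invented for illustration, and real variable-speed pump scheduling involves hydraulic constraints not modelled here:

```python
import random

# Hypothetical day/night electricity tariff (invented for illustration).
TARIFF = [1.0 if 8 <= h < 20 else 0.5 for h in range(24)]

def cost(schedule):
    """Cost of a 24-hour binary schedule (1 = high speed, 0 = low speed).
    High speed pumps 2 volume units per hour, low speed 1; a stiff
    penalty applies if the assumed daily demand of 30 units is not met."""
    pumped = sum(2 if s else 1 for s in schedule)
    energy = sum((2 if s else 1) * t for s, t in zip(schedule, TARIFF))
    return energy + max(0, 30 - pumped) * 100.0

def evolve(fitness, n_genes=24, pop_size=30, gens=40, seed=1):
    """Minimal generational GA: tournament selection (size 3), one-point
    crossover, bit-flip mutation, elitism of one. Minimizes `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        new = [best[:]]  # elitism: carry the incumbent forward
        while len(new) < pop_size:
            a = min(rng.sample(pop, 3), key=fitness)  # tournament selection
            b = min(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_genes)           # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_genes):                  # bit-flip mutation
                if rng.random() < 0.02:
                    child[i] ^= 1
            new.append(child)
        pop = new
        best = min(pop + [best], key=fitness)
    return best
```

Thanks to elitism, the best schedule's cost never worsens across generations for a given seed.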
APA, Harvard, Vancouver, ISO, and other styles
15

Jordan, Ramiro. "A derivation of the probability distribution function of the output of a square-law detector operating in a jamming environment." Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/9854.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Gong, Huafeng. "Operating speed preditcion [sic] models for horizontal curves on rural four-lane non-freeway highways." Lexington, Ky. : [University of Kentucky Libraries], 2007. http://hdl.handle.net/10225/727.

Full text
Abstract:
Thesis (Ph. D.)--University of Kentucky, 2007.
Title from document title page (viewed on March 25, 2008). Document formatted into pages; contains: x, 182 p. : ill. (some col.). Includes abstract and vita. Includes bibliographical references (p. 136-149).
APA, Harvard, Vancouver, ISO, and other styles
17

Arumí, Albó Pau. "Real-time multimedia on off-the-shelf operating systems: from timeliness dataflow models to pattern languages." Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7558.

Full text
Abstract:
Software-based multimedia systems that deal with real-time audio, video and graphics processing are pervasive today, not only in desktop workstations but also in ultra-light devices such as smart-phones. The fact that most of the processing is done in software, using the high-level hardware abstractions and services offered by the underlying operating systems and library stacks, enables quick application development. Added to this flexibility and immediacy (compared to hardware-oriented platforms), such platforms also offer soft real-time capabilities with appropriate latency bounds. Nevertheless, experts in the multimedia domain face a serious challenge: the features and complexity of their applications are growing rapidly; meanwhile, real-time requirements (such as low latency) and reliability standards increase. This thesis focuses on providing multimedia domain experts with a workbench of tools they can use to model and prototype multimedia processing systems. Such tools contain platforms and constructs that reflect the requirements of the domain and application, and not accidental properties of the implementation (such as thread synchronization and buffer management). In this context, we address two distinct but related problems: the lack of models of computation that can deal with continuous multimedia stream processing in real-time, and the lack of appropriate abstractions and systematic development methods that support such models. Many actor-oriented models of computation exist and they offer better abstractions than prevailing software engineering techniques (such as object-orientation) for building real-time multimedia systems. The family of Process Networks and Dataflow models based on networks of connected processing actors is the most suited for continuous stream processing.
Such models allow designs to be expressed close to the problem domain (instead of focusing on implementation details such as thread synchronization), and enable better modularization and hierarchical composition. This is possible because the model does not over-specify how the actors must run, but only imposes data dependencies in a declarative-language fashion. These models deal with multi-rate processing and hence with complex periodic actor execution schedules. The problem is that the models do not incorporate the concept of time in a useful way and, hence, the periodic schedules do not guarantee real-time and low-latency requirements. This dissertation overcomes this shortcoming by formally describing a new model that we named Time-Triggered Synchronous Dataflow (TTSDF), whose periodic schedules can be interleaved by several time-triggered activations so that inputs and outputs of the processing graph are regularly serviced. The TTSDF model has the same expressiveness (or equivalent computability) as the Synchronous Dataflow (SDF) model, with the advantage that it guarantees minimum latency and absence of gaps and jitter in the output. Additionally, it enables run-time load balancing between callback activations and parallelization. Actor-oriented models are not off-the-shelf solutions and do not suffice for building multimedia systems in a systematic engineering approach. We address this problem by proposing a catalog of domain-specific design patterns organized in a pattern language. This pattern language provides design reuse, paying special attention to the context in which a design solution is applicable, the competing forces it needs to balance and the implications of its application.
The proposed patterns focus on how to: organize different kinds of actor connections, transfer tokens between actors, enable human interaction with the dataflow engine, and finally, rapidly prototype user interfaces on top of the dataflow engine, creating complete and extensible applications. As a case study, we present an object-oriented framework (CLAM), and specific applications built upon it, that make extensive use of the contributed TTSDF model and patterns.
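The dataflow scheduling that TTSDF builds on can be made concrete with the classic SDF balance equations. A sketch that computes the repetition vector of a connected SDF graph (actor names and rates are invented; the time-triggered extension itself is not reproduced here):

```python
from fractions import Fraction
from math import lcm

def repetition_vector(edges, actors):
    """Smallest positive integer firing counts q solving the SDF balance
    equations produce * q[src] == consume * q[dst] for every edge.
    edges: list of (src, produce_rate, dst, consume_rate) tuples;
    the graph is assumed connected."""
    q = {actors[0]: Fraction(1)}
    changed = True
    while changed:                      # propagate rate ratios along edges
        changed = False
        for src, p, dst, c in edges:
            if src in q and dst not in q:
                q[dst] = q[src] * p / c
                changed = True
            elif dst in q and src not in q:
                q[src] = q[dst] * c / p
                changed = True
    for src, p, dst, c in edges:        # reject inconsistent (non-SDF) rates
        assert q[src] * p == q[dst] * c, "inconsistent rates"
    m = lcm(*(f.denominator for f in q.values()))
    return {a: int(f * m) for a, f in q.items()}
```

For a chain where A produces 2 tokens consumed 3-at-a-time by B, and B produces 1 token consumed 2-at-a-time by C, the smallest integer solution fires A three times, B twice and C once per period.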
APA, Harvard, Vancouver, ISO, and other styles
18

Tuch, Harvey. "Formal memory models for verifying C systems code." University of New South Wales, Computer Science & Engineering, 2008. http://handle.unsw.edu.au/1959.4/41233.

Full text
Abstract:
Systems code is almost universally written in the C programming language or a variant. C has a very low level of type and memory abstraction and formal reasoning about C systems code requires a memory model that is able to capture the semantics of C pointers and types. At the same time, proof-based verification demands abstraction, in particular from the aliasing and frame problems. In this thesis, we study the mechanisation of a series of models, from semantic to separation logic, for achieving this abstraction when performing interactive theorem-prover based verification of C systems code in higher-order logic. We do not commit common oversimplifications, but correctly deal with C's model of programming language values and the heap, while developing the ability to reason abstractly and efficiently. We validate our work by demonstrating that the models are applicable to real, security- and safety-critical code by formally verifying the memory allocator of the L4 microkernel. All formalisations and proofs have been developed and machine-checked in the Isabelle/HOL theorem prover.
APA, Harvard, Vancouver, ISO, and other styles
19

Schmidt, Stephan. "A cost-effective diagnostic methodology using probabilistic approaches for gearboxes operating under non-stationary conditions." Diss., University of Pretoria, 2016. http://hdl.handle.net/2263/61332.

Full text
Abstract:
Condition monitoring is very important for critical assets such as gearboxes used in the power and mining industries. Fluctuating operating conditions are inevitable for wind turbines and mining machines such as bucket wheel excavators and draglines due to the continuously fluctuating wind speeds and variations in ground properties, respectively. Many of the classical condition monitoring techniques have proven to be ineffective under fluctuating operating conditions and therefore more sophisticated techniques have to be developed. However, many of the signal processing tools that are appropriate for fluctuating operating conditions can be difficult to interpret, with the presence of incipient damage easily being overlooked. In this study, a cost-effective diagnostic methodology is developed, using machine learning techniques, to diagnose the condition of the machine in the presence of fluctuating operating conditions when only an acceleration signal, generated from a gearbox during normal operation, is available. The measured vibration signal is order tracked to preserve the angle-cyclostationary properties of the data. A robust tacholess order tracking methodology is proposed in this study using probabilistic approaches. The measured vibration signal is order tracked with the tacholess order tracking method (as opposed to computed order tracking), since this reduces the implementation and the running cost of the diagnostic methodology. Machine condition features, which are sensitive to changes in machine condition, are extracted from the order tracked vibration signal and processed. The machine condition features can be sensitive to operating condition changes as well. This makes it difficult to ascertain whether changes in the machine condition features are due to changes in machine condition (i.e. a developing fault) or changes in operating conditions. 
This necessitates incorporating operating condition information into the diagnostic methodology to ensure that the inferred condition of the machine is not adversely affected by the fluctuating operating conditions. The operating conditions are not measured and therefore representative features are extracted and modelled with a hidden Markov model. The operating condition machine learning model aims to infer the operating condition state that was present during data acquisition from the operating condition features at each angle increment. The operating condition state information is used to optimise robust machine condition machine learning models, in the form of hidden Markov models. The information from the operating condition and machine condition models is combined using a probabilistic approach to generate a discrepancy signal. This discrepancy signal represents the deviation of the current features from the expected behaviour of the features of a gearbox in a healthy condition. A second synchronous averaging process, an automatic alarm threshold for fault detection, a gear-pinion discrepancy distribution and a healthy-damaged decomposition of the discrepancy signal are proposed to provide an intuitive and robust representation of the condition of the gearbox under fluctuating operating conditions. This allows fault detection, localisation as well as trending to be performed on a gearbox under fluctuating operating conditions. The proposed tacholess order tracking method is validated on seven datasets and the fault diagnostic methodology is validated on experimental as well as numerical data. Very promising results are obtained by the proposed tacholess order tracking method and by the diagnostic methodology.
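The core of the methodology described above, scoring how far current features deviate from their expected healthy behaviour given the operating-condition state, and raising an alarm past an automatic threshold, can be illustrated with a simplified sketch. For brevity this uses a per-state Gaussian healthy model rather than the thesis's hidden Markov models; all function names and parameters are illustrative:

```python
import numpy as np

def fit_healthy_model(features, states, n_states):
    """Fit a per-operating-state Gaussian model of healthy-condition features."""
    model = {}
    for s in range(n_states):
        x = features[states == s]
        model[s] = (x.mean(), x.std() + 1e-12)
    return model

def discrepancy_signal(features, states, model):
    """Negative log-likelihood of each observation under the healthy model,
    conditioned on the inferred operating-condition state."""
    out = np.empty(len(features))
    for i, (x, s) in enumerate(zip(features, states)):
        mu, sd = model[s]
        out[i] = 0.5 * ((x - mu) / sd) ** 2 + np.log(sd)
    return out

def alarm_threshold(healthy_disc, k=3.0):
    """Automatic alarm threshold: healthy mean plus k standard deviations."""
    return healthy_disc.mean() + k * healthy_disc.std()
```

The discrepancy signal stays near its healthy baseline until a fault shifts the features, at which point it crosses the alarm threshold regardless of which operating condition was active.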
Dissertation (MEng)--University of Pretoria, 2016.
Mechanical and Aeronautical Engineering
MEng
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
20

Kafka, Jan. "Analýza trhu operačních systémů." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-198871.

Full text
Abstract:
This thesis examines operating systems: their significance, history, the current market situation, and an attempt to predict their future. The first part introduces the basic concepts, a definition of the operating system, and a brief history. The second part deals with the current situation on the market for operating systems, the main drivers of this sector, and the business models used. The last part predicts the situation on the market for operating systems, their future evolution, the probable evolution of their business models, and the near-term outlook from a business point of view. A study of newly available scientific publications on operating systems was also carried out. The contribution of this thesis is an evaluation of the role and future of operating systems and a prediction of the business perspective in this industry branch.
APA, Harvard, Vancouver, ISO, and other styles
21

Welsch, Manuel. "Enhancing the Treatment of Systems Integration in Long-term Energy Models." Doctoral thesis, KTH, Energisystemanalys, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134100.

Full text
Abstract:
Securing access to affordable energy services is of central importance to our societies. To do this sustainably, energy systems design should be – amongst other things – environmentally compliant and reconcile with the integrated management of potentially limiting resources. This work considers the role for so-called 'Smart Grids' to improve the delivery of energy services. It deals with the integration of renewable energy technologies to mitigate climate change. It further demonstrates an approach to harmonise potentially conflicting energy, water and land-use strategies. Each presents particular challenges to energy systems analysis. Computer aided models can help identify energy systems that most effectively meet the multiple demands placed on them. As models constitute a simple abstraction of reality, it is important to ensure that those dynamics that considerably impact results are suitably integrated. In its three parts, this thesis extends long-term energy system models to consider improved integration between: (A) supply and demand through Smart Grids; (B) timeframes by incorporating short-term operating constraints into long-term models; and (C) resource systems by linking multiple modelling tools. In Part A, the thesis explores the potential of Smart Grids to accelerate and improve electrification efforts in developing countries. Further, a long-term energy system model is enhanced to investigate the Smart Grid benefits associated with a closer integration of supply, storage and demand-side options. In Part B, the same model is extended to integrate flexibility requirements. The benefits of this integration are illustrated on an Irish case study on high levels of wind power penetrations. In Part C, an energy model is calibrated to consider climate change scenarios and linkages with land-use and water models. This serves to assess the implications of introducing biofuels on the small island developing state of Mauritius. 
The thesis demonstrates that weak integration between models and resource systems can produce significantly diverging results. The system configurations derived may consequently generate different – and potentially erroneous – policy and investment insights.
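The idea of Part B, checking that a long-term capacity plan remains operationally feasible hour to hour, can be sketched with a toy screening routine. This is a stand-alone illustration, not the actual flexibility constraint formulation added to the long-term energy model; all names and numbers are assumptions:

```python
def flexibility_check(net_load, flex_capacity, ramp_limit):
    """Screen an hourly net-load profile (demand minus wind) against the
    flexible fleet's installed capacity and hour-to-hour ramping limit."""
    violations = []
    # Capacity check: the flexible fleet must be able to meet each hour's net load.
    for hour, load in enumerate(net_load):
        if load > flex_capacity:
            violations.append(f"capacity shortfall at hour {hour}")
    # Ramping check: hour-to-hour net-load swings must stay within the ramp limit.
    for hour in range(1, len(net_load)):
        if abs(net_load[hour] - net_load[hour - 1]) > ramp_limit:
            violations.append(f"ramp violation at hour {hour}")
    return violations
```

A long-term model without such short-term constraints could report the second profile below as feasible even though the fleet can neither cover the peak nor follow the swing.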

QC 20131118

APA, Harvard, Vancouver, ISO, and other styles
22

Pegado, Ana Maria Mesquita de Oliveira. "Gestão de bloco operatório : modelos de gestão e monitorização." Master's thesis, Escola Nacional de Saúde Pública. Universidade Nova de Lisboa, 2010. http://hdl.handle.net/10362/5468.

Full text
Abstract:
This study was driven by the need to optimise the available capacity, technology and human resources in the operating room and to address the corresponding goals of adequate performance and effectiveness. Objectives: This project focuses on four specific objectives: development of an observation grid of operating room management models; in-loco observation and documentation of six national operating rooms, according to the grid; assessment of the quality of management in the selected sample relative to existing management models; and creation of a set of indicators for monitoring and evaluating operating rooms. Methodology: The design of the observation grid was based on expert consultation, a literature survey and information gathered in various interviews. The observation grid was applied to six operating rooms and the information for each management model was analysed in order to find its key characteristics. We used the Nominal Group Technique to develop the set of indicators for monitoring and evaluating operating rooms. Results: An observation grid was created for operating room management models, which allowed comparing management characteristics. 
This grid was applied to six operating rooms, allowing us to disentangle their main features and differentiating characteristics: implementation of incentive systems; IT systems, including information flow between services; inventory and expense management; existence of a management team and effective risk management; and the importance of weekly planning and regulations. We designed a set of control indicators, of which the most important are: the average down time due to technical reasons, the average down time due to operational reasons, the average time per team and the average time per procedure. Final conclusions: Operating rooms should consider the most relevant characteristics of management models and collect exhaustive information on control indicators. Future research should be devoted to assessing operating room performance according to management models, using control indicators and benchmarking techniques.
APA, Harvard, Vancouver, ISO, and other styles
23

Chen, Xiujuan. "Computational Intelligence Based Classifier Fusion Models for Biomedical Classification Applications." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_diss/26.

Full text
Abstract:
The generalization abilities of machine learning algorithms often depend on the algorithms’ initialization, parameter settings, training sets, or feature selections. For instance, SVM classifier performance largely relies on whether the selected kernel functions are suitable for real application data. To enhance the performance of individual classifiers, this dissertation proposes classifier fusion models using computational intelligence knowledge to combine different classifiers. The first fusion model, called T1FFSVM, combines multiple SVM classifiers through constructing a fuzzy logic system. T1FFSVM can be improved by tuning the fuzzy membership functions of linguistic variables using genetic algorithms. The improved model is called GFFSVM. To better handle uncertainties existing in fuzzy MFs and in classification data, T1FFSVM can also be improved by applying type-2 fuzzy logic to construct a type-2 fuzzy classifier fusion model (T2FFSVM). T1FFSVM, GFFSVM, and T2FFSVM use accuracy as a classifier performance measure. AUC (the area under an ROC curve) has been shown to be a better classifier performance metric. As a comparison study, AUC-based classifier fusion models are also proposed in the dissertation. The experiments on biomedical datasets demonstrate promising performance of the proposed classifier fusion models compared with the individual composing classifiers. The proposed classifier fusion models also demonstrate better performance than many existing classifier fusion methods. The dissertation also studies an interesting phenomenon in the biology domain using machine learning and classifier fusion methods: how protein structures and sequences are related to each other. 
The experiments show that protein segments with similar structures also share similar sequences, which adds new insights into the existing knowledge on the relation between protein sequences and structures: similar sequences share high structure similarity, but similar structures may not share high sequence similarity.
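As a hedged illustration of the AUC-based fusion idea (a simplification, not the dissertation's fuzzy-logic fusion models), classifier scores can be combined with weights proportional to each classifier's validation AUC; all function names are illustrative:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability a positive outscores a negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auc_weighted_fusion(score_sets, labels):
    """Combine several classifiers' scores, weighting each classifier
    by its AUC on a validation set."""
    weights = [auc(s, labels) for s in score_sets]
    total = sum(weights)
    return [sum(w * s[i] for w, s in zip(weights, score_sets)) / total
            for i in range(len(labels))]
```

In this scheme a weak classifier still contributes, but a stronger one dominates the fused score, so the fused ranking tends to be at least as good as the weakest member's.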
APA, Harvard, Vancouver, ISO, and other styles
24

Pouget, Kevin. "Debogage Interactif des systemes embarques multicoeur base sur le model de programmation." Phd thesis, Université de Grenoble, 2014. http://tel.archives-ouvertes.fr/tel-01010061.

Full text
Abstract:
In this thesis, we study the interactive debugging of applications for MPSoC (Multi-Processor System on Chip) embedded systems. A state-of-the-art survey showed that the design and development of these applications increasingly rely on programming models and development frameworks. These environments define best practices, both at the algorithmic level and at the level of programming techniques, and thereby improve the development cycle of applications targeting MPSoC processors. However, the use of programming models does not guarantee that the code will run without error, particularly in the case of dynamic programming, where they offer very little help with verification. Our contribution to these challenges is a new approach to interactive debugging, called Programming Model-Centric Debugging, along with a prototype debugger implementation. Model-centric debugging raises interactive debugging to the level of abstraction provided by the programming models, by capturing and interpreting the events generated during the application's execution. We applied this approach to three programming models, based on software components, dataflow and kernel-based accelerator programming. We then detail how we developed our prototype debugger, based on GDB, for programming the STHORM platform from STMicroelectronics. We also show how to approach model-centric debugging through four case studies: an augmented reality code built from components, a dataflow implementation of an H.264 video decoder, and two scientific computing applications.
APA, Harvard, Vancouver, ISO, and other styles
25

Walton, Nigel. "The extent to which data-rich firms operating two-sided platform-ecosystem business models are able to use data to gain an innovation advantage over established one-sided companies." Thesis, Coventry University, 2017. http://eprints.worc.ac.uk/6313/.

Full text
Abstract:
It is the purpose of the dissertation to explore the extent to which data-rich firms operating two-sided platform-ecosystem business models are able to use data to gain an innovation advantage over established one-sided companies. The dissertation begins with an analysis of business model theory and identifies two viewpoints based on the static and transformational perspectives. The transformational perspective is analysed in more depth, along with the key role data plays in creating an innovation advantage for two-sided platform-ecosystem firms. A detailed explanation of how the platform-ecosystem model works is provided, in addition to a definition of the four platform typologies and how they compare and contrast with the one-sided business model. This is followed by a critique of the resource-based view of strategy and the relevance of dynamic capabilities, the knowledge-based view and value chain approaches to strategy. A comprehensive innovation audit questionnaire (based on a sample of one hundred companies) is used to test whether the two-sided firms have a data-driven innovation advantage over the one-sided firms. The results reveal a clear innovation advantage for the two-sided firms, which score consistently higher marks across all the dimensions of the innovation audit survey.
APA, Harvard, Vancouver, ISO, and other styles
26

De, Vries Marne. "A process reuse identification framework using an alignment model." Thesis, University of Pretoria, 2013. http://hdl.handle.net/2263/31634.

Full text
Abstract:
This thesis explores the potential to unify three emerging disciplines: enterprise engineering, enterprise architecture and enterprise ontology. The current fragmentation that exists in literature on enterprise alignment and design constrains the development and growth of these emerging disciplines. Enterprises need to use a multi-disciplinary approach when they continuously align, design and re-design the enterprise. Although enterprises need to be aligned internally (across various enterprise facets), as well as externally (with the environment), most alignment approaches still focus on business-IT alignment, i.e. aligning the business operations with the information and communication technologies and systems of the enterprise. This study focuses on a popular business-IT alignment approach, called the foundation for execution approach, and its associated artefact, called the operating model. The study acknowledges the theoretical contribution of the operating model to establish the required level of business process integration and standardisation at an enterprise in delivering goods and services to customers. Highlighting the practical problems in selecting an operating model for an enterprise, and more specifically the practical problems of identifying process reuse potential at an enterprise, a thesis statement is formulated: the operating model concept, as part of a business-IT alignment approach, can be enhanced with a process reuse identification framework, when a business-IT alignment contextualisation is used. The study is divided into two research questions. The first research question addresses the current fragmentation that exists in the literature, which impairs reuse of the existing business-IT alignment knowledge base. An inductive literature review develops the Business-IT Alignment Model to provide a common contextualisation for current business-IT alignment approaches. 
The second research question addresses the practical problems of the operating model regarding the identification of process reuse potential at an enterprise. Applying the newly developed Business-IT Alignment Model as a contextualisation instrument, the study demonstrates the use of design research in developing the Process Reuse Identification Framework. The conclusion after the investigation of the two research questions is that the thesis statement was confirmed, i.e. the operating model concept, as part of a business-IT alignment approach, can be enhanced with a process reuse identification framework, when a business-IT contextualisation is used.
Thesis (PhD)--University of Pretoria, 2013.
Industrial and Systems Engineering
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
27

Gorjian, Nima. "Asset health prediction using the explicit hazard model." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/57314/1/Nima_Gorjian_Jolfaei_Thesis.pdf.

Full text
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed based on the principles of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise the three types of asset health information (failure event data (i.e. 
observed and/or suspended), condition data, and operating environment data) in a single model to achieve more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables) whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. 
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and capture effects that have not been explicitly identified by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators could be nought in EHM, condition indicators are always present, because they are observed and measured as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, failure event data are sparse and their analysis often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM into two forms is another merit of the model. 
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of the other existing covariate-based hazard models. The comparison results demonstrate that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
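A minimal sketch of a semi-parametric EHM-style hazard can make the structure concrete: a Weibull baseline updated by a condition indicator (the response variable) and scaled by an operating-environment covariate (the explanatory variable). The multiplicative-exponential form and all parameter names here are illustrative assumptions, not the thesis's exact parameterisation:

```python
import math

def ehm_style_hazard(t, beta, eta, condition, env, gamma_c, gamma_z):
    """Illustrative hazard in the spirit of EHM.

    beta, eta      : Weibull shape and scale of the baseline hazard
    condition      : condition indicator (e.g. vibration level) at time t
    env            : operating-environment covariate (e.g. load) at time t
    gamma_c/gamma_z: sensitivity of the hazard to each indicator
    """
    baseline = (beta / eta) * (t / eta) ** (beta - 1)
    condition_term = math.exp(gamma_c * condition)  # health-state update of the baseline
    environment_term = math.exp(gamma_z * env)      # failure accelerator / decelerator
    return baseline * condition_term * environment_term
```

With both covariates at zero the expression reduces to the plain Weibull hazard; a worn condition or a harsh environment raises the hazard above that baseline, while a benign environment (negative covariate) lowers it.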
APA, Harvard, Vancouver, ISO, and other styles
28

Ojong, Emile Tabu [Verfasser], Hans Joachim [Gutachter] Krautz, and Heinz Peter [Gutachter] Berg. "Characterization of the performance of PEM water electrolysis cells operating with and without flow channels, based on experimentally validated semi-empirical coupled-physics models / Emile Tabu Ojong ; Gutachter: Hans Joachim Krautz, Heinz Peter Berg." Cottbus : BTU Cottbus - Senftenberg, 2018. http://d-nb.info/1172718202/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

De, Montfort Pierre Juan. "A model of co-operative education on peace support operations in Africa." Thesis, [Bloemfontein?] : Central University of Technology, Free State, 2007. http://hdl.handle.net/11462/67.

Full text
Abstract:
Thesis (D. Tech.) - Central University of Technology, Free State, 2007
The focus of this study is on a Model of Co-operative Education on Peace Support Operations (PSO) in Africa. PSO are multi-functional operations involving military forces and diplomatic humanitarian agencies. They are designed to achieve humanitarian goals or a long-term political settlement, and are conducted impartially in support of a UN mandate. These include peacekeeping (PK), peace enforcement (PE), conflict prevention, peacemaking, peace building, and humanitarian operations. Since the advent of democracy in 1994, domestic and international expectations have steadily grown regarding a new South African role as a responsible and respected member of the international community. These expectations have included a hope that South Africa will play a leading role in a variety of international, regional and sub-regional forums, and that the country will become an active participant in attempts to resolve various regional and international conflicts. Peacekeeping is becoming more and more important as South Africa plays a vital role in African missions, mandates, deployment and restructuring. The core of peacekeeping operations in Africa is no longer about the deployment of armed forces, but the focus is shifting towards a more integrated approach including reconstruction, development, stability, civilian involvement and humanitarian aspects. While skills required for peace operations overlap with those required for war, there is increasing recognition that additional peace operations training is needed to successfully conduct these missions. The demand, advancement and application of peacekeeping evolve worldwide, especially in Africa, where enormous funding is being poured into local research and development, testing and training. The market for Education, Training and Development (ETD) in the field of PSO is growing, as South Africa is becoming increasingly involved in peacekeeping missions on the African continent. 
At present, there is no Co-operative Education programme on generic PSO at the operational/strategic level presented by any of the major universities in South Africa to enhance other PSO training. The objectives of the first phase of this research project are: • To determine the need for and feasibility of a Co-operative Education Programme on PSO. • To write an instructional design (ISD) report for a Co-operative Education Model on PSO. • To draft possible curriculum content. The second phase of the project could involve the development of learning material and the evaluation of the proposed Co-operative Education Model on PSO by running a pilot programme. The principal product (output) of this research will consist of an ISD report on a Model for Co-operative Education on PSO in Africa, presented by means of correspondence instruction with contact sessions. The key factors in the production of the learning programme include geo-political and security studies in order to create an understanding of the African battle space, PSO as presented by UNITAR POCI, the assessment of international practice with regard to PSO in order to relate the information to operations in Africa, PSO on the African continent, and Civil-Military Cooperation.
APA, Harvard, Vancouver, ISO, and other styles
30

Youssef, Joseph. "Developing an enterprise operating system for the monitoring and control of enterprise operations." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0761/document.

Full text
Abstract:
Operating System (OS) is a well-known concept in computer science: an interface between humans and computer hardware (MacOS, Windows, iOS, Android, ...). With the aim of developing a future generation of enterprise systems based on IoT and Cyber-Physical System principles, this doctoral research proposes an Enterprise Operating System (EOS). Unlike an ERP, which is defined as a platform that allows an organization at the operational level to use a system of integrated applications to automate many back-office functions related to technology and services, the EOS acts as an interface between enterprise business managers and enterprise resources for real-time monitoring and control of enterprise operations. The thesis first presents the context, priorities, challenges and expected results. Then a set of EOS requirements and functionalities is described. Next, a survey of existing relevant work is given and mapped to the specified EOS requirements. Based on these requirements and the state-of-the-art results, the EOS conceptual, technical and implementation architectures are outlined, including all internal and external components. The last part presents two examples, in the banking and manufacturing sectors, to illustrate the use of the EOS.
APA, Harvard, Vancouver, ISO, and other styles
31

Paxton, Blaine Kermit. "The Dell operating model." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/34778.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2004.
Includes bibliographical references (p. 62-63).
Dell, Inc. is well known for its dramatic and continually improving operational performance in terms of unit cost, inventory level, production capacity, and labor efficiency. However, in late 2002, several members of Dell's Americas operations group realized that they did not fully understand what was driving this operational excellence, so they decided to sponsor an MIT Leaders for Manufacturing internship project to find out. The goal of this project was to "identify and document the essential beliefs, principles, and practices that have contributed to the operations success at Dell". The result of this endeavor is a model that describes four beliefs widely shared among members of Dell's operations organizations. These four beliefs (or cultural elements) are, in turn, supported by a set of specific management practices and programs. The model was developed using qualitative organizational research methods, including semi-structured interviews, focus groups, and individual feedback on a draft version of the model for final validation. In this thesis, the "Dell Operating Model" is described, and each element of the model is shown to support Dell's critical business objectives. The model is then examined through the lenses of three organizational frameworks, and the limitations of these alternate frameworks are discussed. Finally, the applicability of the model to other companies is discussed, and new projects are proposed that will build on this research.
by Blaine Paxton.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
32

Verdi, Marcio. "Predição de distribuição de espécies arbustivo-arbóreas no sul do Brasil." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/115515.

Full text
Abstract:
In view of environmental change at the global level, providing ecological information and gaining a better understanding of the factors and processes that shape species distributions is an important step in planning conservation actions. In this context, the importance of, and lack of, information about the geographical distribution of species motivated us to predict the potential distribution of shrub and tree species of the families Lauraceae and Myrtaceae in the Atlantic Forest of southern Brazil. Generalized linear models (GLM) were used to fit predictive models relating occurrence records of 88 species to environmental variables. Predictor variables were selected based on the lowest corrected Akaike information criterion. We evaluated model performance using 10-fold cross-validation to calculate the true skill statistic (TSS) and the area under the receiver operating characteristic curve (AUC). We used GLM to test the influence of the estimated area of occurrence, the number of records per species and model complexity on TSS and AUC. Our results show that climatic variables largely govern species distributions, but variables that capture local environmental variation are relatively important in the study area. TSS was significantly influenced by the number of records and by model complexity, while AUC was affected by all three factors evaluated. The interaction between these factors is an important issue to be considered in further assessments of both measures and with different modelling techniques. Our results also showed that the distributions of some species were overestimated, while others corresponded well with their known occurrences. Our results can thus support new field surveys, the assessment of priority areas and conservation plans, as well as inferences about the effects of environmental change on species of the Atlantic Forest.
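The abstract above scores models with the true skill statistic; as an illustrative aside (our sketch, not code from the thesis), TSS is simply sensitivity plus specificity minus one, computed from a presence/absence confusion matrix:

```python
def true_skill_statistic(tp, fp, fn, tn):
    """TSS = sensitivity + specificity - 1; ranges from -1 to +1,
    with 0 meaning no better than random prediction."""
    sensitivity = tp / (tp + fn)  # proportion of presences correctly predicted
    specificity = tn / (tn + fp)  # proportion of absences correctly predicted
    return sensitivity + specificity - 1.0

# A model that gets 80% of presences and 80% of absences right:
tss = true_skill_statistic(tp=40, fp=10, fn=10, tn=40)  # ≈ 0.6
```

Unlike AUC, TSS is evaluated at a single classification threshold, which is one reason the two measures can respond differently to sample size and model complexity.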
APA, Harvard, Vancouver, ISO, and other styles
33

Wiedemann, Michael. "Robust parameter design for agent-based simulation models with application in a cultural geography model." Thesis, Monterey, California : Naval Postgraduate School, 2010. http://edocs.nps.edu/npspubs/scholarly/theses/2010/Jun/10Jun%5FWiedemann.pdf.

Full text
Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, June 2010.
Thesis Advisor(s): Johnson, Rachel T.; Second Reader: Baez, Francisco R. "June 2010." Description based on title screen as viewed on July 15, 2010. Author(s) subject terms: Cultural Geography, Agent-Based Model (ABM), Irregular Warfare (IW), Theory of Planned Behavior (TpB), Bayesian Belief Nets (BBN), Counterinsurgency Operations (COIN), Stability Operations, Discrete Event Simulation (DES), Design of Experiments (DOX), Robust Parameter Design (RPD). Includes bibliographical references (p. 69-70). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
34

Nadai, Carolina Camargo de. "Gambiarração: poéticas em composição coreográfica." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/27/27156/tde-30052017-142002/.

Full text
Abstract:
The hypothesis constructed here is that, starting from the notion of gambiarra, gambiarração can be understood as a way for the body to operate devices for dance composition. It thus becomes relevant to think about everyday practices, seen by Michel de Certeau as poetic productions that trigger formulations about the relations immanent to the production of gambiarras, and that make it possible to perceive a potential for creating procedures and actions directed at artistic composition. Within the framework of choreographic composition, the Real Time Composition method, developed by the Portuguese choreographer João Fiadeiro since the beginning of the 1990s, and the AND Operative Mode, structured by the anthropologist Fernanda Eugénio, were responsible for sharing senses that compose with the theme of gambiarra, contributing to the research in its theoretical-practical unfolding. Thus, the notion of gambiarração appears as a state-action of and in the body: perceptions that are also ways of doing or of being that operate in the manner of gambiarras, drawing on improvisation, the ability to orchestrate actions in real time, and the capacity to deal with accidents, with instability and precariousness, and with the sufficiency of what is available in a given compositional circumstance. We identify in the thought of Baruch Spinoza and Gilles Deleuze (and Félix Guattari) ways of perceiving affect, body, experience and composition that strengthen this artistic proposition.
APA, Harvard, Vancouver, ISO, and other styles
35

Marshall, Jak. "Models of intelligence operations." Thesis, Lancaster University, 2016. http://eprints.lancs.ac.uk/81659/.

Full text
Abstract:
It is vital to modern intelligence operations that the cycle of gathering, analysing and acting upon intelligence is as efficient as possible in the face of an ever-increasing volume of available information. The collection, processing and subsequent analysis aspect of the intelligence cycle is modelled as a novel finite-horizon Bayesian stochastic dynamic programming problem, namely the multi-armed bandit allocation (MABA) problem. The MABA framework models the efforts of a processor to search for intelligence items of the highest importance by making sequential samples from a collection of intelligence sources. Through Bayesian learning the processor learns about the importance distributions of the available sources over time, selects a source from which to sample at each decision epoch, and decides whether or not to allocate sampled items for analysis. For source selection, a novel Lagrangian-based index heuristic is developed and its performance is compared to existing index heuristics, including knowledge gradient and Thompson sampling methods. The allocation policy is handled by thresholds which act as Lagrange multipliers of the original MABA problem. Both a discrete Dirichlet-Multinomial and a continuous Exponential-Gamma-Gamma implementation of the MABA problem are developed, where the latter also models uncertainty in the processor's own ability to accurately assess the importance of sampled items.
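As background to the sampling strategies named above, Thompson sampling with a Dirichlet-Multinomial model of each source's importance distribution can be sketched in a few lines (a minimal illustration with hypothetical names, not the thesis's implementation or its Lagrangian index heuristic):

```python
import random

def thompson_select(posteriors):
    """Pick a source by Thompson sampling.

    posteriors: one Dirichlet parameter vector per source; importance
    categories are 0..K-1, and the category index is used here as the
    (assumed) importance score of an item from that category.
    """
    best_source, best_draw = None, float("-inf")
    for i, alpha in enumerate(posteriors):
        # One draw from Dirichlet(alpha) via normalized Gamma variates.
        gammas = [random.gammavariate(a, 1.0) for a in alpha]
        total = sum(gammas)
        probs = [g / total for g in gammas]
        expected_importance = sum(k * p for k, p in enumerate(probs))
        if expected_importance > best_draw:
            best_source, best_draw = i, expected_importance
    return best_source

def update(posteriors, source, category):
    """Conjugate update after observing an item's importance category."""
    posteriors[source][category] += 1
```

Over repeated rounds of select-observe-update, sampling concentrates on the sources whose posteriors place mass on high-importance categories, while still occasionally exploring the others.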
APA, Harvard, Vancouver, ISO, and other styles
36

Тевяшев, А. Д., О. І. Матвієнко, and Г. В. Никитенко. "Stochastic Model of Operating Modes of a Group of Artesian Wellsin Water Supply Systems." Thesis, ХНУРЕ, 2020. https://openarchive.nure.ua/handle/document/16411.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Nageswaran, Leela A. "Innovative Models in Service Operations." Research Showcase @ CMU, 2018. http://repository.cmu.edu/dissertations/1197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Fei, Qi. "Operation models for information systems /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?IELM%202009%20FEI.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

林紹健 and Siu-kin Lum. "Trimming operations for geometric modelling." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31211732.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lum, Siu-kin. "Trimming operations for geometric modelling /." [Hong Kong : University of Hong Kong], 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13857733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Тевяшев, А. Д., О. І. Матвієнко, and Г. Нікітенко. "Stochastic Model and Method of Optimizing the Operating Modes of a Water Network with Hidden Leaks." Thesis, Харків : ТОВ «Друкарня Мадрид», 2018. http://openarchive.nure.ua/handle/document/9410.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Morales, Juan Carlos. "Planning Robust Freight Transportation Operations." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14107.

Full text
Abstract:
This research focuses on fleet management in freight transportation systems. Effective management requires effective planning and control decisions. Plans are often generated using estimates of how the system will evolve in the future; during execution, control decisions need to be made to account for differences between actual realizations and estimates. The benefits of minimum-cost plans can be negated by performing costly adjustments during the operational phase. A planning approach that permits effective control during execution is proposed in this dissertation. This approach is inspired by recent work in robust optimization, and is applied to (i) dynamic asset management and (ii) vehicle routing problems. In practice, fleet management planning is usually decomposed into two parts: the problem of repositioning empty units and the problem of allocating units to customer demands. An alternative integrated dynamic model for asset management problems is proposed. A computational study provides evidence that operating costs and fleet sizes may be significantly reduced with the integrated approach. However, results also illustrate that not considering inherent demand uncertainty generates fragile plans with potentially costly control decisions. A planning approach for the empty repositioning problem is proposed that incorporates demand and supply uncertainty using intervals around nominal forecast parameters. The intervals define the uncertainty space for which buffers need to be built into the plan in order to make it robust. Computational evidence suggests that this approach is tractable. The traditional approach to the Vehicle Routing Problem with Stochastic Demands (VRPSD) is cost expectation minimization. Although this approach is useful for building routes with low expected cost, it does not directly consider the maximum potential cost that a vehicle might incur when traversing the tour. Our approach aims at minimizing the maximum cost. 
Computational experiments show that our robust optimization approach generates solutions with expected costs that compare favorably to those obtained with the traditional approach, but also that perform better in worst-case scenarios. We also show how the techniques developed for this problem can be used to address the VRPSD with duration constraints.
APA, Harvard, Vancouver, ISO, and other styles
43

LEAL, HELOISA REIS. "BOOLEAN OPERATIONS ON POINT-BASED MODELS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2004. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=5881@1.

Full text
Abstract:
Boolean operations are used to create or modify models. In most 3D object representations these operations are very complex. In recent years a significant trend in computer graphics has been the shift towards point-sampled 3D models, due to their advantages over other representations, such as simplicity and efficiency. Two recent works present algorithms to perform interactive Boolean operations on point-based models: the work by Adams and Dutré and the work by Pauly et al. Given the great importance of this novel representation and of the use of Boolean operations, the present work introduces point-based representation, implements the algorithm proposed by Adams and Dutré with some improvements, and compares this implementation with the work by Pauly et al.
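As illustrative background (a toy sketch, not the algorithm of Adams and Dutré), a Boolean operation on point-sampled solids reduces to classifying each surface point as inside or outside the other solid and keeping the appropriate subsets; here for the union of two implicit spheres:

```python
import math

def inside_sphere(p, center, radius):
    """True if point p lies strictly inside the sphere."""
    return math.dist(p, center) < radius

def point_union(points_a, points_b, sphere_a, sphere_b):
    """Union of two point-sampled solids: keep surface points of A that
    lie outside B, and surface points of B that lie outside A (boundary
    points of one solid buried inside the other are discarded)."""
    kept_a = [p for p in points_a if not inside_sphere(p, *sphere_b)]
    kept_b = [p for p in points_b if not inside_sphere(p, *sphere_a)]
    return kept_a + kept_b
```

Intersection and difference follow the same pattern with the inside/outside tests flipped; real point-based algorithms replace the analytic sphere test with an inside test derived from the point samples themselves.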
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Kevin Bozhe. "Multiperiod Optimization Models in Operations Management." Thesis, University of California, Berkeley, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13423656.

Full text
Abstract:

In the past two decades, retailers have witnessed rapid changes in markets due to an increase in competition, the rise of e-commerce, and ever-changing consumer behavior. As a result, retailers have become increasingly aware of the need to better coordinate inventory control with pricing in order to maximize their profitability. This dissertation was motivated by two such problems facing retailers at the interface between pricing and inventory control. One considers inventory control decisions for settings in which planned prices fluctuate over time, and the other considers pricing of multiple substitutable products for settings in which customers hold inventory as a consequence of stockpiling when promotional prices are offered.

In Chapter 1, we provide a brief motivation for each problem. In Chapter 2, we consider optimization of procurement and inventory allocation decisions by a retailer that sells a product with a long production lead time and a short selling season. The retailer orders most products months before the selling season, and places only one order for each product due to short product life cycles and long delivery lead times. Goods are initially stored at the warehouse and then sent to stores over the course of the season. The stores are in high-rent locations, necessitating efficient use of space, so there is no backroom space and it is uneconomical to send goods back to the warehouse; thus, all inventory at each store is available for sale. Due to marketing and logistics considerations, the planned trajectory of prices is determined in advance and may be non-monotonic. Demand is stochastic and price-dependent, and independent across time periods. We begin our analysis with the case of a single store. We first formulate the inventory allocation problem given a fixed initial order quantity with the objective of maximizing expected profit as a dynamic program and explain both technical and computational challenges in identifying the optimal policy. We then present two variants of a heuristic based on the notion of equalizing the marginal value of inventory across the time periods. Results from a numerical study indicate that the more sophisticated variant of the heuristic performs well when compared with both an upper bound and an industry benchmark, and even the simpler variant performs fairly well for realistic settings. We then generalize our approaches to the case of multiple stores, where we allow the stores to have different price trajectories. 
Our numerical results suggest that the performance of both heuristics is still robust in the multiple store setting, and does not suffer from the same performance deterioration observed for the industry benchmark as the number of stores increases or as price differences increase across stores and time periods. For the pre-season procurement problem, we develop a heuristic based on a generalization of the newsvendor problem that accounts for the two-tiered salvage values in our setting, specifically, a low price during end-of-season markdown periods and a very low or zero salvage value after the season has concluded. Results for numerical examples indicate that our modified newsvendor heuristic provides solutions that are as good as those obtained via grid search.
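For context, the classical newsvendor quantity that such a heuristic generalizes orders the critical-fractile quantile of demand; a standard textbook sketch (the dissertation's two-tiered salvage extension is not reproduced here):

```python
from statistics import NormalDist

def newsvendor_quantity(price, cost, salvage, mean, std):
    """Classical newsvendor: order the critical-fractile quantile of demand.

    Underage cost c_u = price - cost (lost margin on a stockout);
    overage cost c_o = cost - salvage (loss on an unsold unit);
    the optimal in-stock probability is c_u / (c_u + c_o).
    """
    c_u = price - cost
    c_o = cost - salvage
    critical_fractile = c_u / (c_u + c_o)
    return NormalDist(mean, std).inv_cdf(critical_fractile)

# With margin equal to overage loss, order exactly mean demand:
q = newsvendor_quantity(price=10.0, cost=6.0, salvage=2.0, mean=100.0, std=20.0)
```

A higher salvage value makes leftover units cheaper and pushes the optimal order above mean demand, which is the lever a two-tiered salvage structure (markdown price versus post-season value) modifies.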

In Chapter 3, we address a retailer's problem of setting prices, including promotional prices, over a multi-period horizon for multiple substitutable products in the same product category. We consider the problem in a setting in which customers anticipate the retailer's pricing strategy and the retailer anticipates the customers' purchasing decisions. We formulate the problem as a two-stage game in which the profit maximizing retailer chooses prices and the utility maximizing customers respond by making explicit decisions regarding purchasing and consumption, and thus also implicit decisions regarding stockpiling. We incorporate a fairly general reference price formation process that allows for cross-product effects of prices on reference prices. We initially focus on a single customer segment. The representative customer's utility function accounts for the value of consumption of the products, psychological benefit (for deal-seekers) from purchasing at a price below his/her reference price but with diminishing marginal returns, costs of purchases, penalties for both shortages and holding inventory, and disutility for deviating from a consumption target in each period (where applicable). We are the first to develop a model that simultaneously accounts for this combination of realistic factors for the customer, and we also separate the customer's purchasing and consumption decisions. We develop a methodology for solving the customer's problem for arbitrary price trajectories based on a linear quadratic control formulation of an approximation of the customer's utility maximization problem. We derive analytical representations for the customer's optimal decisions as simple linear functions of prices, reference prices, inventory levels (as state variables), and the cumulative aggregate consumption level (as a state variable). (Abstract shortened by ProQuest.)

APA, Harvard, Vancouver, ISO, and other styles
45

Brabazon, Philip G. "Mass customization : fundamental modes of operation and study of an order fulfilment model." Thesis, University of Nottingham, 2006. http://eprints.nottingham.ac.uk/10182/.

Full text
Abstract:
This research studies Mass Customization (MC) as an operations strategy and model. Opinions differ over whether MC should be a label for a specific business model in which customers select from pre-engineered product options, or whether it should be interpreted as a performance goal with wider relevance. In this research it is viewed as the latter, and a manufacturing enterprise is considered to be a mass customizer if it gives its customers the opportunity to have a product any time they want it, anywhere they want it, any way they want it and in any volume they want it, while at the same time bringing the benefits associated with mass operations, in particular those of price and quality. In the literature MC is not one operations strategy but a family of sub-strategies, and there are several classification schemes, most of which delineate the sub-strategies by the point along the value chain at which customization takes place. Other than for one scheme, for which correlations between technologies and MC types have been sought by means of a survey, no progress has been made in developing operations configuration models. Through the study of primary and secondary case studies, several classification schemes are appraised and a new framework of five fundamental operations Modes is developed. The Modes are the kernel of a theory of MC, the other elements being: - A model for Mode selection that uses four factors to determine when a Mode is suitable; - Indicative models of the information infrastructures of two Modes that demonstrate the Modes to be different and that they can be a foundation for configuration models; - A set of product customizable attributes that reveals the multifaceted nature of customization and extends the terminology of customization; - The delta Value concept that links the motivation for customizing attributes to differences between customers. 
A theory of MC is proposed, which postulates: - An MC strategy is relevant when there are differences across customers in how they value the configurations of customizable attributes; - There are five operational sub-strategies of MC; - The choice of sub-strategy for an enterprise is contingent on its organisation and its business environment. One of the five modes, Catalogue MC, is the Mode that is commonly associated with MC. It is the Mode in which all product variants are fully engineered before being ordered. A diverse set of order fulfilment models of relevance to this Mode are reviewed and organised into four types: fulfilment from stock; fulfilment from a single decoupling point; fulfilment from several decoupling points; and fulfilment from a floating decoupling point. The term floating decoupling point is coined to describe systems that can allocate a product to a customer wherever the product lies, whether it be a finished product in stock, a part processed product or a product that does not yet exist but is in the production plan. In the automotive sector this system has been called Virtual-Build-to-Order (VBTO) and in this research the generic characteristics of VBTO systems are described and key concepts developed, in particular the concept of reconfiguration flexibility. Discrete event simulation and Markov models are developed to study the behaviour of the VBTO fulfilment model. The non-dimensional ratio of product variety / pipeline length is identified to be a fundamental indicator of performance. By comparing the VBTO system to a conventional system that can fulfil a customer from stock or by BTO only, the role of pipeline fulfilment is identified and a surprising observation is that it can cause stock levels and average customer waiting time to be higher than in a conventional system. 
The study also examines how customer differences, in particular their willingness to compromise and their aversion to waiting, affect fulfilment, and how fulfilment depends on reconfiguration flexibility.
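As a back-of-envelope gloss on why the variety/pipeline ratio matters (our illustration, not the thesis's simulation or Markov models): if each of the L units in the pipeline is an independently, uniformly chosen variant out of v, the chance that a non-compromising customer finds an exact match in the pipeline is 1 − (1 − 1/v)^L:

```python
def match_probability(variety, pipeline_length):
    """P(at least one pipeline unit matches the requested variant),
    assuming variants are i.i.d. uniform over the catalogue."""
    return 1.0 - (1.0 - 1.0 / variety) ** pipeline_length

# Matching degrades sharply as variety outgrows the pipeline:
p_low_variety  = match_probability(variety=10,   pipeline_length=100)  # ≈ 1.0
p_high_variety = match_probability(variety=1000, pipeline_length=100)  # ≈ 0.1
```

When variety is far larger than the pipeline, nearly every sale must be fulfilled by build-to-order or by reconfiguring a pipeline unit, which is where reconfiguration flexibility becomes decisive.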
APA, Harvard, Vancouver, ISO, and other styles
46

Couret, Marine. "Failure mechanisms implementation into SiGe HBT compact model operating close to safe operating area edges." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0265.

Full text
Abstract:
In an ever-growing terahertz market, BiCMOS technologies have reached cut-off frequencies beyond 0.5 THz. These dynamic performances are achieved thanks to recent technological improvements in SiGe heterojunction bipolar transistors (HBTs). However, the increased performance shifts the transistors' bias point closer to, or even beyond, the conventional safe-operating area (SOA). As a consequence, several "parasitic" physical effects are encountered, such as impact ionization or self-heating, which can activate failure mechanisms and hence limit the long-term reliability of the device. In this thesis, we develop an approach for describing and modeling the hot-carrier degradation that occurs in SiGe HBTs operating near the SOA edges. The study is based on an in-depth characterization of transistors under static and dynamic operating conditions. Based on these measurement results, a compact model of impact ionization and self-heating is proposed, accurately extending the validity domain of a commercially available compact model (HiCuM). Given operation at the SOA edges, an aging campaign was conducted to identify the physical origin of this failure mechanism. It demonstrates that hot-carrier degradation creates trap densities at the Si/SiO2 interface of the emitter-base spacer, which induce an additional recombination current in the base. A compact model integrating aging laws (HiCuM-AL) was developed to predict the evolution of transistor and circuit electrical parameters through an accelerated aging factor. For ease of use in computer-aided design (CAD) tools, the aging laws are scaled according to the geometry and architecture of the emitter-base spacer. The model has demonstrated its robustness and accuracy for several SiGe HBT technologies under various aging conditions.
In addition, a reliability study of several integrated circuit architectures precisely locates the regions most sensitive to the hot-carrier degradation mechanism. The HiCuM-AL model thus paves the way for circuit simulations that optimize mm-wave designs not only for raw performance but also for long-term reliability.
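The aging-law idea in the abstract can be illustrated with a toy sketch: hot-carrier stress creates interface traps at the emitter-base spacer, and the created traps add a recombination current to the base. The power-law form, constants, and units below are invented for illustration only; the thesis's HiCuM-AL laws are extracted from measurements and scaled with the spacer geometry.

```python
def trap_density(t_stress, aging_factor, n_it0=1e10, k=1e9, exponent=0.3):
    """Interface-trap density (cm^-2) after t_stress seconds of hot-carrier
    stress, with stress time sped up by an accelerated aging factor.
    The power law and all constants here are illustrative, not extracted."""
    return n_it0 + k * (aging_factor * t_stress) ** exponent

def extra_base_current(n_it, i_ref=1e-12, n_ref=1e10):
    """Additional base recombination current, assumed proportional to the
    created trap density (a simplifying assumption for this sketch)."""
    return i_ref * n_it / n_ref

# one hour of stress under a hypothetical 50x acceleration
n_it_1h = trap_density(3600.0, aging_factor=50.0)
i_b_extra = extra_base_current(n_it_1h)
```

The accelerated aging factor plays the same role as in the abstract: it maps a short, harsh stress test onto a longer mission profile.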
APA, Harvard, Vancouver, ISO, and other styles
47

Messmacher, Eduardo B. (Eduardo Bernhart) 1972. "Models for project management." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9217.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2000.
Also available online at the DSpace at MIT website.
Includes bibliographical references (p. 119-122).
Organizations perform work essentially through operations and projects. The characteristics of projects make them extremely difficult to manage: their non-repetitive nature rules out trial-and-error learning, while their short life span is particularly unforgiving of misjudgments. Some authors have found that effective scheduling is an important contributor to the success of research and development (R&D) as well as construction projects. The widely used critical path method for scheduling projects and identifying important activities fails to capture two important dimensions of the problem: the availability of different technologies (or options) to perform the activities, and the limited availability of resources that most managers face. When one accounts for these additional constraints, however, the problems become very hard to solve. In this thesis we propose an approach to the scheduling problem using a genetic algorithm, and compare its performance with more traditional approaches, such as an extension of a recently proposed Lagrangian relaxation method. The purpose of using genetic algorithms is twofold: first, to obtain good approximations to very hard problems, and second, to understand the limitations and virtues of this search technique. The aim of this thesis is not only to develop the algorithms but also to gain insight into the implications of the additional constraints from a project manager's perspective.
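As a rough illustration of the kind of method the abstract describes (not Messmacher's actual algorithm), a genetic algorithm for resource-constrained scheduling can encode each chromosome as an activity priority list and decode it with a serial schedule generator that respects precedence and resource capacity. The toy network, durations, resource data, and GA parameters below are all invented.

```python
import random

DUR = {1: 3, 2: 2, 3: 4, 4: 2}               # activity durations
PRED = {1: [], 2: [1], 3: [1], 4: [2, 3]}    # precedence constraints
USE = {1: 2, 2: 1, 3: 2, 4: 1}               # renewable resource usage
CAP = 3                                      # resource capacity

def decode(priority):
    """Serial schedule generator: repeatedly start the highest-priority
    eligible activity as early as precedence and capacity allow."""
    start, profile, done = {}, {}, set()
    while len(done) < len(DUR):
        eligible = [a for a in DUR
                    if a not in done and all(p in done for p in PRED[a])]
        a = min(eligible, key=priority.index)
        t = max([start[p] + DUR[p] for p in PRED[a]], default=0)
        while any(profile.get(u, 0) + USE[a] > CAP
                  for u in range(t, t + DUR[a])):
            t += 1                            # shift right until capacity fits
        for u in range(t, t + DUR[a]):
            profile[u] = profile.get(u, 0) + USE[a]
        start[a] = t
        done.add(a)
    return max(start[a] + DUR[a] for a in DUR), start

def ga(pop_size=20, gens=30, seed=0):
    """Evolve priority lists toward a short makespan (swap mutation only)."""
    rng = random.Random(seed)
    pop = [rng.sample(list(DUR), len(DUR)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: decode(c)[0])  # rank by makespan
        keep = pop[:pop_size // 2]            # elitist selection
        children = []
        for _ in range(pop_size - len(keep)):
            child = rng.choice(keep)[:]       # copy a survivor
            i, j = rng.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        pop = keep + children
    return decode(min(pop, key=lambda c: decode(c)[0]))[0]
```

On a realistic instance one would add a crossover operator and a repair step, but even this stripped-down loop shows why the encoding matters: any priority list decodes to a feasible schedule, so the search never has to handle constraint violations explicitly.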
by Eduardo B. Messmacher.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
48

Lettovsky, Ladislav. "Airline operations recovery : an optimization approach." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/24326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zhai, Wensi. "A study of the co-working operating model." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108832.

Full text
Abstract:
Thesis: S.M. in Real Estate Development, Massachusetts Institute of Technology, Program in Real Estate Development in conjunction with the Center for Real Estate, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 64).
After explosive development in the past half decade, the co-working industry is seeking changes to enhance the sustainability of its business model. Despite early success, the buy-bulk, sell-piece model no longer promises a high return because of rising rents. Floating revenue and high fixed costs make the model fundamentally risky, making it hard for co-working companies to withstand the next recession. In the face of intensifying competition, major co-working players are expanding aggressively, aiming to benefit from economies of scale, so the demand for funds is greater and more urgent than ever. Aside from commercial loans and venture capital, co-working companies are seeking more flexible and sustainable financing sources for growth. On the supply side, traditional real estate companies now have fewer doubts about, and greater interest in, participating in the co-working business. While a small group has chosen to start their own spaces, more are looking for strategic cooperation with co-working players that have proven track records. This thesis studies the co-working operating model in an attempt to identify the solution that benefits both sides of the business. Following a brief industry overview, it discusses the revenue and cost structure of co-working space and the pros and cons of five co-working operating models. With that understanding, it constructs a DCF model of a mock-up co-working project and develops cash flows for both participants to analyze their return and risk profiles under each operating model. The results suggest that the joint venture model is the optimal solution for co-working companies expanding their business and for property owners with passive investment positions, while the management model is the best choice for more mature co-working companies with strong brand influence and a focus on management services.
The results also indicate that the transformation from the lease model to the management and franchise models requires co-working companies to have a strong brand, a proven track record, and an established member network. For property owners, such a transformation depends on their willingness to be exposed to the co-working business, as well as their capital cost, risk tolerance, and investment horizon.
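The risk contrast between the lease model and the management model can be sketched with a toy DCF. This is not the thesis's mock-up project; all figures (rents, fee rate, discount rate, the year-4 downturn) are invented for illustration.

```python
def npv(cash_flows, rate):
    """Discounted value of year-end cash flows (year 1, 2, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1))

# desk revenue over five years, with a downturn in year 4
revenue = [900_000, 950_000, 1_000_000, 700_000, 1_000_000]
rent, opex, fee_rate = 500_000, 250_000, 0.12

# lease model: operator pays fixed rent and opex, keeps floating revenue
lease_operator = [r - rent - opex for r in revenue]
# management model: operator earns a fee on revenue; the owner bears volatility
mgmt_operator = [fee_rate * r for r in revenue]

lease_npv = npv(lease_operator, 0.10)
mgmt_npv = npv(mgmt_operator, 0.10)
```

Even in this toy setup the trade-off the thesis examines appears: the lease operator earns a higher NPV but goes cash-flow negative in the downturn year, while the fee-based operator stays positive throughout.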
by Wensi Zhai.
S.M. in Real Estate Development
APA, Harvard, Vancouver, ISO, and other styles
50

Monsch, Matthieu (Matthieu Frederic). "Large scale prediction models and algorithms." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/84398.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Operations Research Center, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 129-132).
Over 90% of the data available across the world has been produced over the last two years, and the pace is increasing. It has therefore become paramount to develop algorithms that scale to very high dimensions. In this thesis we show how structural properties of a given problem can be used to build models applicable in practice while keeping most of the value of a large data set. Our first application provides a provably near-optimal pricing strategy under large-scale competition; our second focuses on capturing the interactions between extreme weather and damage to the power grid from large historical logs. The first part of this thesis models competition in Revenue Management (RM) problems. RM is used extensively across a swathe of industries, from airlines to hospitality to retail, and the internet, by reducing search costs for customers, has added a new challenge to the design and practice of RM strategies: accounting for competition. This work considers a novel approach to dynamic pricing in the face of competition that is intuitive, tractable, and leads to asymptotically optimal equilibria. We also provide empirical support for the notion of equilibrium we posit. The second part of this thesis was done in collaboration with a utility company in the northeastern United States. In recent years, a number of powerful storms have led to extensive power outages. We provide a unified framework to help power companies reduce the duration of such outages. We first train a data-driven model to predict the extent and location of damage from weather forecasts. This information is then used in a robust optimization model to optimally dispatch repair crews ahead of time. Finally, we build an algorithm that uses incoming customer calls to compute the likelihood of damage at any point in the electrical network.
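The predict-then-dispatch pipeline in the abstract can be caricatured in a few lines (these are not the thesis's models): fit a least-squares damage predictor on historical storm data, then pre-position repair crews toward the regions with the highest predicted damage. All data and region names are invented, and a real model would use many weather features rather than wind speed alone, feeding a robust optimization rather than a proportional rule.

```python
# historical storms: peak wind (mph) -> damaged assets (invented data)
wind = [40, 60, 80, 55, 95]
damage = [12, 45, 110, 30, 180]

# ordinary least squares in closed form for a single feature
n = len(wind)
mw, md = sum(wind) / n, sum(damage) / n
slope = (sum((w - mw) * (d - md) for w, d in zip(wind, damage))
         / sum((w - mw) ** 2 for w in wind))
intercept = md - slope * mw

def predict(w):
    """Predicted number of damaged assets for a forecast peak wind speed."""
    return slope * w + intercept

# forecast wind per region; pre-position 10 crews proportional to damage
forecast = {"north": 70, "south": 45, "coast": 90}
pred = {r: max(predict(w), 0.0) for r, w in forecast.items()}
total = sum(pred.values())
crews = {r: round(10 * p / total) for r, p in pred.items()}
```

The point of the sketch is the information flow, damage forecast in, crew allocation out, which is the structure the thesis builds out with a robust optimization model.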
by Matthieu Monsch.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
