Dissertations / Theses on the topic 'Research, Industrial – Mathematical models'

Consult the top 50 dissertations / theses for your research on the topic 'Research, Industrial – Mathematical models.'


1

Mitwasi, Mousa George. "Mathematical models for the deterministic, capacitated, single kanban system." Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185523.

Full text
Abstract:
The kanban system is the most popular technique for implementing the Just-In-Time philosophy. In this dissertation we develop mathematical models for the deterministic, capacitated, single kanban system. Three different production structures are studied. The models are used to analyze the system, understand the need and behavior of kanbans, and compute good solutions for the number of kanbans to allocate for each part. The first model applies to the single-stage, single-item system. Optimal solutions for the number of kanbans for this system are developed. The second and third models are built for the multi-stage, single-item system and the single-stage, multi-item system respectively. Necessary and sufficient conditions for the feasibility of a set of kanbans are developed for the last two models. The conditions are used to develop heuristic and optimal solution procedures. The heuristic procedures are tested over randomly generated problems and are shown to perform very well compared to the optimal solution procedures.
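The dissertation's optimal and heuristic procedures are not reproduced in the abstract; as a hedged illustration of the quantity they size, here is the classic textbook kanban-sizing rule. The function name, parameters, and safety-factor term are generic JIT conventions, not Mitwasi's notation or model.

```python
import math

def kanban_count(demand_rate, lead_time, container_size, safety_factor=0.0):
    # Classic JIT sizing rule: k = ceil(D * L * (1 + alpha) / c), where
    # D = demand per period, L = replenishment lead time (in periods),
    # c = container capacity, alpha = safety allowance.
    return math.ceil(demand_rate * lead_time * (1.0 + safety_factor) / container_size)

# 100 units/day demand, half-day lead time, containers of 10 units, 10% safety:
print(kanban_count(100, 0.5, 10, 0.10))  # 6
```

The models in the dissertation go beyond this rule by handling capacity constraints and multi-stage/multi-item structures, where such closed-form sizing no longer applies.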
APA, Harvard, Vancouver, ISO, and other styles
2

Lin, Lebin. "Data Mining and Mathematical Models for Direct Market Campaign Optimization for Fred Meyer Jewelers." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1483558398637535.

3

Wang, Shuai. "Data mining techniques and mathematical models for the optimal scholarship allocation problem for a state university." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1515618183686262.

4

Malik, Shadan A. "Optimization model for product mix and capacity management with activity-based information." Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-02022010-020435/.

5

Abbas, Mustafa Sulaiman. "Consistency Analysis for Judgment Quantification in Hierarchical Decision Model." PDXScholar, 2016. https://pdxscholar.library.pdx.edu/open_access_etds/2699.

Abstract:
The objective of this research is to establish consistency thresholds linked to alpha (α) levels for the Hierarchical Decision Model's (HDM) judgment quantification method. Measuring consistency in order to control it is a crucial and inseparable part of any AHP/HDM experiment. Researchers on the subject recommend establishing thresholds that are statistically based on hypothesis testing and are linked to the number of decision variables and the (α) level. Such thresholds provide the means with which to evaluate the soundness and validity of an AHP/HDM decision. Linking thresholds to (α) levels allows decision makers to set an inconsistency tolerance appropriate to the situation at hand. Measurements of judgments are unreliable in the absence of an inconsistency measure with acceptable limits. All of this is essential to the credibility of the entire decision-making process and hence is extremely useful for practitioners and researchers alike. This research also includes distribution fitting for the inconsistencies, a valuable part of the results that adds practicality and insight. The excellent fits obtained give confidence that statistical inferences based on the fitted distributions accurately reflect the HDM's inconsistency measure.
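The inconsistency measure referred to above builds on the standard AHP idea of comparing the principal eigenvalue of a pairwise-comparison matrix with its size. The sketch below shows that standard computation only (Saaty's consistency index, estimated by power iteration); the thesis's α-linked thresholds are its own contribution and are not reproduced here.

```python
def consistency_index(matrix):
    # Saaty-style consistency index CI = (lambda_max - n) / (n - 1) for a
    # positive pairwise-comparison matrix, with lambda_max estimated by
    # power iteration (entries are positive, so Perron theory applies).
    n = len(matrix)
    v = [1.0] * n
    for _ in range(200):  # power iteration toward the principal eigenvector
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        v = [x / total for x in w]
    # Componentwise Rayleigh-style estimate of the principal eigenvalue.
    w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(w[i] / v[i] for i in range(n)) / n
    return (lam - n) / (n - 1)

# A perfectly consistent 3x3 matrix (a_ij = w_i / w_j) yields CI ~ 0:
A = [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]
print(consistency_index(A))
```

For a consistent matrix the principal eigenvalue equals n, so CI is zero up to floating-point error; inconsistent judgments push it above zero.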
6

Athawale, Samita. "Chemotherapy Appointment Scheduling and Operations Planning." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1428951061.

7

Fatollahzadeh, Kianoush. "A laboratory vehicle mock-up research work on truck driver’s selected seat position and posture : A mathematical model approach with respect to anthropometry, body landmark locations and discomfort." Doctoral thesis, KTH, Industriell ekonomi och organisation (Inst.), 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4028.

Abstract:
Professional truck drivers are highly exposed to fatigue and work-related injuries. Truck drivers are common victims of musculoskeletal disorders, frequently suffering from pain symptoms, particularly in the neck, shoulders and lower back. This situation is believed to be a contributor to the high absenteeism in this job category. A high percentage of this problem is due to the adoption of an unhealthy driving posture resulting from inappropriate seat design. This poor design stems from the insufficient and obsolete anthropometric data that have been used for decades to arrange and position components in the driver environment. The main objective of the present study was to construct a mathematical model that clarifies and predicts drivers' comfortable sitting posture and position. It was hypothesized that the length and height characteristics of some body segments, as well as the body weight and waist circumference of the driver, have a great impact on the selection of a specific sitting posture. The steering wheel positions as well as the pedal/floor locations were hypothesized to be highly correlated with the driver's selected posture and the corresponding comfort. The effect of the seat position on posture selection and related comfort assessments constituted the study's other hypothesis and received extra attention. A laboratory experiment on a Scania truck cab mock-up was conducted. The seat track travel along a vertical as well as a horizontal forward-backward path was obtained by mounting the seat on a motorized rigid frame that allowed unrestricted vertical and fore-aft travel. The seat cushion angle and backrest angle were adjusted by pivoting the entire seat and backrest around a lateral axis, independently of each other. The pedal components were mounted on a motorized platform, thus allowing unrestricted fore-aft and height travel without any changes in the pedal angles.
The steering wheel was mounted on the instrument panel by two independent pneumatic axes which allowed a wide range of adjustments, including tilting and moving along the sagittal plane for adjusting the height and distance. The test plan called for 55 highly experienced international heavy-truck drivers. The drivers were recruited to span a large range of body weight and stature, in particular to ensure adequate representation of both extreme and normal groups of drivers. The drivers filled in a general information questionnaire before undergoing the anthropometric measurements and thereafter the test trials. The experiment contained a subset of test conditions with five different trials using a random-selection sampling procedure. Drivers were asked to adjust the components over a wide range of trajectories according to a written protocol. A sparse set of three-dimensional body landmark locations and the corresponding comfort assessments were recorded. As the main part of the result, mathematical models were developed using multiple regression analyses on selected body landmarks and anthropometric measures, proposing a linear correlation between parameters. The differences between the observed data and the corresponding predicted data using the model were found to be minimal and almost negligible. Additionally, the drivers preferred to sit in the rearmost position and at a rather high level relative to the rest of the available and adjustable area. Considering the normal adjustable seat area of the cab, only a very small part of the observed H-point data lies within this area, while the larger remainder lies outside it. Moreover, the difference between the observation (plotted H-point data) and the neutral H-point was found to be significant. Furthermore, since some of the data lie almost on the border of the adjustable area, there may be a tendency toward even more seat adjustment in the backward direction.
A conceptual model consisting of four different parameters was developed and presented in the end. These parameters are suggested to be key factors that play a central role in the decision-making process behind the selection of a desirable sitting posture. Modifications and adjustments to eliminate or minimize discrepancies, biases or obscured factors affecting the quality of the mathematical model would be a subject for future study. A complete assessment of comfort should ultimately be supplemented with an analysis of how many truck drivers are satisfied with the comfort.
8

Vijayakumar, Bharathwaj. "Scheduling Surgical Cases in a Constrained Environment." Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1303093820.

9

Roychowdhury, Sayak. "Investigation of Flash-free Die Casting by Overflow Design Optimization." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1406121850.

10

Lim, Dong-Joon. "Technological Forecasting Based on Segmented Rate of Change." PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2220.

Abstract:
Consider the following questions in the early stage of new product development. What should be the target market for proposed design concepts? Who will be the competitors and how fast are they moving forward in terms of performance improvements? Ultimately, are the current design concept and targeted launch date feasible and competitive? To answer these questions, there is a need to integrate product benchmarking with the assessment of performance improvement so that analysts can have a risk measure for their R&D target-setting practices. Consequently, this study presents how time-series benchmarking analysis can be used to assist in scheduling new product releases. Specifically, the proposed model attempts to estimate the "auspicious" time by which proposed design concepts will be available as competitive products by taking into account the rate of performance improvement expected in a target segment. The empirical illustration of commercial airplane development has shown that this new method provides valuable information such as dominating designs, distinct segments, and the potential rate of performance improvement, which can be utilized in the early stage of new product development. In particular, six dominant airplanes are identified with corresponding local RoCs and, inter alia, technological advancement toward long-range and wide-body airplanes represents very competitive segments of the market with rapid changes. The resulting individualized RoCs are able to estimate the arrivals of four different design concepts, which is consistent with what has happened since 2007 in the commercial airplane industry. In addition, the case study of the Exascale supercomputer development is presented to demonstrate the predictive use of the new method. The results indicate that the current development target of 2020 might entail technical risks considering the rate of change emphasizing power efficiency observed in the past.
It is forecast that either a Cray-built hybrid system using Intel processors or an IBM-built Blue Gene architecture system using PowerPC processors will likely achieve the goal between early 2021 and late 2022. This indicates that the challenge of improving power efficiency by a factor of 23 would require a maximum delay of 4 years to reach the Exascale supercomputer relative to the existing performance curve.
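Lim's segmented rates of change are built on frontier (DEA-style) benchmarking, which is too involved to reproduce here. As a much simpler, hypothetical illustration of the underlying idea of extrapolating a performance trend to estimate when a target level arrives, here is a generic log-linear fit; the function, data, and numbers are illustrative placeholders, not the dissertation's method or results.

```python
import math

def arrival_year(years, perf, target):
    # Fit log(perf) = a + b * year by ordinary least squares, then solve
    # for the year at which the trend reaches the target performance level.
    n = len(years)
    log_perf = [math.log(p) for p in perf]
    mean_x = sum(years) / n
    mean_y = sum(log_perf) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, log_perf))
         / sum((x - mean_x) ** 2 for x in years))
    a = mean_y - b * mean_x
    return (math.log(target) - a) / b

# Toy data: performance doubling every two years starting at 1.0 in 2010.
years = [2010, 2012, 2014, 2016]
perf = [1.0, 2.0, 4.0, 8.0]
print(round(arrival_year(years, perf, 32.0)))  # 2020
```

A segmented approach would estimate a separate rate of change per market segment rather than one global trend, which is the refinement the abstract emphasizes.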
11

Estep, Judith. "Development of a Technology Transfer Score for Evaluating Research Proposals: Case Study of Demand Response Technologies in the Pacific Northwest." Thesis, Portland State University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10248715.

Abstract:

Investment in Research and Development (R&D) is necessary for innovation, allowing an organization to maintain a competitive edge. The U.S. Federal Government invests billions of dollars, primarily in basic research technologies, to help fill the pipeline for other organizations to take the technology into commercialization. However, it is not just about investing in innovation; it is about converting that research into application. A cursory review of research proposal evaluation criteria suggests that little to no emphasis is placed on the transfer of research results. This effort is motivated by the need to move research into application.

One segment that is facing technology challenges is the energy sector. Historically, the electric grid has been stable and predictable; therefore, there were no immediate drivers to innovate. However, an aging infrastructure, integration of renewable energy, and aggressive energy efficiency targets are motivating the need for research and to put promising results into application. Many technologies exist or are in development but the rate at which they are being adopted is slow.

The goal of this research is to develop a decision model that can be used to identify the technology transfer potential of a research proposal. An organization can use the model to select the proposals whose research outcomes are more likely to move into application. The model begins to close the chasm between research and application—otherwise known as the “valley of death”.

A comprehensive literature review was conducted to understand when the idea of technology application or transfer should begin. Next, the attributes that are necessary for successful technology transfer were identified. Successful technology transfer depends on a productive relationship between the researchers and the technology recipient. A hierarchical decision model, along with desirability curves, was used to understand the complexities of the researcher-recipient relationship, specific to technology transfer. In this research, the evaluation criteria of several research organizations were assessed to understand the extent to which the success attributes identified in the literature were considered when reviewing research proposals. While some of the organizations included a few of the success attributes, none of the organizations considered all of the attributes. In addition, none of the organizations quantified the value of the success attributes.

The effectiveness of the model relies extensively on expert judgments to complete the model validation and quantification. Subject matter experts ranging from senior executives with extensive experience in technology transfer to principal research investigators from national labs, universities, utilities, and non-profit research organizations were used to ensure a comprehensive and cross-functional validation and quantification of the decision model.

The quantified model was validated using a case study involving demand response (DR) technology proposals in the Pacific Northwest. The DR technologies were selected based on their potential to solve some of the region’s most prevalent issues. Several sensitivity scenarios were developed to test the model’s response to extreme cases, the impact of perturbations in expert responses, and whether it can be applied to technologies other than demand response. In other words, is the model technology-agnostic? In addition, the model’s flexibility as a tool for communicating which success attributes in a research proposal are deficient and need strengthening, and how improvements would increase the overall technology transfer score, was assessed. The low-scoring success attributes in the case study proposals (e.g. project meetings) were clearly identified as the areas to improve for increasing the technology transfer score. As a communication tool, the model could help a research organization identify areas to bolster to improve its overall technology transfer score. Similarly, the technology recipient could use the results to identify areas that need to be reinforced while the research is ongoing.

The research objective is to develop a decision model resulting in a technology transfer score that can be used to assess the technology transfer potential of a research proposal. The technology transfer score can be used by an organization in the development of a research portfolio. An organization’s growth, in a highly competitive global market, hinges on superior R&D performance and the ability to apply the results. The energy sector is no different. While there is sufficient research being done to address the issues facing the utility industry, the rate at which technologies are adopted is lagging. The technology transfer score has the potential to increase the success of crossing the chasm to successful application by helping an organization make informed and deliberate decisions about their research portfolio.

12

Takaidza, Isaac. "Modelling the optimal efficiency of industrial labour force in the presence of HIV/AIDs pandemic." Thesis, Cape Peninsula University of Technology, 2012. http://hdl.handle.net/20.500.11838/1305.

Abstract:
Thesis (DTech (Mechanical Engineering))--Cape Peninsula University of Technology, 2012
In this thesis, we investigate certain key aspects of mathematical modelling to explain the epidemiology of HIV/AIDS at the workplace and to assess the potential benefits of proposed control strategies. Deterministic models are formulated to investigate the effects of the transmission dynamics of HIV/AIDS on labour force productivity. The population is divided into mutually exclusive but exhaustive compartments, and a system of differential equations is derived to describe the spread of the epidemic. The qualitative features of their equilibria are analyzed and conditions under which they are stable are provided. Sensitivity analysis of the reproductive number is carried out to determine the relative importance of model parameters to initial disease transmission. Results suggest that optimal control theory, in conjunction with standard numerical procedures and cost-effectiveness analysis, can be used to determine the best intervention strategies to curtail the burden HIV/AIDS imposes on the human population, and in particular on the global economy through infection of the most productive individuals. We utilise Pontryagin’s Maximum Principle to derive, and then analyze numerically, the conditions for optimal control of the disease with effective use of condoms, enlightenment/educational programs, treatment regimes and screening of infectives. We study the potential impact on productivity of combinations of these conventional control measures against HIV. Our numerical results suggest that increased access to antiretroviral therapy (ART) could not only decrease HIV prevalence but also increase the productivity of the infected, especially when coupled with prevention, enlightenment and screening efforts.
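The thesis's compartmental structure is not specified in the abstract; as a generic illustration of the deterministic, ODE-based approach it describes, here is a minimal SIR-type forward-Euler integration. The compartments, parameters, and step sizes below are textbook placeholders, not Takaidza's model.

```python
def sir_euler(beta, gamma, s0, i0, dt=0.01, steps=10000):
    # Forward-Euler integration of the classic compartmental system
    #   dS/dt = -beta * S * I,   dI/dt = beta * S * I - gamma * I,
    # with S and I expressed as fractions of the population.
    s, i = s0, i0
    for _ in range(steps):
        new_infections = beta * s * i
        s += dt * (-new_infections)
        i += dt * (new_infections - gamma * i)
    return s, i

# Basic reproductive number of this toy model: R0 = beta * S0 / gamma.
beta, gamma = 0.5, 0.25
print(beta * 1.0 / gamma)  # 2.0: the epidemic initially grows
s_end, i_end = sir_euler(beta, gamma, s0=0.99, i0=0.01)
```

The thesis layers optimal control (via Pontryagin's Maximum Principle) on top of such dynamics, turning intervention levels like condom use and screening into time-varying control variables.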
13

Manipura, Walappuly Mudiyanselage Janakasiri Aruna Shantha Bandara. "Bioprocess development for removal of nitrogenous compounds from precious metal refinery wastewater." Thesis, Rhodes University, 2008. http://hdl.handle.net/10962/d1007341.

Abstract:
Removal of nitrogenous compounds from precious metal refinery (PMR) wastewater is important in terms of avoiding eutrophication (environmental protection), metal recovery (increased overall process efficiency and value recovery) and reuse of treated water (maximum use of natural resources). Extreme pH conditions (4 to 13 depending on the wastewater stream), high chemical oxygen demand (> 10,000 mg/l), numerous metals and high concentrations of those metals (> 20 mg/l of platinum group metals) in the wastewater are the main challenges for biological removal of nitrogenous compounds from PMR wastewater. Nitrogenous compounds such as NH₄⁺-N and NO₃⁻-N are strong metal ligands, which make it difficult to recover metals from the wastewater. Therefore, a bioprocess was developed for removal of nitrogenous compounds from carefully simulated PMR wastewater. A preliminary investigation of metal wastewater was carried out to determine its composition and physico-chemical properties, the ability to nitrify and denitrify under different pH conditions, and denitrification with different carbon source compounds and amounts. Even at pH 4, nitrification could be carried out. A suitable hydraulic retention time was found to be 72 hours. There was no significant difference between sodium acetate and sodium lactate as carbon sources for denitrification. Based on these results, a reactor comparison study was carried out using simulated PMR wastewater in three types of reactors: continuously stirred tank reactor (CSTR), packed-bed reactor (PBR) and airlift suspension reactor (ALSR). These reactors were fed with 30 mg/l of Rh bound in an NH₄⁺-based compound (Claus salt: pentaaminechlororhodium (III) dichloride). Total nitrogen removal efficiencies of > 68 %, > 79 % and > 45 % were obtained in the CSTR, PBR and ALSR, respectively.
Serially connected CSTR-PBR and PBR-CSTR reactor configurations were then studied to determine the best configuration for maximum removal of nitrogenous compounds from the wastewater. The PBR-CSTR configuration gave consistent biomass retention and automatic pH control in the CSTR. Ammonium removal efficiencies > 95 % were achieved in both reactors. As poor nitrate removal was observed, a toxicity study was carried out using respirometry, and the half-saturation inhibition coefficients for Pt, Pd, Rh and Ru were found to be 15.81, 25.00, 33.34 and 39.25 mg/l, respectively. A mathematical model was developed to describe the nitrogen removal in PMR wastewater using activated sludge model number 1 (ASM1), two-step nitrification and metal toxicity. An operational protocol was developed based on the literature review, experimental work and simulation results. The optimum reactor configuration under the set conditions (20 mg/l of Rh and < 100 mg/l of NH₄⁺-N) was found to be the PBR-CSTR-PBR process, which achieved overall NH₄⁺-N and NO₃⁻-N removal efficiencies of > 90 % and 95 %, respectively. Finally, a rudimentary microbial characterisation was carried out on subsamples from the CSTR and the secondary PBR. It was found that the CSTR biomass consisted of both rods and cocci, while the secondary PBR consisted of rods only. Based on this experimental work, further research needs and recommendations were identified for optimisation of the developed bioprocess for removal of nitrogenous compounds from PMR wastewater.
14

梁慧敏 and Wai-man Wanthy Leung. "Evolutionary optimisation of industrial systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B30252994.

15

He, Yumei, and 何玉梅. "Essays on public infrastructure, industrial location and regional development." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B39707313.

16

Enrique, Eduardo Horacio. "Mathematical models based on spline functions for industrial applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ60534.pdf.

17

Delgado, San Martin Juan A. "Mathematical models for preclinical heterogeneous cancers." Thesis, University of Aberdeen, 2016. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=230139.

Abstract:
Cancer is a deadly, complex disease with 14 million new cases diagnosed every year, and the endeavour to develop a cure is a global multidisciplinary effort. The complexity of cancer and the resulting vast volume of data derived from its research necessitate a robust and cutting-edge system of mathematical and statistical modelling. This thesis proposes novel mathematical models of quantification and modelling applied to heterogeneous preclinical cancers, focusing on the translation of animal studies into patients with particular emphasis on tumour stroma. The first section of this thesis (quantification) will present different techniques of extracting and quantifying data from bioanalytical assays. The overall aim will be to present and discuss potential methods of obtaining data regarding tumour volume, stromal morphology, stromal heterogeneity, and oxygen distribution. Firstly, a 3D scanning technique will be discussed. This technique aims to assess tumour volume in mice more precisely than the current favoured method (callipers) and record any cutaneous symptoms as well, with the potential to revolutionise tumour growth analysis. Secondly, a series of image processing methods will be presented which, when applied to tumour histopathology, demonstrate that tumour stromal morphology and its microenvironment play a key role in tumour physiology. Lastly, it will be demonstrated through the integration of in-vitro data from various sources that oxygen and nutrient distribution in tumours is very irregular, creating metabolic niches with distinct physiologies within a single tumour. Tumour volume, oxygen, and stroma are the three aspects central to the successful modelling of tumour drug responses over time.
The second section of this thesis (modelling) will feature a mathematical oxygen-driven model - utilising 38 cell lines and 5 patient-derived animal models - that aims to demonstrate the relationship between homogeneous oxygen distribution and preclinical tumour growth. Finally, all concepts discussed will be merged into a computational tumour-stroma model. This cellular automaton (stochastic) model will demonstrate that tumour stroma plays a key role in tumour growth and has both positive (at a molecular level) and negative (at both a molecular and tissue level) effects on cancers. This thesis contains a useful set of algorithms to help visualise, quantify, and understand tissue phenomena in cancer physiology, as well as providing a series of platforms to predict tumour outcome in the preclinical setting with clinical relevance.
18

Poschke, Markus. "Firm heterogeneity and macroeconomic performance." Doctoral thesis, European University Institute, 2007. http://hdl.handle.net/1814/10310.

Abstract:
Defence date: 7 December 2007
Examining Board: Prof. Omar Licandro, (EUI) ; Prof. Salvador Ortigueira, (EUI) ; Prof. Russell Cooper, (University of Texas at Austin) ; Prof. Jaume Ventura, (CREI, Universitat Pompeu Fabra)
PDF of thesis uploaded from the Library digital archive of EUI PhD theses
1. The regulation of entry and aggregate productivity. Euro Area economies have lower firm turnover rates, lower total factor and labor productivity, and higher capital intensity than the United States. I argue that differences in entry cost contribute to this pattern by affecting firms' technology choice. Introducing technology choice into a standard heterogeneous firm model, small differences in administrative entry cost suffice to explain 10-20% of differences in total factor productivity and the capital-output ratio. The productivity difference arises because higher equilibrium capital intensity acts as an entry barrier and protects low-productivity incumbents. Both firm heterogeneity and technology choice are crucial for strengthening results compared to previous studies.
2. Employment protection, firm selection, and growth. This paper analyzes the effect of firing costs on aggregate productivity growth. For this purpose, a model of endogenous growth through selection and imitation is developed. It is consistent with recent evidence on firm dynamics and on the importance of reallocation for productivity growth. In the model, growth is driven by selection among heterogeneous incumbent firms, and is sustained as entrants imitate the best incumbents. In this framework, firing costs not only induce misallocation of labor, but also affect growth by affecting firms' exit decisions. Importantly, charging firing costs only to continuing firms raises growth by promoting selection. Also charging them to exiting firms is akin to an exit tax, hampers selection, and reduces growth, by 0.1 percentage points in a calibrated version of the model. With job turnover very similar in the two settings, this implies that the treatment of exiting firms matters for welfare. In addition, the impact on growth rates is larger in sectors where firms face larger idiosyncratic shocks, as in services. This fits evidence that recent EU-US growth rate differences are largest in these sectors and implies that firing costs can play a role here. A brief empirical analysis of the impact of firing costs on the size of exiting firms supports the model's conclusions.
3. The labor market, the decision to become an entrepreneur, and the firm size distribution. Why do some people become entrepreneurs, and how do labor markets affect this choice? This paper addresses this question using a matching model with occupational choice and heterogeneity in both ability as a worker and ex ante unknown productivity of firm start-ups. Key effects are the following: labor market conditions affect incentives to start firms differently for workers and the unemployed, with repercussions on aggregate productivity; and they affect the expected value of firm creation due to the possibility of failure. These effects go beyond the standard impact of labor market conditions on firms' employment policy and value. The correlation of observed productive ability and potential productivity significantly shapes the firm size distribution, suggesting that the empirical correlation is positive but far from perfect. Finally, the model allows for a comparatively flexible lower tail of the firm size distribution and can explain the existence and persistence of small, low-productivity firms with low profits: their owners have low outside options in the labor market.
19

Tai, Hoi-lun Allen, and 戴凱倫. "Quantitative analysis in monitoring and improvement of industrial systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B4394193X.

20

O'Kiely, Doireann. "Mathematical models for the glass sheet redraw process." Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:3788de4d-8254-4fba-9cd0-4bec32409d1e.

Full text
Abstract:
In this thesis we derive mathematical models for the glass sheet redraw process for the production of very thin glass sheets. In the redraw process, a prefabricated glass block is fed into a furnace, where it is heated and stretched by the application of draw rollers to reduce its thickness. Redrawn sheets may be used in various applications including smartphone and battery technology. Our aims are to investigate the factors determining the final thickness profile of a glass sheet produced by this process, as well as the growth of out-of-plane deformations in the sheet during redraw. Our method is to model the glass sheet using the Navier–Stokes equations and free-surface conditions, and to exploit small aspect ratios in the sheet to simplify and solve these equations using asymptotic expansions. We first consider a simple two-dimensional sheet to determine which physical effects should be taken into account in modelling the redraw process. Next, we derive a mathematical model for redraw of a thin three-dimensional sheet. We consider the limits in which the heater zone is either short or long compared with the sheet half-width. The resulting reduced models predict the thickness profile of the redrawn sheet and the initial shape required to redraw a product of uniform thickness. We then derive mathematical models for buckling of thin viscous sheets during redraw. For buckling of a two-dimensional glass sheet due to gravity-induced compression, we predict the evolution of the centreline and investigate the early- and late-time behaviour of the system. For a three-dimensional glass sheet undergoing redraw, we use numerical solutions to investigate the behaviour of the sheet mid-surface.
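The leading-order thinning in the simplest setting has a closed form. For a steady, isothermal Newtonian sheet drawn in one dimension with inertia, gravity and surface tension neglected, mass conservation plus constant axial tension give an exponential velocity profile, hence h(x) = h0·Dr^(-x/L) where Dr is the draw ratio. The sketch below evaluates this classical solution with illustrative numbers; it is a much-simplified stand-in for the asymptotic models of the thesis, not the thesis's own equations.

```python
def redraw_thickness_profile(h0, draw_ratio, n=101):
    """Thickness h(x) of a steady, isothermal Newtonian sheet drawn over a
    heater zone of unit length (classical 1D drawing solution).

    Mass conservation: h(x) * u(x) = h0 * u0.
    Constant tension:  u(x) = u0 * draw_ratio ** x  for 0 <= x <= 1,
    hence h(x) = h0 * draw_ratio ** (-x).
    """
    xs = [i / (n - 1) for i in range(n)]
    return [h0 * draw_ratio ** (-x) for x in xs]

# A draw ratio of 100 thins a 1 mm block to a 10-micron sheet.
profile = redraw_thickness_profile(h0=1.0, draw_ratio=100.0)
```

The exponential profile also explains why redraw is so effective: the thickness reduction equals the full draw ratio, with the steepest thinning near the exit of the heater zone.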
APA, Harvard, Vancouver, ISO, and other styles
21

Viriththamulla, Gamage Indrajith. "Mathematical programming models and heuristics for standard modular design problem." Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185431.

Full text
Abstract:
In this dissertation, we investigate the problem of designing standard modules which can be used in a wide variety of products. The basic problem is: given a set of parts and products, and a list of the number of each part required in each product, how do we group parts into modules and modules into products to minimize costs and satisfy requirements. The design of computers, electronic equipment, tool kits, emergency vehicles and standard military groupings are among the potential applications for this work. Several mathematical programming models for modular design are developed, and the advantages and weaknesses of each model are analyzed. We demonstrate the difficulties, due to nonconvexity, of applying global optimization methods to solve these mathematical models. We develop necessary and sufficient conditions for satisfying requirements exactly, and use these results in several heuristic methods. Three heuristic structures are considered: decomposition, sequential local search, and approximation. The decomposition approach extends previous work on modular design problems. Sequential local search uses a standard local solution routine (MINOS) and sequentially adds cuts on the objective function to the original model. The approximation approach uses a "least squares" relaxation to find upper and lower bounds on the objective of the optimal solution. Computational results are presented for all three approaches and suggest that the approximation approach performs better than the others (with respect to speed and solution quality). We conclude the dissertation with a stochastic variation of the modular design problem and a solution heuristic. We discuss an approximation model to the continuous formulation, which is a geometric programming model. We develop a heuristic to solve this problem using monotonicity properties of the functions. Computational results are given and compared with an upper bound.
APA, Harvard, Vancouver, ISO, and other styles
22

Terciyanli, Erman. "Alternative Mathematical Models For Revenue Management Problems." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610711/index.pdf.

Full text
Abstract:
In this study, the seat inventory control problem is considered for airline networks from the perspective of a risk-averse decision maker. In the revenue management literature, it is generally assumed that decision makers are risk-neutral, so the expected revenue is maximized without taking variability or any other risk factor into account. A risk-sensitive approach, on the other hand, provides more information about the behavior of the revenue. The risk measure we consider in this study is the probability that revenue is less than a predetermined threshold level. In the risk-neutral case, while the expected revenue is maximized, the probability of revenue being less than such a predetermined level might be high. We propose three mathematical models to incorporate the risk measure under consideration. The optimal allocations obtained by these models are numerically evaluated in simulation studies for example problems. Expected revenue, coefficient of variation, load factor and probability of poor performance are the performance measures in the simulation studies. According to the results of these simulations, it is shown that the proposed models can decrease the variability of the revenue considerably; in other words, the probability of revenue being less than the threshold level is decreased. Moreover, expected revenue can be increased in some scenarios by using the proposed models. The approach considered in this thesis is proposed especially for small-scale airlines, because the risk of obtaining revenue below the threshold level is greater for such airlines than for large-scale airlines.
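The risk measure in question is easy to estimate by simulation. The sketch below evaluates a toy single-leg, two-fare booking-limit policy by Monte Carlo, reporting both expected revenue and the probability that revenue falls below a threshold; all numbers are illustrative and the policy is a generic textbook rule, not one of the thesis's three models.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplicative method; adequate for small lambda.
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate(protect_high, threshold, capacity=100, fare=(100, 250),
             mean_demand=(90, 30), n_trials=5000, seed=7):
    """Monte Carlo estimate of expected revenue and P(revenue < threshold)
    for a two-fare booking-limit policy: low-fare sales are capped at
    capacity - protect_high, then high-fare demand uses what remains."""
    rng = random.Random(seed)
    revenues = []
    for _ in range(n_trials):
        d_low = poisson(mean_demand[0], rng)
        d_high = poisson(mean_demand[1], rng)
        sold_low = min(d_low, capacity - protect_high)
        sold_high = min(d_high, capacity - sold_low)
        revenues.append(fare[0] * sold_low + fare[1] * sold_high)
    mean_rev = sum(revenues) / n_trials
    risk = sum(r < threshold for r in revenues) / n_trials
    return mean_rev, risk
```

Sweeping `protect_high` and comparing the two outputs reproduces the trade-off studied in the thesis: an allocation with slightly lower expected revenue can have a markedly smaller probability of poor performance.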
APA, Harvard, Vancouver, ISO, and other styles
23

Cui, Lixin, and 崔麗欣. "Integrated supplier selection and order allocation incorporating customer flexibility." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B47869380.

Full text
Abstract:
Supplier selection and order allocation are significant decisions for a manufacturer to ensure stable material flows in a highly competitive supply chain, in particular when customers are willing to accept products with less desirable product attributes. Hence, this study develops efficient methodologies to optimally solve the integrated supplier selection and order allocation problem incorporating customer flexibility for a manufacturer producing multiple products over a multi-period planning horizon. In this research, a new fuzzy multi-attribute approach is proposed to evaluate customer flexibility, which is characterized through range and response. The approach calculates the product’s general utility value. This value is used by a bi-variant function which is developed to determine the retail price for the product. A new mixed integer programming model describing the behavior of the basic problem is first developed. This basic model is the first to jointly determine: 1) the type and quantity of the product variants to be offered; 2) the suppliers to be selected and orders to be allocated; and 3) the inventory levels of product variants and raw materials/components. The objective is to maximize the manufacturer’s total profit subject to various operating constraints. This basic problem constitutes a very complex combinatorial optimization problem that is Nondeterministic Polynomial (NP)-hard. To tackle this challenge, two new optimization algorithms, an improved genetic approach called king GA (KGA) and an innovative hybrid algorithm called (CP-SA)_I, which combines the techniques of constraint programming and simulated annealing, are developed to locate optimal solutions. Extensive computational experiments demonstrate the effectiveness of these algorithms and also show clearly that (CP-SA)_I outperforms KGA in terms of both solution quality and computational cost. 
To examine the influence of subcontracting, a widespread practice in modern production management, this study also develops a modified mathematical model. It shares some similarity with the basic model but brings additional complexity by taking into consideration subcontractors for intermediate components and machine capacity. Since (CP-SA)_I outperforms KGA, it is employed and modified to solve the modified problem. Hence, this study presents a new hybrid algorithm, called (CP-SA)_II, to locate optimal solutions. This study also establishes a new parallel (CP-SA)_II algorithm to enhance the performance of (CP-SA)_II. This parallel algorithm is implemented on a distributed computing platform based on the contemporary Graphics Processing Unit (GPU) using the Compute Unified Device Architecture (CUDA) programming model. Extensive numerical experiments clearly demonstrate that the parallel (CP-SA)_II algorithm and its serial counterpart are efficient and robust optimization tools for formulating integrated supplier selection and order allocation decisions. Sensitivity analysis is employed to study the effects of the critical parameters on the performance of these algorithms. Finally, the convergence behavior of the proposed parallel (CP-SA)_II algorithm is studied theoretically. The results prove that the search process eventually converges to the global optimum if the overall best solution is maintained over time.
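The simulated-annealing half of such a hybrid is straightforward to sketch. The toy below anneals a single-period order allocation among capacitated suppliers with linear unit costs; the data and move rule are hypothetical, and the thesis's actual algorithms add product variants, pricing, inventory and a constraint-programming component on top of this idea.

```python
import math
import random

def anneal_allocation(demand, costs, caps, t0=10.0, cooling=0.999,
                      iters=5000, seed=1):
    """Simulated-annealing sketch: split `demand` units among suppliers
    with unit `costs` and capacities `caps`, minimizing total cost.
    A toy stand-in for the SA component of a hybrid CP-SA algorithm."""
    rng = random.Random(seed)
    n = len(costs)
    cost = lambda a: sum(c * q for c, q in zip(costs, a))
    # Feasible greedy start: fill suppliers in index order.
    cur, left = [0] * n, demand
    for i in range(n):
        cur[i] = min(left, caps[i])
        left -= cur[i]
    cur_cost = cost(cur)
    best, best_cost, t = cur[:], cur_cost, t0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        room = min(cur[i], caps[j] - cur[j])
        if room > 0:
            q = rng.randint(1, room)  # move q units from supplier i to j
            cand = cur[:]
            cand[i] -= q
            cand[j] += q
            delta = cost(cand) - cur_cost
            # Accept improvements always, worsenings with Boltzmann probability.
            if delta < 0 or rng.random() < math.exp(-delta / t):
                cur, cur_cost = cand, cur_cost + delta
                if cur_cost < best_cost:
                    best, best_cost = cur[:], cur_cost
        t *= cooling
    return best, best_cost
```

The neighborhood here only ever moves units between suppliers, so every candidate stays feasible; that design choice keeps the acceptance test simple at the price of needing a feasible start.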
published_or_final_version
Industrial and Manufacturing Systems Engineering
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
24

De, Matteis Giovanni. "Mathematical models for biaxial nematic liquid crystals." Doctoral thesis, Scuola Normale Superiore, 2005. http://hdl.handle.net/11384/85713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Cooper, William L. "Revenue management, auctions, and perishable inventories." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/25805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Ertek, Gurdal. "Pricing models for two-stage supply chains." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/30693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Mendoza, Maria Nimfa F. "Essays in production theory : efficiency measurement and comparative statics." Thesis, University of British Columbia, 1989. http://hdl.handle.net/2429/30734.

Full text
Abstract:
Nonparametric linear programming tests for consistency with the hypotheses of technical efficiency and allocative efficiency for the general case of multiple output-multiple input technologies are developed in Part I. The tests are formulated relative to three kinds of technologies — convex, constant returns to scale and quasiconcave technologies. Violation indices as summary indicators of the distance of an inefficient observation from an efficient allocation are proposed. The consistent development of the violation indices across the technical efficiency and allocative efficiency tests allows us to obtain comparative measures of the degrees of technical inefficiency and pure allocative inefficiency. Constrained optimization tests applicable to cases where the producer is restricted to optimizing with respect to a subset of goods are also proposed. The latter tests yield the revealed preference-type inequalities commonly used as tests for consistency of observed data with profit maximizing or cost minimizing behavior as limiting cases. Computer programs for implementing the different tests and sample results are listed in the appendix. In part II, an empirical comparison of nonparametric and parametric measures of technical progress for constant returns to scale technologies is performed using the Canadian input-output data for the period 1961-1980. The original data base was aggregated into four sectors and ten goods and the comparison was done for each sector. If we assume optimizing behavior on the part of the producers, we can reinterpret the violation indices yielded by the efficiency tests in part I as indicators of the shift in the production frontier. More precisely, the violation indices can be considered nonparametric chained indices of technical progress. 
The parametric measures of technical progress were obtained through econometric profit function estimation using the generalized McFadden flexible functional form with a quadratic spline model for technical progress proposed by Diewert and Wales (1989). Under the assumption of constant returns, the index of technical change is defined in terms of the unit scale profit function which gives the per unit return to the normalizing good. The empirical results show that the parametric estimates of technical change display a much smoother behavior, which can be attributed to the incorporation of stochastic disturbance terms in the estimation procedure, and, more interestingly, track the long-term trend in the nonparametric estimates. Part III builds on the theory of minimum wages in international trade and is a theoretical essay in the tradition of analyzing the effects of factor market imperfections on resource allocation. The comparative static responses of the endogenous variables — output levels, employment levels of fixed-price factors with elastic supply and flexible prices of domestic resources — to marginal changes in the economy's exogenous variables — output prices, fixed factor prices and endowments of flexibly-priced domestic resources — are examined. The effect of a change in a fixed factor price on other flexible factor prices can be decomposed Slutsky-like into substitution and scale effects. A symmetry condition between fixed factor prices and flexible factor prices is obtained which clarifies the concepts of "substitutability" and "complementarity" between these two kinds of factors. As an illustration, the model is applied to the case of a devaluation in a two-sector small open economy with rigid wages and capital as specific factors. The empirical implementation of the general model for the Canadian economy is left to more able econometricians, but a starting point can be the sectoral analysis performed in Part II.
Arts, Faculty of
Vancouver School of Economics
Graduate
APA, Harvard, Vancouver, ISO, and other styles
28

Murat, Ekrem Alper. "An allocation based modeling and solution framework for location problems with dense demand /." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=102685.

Full text
Abstract:
In this thesis we present a unified framework for planar location-allocation problems with dense demand. The emergence of information technologies such as Geographical Information Systems (GIS) has enabled access to detailed demand information. This proliferation of demand data brings about serious computational challenges for traditional approaches, which are based on discrete demand representation. Furthermore, traditional approaches model the problem in location variable space and make the allocation decisions optimally given the locations. This is equivalent to prioritizing location decisions. However, when allocation decisions are more decisive, or the choice of exact locations is a later-stage decision, we need to prioritize allocation decisions. Motivated by these trends and challenges, we herein adopt a modeling and solution approach in the allocation variable space.
Our approach has two fundamental characteristics: demand representation in the form of continuous density functions and allocation decisions in the form of service regions. Accordingly, our framework is based on continuous optimization models and solution methods. On a plane, service regions (allocation decisions) assume different shapes depending on the metric chosen. Hence, this thesis presents separate approaches for two-dimensional Euclidean-metric and Manhattan-metric distance measures. Further, the solution approaches of this thesis can be classified as constructive and improvement-based procedures. We show that the constructive solution approach, namely the shooting algorithm, is an efficient procedure for solving both the single-dimensional n-facility and planar 2-facility problems. While the constructive solution approach is analogous for both metric cases, the improvement approach differs due to the shapes of the service regions. In the Euclidean-metric case, a pair of service regions is separated by a straight line; in the Manhattan metric, however, separation takes place along at most three line segments. For planar 2-facility Euclidean-metric problems, we show that shape-preserving transformations (rotation and translation) of a line allow us to design improvement-based solution approaches. Furthermore, we extend this shape-preserving transformation concept to the n-facility case via a vertex-iteration-based improvement approach and design first-order and second-order solution methods. In the case of planar 2-facility Manhattan-metric problems, we adopt translation as the shape-preserving transformation for each line segment and develop an improvement-based solution approach. For the n-facility case, we provide a hybrid algorithm. Lastly, we provide results of a computational study and complexity results for our vertex-based algorithm.
APA, Harvard, Vancouver, ISO, and other styles
29

辛樹豪 and Shu-ho Sun. "A two-dimensional continuum approach to facility location problems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31223394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Huang, Chih-yüan. "AN ANALYSIS OF CAPICITY EXPANSION PROBLEMS WITH BACKORDERS AND STOCHASTIC DEMAND." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/292045.

Full text
Abstract:
We show that, under certain conditions, instead of solving stochastic capacity expansion problems, we can obtain the same optimal solution by solving deterministic equivalent problems. Since only the first decision must be implemented immediately, knowing the optimal first decision is nearly as good as knowing the entire optimal sequence. Hence, if we can solve a problem with a 'big enough' finite horizon such that the first decision remains optimal for any horizon longer than this one, then we identify this finite horizon as a forecast horizon. The forward dynamic programming recursion can be used to solve a finite horizon problem. An efficient forward algorithm has been developed to obtain the first optimal decision and the forecast horizon. A heuristic algorithm has also been derived to prove that an initial decision is within a known error bound of the optimal first decision. Several examples are examined to investigate how a decision is affected by randomness. (Abstract shortened with permission of author.)
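The forward recursion is easy to illustrate on a deterministic instance. The sketch below is a Wagner-Whitin-style dynamic program: each expansion incurs a fixed setup cost plus a linear cost, discounted to the period in which it is made, and the recursion returns the total cost and the size of the first expansion. All data are hypothetical; re-solving with progressively longer horizons and checking whether that first decision stops changing is precisely the forecast-horizon test described in the abstract.

```python
def plan_expansion(demands, setup=50.0, unit=1.0, rate=0.1):
    """Forward DP for a deterministic capacity-expansion problem.

    demands[t] is the extra capacity needed in period t. An expansion made
    at period t covering periods t..j-1 costs setup + unit * total size,
    discounted by (1 + rate) ** -t. Returns (total cost, first expansion
    size). Illustrative stand-in for a stochastic model's deterministic
    equivalent."""
    T = len(demands)
    best = [float('inf')] * (T + 1)   # best[j]: min cost covering periods 0..j-1
    start = [0] * (T + 1)             # start[j]: start of last covering segment
    best[0] = 0.0
    for t in range(T):
        disc = (1.0 + rate) ** (-t)
        for j in range(t + 1, T + 1):
            size = sum(demands[t:j])
            c = best[t] + disc * (setup + unit * size)
            if c < best[j]:
                best[j] = c
                start[j] = t
    # Trace back to the segment that starts at period 0: the first decision.
    j = T
    while start[j] > 0:
        j = start[j]
    return best[T], sum(demands[:j])
```

With a setup cost this large relative to unit cost, a single up-front expansion dominates repeated small ones, which is the economies-of-scale effect driving these models.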
APA, Harvard, Vancouver, ISO, and other styles
31

Su, Wei, and 蘇薇. "Partner selection and production-distribution planning for the design of optimal supply chain networks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B41757853.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

蘇美子 and Mee-chi Meko So. "An operations research model and algorithm for a production planning application." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31226681.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Sinangil, Mehmet Selcuk. "Modeling and control on an industrial polymerization process." Thesis, Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/10150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Aboelfotoh, Aaya H. F. "Optimizing the Multi-Objective Order Batching Problem for Warehouses with Cluster Picking." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1564663802880513.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Fowler, Christopher William. "Heuristic performance for the uncapacitated facility location problem with uncertain data." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/30760.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Bhuiyan, Farina. "Dynamic models of concurrent engineering processes and performance." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=38153.

Full text
Abstract:
Mathematical and stochastic computer models were built to simulate concurrent engineering (CE) processes in order to study how different process mechanisms contribute to new product development (NPD) performance. Micro-models of various phenomena which occur in concurrent engineering processes, such as functional participation, overlapping, decision-making, rework, and learning, were included, and their effects on the overall NPD process were related to process span time and effort. The study focused on determining under what conditions CE processes are more favorable than sequential processes, in terms of expected payoff, span time, and effort as functions of functional participation and overlapping, and on the corresponding trade-offs between more upfront effort and span time reduction.
APA, Harvard, Vancouver, ISO, and other styles
37

Fischer, Manfred M. "Computational Neural Networks: An attractive class of mathematical models for transportation research." WU Vienna University of Economics and Business, 1997. http://epub.wu.ac.at/4158/1/WSG_DP_5797.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

McNelis, Robert J. "The measurement and empirical evaluation of quality and productivity for manufacturing processes." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-06102009-063228/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Charnsirisakskul, Kasarin. "Demand fulfillment flexibility in capacitated production planning." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/25667.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Nori, Vijay S. "Algorithms for dynamic and stochastic logistics problems." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/24513.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Johnston, Susan Joy. "The development of an operational management procedure for the South African west coast rock lobster fishery." Doctoral thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/22567.

Full text
Abstract:
This thesis considers the development of an operational management procedure (OMP) to provide scientific recommendations for the commercial TAC (total allowable catch) for the South African west coast rock lobster (Jasus lalandii) fishery. This fishery has been under considerable stress in recent years as a result of overfishing and low somatic growth rates. Present catch levels, less than 2000 MT, are substantially smaller than levels recorded in the past. The present biomass (above 75mm carapace length) is estimated to be only six percent of the pristine level. At the start of this research, no long-term management strategy for the resource existed. Neither was there any robust, tested, scientific method available for setting the annual TAC for the fishery, which resulted in a time-consuming and unsatisfactory scientific debate each year in developing a series of ad hoc TAC recommendations. The work presented in this thesis is thus aimed at answering two important questions: i) Can an adequate mathematical model be developed as a basis to simulate the resource and its associated fishery? ii) Can a self-correcting, robust OMP be developed for the resource? The first phase of this thesis is the development of a size-structured population model of the resource and the associated fishery. A size-structured model is necessary as lobsters are difficult to age and hence most of the data collected are on a size basis. Furthermore, important management issues, such as the legal minimum size, which has changed over time, require a model able to take size structure into account. This model is fitted to a wide range of data from the fishery, including CPUE (catch-per-unit-effort) and catch-at-size information, by maximising a likelihood function. The model is shown to fit reasonably well to all data, and to provide biologically plausible estimates for its six estimable parameters.
APA, Harvard, Vancouver, ISO, and other styles
42

Korobeinikov, Andrei. "Stability and bifurcation of deterministic infectious disease models." Thesis, University of Auckland, 2001. http://wwwlib.umi.com/dissertations/fullcit/3015611.

Full text
Abstract:
Autonomous deterministic epidemiological models are known to be asymptotically stable. The asymptotic stability of these models contradicts observations. In this thesis we consider some factors which have been suggested as able to destabilise the system. We consider discrete-time and continuous-time autonomous epidemiological models. We try to keep our models as simple as possible and investigate the impact of different factors on the system behaviour. Global methods of dynamical systems theory, especially the theory of bifurcations and the direct Lyapunov method, are the main tools of our analysis. Lyapunov functions for a range of classical epidemiological models are introduced. The direct Lyapunov method allows us to establish their boundedness and asymptotic stability. It also helps investigate the impact of such factors as susceptibles' mortality, horizontal and vertical transmission and immunity failure on the global behaviour of the system. The Lyapunov functions appear to be useful for more complicated epidemiological models as well. The impact of mass vaccination on the system is also considered. The discrete-time model introduced here enables us to solve a practical problem: estimating the rate of immunity failure for pertussis in New Zealand. It has been suggested by a number of authors that a non-linear dependence of disease transmission on the numbers of infectives and susceptibles can reverse the stability of the system. However, it is shown in this thesis that under biologically plausible constraints the non-linear transmission is unable to destabilise the system. The main constraint is the condition that disease transmission must be a concave function with respect to the number of infectives. This result is valid for both the discrete-time and the continuous-time models. We also consider the impact of mortality associated with a disease. This factor has never before been considered systematically. 
We indicate mechanisms through which the disease-induced mortality can affect the system and show that the disease-induced mortality is a destabilising factor and is able to reverse the system stability. However the critical level of mortality which is necessary to reverse the system stability exceeds the mortality expectation for the majority of human infections. Nevertheless the disease-induced mortality is an important factor for understanding animal diseases. It appears that in the case of autonomous systems there is no single factor able to cause the recurrent outbreaks of epidemics of such magnitudes as have been observed. It is most likely that in reality they are caused by a combination of factors.
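The baseline stability result is easy to reproduce numerically. The sketch below integrates the classical endemic SIR model with vital dynamics (a standard textbook system, not the exact models of the thesis; parameter values are illustrative) and checks that the trajectory settles at the endemic equilibrium s* = 1/R0, i* = μ(R0 − 1)/β rather than oscillating persistently, consistent with the asymptotic stability discussed above.

```python
def simulate_sir(beta=0.3, gamma=0.1, mu=0.01, s0=0.99, i0=0.01,
                 dt=0.01, t_end=4000.0):
    """Euler integration of the endemic SIR model with births and deaths:
        ds/dt = mu - beta*s*i - mu*s
        di/dt = beta*s*i - (gamma + mu)*i
    Returns the final (s, i). Parameter values are illustrative only."""
    s, i = s0, i0
    for _ in range(int(t_end / dt)):
        ds = mu - beta * s * i - mu * s
        di = beta * s * i - (gamma + mu) * i
        s += dt * ds
        i += dt * di
    return s, i

# Endemic equilibrium predicted by the model (exists when R0 > 1).
R0 = 0.3 / (0.1 + 0.01)            # basic reproduction number, about 2.73
s_star = 1.0 / R0                  # about 0.367
i_star = 0.01 * (R0 - 1.0) / 0.3   # about 0.058
```

Plotting i(t) for this run shows damped oscillations spiralling into (s*, i*): exactly the behaviour whose mismatch with observed recurrent epidemics motivates the thesis.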
Subscription resource available via Digital Dissertations
APA, Harvard, Vancouver, ISO, and other styles
43

梁耀祥 and Yiu-cheung Leung. "A reconfigurable neural network for industrial sensory systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31224751.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kilincli, Taskiran Gamze. "Mathematical Models and Solution Approach for Staff Scheduling with Cross-Training at CallCenters." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1441028781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Xu, Suxiu, and 徐素秀. "Truthful, efficient auctions for transportation procurement." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206443.

Full text
Abstract:
The transportation procurement problem (TPP) is the problem of setting transportation service prices, delivery timing and quantity, and controlling costs and capacity to reduce empty movements and improve market efficiency. The purchase of transportation service is traditionally achieved using a request for proposal and long-term contracts. However, as business relationships become ever more flexible and dynamic, there has been an increasing need to hedge the risks of traditional transportation procurement, such as the entrance of new carriers and sudden drops in fuel price. This thesis proposes a holistic auction-based solution for the TPP. Four typical scenarios are investigated. The first scenario incorporates bilateral bidding into auction mechanism design for the multi-unit TPP. This scenario considers one-sided Vickrey-Clarke-Groves (O-VCG) combinatorial auctions for a complex transportation marketplace with multiple lanes. This scenario then designs three alternative multi-unit trade reduction (MTR) mechanisms for the bilateral exchange transportation marketplace where all the lanes are partitioned into distinct markets. The proposed mechanisms ensure incentive compatibility, individual rationality, budget balance and asymptotic efficiency. The second scenario presents a double auction model for the TPP in a dynamic single-lane transportation environment. This scenario first addresses the TPP in a transportation spot market with stochastic but balanced or "symmetric" demand and supply. A periodic sealed double auction (PSDA) is proposed. This scenario then devises a modified PSDA (M-PSDA) to address the TPP with "asymmetric" demand and supply. The auctioneer is likely to gain higher profits from setting a relatively short auction length. However, it is optimal to run the auction (either PSDA or M-PSDA) with a relatively large auction length when maximizing either the social welfare or the utility of shippers and carriers (agents). 
When the degree of supply-demand imbalance is low, the auctioneer's myopic optimal expected profit under supply-demand imbalance is larger than that under symmetric demand and supply. The third scenario presents an auction-based model for the TPP in make-to-order systems. The optimality of a dynamic base-stock-type policy (an S(x)-like policy) is established. The optimal allocation can be achieved by running an O-VCG auction or a first-price auction with closed-form reserve prices. With mild technical modifications, the results derived in the infinite horizon case can all be extended to the finite horizon case. The fourth scenario proposes allocatively efficient auction mechanisms for the distributed transportation procurement problem (DTPP), which is generally the problem of matching demands and supplies over a transportation network. This scenario constructs an O-VCG combinatorial auction for the DTPP where carriers are allowed to bid on bundles of lanes. To simplify the execution of the auction, this scenario next proposes a primal-dual Vickrey (PDV) auction based on insights from Ausubel auctions and the primal-dual algorithm. The PDV auction realizes VCG payments and truthful bidding under the condition of seller-submodularity, which implies that the effect of each individual carrier decreases as the coalition grows.
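A single clearing round of a sealed-bid double auction is simple to state. The sketch below implements the textbook k-double-auction rule for one lane, matching shipper bids against carrier asks at a single price; it is a generic illustration of the sealed-double-auction idea, not the exact PSDA/M-PSDA mechanisms of the thesis.

```python
def clear_double_auction(bids, asks):
    """One clearing round of a sealed-bid k-double auction.

    Sort shipper bids (willingness to pay) descending and carrier asks
    ascending, trade the largest k with bids[k-1] >= asks[k-1], and price
    at the midpoint of the marginal matched pair. Returns (number of
    trades, clearing price), with price None when no trade clears."""
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1
    if k == 0:
        return 0, None
    price = (bids[k - 1] + asks[k - 1]) / 2.0
    return k, price

# Three shippers and three carriers on one lane: two trades clear at 7.5.
trades, price = clear_double_auction(bids=[10, 8, 6], asks=[5, 7, 9])
```

Because the price lies between the marginal bid and ask, every matched shipper pays no more than their bid and every matched carrier receives no less than their ask, so the rule is individually rational; running such rounds periodically is the "periodic sealed" structure the abstract refers to.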
published_or_final_version
Industrial and Manufacturing Systems Engineering
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
46

Palmer, Kurt D. "Data collection plans and meta models for chemical process flowsheet simulators." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/24511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Powell, Megan Olivia. "Mathematical Models of the Activated Immune System During HIV Infection." University of Toledo / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1301415627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Rodriguez, Javier A. "Capacity expansion and capital investment decisions using the Economic Investment Time Model : a case oriented approach /." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-07292009-090518/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lutambi, Angelina Mageni. "Basic properties of models for the spread of HIV/AIDS." Thesis, Stellenbosch : Stellenbosch University, 2007. http://hdl.handle.net/10019.1/19641.

Full text
Abstract:
Thesis (MSc)--University of Stellenbosch, 2007.
ENGLISH ABSTRACT: While research and population surveys on HIV/AIDS are well established in developed countries, information on HIV/AIDS in Sub-Saharan Africa remains scarce, so the region depends on results obtained from models. Given this dependence, it is important to understand the strengths and limitations of these models very well. In this study, a simple mathematical model is formulated and then extended to incorporate various features such as stages of HIV development, time delay in the occurrence of AIDS deaths, and risk groups. The analysis is neither purely mathematical nor focused on data; rather, it is exploratory, using both mathematical methods and numerical simulations. It was found that the presence of stages leads to higher prevalence levels in the short term, implying that the primary stage is the driver of the disease. Furthermore, time delay changed the mortality curves considerably but had little effect on the proportion of infectives. It was also shown that the characteristic behaviour of epidemic curves over time, namely an initial increase, then a peak, and then a decrease, is possible for HIV only if low-risk groups are present. It is concluded that reasonable, good-quality predictions from mathematical models require the inclusion of stages, risk groups, time delay, and other related properties with reasonable parameter values.
AFRIKAANSE OPSOMMING (translated from Afrikaans): While research and population surveys on HIV/AIDS are well established in developed countries, only limited information on HIV/AIDS is available in Sub-Saharan Africa. Models must therefore be used. For this reason it is essential to understand the possibilities and limitations of models well. In this work a simple model is presented and then extended by including aspects such as stages of HIV development, time delay in AIDS deaths, and risk groups in populations. The analysis emphasises neither the mathematical forms nor the data; it is rather an exploratory study in which both mathematical methods and numerical simulations are applied. It was found that the inclusion of stages leads to higher prevalence levels in the short term. The conclusion is that the primary stage drives the disease. It was further found that including a time delay strongly influences the curve of deaths, but has little influence on the proportion of infected persons. It is shown that the characteristic behaviour of most epidemics, namely an initial rise, a peak, and then a decline, occurs for AIDS only if the population contains low-risk groups. The overall conclusion is that good predictions with meaningful parameters, based on mathematical models, require the inclusion of stages, risk groups, and delays.
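The staged-infection effect this abstract reports, a highly infectious primary stage driving short-term prevalence, can be sketched with a toy two-stage compartmental model. This is an illustrative sketch with made-up parameter values and a hypothetical function name, not the thesis's actual equations:

```python
def staged_hiv(days=3650, dt=1.0, beta1=0.5, beta2=0.05,
               progress=1 / 90, to_aids=1 / 3000, s0=0.999):
    """Euler integration of a toy two-stage HIV model.

    S  = susceptible fraction,
    I1 = primary stage (highly infectious),
    I2 = chronic stage (less infectious, slowly progressing to AIDS).
    All parameter values are illustrative, not fitted to data.
    """
    S, I1, I2 = s0, 1.0 - s0, 0.0
    for _ in range(int(days / dt)):
        force = beta1 * I1 + beta2 * I2            # force of infection
        new_inf = force * S
        S += dt * (-new_inf)
        I1 += dt * (new_inf - progress * I1)       # primary -> chronic
        I2 += dt * (progress * I1 - to_aids * I2)  # chronic -> AIDS (removed)
    return S, I1, I2

S, I1, I2 = staged_hiv()  # compartment fractions after ten simulated years
```

Raising beta1 relative to beta2 concentrates transmission in the primary stage, reproducing the abstract's observation that adding stages raises short-term prevalence.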
APA, Harvard, Vancouver, ISO, and other styles
50

Babayigit, Cihan. "Genetic Algorithms and Mathematical Models in Manpower Allocation and Cell Loading Problem." Ohio University / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1079298235.

Full text
APA, Harvard, Vancouver, ISO, and other styles
