
Dissertations / Theses on the topic 'Sizing'



Consult the top 50 dissertations / theses for your research on the topic 'Sizing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Taymur, Eyup. "Photovoltaics Systems Sizing." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1259684298.

2

Mattsson, Rosa. "AKD sizing - dispersion colloidal stability, spreading and sizing with pre-flocculated dispersions." Licentiate thesis, Luleå tekniska universitet, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-17236.

Abstract:
Alkylketene dimer (AKD) dispersions are used in the paper industry to obtain paper grades with a hydrophobic character, i.e. resistance to sorption of water. AKD is used in neutral to alkaline internal sizing, at pH in the interval 6-9. AKD dispersions consist of wax particles emulsified in water. The particles are electrostatically, and in most cases also sterically, stabilized by cationic polyelectrolytes. The particle diameter is usually in the range 0.5-2 µm. The idea of having cationic particles is that they should deposit onto the negatively charged fiber surfaces (heterocoagulation). Increased usage of recycled pulp and closure of the wet-end circuits of the paper machine lead to increasing levels of dissolved and colloidal substances. These substances could be detrimental by affecting the AKD deposition onto the fiber material and the colloidal stability, i.e. by inducing flocculation of sizing particles; flocculation could lead to an inhomogeneous distribution of the sizing agent. The aims addressed in this thesis were the following: to investigate whether high levels of shear, electrolyte concentration, anionic polymers and surfactants could affect the colloidal stability and induce flocculation of differently designed AKD dispersions; to study the migration of AKD and the influence of extractives on the migration; and to study the sizing efficiency and retention using flocculated dispersions. The colloidal stability of the dispersions was tested under shear by measuring the particle size distribution and the microelectrophoretic mobility. The migration of AKD through paper sheets of defined thickness, and through paper sheets impregnated with extractives, was analyzed with contact angle measurements on stored samples. The sizing efficiency of flocculated AKD was determined by measuring 60-second Cobb values and contact angles on laboratory sheets. The retention of AKD was determined by quantitative analysis of the AKD content in the laboratory sheets using alkaline hydrolysis followed by solvent extraction and quantification with capillary gas chromatography. It was shown that anionic polymers and surfactants could seriously affect the colloidal stability of the dispersions, while shear and electrolyte concentration had a small influence. Dispersions with a high surface charge flocculate more easily than dispersions with a low surface charge in process water from the wet end of a linerboard paper machine. The flocculation rate is fast and occurs on the same time scale as deposition onto fines and fibers. The AKD agglomerates could be up to 30 µm in diameter even at high levels of shear. The AKD diffusion coefficient in paper sheets was estimated to be 10⁻¹¹ m²/s at a temperature above the melting point of the wax. The results indicate that extractives do not interfere with the AKD migration; a reasonable explanation is that migration occurs via the vapor phase. It was also shown that sizing with flocculated AKD improved the sizing efficiency significantly due to increased retention. This clearly indicates that migration is efficient and that agglomerates can be used for internal sizing, provided that the flocculation process is controlled. The conventional picture that sizing agents must be very evenly distributed to achieve efficient sizing can, in the light of these results, be questioned. The process of sizing with pre-flocculated AKD dispersions is the subject of a patent application.
Although this thesis focuses on AKD dispersions, the knowledge obtained should be transferable to other sizing systems: the emulsification process for rosin and ASA is based on the same principles as for AKD.
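As a rough illustration of what the reported diffusion coefficient implies for migration times, a back-of-the-envelope sketch (the sheet thickness is an assumed, illustrative value, not taken from the thesis):

```python
# Back-of-the-envelope migration time from the reported diffusion
# coefficient. The sheet thickness is an assumed, illustrative value.
D = 1e-11        # m^2/s, AKD diffusion coefficient above the wax melting point
L = 100e-6       # m, assumed sheet thickness (hypothetical)
t = L**2 / D     # characteristic diffusion time, t ~ L^2 / D
print(f"t = {t:.0f} s (about {t / 60:.0f} min)")   # -> t = 1000 s (about 17 min)
```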
3

Boyajian, Tabetha Suzanne. "Sizing Up the Stars." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/phy_astr_diss/34.

Abstract:
For the main part of this dissertation, I have executed a survey of nearby, main sequence A-, F-, and G-type stars with the CHARA Array, successfully measuring the angular diameters of forty-four stars to better than 4% accuracy. The results of these observations also yield empirical determinations of stellar linear radii and effective temperatures for the stars observed. In addition, these CHARA-determined temperatures, radii, and luminosities are fit to Yonsei-Yale isochrones to constrain the masses and ages of the stars. These quantities are compared to the results found in Allende Prieto & Lambert (1999), Holmberg et al. (2007), and Takeda (2007), who indirectly determine these same properties by fitting models to observed photometry. I find that in most cases the models underestimate the radius of the star by ~12%, while in turn they overestimate the effective temperature by ~1.5-4%, when compared to my directly measured values, with no apparent correlation with the star's metallicity or color index. The overestimated temperatures and underestimated radii in these works appear to cause an additional offset in the star's surface gravity measurements, which consequently yield higher masses and younger ages, in particular for stars with masses greater than ~1.3 M_sol. The quantities I measure are also compared to direct measurements from a large sample of eclipsing binary stars in Andersen (1991), and excellent agreement is seen between the two data sets. Finally, a multi-parameter solution is found to fit color-temperature-metallicity values of the stars in this sample, providing a new calibration of the effective temperature scale for these types of stars. Published work in the fields of stellar interferometry and optical spectroscopy of early-type stars is presented in Appendices D and E, respectively.
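The step from a measured limb-darkened angular diameter to a linear radius is simple geometry; a sketch with illustrative numbers (not values from the survey):

```python
MAS_TO_RAD = 4.8481368e-9   # radians per milliarcsecond
PC_TO_M = 3.0857e16         # metres per parsec
R_SUN = 6.957e8             # solar radius in metres

def linear_radius(theta_ld_mas, dist_pc):
    """Linear stellar radius (in solar radii) from a limb-darkened
    angular diameter (mas) and a distance (pc): R = (theta/2) * d."""
    theta = theta_ld_mas * MAS_TO_RAD
    return 0.5 * theta * dist_pc * PC_TO_M / R_SUN

# Illustrative values (hypothetical star): 1.0 mas at 10 pc.
print(round(linear_radius(1.0, 10.0), 3))   # -> ~1.075 solar radii
```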
4

Chen, Rui. "Novel particle sizing techniques." Thesis, University of Nottingham, 2013. http://eprints.nottingham.ac.uk/13492/.

Abstract:
Two novel approaches to particle size measurement are investigated; these are designated the Particle Movement Displacement Distribution (PMDD) method and the Separated Multiple Image Technique (SMIT). An advantage of these methods over the established particle sizing methods of Static Light Scattering (SLS) and Dynamic Light Scattering (DLS) is that PMDD and SMIT do not suffer from the intensity weighting problem that affects SLS and DLS. The performance of the PMDD method is examined through computer simulations and through analysis of pre-existing experimental data. The SMIT method is investigated through computer simulations and through the construction and use of an optical system. Both methods were assessed using an 'area error' measure, which estimates the accuracy of a recovered particle size distribution. This measure varies between 0 and 2, with 0 corresponding to a perfectly recovered distribution. Typically a good inversion of DLS data can achieve an area-error value of 0.32 to 0.34, and this figure (along with the recovered mean particle size and standard deviation of the distribution) was used to judge the success of the methods quantitatively. The PMDD method measures the centre of individual particles in each image. A vector histogram is formed from the connections between the centres in the first image and the centres in the next image; this histogram contains information about the particle size distribution, and a maximum likelihood inversion procedure is used to recover a particle size distribution from it. The SMIT method is similar to SLS, but it combines an angular-dependent intensity method with individual visualisation to recover individual particle sizes without an intensity weighting. A multi-aperture mask and wedge prisms are used to capture particle images formed from light scattered into a number of selected directions; a look-up table is then used to recover the individual particle sizes, which are formed into a histogram. For the PMDD method, computer simulations established the optimum values of parameters such as the time interval between frames, the exposure time and the particle concentration, and investigated the effects of different noise sources. For mono-modal size distributions, the PMDD method was shown through computer simulation to be capable of recovering a particle size distribution with an area error of around 0.27, which compares well with the typical DLS result. PMDD results were also recovered from mono-modal experimental data, with mean particle sizes close to the manufacturer's quoted mean size. However, recovery of bi-modal distributions was less successful: the recovered distributions generally had only a single peak, which, of course, gives a very poor area-error figure. This compares poorly with the particle tracking method Nanoparticle Tracking Analysis, which is able to recover bi-modal distributions. For this reason further research concentrated on an image intensity method (SMIT). For the SMIT method, computer simulations established the optimum values of parameters such as the particle concentration, and investigated the effects of different noise sources and of aberrations in the optical system.
The SMIT method was shown through computer simulation to be capable of recovering particle size distributions for both mono-modal and bi-modal distributions. The area-error values obtained were in the range 0.24 to 0.45, and most of the results are good compared to the DLS value. The major problem with the SMIT method was found to be the presence of a small number of recovered particle radii much larger (or smaller) than the true sizes. These errors were attributed to ambiguities in the look-up table used to convert the relative intensity values into particle sizes. Two potential methods to reduce the influence of these ambiguities were investigated: first, combining Brownian motion data from tracking individual particles over a few frames; and second, combining an estimate of the total scattered intensity from a particle with the normal SMIT data to constrain the look-up procedure. In computer simulations both approaches gave some improvement, but the total scattered intensity method gave the better results, improving the area error from 0.37 for SMIT alone to 0.25 for SMIT combined with this extra information. Based on the success of these simulation results, an experimental SMIT system was constructed and tested. It was found necessary first to calibrate the optical system, to account for the different optical transmission coefficients of the different prisms/optical paths. But using a particle sample of known size for calibration, other particle sizes were successfully recovered from the experimental data using the original SMIT data processing; the majority of the recovered particle radii were close to the manufacturer's quoted mean radius. Attempts to implement the total intensity approach to enhance SMIT were not successful, owing to the difficulty of measuring the required small displacements in particle positions with sufficient accuracy. A possible alternative design to overcome this problem is suggested in the future work section 7.2.
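Displacement-based methods such as PMDD ultimately rest on the Stokes-Einstein relation between Brownian displacement and particle size. A minimal sketch of that relation (not the thesis's maximum-likelihood inversion; all numerical values are illustrative assumptions):

```python
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant

def radius_from_msd(msd_2d, dt, temp=293.15, eta=1.0e-3):
    """Stokes-Einstein estimate of particle radius from the 2-D
    mean-squared displacement between frames: MSD = 4*D*dt and
    D = k_B*T / (6*pi*eta*r). Water viscosity is assumed."""
    D = msd_2d / (4.0 * dt)                        # diffusion coefficient
    return k_B * temp / (6.0 * math.pi * eta * D)

# Illustrative numbers (not from the thesis): 0.1 s between frames,
# MSD of 1.7e-12 m^2.
print(f"{radius_from_msd(1.7e-12, 0.1):.2e}")      # -> ~5.1e-08 m, i.e. ~50 nm radius
```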
5

Larsson, Carolina. "Sizing of different pulp qualities." Thesis, Karlstad University, Faculty of Technology and Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-1591.

Abstract:

Edge wicking, i.e. in-plane wetting, is an important parameter for liquid packaging board products. Before the packages are filled at the dairy, the web is sterilised with hydrogen peroxide (H2O2). A tube is shaped and cut into packages, which means that new surfaces are created. If a short stop occurs in the sterilising bath, the raw edges exposed to H2O2 could absorb the liquid, and when the package is later filled this absorption could lead to tube burst. For the end products it is also important to prevent liquid from penetrating the raw edges of the board, in order to maintain the functionality and fresh look of the package.

To control edge wicking different kinds of internal hydrophobic sizing agents are used. Alkyl ketene dimer (AKD) is the most common size in neutral sizing and rosin in conjunction with alum is the traditional acidic sizing agent. AKD and rosin size are often used in combination (dual sizing) to make the board hydrophobic.

The objective of this work was to examine the effects of the sizing agents on different pulps in different dosages. The pulps used in the sheet preparation were unbleached kraft pulp as well as bleached and unbleached CTMP refined to different levels. Apart from varying the size dosages, the wet sheets were also pressed at different levels to vary the density. The major part of the study included dual sizing but a minor part also concerned mono sized sheets.

The parameters investigated on the laboratory sheets were STFI-Density, hydrogen peroxide edge wicking, pressurised edge wicking, lactic acid edge wicking (LA) and AKD and rosin size content.

In hydrogen peroxide edge wicking, the amount of size retained was the main source of improved wick index; the retention was therefore of great significance for the results. The dual sized sheets gave the least edge wicking, due to the synergy effect between AKD and alum rosin size. The kraft pulps gave sheets with satisfactory edge wicking even at the lowest size dosage (1 kg/t), while CTMP required a dosage of 4 kg/t to reach the same wick index.

To achieve acceptable pressurised edge wicking values, the most important parameter was found to be the density of the sheets. The kraft pulp sheets gave the lowest edge penetration, while the high freeness CTMP sheets did not give satisfactory penetration resistance even at the highest size dosage.

The edge wicking of lactic acid could be prevented by sizing with AKD only. Mono sized sheets containing only rosin size did not prevent large in-plane wetting of the board structure. Hence, dual sizing is not necessary to improve the edge wicking of lactic acid. Density was found not to be an important parameter in lactic acid edge wicking; all the pulps showed equal edge wicking regardless of density. A retention above 1 kg/t AKD in the sheets was enough to prevent penetration, and retention above 1.5 kg/t AKD did not improve the edge wicking any further.

No significant difference in sizeability between unbleached and bleached CTMP was observed.

6

Speirs, Andrew T. (Andrew Todd) 1975. "Right sizing for government review." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50616.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Ocean Engineering, 1998.
Includes bibliographical references (leaf 91).
The U.S. Navy has changed many of its acquisition practices. One of these changes is from performing the early design work primarily in-house, to contracting private shipyards to do the design. This change has shifted the government's role in design to a predominantly review function. Therefore, the government needs to decide what level of review should be performed, and how much this will cost in the future. This research examines the Strategic Sealift acquisition program, which was one of the first programs that employed this new acquisition strategy. The costs of the design stages are identified, and the level of review is described. This research can be compared to later programs, and possibly used as a benchmark for future programs.
7

Gavgani, Yashar Ganjali. "Buffer sizing in internet routers." 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

8

Casagrande, Anthony Joseph. "Robust, Low Power, Discrete Gate Sizing." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5656.

Abstract:
Ultra-deep submicron circuits require accurate modeling of gate delay in order to meet aggressive timing constraints. With the lack of statistical data, variability due to the mechanical manufacturing process and its chemical properties poses a challenging problem. Discrete gate sizing requires (i) accurate models that take into account random parametric variation and (ii) a fair allocation of resources to optimize the solution. The proposed GTFUZZ gate sizing algorithm handles both tasks. Gate sizing is modeled as a resource allocation problem using fuzzy game theory. Delay is modeled as a constraint and power is optimized in this algorithm. In GTFUZZ, delay is modeled as a fuzzy goal with fuzzy parameters to capture the imprecision of gate delay early in the design phase, when extensive empirical data is absent. Dynamic power is modeled as a fuzzy goal without varying coefficients. The fuzzy goals provide a flexible platform for multimetric optimization. The robust GTFUZZ algorithm is compared against fuzzy linear programming (FLP) and deterministic worst-case FLP (DWCFLP) algorithms. The benchmark circuits are first synthesized, placed, routed, and optimized for performance using the Synopsys University 32/28nm standard cell library and technology files. Operating at the optimized clock frequency, results show an average power reduction of about 20% versus DWCFLP and 9% against variation-aware gate sizing with FLP. Timing and timing yield are verified by both Synopsys PrimeTime and Monte Carlo simulations of the critical paths using HSPICE.
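To give a flavor of how a fuzzy goal works in sizing, a minimal sketch using linear membership functions and max-min aggregation; this is the generic fuzzy-optimization device, not the GTFUZZ game-theoretic formulation itself, and all numbers are illustrative assumptions:

```python
def mu(value, best, worst):
    """Linear fuzzy membership: 1 at/below 'best', 0 at/above 'worst'.
    A standard device in fuzzy multimetric optimization; GTFUZZ's
    game-theoretic formulation is more elaborate."""
    if value <= best:
        return 1.0
    if value >= worst:
        return 0.0
    return (worst - value) / (worst - best)

# Max-min aggregation over candidate discrete sizes for one gate.
# Delay/power numbers are purely illustrative.
candidates = {1: (95.0, 1.0), 2: (80.0, 1.6), 4: (70.0, 2.9)}  # size: (delay ps, power mW)
best_size = max(candidates,
                key=lambda s: min(mu(candidates[s][0], 60.0, 100.0),   # delay goal
                                  mu(candidates[s][1], 0.8, 3.0)))     # power goal
print(best_size)   # -> 2, the size with the best worst-case goal satisfaction
```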
9

Hovland, Douglas Lyle. "Particle sizing in solid rocket motors." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/26153.

Abstract:
Particle size distribution measurements were made with a Malvern 2600c forward laser light diffraction system across the exhaust nozzle entrance and exhaust plume of a small two-dimensional rocket motor. The solid propellants tested were GAP propellants containing 2.0% and 4.69% aluminum. Surface agglomeration of the aluminum, indicated by the in-motor results, was found to decrease as the motor chamber pressures were increased. At low pressures, increasing the aluminum loading with fixed total solids decreased the mean particle size at the nozzle entrance. Exhaust plume particle size was practically independent of nozzle inlet particle diameters, supporting the critical Weber number particle breakup theory. Initial validation of the Malvern 2600c measurements was accomplished by favorable comparison to exhaust plume particle distribution results obtained using a particle collection probe.
Keywords: Particle sizing; Solid propellant rocket engines; Light scattering.
10

Marshall, I. M. "Aerodynamic sizing of non-spherical particles." Thesis, Loughborough University, 1996. https://dspace.lboro.ac.uk/2134/14469.

Abstract:
The hypothesis of this thesis is that the time-of-flight method of determining an estimate of the aerodynamic diameter of aerosol particles is fundamentally flawed when applied to non-spherical and/or non-unit density particles. Such a particle-sizing system, the TSI Aerodynamic Particle Sizer, is challenged with solid, non-spherical particles of known aerodynamic diameter to assess the influence of particle shape on instrument response. The aerodynamic diameter of the non-spherical particles is also determined under gravitational settling. Deposits that had been size-separated are resuspended for aerodynamic sizing by the APS. The experimental study is supplemented by a theoretical investigation of the relative effects of particle density and shape on APS-measured diameters. This is achieved through the development of a computational routine to calculate the trajectories of particles of various densities and shapes through the APS nozzle and sensing zone. The results of these calculations are compared with the experimentally-measured APS performance. The consequences for the traceability and accuracy of data measured using this technique are assessed in the light of the outcome of both aspects of the study.
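The quantity at stake, the aerodynamic diameter, relates the volume-equivalent diameter to particle density and shape through the dynamic shape factor. A sketch of the standard Stokes-regime relation, with illustrative values not taken from the thesis:

```python
import math

def aerodynamic_diameter(d_v, rho_p, chi=1.0, rho_0=1000.0):
    """Stokes-regime aerodynamic diameter from the volume-equivalent
    diameter d_v (m), particle density rho_p (kg/m^3) and dynamic shape
    factor chi: d_a = d_v * sqrt(rho_p / (chi * rho_0)), with rho_0 the
    unit density. Illustrates why a time-of-flight instrument mis-sizes
    non-spherical or non-unit-density particles if chi and rho_p are
    ignored."""
    return d_v * math.sqrt(rho_p / (chi * rho_0))

# Illustrative example: a 5 um quartz particle (rho ~ 2650 kg/m^3,
# literature shape factor chi ~ 1.36).
print(f"{aerodynamic_diameter(5e-6, 2650.0, chi=1.36):.2e}")   # -> ~7.0e-06 m
```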
11

Zabihinejad, Amin. "Optimal sizing of modular multilevel converters." Doctoral thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/27918.

Abstract:
In recent decades, power electronics has penetrated high-power applications in many areas of the electrical industry. Beyond the emergence of high-power, medium-voltage semiconductor switch technologies, these applications also required advances in static converter topologies: the main challenges were to reach voltage levels compatible with the power range of the applications, to increase the apparent switching frequency at the output so as to widen the control bandwidth, to reduce the size of the filtering elements, and to limit the current harmonics injected into the supply network. Modular multilevel converter (MMC) topologies grew out of this research problem: by associating switching cells, they reach voltage levels exploitable at high power with existing switch technologies and limit the switching frequencies and switching losses of the elementary switches while controlling the total harmonic distortion (THD). The modularity, redundancy, degrees of freedom and functionality of MMCs also increase their fault tolerance. They have now penetrated a wide range of applications, such as high-voltage DC transmission (HVDC), renewable energy systems, high-power variable-speed drives, rail and marine traction, and specific applications with very demanding dynamic performance such as the electromagnet power supplies of particle accelerators. MMC topologies are composed of elementary switching cells using electronic switches such as the standard Integrated Gate-Commutated Thyristor (IGCT) or the latest generations of IGBTs. MMC converters have been the subject of extensive research and development work on topologies, modeling, calculation of steady-state and transient operation, loss calculation, the harmonic content of the electrical quantities, and control and regulation systems. On the other hand, the sizing methodology of these structures is rarely addressed in the published work. Like most static converter topologies, MMC converters are composed not only of switches but also of energy-storage passive components, both dielectric (capacitors) and magnetic (inductors, couplers), that are essential to the conversion of the input and output electrical quantities. These components have a strong influence on the size, volume and efficiency of the converter, and its optimal sizing often results from a compromise between the size of the passive components, the frequency, and the power switchable by the elementary switches. The research presented in this thesis concerns the development of an optimal and comprehensive design methodology for MMCs that integrates active and passive components, respects the constraints of the application specifications, and maximizes certain performance objectives. This methodology is used to analyze various trade-offs between the overall efficiency of the converter and its mass, or even its volume. These various scenarios can also be translated into cost if the user has the prices of the available components. Various competing solutions, using a specific number of cells adapted to switches with different characteristics in terms of voltage rating, current rating and associated losses, can thus be compared on the basis of identical input-output specifications. The methodology is applied to the sizing of an MMC converter used as the active front end (AFE) of a high-power pulsed solenoid power supply. In the first part, a fast, precise and generic method for calculating the steady state of the MMC converter is developed.
Unlike conventional approaches based on average-value modeling, it takes the switching frequency into account. This tool proves very useful in evaluating the harmonic content, which is constrained by the specifications, and it constitutes the heart of the converter design environment. Unlike conventional converters, MMC converters carry circulating currents that make them complex to analyze. The limiting inductors incorporated in the arms of the topology are generally bulky and penalizing in terms of volume and mass; it is common to use coupled inductors to reduce the ripple, the THD and the mass. In the work presented, an equivalent circuit of the coupled inductors accounting for the saturation effect is developed and integrated into the environment. The use of coupled inductors increases the complexity of the analysis, and the precision of their sizing method is critical for the global optimization of the converter. An analytical sizing model of these components has been developed and integrated into the environment, together with a higher-complexity model that uses finite element field computation. The proposed optimal and global design methodology uses a constrained nonlinear optimization procedure that drives the steady-state computation tool, the sizing models of the passive components at several levels of complexity, and other modules that quantify the fault regimes. To compensate for the reduced precision of the analytical models, a hybrid optimization approach is also implemented. In the hybrid optimization loop, the inductor sizing model can be corrected by the higher-complexity model based on field computation, yielding a better compromise between the precision of the optimal solution and the convergence time of the iterative global optimization method.
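For scale, two textbook first-cut MMC sizing relations (standard approximations, not the thesis's global optimization): the submodule count per arm follows from the DC-link voltage, and each arm carries one third of the DC current plus half of the phase AC current. All numbers are illustrative:

```python
import math

def mmc_first_cut(v_dc, v_sm, i_ac_peak, i_dc):
    """Textbook first-cut MMC arm sizing (not the thesis's global
    optimization): submodules per arm from the DC-link voltage, and
    peak arm current from the usual AC/DC current split."""
    n_sm = math.ceil(v_dc / v_sm)                 # submodules needed per arm
    i_arm_peak = i_dc / 3.0 + i_ac_peak / 2.0     # arm carries Idc/3 plus iac/2
    return n_sm, i_arm_peak

# Illustrative medium-voltage example: 10 kV DC link, 1.2 kV submodules.
print(mmc_first_cut(v_dc=10e3, v_sm=1.2e3, i_ac_peak=400.0, i_dc=300.0))
# -> (9, 300.0)
```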
12

Chauveau, Clément. "Sizing of the French secondary sources." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-285575.

Abstract:
The study carried out in this master thesis comes from a larger project of ENTSO-E (the association of European Transmission System Operators, TSOs) focusing on the introduction of a European platform for the exchange of balancing energy from frequency restoration reserves. The frequency restoration reserves are the power reserves that allow a TSO to balance consumption and production; the management of these reserves is therefore strategic for the TSO. RTE (the French TSO) takes part in this European project, which has led RTE to challenge its secondary sources sizing method. The goal of the sizing method is to prescribe a level of reserve for the next day. The objective of the thesis is to evaluate a new secondary sources sizing method. To achieve this objective, two main tasks are identified. The first is the adaptation of a sizing method suggested by ENTSO-E. This new sizing method should be applicable to the French power system and its specificities, which requires some modifications of the originally suggested method. We decline the method into different scenarios with different parameters. The second task focuses on the evaluation of the different scenarios and their comparison with the current sizing method. The evaluation investigates three aspects for each scenario:
• The static aspect, which investigates the statistical characteristics of the scenario's prescription.
• The dynamic aspect, which evaluates the impact of the scenario's prescription on the power grid.
• The economic aspect, which investigates the scenario's cost.
At the end of the study, RTE will have at its disposal a set of scenarios that are applicable to the French system with satisfactory performance, from which to select the best scenario considering its cost/efficiency balance.
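Sizing methods of this kind are typically probabilistic, prescribing enough reserve to cover a target fraction of historical imbalances. A minimal sketch of such a quantile rule (the 99% level and the synthetic data are assumptions, not RTE's or ENTSO-E's actual parameters):

```python
import numpy as np

# Sketch of a probabilistic reserve-sizing rule: prescribe enough FRR
# to cover 99% of observed system imbalances. Data are synthetic.
rng = np.random.default_rng(0)
imbalances_mw = rng.normal(0.0, 300.0, size=8760)    # stand-in for a year of hourly imbalances
up_reserve = np.quantile(imbalances_mw, 0.99)        # positive imbalances need upward reserve
down_reserve = np.quantile(imbalances_mw, 0.01)      # negative imbalances need downward reserve
print(round(up_reserve), round(down_reserve))        # roughly +700 / -700 MW for this series
```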
13

Nwamba, André Chidi. "Automated offspring sizing in evolutionary algorithms." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2009. http://scholarsmine.mst.edu/thesis/pdf/Nwamba_09007dcc8068c83d.pdf.

Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2009.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed August 10, 2009). Includes bibliographical references (p. 49-51).
14

Leung, Janny M. Y., Thomas L. Magnanti, and Rita Vachani. "Facets and Algorithms for Capacitated Lot Sizing." Massachusetts Institute of Technology, Operations Research Center, 1987. http://hdl.handle.net/1721.1/5210.

Abstract:
The dynamic economic lot sizing model, which lies at the core of numerous production planning applications, is one of the most highly studied models in all of operations research. And yet, capacitated multi-item versions of this problem remain computationally elusive. We study the polyhedral structure of an integer programming formulation of a single-item capacitated version of this problem, and use these results to develop solution methods for multi-item applications. In particular, we introduce a set of valid inequalities for the problem and show that they define facets of the underlying integer programming polyhedron. Computational results on several single and multiple product examples show that these inequalities can be used quite effectively to develop an efficient cutting plane/branch and bound procedure. Moreover, our results show that in many instances adding certain of these inequalities a priori to the problem formulation, and avoiding the generation of cutting planes, can be equally effective.
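For reference, a standard statement of the single-item capacitated lot-sizing model whose polyhedron is studied (notation is assumed here, not taken from the paper): x_t is production in period t, y_t the setup indicator, s_t ending inventory (s_0 = 0), d_t demand, C_t capacity, with fixed cost f_t, unit cost c_t and holding cost h_t:

```latex
\begin{align*}
\min\; & \sum_{t=1}^{T} \left( f_t\, y_t + c_t\, x_t + h_t\, s_t \right) \\
\text{s.t.}\; & s_{t-1} + x_t = d_t + s_t, && t = 1,\dots,T,\\
& x_t \le C_t\, y_t, && t = 1,\dots,T,\\
& x_t,\, s_t \ge 0,\quad y_t \in \{0,1\}, && t = 1,\dots,T.
\end{align*}
```

The valid inequalities introduced in the paper strengthen the LP relaxation of a formulation of this kind by cutting off fractional vertices.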
15

Miller, Andrew J. "Polyhedral approaches to capacitated lot-sizing problems." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/24284.

16

Tayali, Nkole Enock. "Particle sizing using laser doppler anemometry techniques." Thesis, Cardiff University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314652.

17

Cheah, Boon Soon. "Multi-item, single facility lot sizing problems." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259521.

18

Hardwick, Andrew John. "The acoustic sizing of bubbles in liquids." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260420.

19

Riley, James B. (James Barret). "Laser diffraction particle sizing : sampling and inversion." Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/58492.

Abstract:
Thesis (Ph. D.)--Joint Program in Oceanographic Engineering (Massachusetts Institute of Technology, Dept. of Ocean Engineering; and the Woods Hole Oceanographic Institution), 1987.
Bibliography: p. 213-228.
20

Filho, Márcio Antonio Ferreira Belo. "Lot sizing with setup carryover and crossover." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-29042016-142624/.

Abstract:
Production planning problems are of paramount importance within supply chain planning, supporting decisions on the transformation of raw materials into finished products. Lot sizing in production planning refers to the tactical/operational decisions on the size and timing of production orders to satisfy demand. The objectives of lot-sizing problems are generally economic, such as saving costs or increasing profits, though other aspects may be taken into account, such as the quality of customer service and the reduction of inventory levels. Lot-sizing problems are very common in production activities, and efficient planning of such activities gives a company a clear advantage over competing organizations. This requires the consideration of realistic features of the industrial environment and product characteristics. For mathematical modelling, such considerations are crucial, though their inclusion results in more complex formulations. Although lot-sizing problems are well known and widely studied, some real-world aspects remain under-researched. This thesis addresses two main characteristics in the lot-sizing context: (a) setup crossover; and (b) perishable products. The former allows the setup state of a production line to be carried over between consecutive periods, even if the line is not yet ready for processing production orders. The latter characteristic considers that some products have a fixed shelf-life and may spoil within the planning horizon, which clearly affects the production planning. Furthermore, two types of perishable products are considered, according to the duration of their lifetime: medium-term and short-term shelf-lives. The latter case is tighter than the former, implying more constrained production plans and even requiring integration with other supply chain processes such as distribution planning. Research on stronger mathematical formulations and solution approaches for lot-sizing problems provides valuable tools for production planners. This thesis focuses on the development of mixed-integer linear programming (MILP) formulations for lot-sizing problems with the aforementioned features. Novel modelling techniques are introduced, such as a disaggregated setup variable and the consideration of lot-sizing instead of batching decisions in the joint production and distribution planning problem. These formulations are subjected to computational experiments in state-of-the-art MILP solvers. However, the inherent complexity of these problems may require problem-driven solution approaches. In this thesis, heuristic, metaheuristic and matheuristic (hybrid exact and heuristic) procedures are proposed. A Lagrangean heuristic addresses the capacitated lot-sizing problem with setup carryover and perishable products. A novel dynamic programming procedure is used to obtain the optimal solution of the uncapacitated single-item lot-sizing problem with setup carryover and a perishable item. A heuristic, a fix-and-optimize procedure and an adaptive large neighbourhood search approach are proposed for the operational integrated production and distribution planning problem. Computational results on sets of instances generated based on the literature show that the proposed methods yield competitive performance against other approaches from the literature.
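For orientation, the textbook dynamic program for the plain uncapacitated single-item problem is the Wagner-Whitin recursion; the thesis's novel DP extends this kind of baseline with setup carryover and a fixed shelf-life. A minimal sketch of the baseline only, with made-up data:

```python
def wagner_whitin(demand, setup, hold):
    """Classic O(T^2) Wagner-Whitin DP for uncapacitated single-item
    lot sizing: F[t] = minimum cost of meeting demand in periods 1..t.
    The thesis's DP extends this baseline with setup carryover and
    perishability; only the textbook core is shown here."""
    T = len(demand)
    F = [0.0] + [float("inf")] * T
    for t in range(1, T + 1):
        for j in range(1, t + 1):          # last setup in period j covers j..t
            cost = F[j - 1] + setup[j - 1]
            # units demanded in period k are held from period j to k-1
            cost += sum(hold[m] * demand[k - 1]
                        for k in range(j, t + 1)
                        for m in range(j - 1, k - 1))
            F[t] = min(F[t], cost)
    return F[T]

# Illustrative data: one setup in period 1 covering everything is optimal.
print(wagner_whitin(demand=[20, 50, 10, 30],
                    setup=[100, 100, 100, 100],
                    hold=[1, 1, 1, 1]))    # -> 260.0
```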
21

Alegretti, Caio Graco Prates. "Analytical logical effort formulation for local sizing." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/97867.

Abstract:
The microelectronics industry has been relying more and more upon cell-based design methodology to face the growing complexity in the design of digital integrated circuits, since cell-based integrated circuits are designed in a faster and cheaper way than full-custom circuits. Nevertheless, in spite of the advancements in the field of Electronic Design Automation, cell-based digital integrated circuits show inferior performance when compared with full-custom circuits. Therefore, it is desirable to find ways to bring the performance of cell-based circuits closer to that of full-custom circuits without compromising the design costs of the former. Bearing this goal in mind, this thesis presents contributions towards an automatic flow of local optimization for cell-based digital circuits. By local optimization we mean circuit optimization within small context windows, taking the global context into account. This way, local optimization may include the detection and isolation of critical regions of the circuit and the generation of logic and transistor networks; these networks are sized according to the existing design constraints. Since local optimizations act in a reduced context, several solutions may be obtained considering local constraints, out of which the fittest solution is chosen to replace the original subcircuit (critical region). The specific contribution of this thesis is the development of a subcircuit sizing method capable of obtaining minimum active area solutions, taking into account the maximum input capacitance, the output load to be driven, and the imposed delay constraint. The method is based on the logical effort formulation, and the main contribution is to compute the area derivative to obtain minimum area, instead of taking the delay derivative to obtain minimum delay, as is done in the traditional logical effort formulation. Electrical simulations show that the proposed method is very precise for a first-order approach, as it presents average errors of 1.48% in power dissipation, 2.28% in propagation delay, and 6.5% in transistor sizes.
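For context, a sketch of the classical logical-effort sizing that the thesis builds on: stage efforts are equalized along a path to minimize delay. The thesis's contribution is the complementary step of setting the area derivative to zero for minimum area under a delay constraint, which is not reproduced here; the example path and parameters below are illustrative:

```python
def stage_sizes(path, c_load, c_in):
    """Textbook logical-effort sizing: equalize stage effort along a
    path. 'path' lists (g, p) per stage (logical effort, parasitic
    delay). Returns the input capacitance of each stage and the path
    delay in units of tau; branching effort is assumed to be 1."""
    G = 1.0
    for g, _ in path:
        G *= g                              # path logical effort
    F = G * c_load / c_in                   # path effort F = G * H
    n = len(path)
    f_hat = F ** (1.0 / n)                  # optimal effort per stage
    caps = [c_in]
    for g, _ in path[:-1]:
        caps.append(caps[-1] * f_hat / g)   # C_out of stage i = C_in of stage i+1
    delay = n * f_hat + sum(p for _, p in path)
    return caps, delay

# Three-stage example: inverter (g=1), NAND2 (g=4/3), inverter; p ~ 1 each.
caps, d = stage_sizes([(1.0, 1.0), (4 / 3, 1.0), (1.0, 1.0)], c_load=64.0, c_in=1.0)
print([round(c, 2) for c in caps], round(d, 2))   # -> [1.0, 4.4, 14.54] 16.21
```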
22

Hoff, Kristen G. "Lot-sizing in the wood furniture industry." Thesis, Virginia Tech, 1991. http://hdl.handle.net/10919/42123.

Abstract:
We consider the problem of scheduling production in a wood furniture plant. In particular, we consider the problem of selecting orders from various types of furniture products and determining their lot-sizes for production when there are sequence-dependent setup times involved in the production of these items. This is termed the aggregate scheduling problem. In addition, we consider the problem of scheduling work at various production facilities in the presence of capacity constraints once the items for production and their quantities are selected. This is termed the detailed scheduling problem. The aggregate scheduling problem is formulated as a mixed integer program and solved using a dynamic programming procedure. The detailed scheduling problem is a linear program and is solved using a canned linear programming package. In order to understand the state of the art in the furniture industry, various furniture plants in Southwest Virginia were visited and a national survey was conducted; the survey emphasized the problem addressed in this research, and the results are summarized. To better understand the decision points in the furniture manufacturing process, the IDEF (ICAM definition) procedure is used to describe that process. The methodology developed is applied to a real-life problem and the results are summarized.
23

Chow, Raymond W. L. "Gate level transistor sizing by nonlinear optimization." Dissertation, Information and Systems Science, Carleton University, Ottawa, 1992.

24

Önal, Mehmet. "Extensions to the economic lot sizing problem." [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0024781.

25

Xiao, Wen Qiang. "Essays on scheduling and lot sizing models." 2002. http://library.ust.hk/cgi/db/thesis.pl?IEEM%202002%20XIAO.

26

Daud, Wan Rosli Wan. "The mechanism of alkenyl succinic anhydride sizing." Thesis, Online version, 1988. http://ethos.bl.uk/OrderDetails.do?did=1&uin=uk.bl.ethos.329373.

27

McCarthy, William R. "An investigation of the mechanism of alkaline sizing with alkenyl succinic anhydride." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/5666.

28

Doddannagari, Uday. "A p-cell approach to integer gate sizing." College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2443.

29

Gu, Jinxiang. "The forward reserve warehouse sizing and dimensioning problem." Diss., Georgia Institute of Technology, 2005. http://etd.gatech.edu/theses/available/etd-09092005-233231/.

Abstract:
Thesis (Ph. D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2006.
Joel Sokol, Committee Member ; Christiaan Paredis, Committee Member ; Gunter Sharp, Committee Member ; Marc Goetschalckx, Committee Chair ; Leon F. McGinnis, Committee Co-Chair. Includes bibliographical references.
30

Zaker, Sassan. "Optimal transistor sizing in VLSI: algorithms & implementation." [S.l.]: [s.n.], 1994. http://library.epfl.ch/theses/?nr=1223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Weiss, Daniel. "Optimisation in aircraft pre-design with sizing-criteria." Zürich : ETH, Eidgenössische Technische Hochschule Zürich, Centre of Structure Technologies, 2006. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=245.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Salzer, David H. "Combat logistics force sizing to ensure endurance reliability." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA305963.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Upton, Eric George. "An Intelligent, Robust Approach to Volumetric Aircraft Sizing." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16128.

Full text
Abstract:
Advances in computational power have produced great strides in the later design and production portions of an aircraft's life cycle, and these advances have included the internal layout component of the design and manufacturing process. However, conceptual and preliminary design tools for internal layout remain primarily based on historical regressions and estimations, a situation that becomes untenable when considering revolutionary designs or component technologies. Bringing internal layout information forward in the design process can encourage the same level of benefits enjoyed by other disciplines as advances in aerodynamics, structures and other fields propagate forward in the design of complex systems. Accurate prediction of the volume required to contain all of an aircraft's internal components results in a more accurate prediction of aircraft specifications, mission effectiveness, and costs, helping determine if an aircraft is the best choice for continued development. This is not a computationally simple problem, however, and great care must be taken to ensure the efficiency of any proposed solution. Any solution must also address the uncertainty inherent in describing internal components early in the design process. Implementing a methodology that applies notions of an intelligent search for a solution, as well as deals robustly with component sizing, produces a high chance of success. Development of a robust, rapid method for assessing the volumetric characteristics of an aircraft in the context of the conceptual and preliminary design processes can offer many of the benefits of a complete internal layout without the immense assignment of resources typical in the detail phase of the design process. A simplified methodology for volumetrically sizing an aircraft is presented here, as well as an assessment of the state-of-the-art techniques for volumetric considerations used in current aircraft design literature. A prototype tool using a combination of original code and publicly available libraries is developed and explored. A sample aircraft design is undertaken with the prototype tool to demonstrate the effectiveness of the methodology.
APA, Harvard, Vancouver, ISO, and other styles
34

Edström, Rickard. "Internet caching - sizing and placement for dynamic content." Thesis, Uppsala universitet, Avdelningen för datorteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-264729.

Full text
Abstract:
Traffic volumes on the Internet continue to increase, with the same links carrying the same data multiple times. In-network caching can alleviate much of the redundant traffic by storing popular items close to the users. This master's thesis project involved building on existing research and simulators to simulate a multi-level hierarchical network cache system and its resulting performance, with varying parameters such as the placement and sizing of the individual cache nodes. One goal of the thesis work was to improve and integrate the simulation frameworks used as a starting point. Another goal was to run simulations with the improved framework and shed light on how a high Quality of Experience (QoE) can be achieved within this kind of cache system as the input parameters vary. An improved and integrated simulation framework was produced, including improved visualization capabilities. Using this improved framework, the behavior of the cache system was studied, in particular how the system behaves with static and dynamic cache sizing approaches. One conclusion drawn is that the dynamic sizing approach deployed can be a good way to achieve a high QoE. Finally, future research opportunities are identified.
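As a rough illustration of the kind of question such a simulator answers, the toy script below measures how the hit ratio of a single LRU cache responds to its size under Zipf-like request popularity. The thesis framework simulates a full multi-level hierarchy with static and dynamic sizing; this sketch only shows the size/hit-ratio trade-off at one node, with synthetic data.

```python
# Toy LRU cache simulation under Zipf-like popularity (synthetic data).
import random
from collections import OrderedDict

def hit_ratio(requests, capacity):
    cache, hits = OrderedDict(), 0
    for item in requests:
        if item in cache:
            hits += 1
            cache.move_to_end(item)        # mark as most recently used
        else:
            cache[item] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(requests)

random.seed(0)
catalog = list(range(1, 1001))
weights = [1 / rank for rank in catalog]   # Zipf(1) popularity
requests = random.choices(catalog, weights=weights, k=50_000)
for size in (50, 200):
    print(f"cache size {size:>3}: hit ratio {hit_ratio(requests, size):.3f}")
```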
APA, Harvard, Vancouver, ISO, and other styles
35

Lloyd, Jennifer Anne. "Bodies over borders : trans-sizing the expatriate experience." Thesis, University of Newcastle upon Tyne, 2015. http://hdl.handle.net/10443/3048.

Full text
Abstract:
This thesis introduces the concept of trans-sizing to explore the discursive, embodied and relational experiences of expatriate women in Singapore, and the multiple ways that body size and migration experiences intersect within different spaces in the city. The thesis is based on empirical research with women living in Singapore who identified themselves as expatriates. The focus of this study is upon the ways that experiences of body size shape narrations of migration. I explore this relationship through discursive constructions, embodied and emotional experiences and relational encounters. I argue that body size is spatially contingent and significant to the way that identity, difference and migration are imagined and narrated within the city. Furthermore, I argue that narrations of body size are constructed through gendered, medicalised, classed and racialised discourses that divide women from different places. The study explores the multiple ways that experiences of body size and migration intersect in social and cultural spaces within Singapore. I situate this research in the intersections of geographical work on migration and the interdisciplinary field of Fat Studies. In so doing, I highlight the centrality of body size as an axis of identity that is inherently geographical (Longhurst, 2005). Drawing on an in-depth analysis of 45 individual interviews and one focus group, the study values the words and experiences of expatriate women, providing a nuanced and innovative approach to explorations of migration, gender and body size. By developing the concept of trans-sizing, this research responds to the need for cross-cultural approaches to critical work on body size (Cooper, 2009), the gendered nature of expatriate migration (Fechter and Walsh, 2012), and embodied studies of transnationalism (Dunn, 2010), and contributes to the growing body of work that explores body size from a critical and spatial perspective (Colls and Evans, 2009).
APA, Harvard, Vancouver, ISO, and other styles
36

Van Hoesel, Stan, and Albert Wagelmans. "Sensitivity Analysis of the Economic Lot-Sizing Problem." Massachusetts Institute of Technology, Operations Research Center, 1990. http://hdl.handle.net/1721.1/5146.

Full text
Abstract:
In this paper we study sensitivity analysis of the uncapacitated single level economic lot-sizing problem, which was introduced by Wagner and Whitin about thirty years ago. In particular we are concerned with the computation of the maximal ranges in which the numerical problem parameters may vary individually, such that a solution already obtained remains optimal. Only recently it was discovered that faster algorithms than the Wagner-Whitin algorithm exist to solve the economic lot-sizing problem. Moreover, these algorithms reveal that the problem has more structure than was recognized so far. When performing the sensitivity analysis we exploit these newly obtained insights.
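To make the ranging idea concrete, the brute-force sketch below (with made-up demands, and nothing like the authors' efficient procedure) locates the interval of setup costs over which one production plan remains optimal:

```python
# Brute-force sensitivity check for single-item lot sizing: over what
# range of the setup cost K does the optimal set of setup periods stay
# the same? Data is illustrative only.
def optimal_plan(demand, K, h):
    n = len(demand)
    f = [0.0] + [float("inf")] * n
    pred = [0] * (n + 1)
    for t in range(1, n + 1):
        for j in range(1, t + 1):
            hold = sum(h * (k - j) * demand[k - 1] for k in range(j, t + 1))
            if f[j - 1] + K + hold < f[t]:
                f[t], pred[t] = f[j - 1] + K + hold, j
    plan, t = [], n
    while t > 0:                 # walk back through the last-setup pointers
        plan.append(pred[t])
        t = pred[t] - 1
    return tuple(sorted(plan))

demand, h = [20, 50, 10, 40, 30], 1.0
base = optimal_plan(demand, 100.0, h)
same = [k for k in range(1, 1001) if optimal_plan(demand, float(k), h) == base]
print(f"setup periods {base} stay optimal for K in [{min(same)}, {max(same)}]")
```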
APA, Harvard, Vancouver, ISO, and other styles
37

Korampally, Madhuri. "A MEMS based Coulter counter for cell sizing." Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/5079.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on March 28, 2008). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
38

Bernier, Lucie 1966. "A genetic algorithm with self-adaptive niche sizing." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22847.

Full text
Abstract:
Optimization of multimodal functions is hard for traditional optimization techniques. Holland's genetic algorithm, combined with the concepts of niching and speciation, has already shown success on such difficult problems. Some implementations of this concept exist, but the effective ones all lack flexibility, requiring either prior knowledge of the function to be optimized or a fixed external schedule of exploration. We present an implementation of this concept using minimum spanning trees and compare it with a previous algorithm on random choices of input parameters. Our proposal does not require any a priori knowledge of the function. Two approaches using minimum spanning trees are studied, one of which, the biggest proportion method (BPM), outperforms its competitor on the battery of test problems and shows its capability of automatically adapting to the function to be optimized.
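The core construction is easy to sketch: build a minimum spanning tree over the population and treat unusually long edges as boundaries between niches. The toy code below uses synthetic one-dimensional data and does not reproduce the BPM decision rule itself:

```python
# MST over a GA population via Prim's algorithm; the longest edge
# separates two artificial "niches". Illustrative sketch only.
import math, random

def prim_mst(points):
    n = len(points)
    in_tree, edges = {0}, []
    best = {i: (math.dist(points[0], points[i]), 0) for i in range(1, n)}
    while len(in_tree) < n:
        i = min(best, key=lambda k: best[k][0])   # closest outside node
        d, j = best.pop(i)
        in_tree.add(i)
        edges.append((j, i, d))
        for k in best:                            # relax remaining nodes
            dk = math.dist(points[i], points[k])
            if dk < best[k][0]:
                best[k] = (dk, i)
    return edges

random.seed(1)
pop = [(random.gauss(0, 0.3),) for _ in range(10)] + \
      [(random.gauss(10, 0.3),) for _ in range(10)]
longest = max(prim_mst(pop), key=lambda e: e[2])
print("longest MST edge (a natural niche boundary):", longest)
```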
APA, Harvard, Vancouver, ISO, and other styles
39

Bin Mohamed Taib, I. "Topology, shape and sizing optimization of plate structures." Thesis, Swansea University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.636098.

Full text
Abstract:
This thesis deals with structural topology, shape and sizing optimization of plate structures, which are the necessary building blocks of a fully integrated design optimization (FIDO) system. The thesis is restricted to plates subjected to lateral loading, modelled using Mindlin-Reissner (MR) assumptions, and to a lesser extent plates subjected to in-plane loading, modelled using plane stress assumptions. Stiffened plates, with MR plate and Timoshenko beam models used to represent the combined system, are briefly considered. For FE analysis, 9-node MR plate elements and 8-node plane stress elements are generally used for the plate structures; however, for structural shape optimization (SSO), 6-node triangular plane stress elements are used. In the FE idealization of the stiffened plates, 9-node plate elements and 3-node Timoshenko beam stiffener elements are employed. Three-layered and single-layered models of the plate are adopted for topology optimization, which is carried out using the evolutionary hard-kill method. The topologies so obtained are compared with those obtained using the homogenization method with artificial materials. Having obtained an optimal plate topology, usually presented in the form of a greyscale density plot, it is then necessary to extract a usable structural design. This is achieved by structural model redefinition. First, a combined thresholding and spline-fitting algorithm based on a least squares method is used to redefine the boundaries of the structural topology image. If further shape or sizing optimization is required, parameterization is then carried out. Next, the shape of the plate boundaries may be optimized using shape optimization procedures. Topology optimization, structural model redefinition, and parameterization and shape optimization are then integrated into a FIDO system for plate structures.
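As a caricature of the hard-kill step, the sketch below removes elements whose stress falls under a rejection ratio of the maximum. Real evolutionary loops re-run the finite element analysis after each removal and raise the ratio gradually; the random "stresses" here merely stand in for that analysis.

```python
# One conceptual hard-kill pass over a dummy 5x8 element grid.
import random

random.seed(2)
stress = [[random.random() for _ in range(8)] for _ in range(5)]
rejection_ratio = 0.2
s_max = max(max(row) for row in stress)

# Keep an element only if its stress clears the rejection threshold.
alive = [[s >= rejection_ratio * s_max for s in row] for row in stress]
for row in alive:                       # '#' kept, '.' removed
    print("".join("#" if a else "." for a in row))
```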
APA, Harvard, Vancouver, ISO, and other styles
40

Sandberg, Henrik, and Rasmus Öhman. "Position sizing methods for a trend following CTA." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-147387.

Full text
Abstract:
This study examines whether a trend-following managed futures fund can improve its performance by changing its position sizing method. Trades for a simple trend-following strategy were simulated on 47 futures contracts over the period 1990-2012, using varying methods for determining position size. Eleven different position sizing methods were investigated, among them Target Volatility, Omega Optimization and correlation ranking methods. Both methods previously detailed in academic papers and novel approaches were implemented and compared to the baseline performance of the strategy. The results of this study show that the Target Volatility method, and to some degree Max Drawdown Minimize and Dynamic Stop Lock-In, improved the performance of the strategy. The final recommendation for a trend-following managed futures fund is to use Target Volatility as the position sizing method, possibly in conjunction with Max Drawdown Minimize.
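The recommended Target Volatility rule reduces to a one-line formula: hold enough contracts that the position's volatility matches a chosen fraction of equity. A hedged sketch with invented numbers (the study's exact parameterisation may differ):

```python
# Target Volatility position sizing: position vol ~= equity * target_vol.
def target_vol_size(equity, target_vol, realized_vol, price, point_value=1.0):
    vol_per_contract = realized_vol * price * point_value  # $ stddev/contract
    return (equity * target_vol) / vol_per_contract

# 10 MUSD equity, 15% vol target, futures at 95.0 with 25% realized vol
# and a point value of 1000 (all hypothetical):
n = target_vol_size(10e6, 0.15, 0.25, 95.0, point_value=1000)
print(f"target-volatility size: {n:.1f} contracts")
```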
APA, Harvard, Vancouver, ISO, and other styles
41

Sun, XingXian. "Technology and economics of sizing single worsted yarn." Thesis, University of Huddersfield, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307839.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Fane de Salis, Max Henry. "Acoustic sizing and location of blockages in ducts." Thesis, University of Liverpool, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250484.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Dupré, Aurore. "Sizing of a short term wind forecasting system." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX002.

Full text
Abstract:
In a context of global warming and energy transition, the development of renewable energies is essential in order to ensure energy production that meets a constantly growing demand. French wind power producers benefit from an “obligation to purchase” from EDF for 15 years. After that, they have to sell their production in the competitive market. To do so, they must announce in advance the amount of energy they will inject into the grid. In case of imbalance, they are charged penalties. In France, the deadline for selling energy is 30 minutes. Thus, in this thesis, several downscaling approaches, parametric (linear regression) and non-parametric (random forests), are developed, calibrated and evaluated. The considered lead times range from 30 min to 3 h. The downscaling methods considered are rarely used for lead times below 1 h, since numerical models are generally run every 6 to 12 hours. Using in-situ measurements in the downscaling methods to correct the numerical prediction at initialization allows a significant performance gain: compared with classic statistical methods for short-term forecasting, the improvement over the persistence method ranges from 1.5% at 10 min ahead to more than 30% at 3 h ahead. In order to limit the accumulation of errors in the conversion from wind speed forecast to wind power forecast, an analysis of the error induced by different meteorological variables, such as wind direction or air density, is presented. First, the forecast at the farm scale is explored and then the spatial dimension is introduced. Finally, the economic value of such a short-term forecasting model is explored. The different steps of the electricity market are studied, and the different sources of uncertainty and variability, such as forecast errors and price volatility, are identified and assessed. For the two wind farms considered in this study, the results show that the short-term forecasts allow an increase in annual income of between 4 and 5%.
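The downscaling step itself can be sketched in a few lines: feed the raw numerical weather prediction (NWP) forecast and the latest on-site measurement to a regressor and compare against persistence. Everything below is synthetic; scikit-learn's RandomForestRegressor simply stands in for the thesis's actual configuration.

```python
# Toy statistical downscaling of an NWP wind forecast with in-situ data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
nwp = rng.uniform(0, 20, n)                   # NWP wind forecast (m/s)
measured = nwp + rng.normal(0, 1.5, n)        # last on-site measurement
truth = 0.9 * nwp + 0.1 * measured + rng.normal(0, 0.8, n)

X = np.column_stack([nwp, measured])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:1500], truth[:1500])             # train on the first 1500 cases

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("random forest RMSE:", rmse(model.predict(X[1500:]), truth[1500:]))
print("persistence RMSE  :", rmse(measured[1500:], truth[1500:]))
```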
APA, Harvard, Vancouver, ISO, and other styles
44

Goel, Saumya. "Dynamic Probabilistic Lot-Sizing with Service Level Constraints." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1306865399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Quadt, Daniel. "Lot-sizing and scheduling for flexible flow lines /." Berlin : Springer, 2004. http://www.loc.gov/catdir/toc/fy0602/2004109270.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Butler, Marvin, and Bob Rush. "Gibberellic Acid Sizing Trial on Table Grapes, 1987." College of Agriculture, University of Arizona (Tucson, AZ), 1990. http://hdl.handle.net/10150/215713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Butler, Marvin, and Bob Rush. "Gibberellic Acid Sizing Trial on Table Grapes, 1988." College of Agriculture, University of Arizona (Tucson, AZ), 1990. http://hdl.handle.net/10150/215729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Butler, Marvin, and Bob Rush. "Gibberellic Acid Sizing Trial on Table Grapes, 1989." College of Agriculture, University of Arizona (Tucson, AZ), 1990. http://hdl.handle.net/10150/215730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Paudel, Gaurab. "Hexahedral Mesh Refinement Using an Error Sizing Function." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/3447.

Full text
Abstract:
The ability to effectively adapt a mesh is a very important feature of high-fidelity finite element modeling. In a finite element analysis, a relatively high node density is desired in areas of the model with high error estimates from an initial analysis. Providing a higher node density in such areas improves the accuracy of the model and reduces the computational time compared to having a high node density over the entire model. Node densities can be determined for any model using sizing functions based on the geometry of the model or on the error estimates from the finite element analysis. Robust methods for mesh adaptation using sizing functions are available for refining triangular, tetrahedral, and quadrilateral elements. However, little work has been published on adaptively refining all-hexahedral meshes using sizing functions. This thesis describes a new approach to drive hexahedral refinement based upon an error sizing function and a mechanism to compare node sizes after refinement.
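A common form of error-driven sizing (a sketch consistent with, though not taken from, the thesis) scales the local element size by the ratio of target to estimated error, tempered by an assumed convergence rate:

```python
# Error sizing function: shrink elements where the error estimate is
# high, grow them where it is low. `rate` is an assumed convergence order.
def new_size(old_size, error, target_error, rate=2.0):
    return old_size * (target_error / error) ** (1.0 / rate)

# With a 1% error target: weak regions refine, over-refined ones coarsen.
for err in (0.04, 0.01, 0.0025):
    print(f"error {err:.4f} -> size factor {new_size(1.0, err, 0.01):.2f}")
```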
APA, Harvard, Vancouver, ISO, and other styles
50

Darwish, Mohammed. "Lot-sizing and scheduling optimization using genetic algorithm." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17045.

Full text
Abstract:
The simultaneous lot-sizing and scheduling problem is the problem of deciding which products are to be produced on which machine and in which order, as well as the quantity of each product. Problems of this type are hard to solve; they have therefore been studied for years, and a considerable number of papers has been published on different lot-sizing and scheduling problems, especially real-case problems. This work proposes a Real-Coded Genetic Algorithm (RCGA) with a new chromosome representation to solve a non-identical parallel machine capacitated lot-sizing and scheduling problem with sequence-dependent setup times and costs, machine cost and backlogging. Such a problem can be found in a real-world production line at a furniture manufacturer in Sweden. Backlogging is an important concept in this problem, and it is often ignored in the literature. This study implements three different types of crossover; one of them was chosen based on numerical experiments. Four mutation operators have been combined to allow the genetic algorithm to scan the search area and maintain genetic diversity. Other steps, like the initialization of the population and a reinitialization process, have been designed carefully to achieve the best performance and to prevent the algorithm from becoming trapped in a local optimum. The proposed algorithm is implemented and coded in MATLAB and tested on a set of standard medium- to large-size problems taken from the literature. A variety of problems were solved to measure the impact of different problem characteristics, such as the number of periods, machines, and products, on the quality of the solution provided by the proposed RCGA. To evaluate the performance of the proposed algorithm, the average deviation from the lower bound and the runtime of the proposed RCGA are compared with three other algorithms from the literature. The results show that, in addition to its high computational speed, the proposed RCGA outperforms the other algorithms on non-identical parallel machine problems, while it is outperformed by the other algorithms on problems with more identical parallel machines. The results also show that certain problem characteristics, such as increasing setup cost and problem size, negatively influence the quality of the solutions provided by the proposed RCGA.
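The real-coded machinery (blend crossover plus Gaussian mutation over elite parents) is easy to sketch. The toy below minimises a simple continuous function rather than the actual MATLAB lot-sizing model, and the crossover and mutation settings are illustrative, not the thesis's tuned values:

```python
# Minimal real-coded GA: BLX-alpha crossover + Gaussian mutation.
import random

def blx(a, b, alpha=0.5):
    """Sample each child gene from the parents' range, extended by alpha."""
    child = []
    for x, y in zip(a, b):
        lo, hi = min(x, y), max(x, y)
        spread = alpha * (hi - lo)
        child.append(random.uniform(lo - spread, hi + spread))
    return child

def mutate(ind, sigma=0.1, p=0.2):
    return [g + random.gauss(0, sigma) if random.random() < p else g
            for g in ind]

def fitness(ind):                    # toy objective: distance from origin
    return sum(g * g for g in ind)

random.seed(3)
pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness)
    elite = pop[:10]                 # keep the 10 best, breed the rest
    pop = elite + [mutate(blx(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]
print("best fitness after 200 generations:",
      round(min(fitness(ind) for ind in pop), 6))
```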
APA, Harvard, Vancouver, ISO, and other styles