Dissertations / Theses on the topic 'Trusses Design and construction Mathematical models'

Consult the top 50 dissertations / theses for your research on the topic 'Trusses Design and construction Mathematical models.'


1

Malone, Brett. "Multidisciplinary optimization in aircraft design using analysis technology models." Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-10102009-020042/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Law, Gordon Ki-Wai. "Decision support system for construction cycle design." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26715.

Abstract:
The objective of this thesis is to develop a conceptual design of a computerized environment for detailed design of construction activities associated with projects characterized by significant repetition. High-rise building construction is used as the example of repetitive construction projects. The construction cycle design of a typical floor structure is studied to gain an understanding of the difficulty and complexity involved in the activity design process. Modeling techniques currently used in construction planning, modeling techniques developed in the field of operations research, and assembly line balancing techniques used in industrial engineering are reviewed to determine their applicability for detailed construction cycle design. Using the concept of decision support systems developed in the fields of management science and knowledge engineering for solving ill-structured and ill-defined problems, a conceptual design of a decision support system for construction cycle design is developed.
Applied Science, Faculty of
Civil Engineering, Department of
Graduate
3

Mirjalili, Vahid. "Modelling the structural efficiency of cross-sections in limited torsion stiffness design." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99780.

Abstract:
Most of the current optimization techniques for the design of light-weight structures are unable to generate structural alternatives at the concept stage of design. This research tackles the challenge of developing an optimization method for the early stage of design. The main goal is to propose a procedure to optimize material and shape of stiff shafts in torsion.
Recently introduced for bending stiffness design, shape transformers are presented in this thesis for optimizing the design of shafts in torsion. Shape transformers are geometric parameters defined to classify shapes and to model structural efficiency. The study of shape transformers is centered on concept selection in structural design. These factors are used to formulate indices of material and shape selection for minimum mass design. An advantage of the method of shape transformers is that the contribution of the shape can be decoupled from the contribution of the size of a cross-section. This feature gives the designer insight into the effects that scaling, shape, and material have on the overall structural performance.
Similar to the index for bending, the performance index for torsion stiffness design is a function of the relative scaling of two cross-sections. The thesis examines analytically and graphically the impact of scaling on the torsional efficiency of alternative cross-sections. The resulting maps assist the selection of the best material and shape for cross-sections subjected to dimensional constraints. It is shown that shape transformers for torsion, unlike those for bending, are generally functions of the scaling direction.
The efficiency maps ease the visual comparison between the efficiency of open-walled cross-sections and that of closed-walled cross-sections. As expected, the maps show the relative inefficiency of the former compared to the latter. They can also set the validity range of thin- and thick-walled theory in torsion stiffness design. The analytical results are validated against numerical data obtained from ANSYS to guarantee the consistency of the models. The thesis concludes with three case studies that demonstrate the method.
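The open- versus closed-wall efficiency gap that such maps visualize can be illustrated with a minimal sketch using the standard thin-walled torsion constants; the tube dimensions below are assumed for illustration and are not taken from the thesis:

```python
import math

def j_closed_tube(r, t):
    """Torsion constant of a thin-walled closed circular tube,
    from Bredt's formula J = 4*A_m**2*t/s with A_m = pi*r**2, s = 2*pi*r."""
    return 2.0 * math.pi * r**3 * t

def j_open_tube(r, t):
    """Torsion constant of the same tube with a longitudinal slit
    (open thin-walled section): J = s*t**3/3 with s = 2*pi*r."""
    return 2.0 * math.pi * r * t**3 / 3.0

# Illustrative dimensions: 50 mm mean radius, 2 mm wall thickness (metres)
r, t = 0.05, 0.002
ratio = j_closed_tube(r, t) / j_open_tube(r, t)  # equals 3*(r/t)**2
print(f"closed/open torsion stiffness ratio: {ratio:.0f}")
```

The ratio grows as the square of the radius-to-thickness ratio, which is why the slit (open) section is so much less efficient in torsion.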
4

Mason, Brian H. "Analysis and design of composite curved frames." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-06102009-063304/.

5

Hon, Alan 1976. "Compressive membrane action in reinforced concrete beam-and-slab bridge decks." Monash University, Dept. of Civil Engineering, 2003. http://arrow.monash.edu.au/hdl/1959.1/5629.

6

Hashemolhosseini, Sepehr. "Algorithmic component and system reliability analysis of truss structures." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85710.

Abstract:
Thesis (MScEng)-- Stellenbosch University, 2013.
ENGLISH ABSTRACT: Most of the parameters involved in the design and analysis of structures are of a stochastic nature. It is, therefore, of paramount importance to be able to perform a fully stochastic analysis of structures at both component and system level, to take into account the uncertainties involved in structural analysis and design. In practice, by contrast, the (computerised) analysis of structures is based on a deterministic analysis which fails to address the randomness of design and analysis parameters. This means that an investigation into algorithmic methodologies for component and system reliability analysis can help pave the way towards the implementation of fully stochastic analysis of structures in a computer environment. This study is focused on algorithm development for component and system reliability analysis based on the various proposed methodologies. Truss structures were selected for this purpose due to their simplicity as well as their wide use in industry. Nevertheless, the algorithms developed in this study can be used for other types of structures, such as moment-resisting frames, with some simple modifications. For a component-level reliability analysis of structures, different methods such as First Order Reliability Methods (FORM) and simulation methods are proposed. However, implementation of these methods for statically indeterminate structures is complex due to the implicit relation between the response of the structural system and the load effect. As a result, the algorithm developed for the purpose of component reliability analysis should be based on the concepts of Stochastic Finite Element Methods (SFEM), where a proper link between the finite element analysis of the structure and the reliability analysis methodology is ensured. In this study various algorithms are developed based on the FORM method, Monte Carlo simulation, and the Response Surface Method (RSM).
Using the FORM method, two methodologies are considered: one is based on the development of a finite element code where the required alterations are made to the FEM code, and the other is based on the use of a commercial FEM package. Different simulation methods are also implemented: Direct Monte Carlo Simulation (DMCS), Latin Hypercube Sampling Monte Carlo (LHCSMC), and Updated Latin Hypercube Sampling Monte Carlo (ULHCSMC). Moreover, RSM is used together with simulation methods. Throughout the thesis, the efficiency of these methods was investigated. A Fully Stochastic Finite Element Method (FSFEM) with alterations to the finite element code seems the fastest approach, since the linking between the FEM package and the reliability analysis is avoided. Simulation methods can also be effectively used for the reliability evaluation, where ULHCSMC seemed to be the most efficient method, followed by LHCSMC and DMCS. The response surface method is the least straightforward method for an algorithmic component reliability analysis; however, it is useful for the system reliability evaluation. For a system-level reliability analysis two methods were considered: the β-unzipping method and the branch and bound method. The β-unzipping method is based on a level-wise system reliability evaluation where the structure is modelled at different damage levels according to its degree of redundancy. In each level, the so-called unzipping intervals are defined for the identification of the critical elements. The branch and bound method is based on the identification of different failure paths of the structure by the expansion of the structural failure tree. The evaluation of the damaged states for both methods is the same. Furthermore, both methods lead to the development of a parallel-series model for the structural system. The only difference between the two methods is in the search approach used for the failure sequence identification.
It was shown that the β-unzipping method provides a better algorithmic approach for evaluating the system reliability compared to the branch and bound method. Nevertheless, the branch and bound method is more robust in the identification of structural failure sequences. One possible way to increase the efficiency of the β-unzipping method is to define bigger unzipping intervals in each level, which is possible through a computerised analysis. For such an analysis four major modules are required: a general intact-structure module, a damaged-structure module, a reliability analysis module, and a system reliability module. In this thesis different computer programs were developed for both system and component reliability analysis based on the developed algorithms. The computer programs are presented in the appendices of the thesis.
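The Direct Monte Carlo step for a single component can be sketched as follows; the member strength and load statistics are invented for illustration and do not come from the thesis's truss models:

```python
import math
import random

def mc_failure_probability(mu_r, sig_r, mu_s, sig_s, n=200_000, seed=1):
    """Direct Monte Carlo estimate of P(R < S) for one member, with
    normally distributed resistance R and load effect S."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, sig_r) < rng.gauss(mu_s, sig_s)
                for _ in range(n))
    return fails / n

def exact_failure_probability(mu_r, sig_r, mu_s, sig_s):
    """Closed-form check for the normal-normal case:
    beta = (mu_R - mu_S)/sqrt(sig_R^2 + sig_S^2), P_f = Phi(-beta)."""
    beta = (mu_r - mu_s) / math.hypot(sig_r, sig_s)
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# Illustrative member statistics (e.g. kN): resistance vs load effect
pf_mc = mc_failure_probability(300.0, 30.0, 200.0, 25.0)
pf_ex = exact_failure_probability(300.0, 30.0, 200.0, 25.0)
print(pf_mc, pf_ex)
```

For statically indeterminate trusses, as the abstract notes, R and S are not independent closed-form variables and each sample requires a finite element solve; the sampling logic itself stays the same.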
7

Yang, Dong-Shan. "Deformation-based seismic design models for waterfront structures." Thesis, online access from Digital Dissertation Consortium access full-text, 1999. http://libweb.cityu.edu.hk/cgi-bin/er/db/ddcdiss.pl?9933214.

8

Chen, Jou-Jun Robert. "Load and resistance factor design of shallow foundations for bridges." Thesis, Virginia Tech, 1989. http://hdl.handle.net/10919/44627.

Abstract:

Load Factor Design (LFD), adopted by AASHTO in the mid-1970s, is currently used for bridge superstructure design. However, the AASHTO specifications do not have any LFD provisions for foundations. In this study, an LFD format for the design of shallow foundations for bridges is developed.

Design equations for reliability analysis are formulated. Uncertainties in design parameters for the ultimate and serviceability limit states are evaluated. A random field model is employed to investigate the combined inherent spatial variability and systematic error for the serviceability limit state. The advanced first-order second-moment method is then used to compute the reliability indices inherent in the current AASHTO specifications. Reliability indices for the ultimate and serviceability limit states with different safety factors and dead-to-live load ratios are investigated. Reliability indices for the ultimate limit state are found to be in the range of 2.3 to 3.4 for safety factors between 2 and 3. This is shown to be in good agreement with Meyerhof's conclusion (1970). Reliability indices for the serviceability limit state are found to be in the range of 0.43 to 1.40 for ratios of allowable to actual settlement between 1.0 and 2.0. This appears to be in good agreement with what may be expected. Performance factors are then determined using target reliability indices selected on the basis of existing risk levels.
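The relation between safety factor and reliability index used in such first-order second-moment calibrations can be sketched with the standard lognormal formula; the coefficients of variation below are assumed for illustration, not the calibrated values from this study:

```python
import math

def beta_lognormal(fs, v_r, v_s):
    """FOSM reliability index for lognormal resistance R and load S,
    given the central safety factor fs = mu_R/mu_S and COVs v_r, v_s:
    beta = ln(fs*sqrt((1+v_s^2)/(1+v_r^2))) / sqrt(ln((1+v_r^2)(1+v_s^2)))."""
    num = math.log(fs * math.sqrt((1 + v_s**2) / (1 + v_r**2)))
    den = math.sqrt(math.log((1 + v_r**2) * (1 + v_s**2)))
    return num / den

# Illustrative COVs for resistance and load; beta rises with safety factor
for fs in (2.0, 2.5, 3.0):
    print(fs, round(beta_lognormal(fs, 0.30, 0.15), 2))
```

With these assumed COVs the index climbs from about 2.0 to about 3.2 as the safety factor goes from 2 to 3, the same qualitative trend the abstract reports.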


Master of Science
9

Klostermeier, Christian. "Investigation into the capability of large eddy simulation for turbomachinery design." Thesis, University of Cambridge, 2008. https://www.repository.cam.ac.uk/handle/1810/252106.

10

Lee, Chun-Sho. "A process simulation model for the manufacture of composite laminates from fiber-reinforced, polyimide matrix prepreg materials." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/40298.

Abstract:
A numerical simulation model has been developed which describes the manufacture of composite laminates from fiber-reinforced polyimide (PMR-15) matrix prepreg materials. The simulation model is developed in two parts. The first part is the volatile formation model, which simulates the production of volatiles and their transport through the composite microstructure during the imidization reaction. The volatile formation model can be used to predict the vapor pressure profile and volatile mass flux. The second part of the simulation model, the consolidation model, can be used to determine the degree of crosslinking, resin melt viscosity, temperature, and the resin pressure inside the composite during the consolidation process. Also, the model is used to predict the total resin flow, thickness change, and total process time. The simulation model was solved by a finite element analysis. Experiments were performed to obtain data for verification of the model. Composite laminates were fabricated from ICI Fiberite HMF2474/66C carbon fabric, PMR-15 prepreg and cured with different cure cycles. The results predicted by the model correlate well with experimental data for the weight loss, thickness, and fiber volume fraction changes of the composite. An optimum processing cycle for the fabrication of PMR-15 polyimide composites was developed by combining the model-generated optimal imidization and consolidation cure cycles. The optimal cure cycle was used to manufacture PMR-15 composite laminates, and the mechanical and physical properties of the laminates were measured. Results showed that fabrication of PMR-15 composite laminates with the optimal cure cycle results in improved mechanical properties and significantly reduced processing time compared with the manufacturer's suggested cure cycle.
Ph. D.
11

Zhang, Mingyang 1981. "Macromodeling and simulation of linear components characterized by measured parameters." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=112589.

Abstract:
Recently, microelectronics designs have reached extremely high operating frequencies as well as very small die and package sizes. This has made signal integrity an important bottleneck in the design process, and resulted in the inclusion of signal integrity simulation in the computer-aided design flow. However, such simulations are often difficult because in many cases it is impossible to derive analytical models for certain passive elements, and the only available data are frequency-domain measurements or full-wave simulations. Furthermore, at such high frequencies these components are distributed in nature and require a large number of poles to be properly characterized. Simple lumped equivalent circuits are therefore difficult to obtain, and more systematic approaches are required. In this thesis we study the Vector Fitting technique for obtaining such equivalent models and propose a more streamlined approach for preserving passivity while maintaining accuracy.
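The pole-residue model form that Vector Fitting produces can be sketched as follows; the one-pole RC example and its element values are assumptions for illustration, not a network from the thesis:

```python
import cmath

def pole_residue(s, poles, residues, d=0.0):
    """Evaluate a pole-residue macromodel H(s) = d + sum_k r_k / (s - p_k),
    the rational model form that Vector Fitting fits to frequency samples."""
    return d + sum(r / (s - p) for p, r in zip(poles, residues))

# One-pole RC low-pass H(s) = 1/(1 + s*R*C) rewritten in pole-residue form:
# pole p = -1/(R*C), residue r = 1/(R*C).  R and C are illustrative values.
R, C = 50.0, 1e-12
poles = [-1.0 / (R * C)]
residues = [1.0 / (R * C)]

for f_hz in (1e9, 5e9, 20e9):
    s = 1j * 2 * cmath.pi * f_hz
    exact = 1.0 / (1.0 + s * R * C)
    assert abs(pole_residue(s, poles, residues) - exact) < 1e-12
```

In the distributed, measured-data case the poles and residues are not known in closed form; Vector Fitting iteratively relocates an assumed pole set and solves linear least-squares problems for the residues until the model matches the samples.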
12

Ramachandran, Selvaraj. "Hypoid gear optimization." PDXScholar, 1992. https://pdxscholar.library.pdx.edu/open_access_etds/4419.

Abstract:
A hypoid gear optimization procedure using the method of feasible directions has been developed. The objective is to reduce the gear set weight with bending strength, contact strength, and the facewidth-diametral pitch ratio as constraints. The objective function, weight, is calculated from a geometric approximation of the volume of the gear and pinion. The design variables selected are the number of gear teeth, diametral pitch, and facewidth. The input parameters for starting the initial design phase are the power to be transmitted, speed, gear ratio, type of application, mounting condition, type of loading, and the material to be used. In the initial design phase, design parameters are selected or calculated using standard available procedures. These selected values of design parameters are passed on to the optimization routine as starting points.
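The structure of the problem (minimize a volume-based weight over teeth, pitch, and facewidth subject to strength and facewidth-pitch constraints) can be sketched with a toy exhaustive search; the surrogate functions and constraint limits below are invented stand-ins, not the thesis's gear equations, and brute force replaces the method of feasible directions purely for brevity:

```python
import itertools

def weight(n_teeth, pitch, facewidth):
    """Toy weight surrogate ~ (pitch diameter)^2 * facewidth."""
    d = n_teeth / pitch
    return d * d * facewidth

def feasible(n_teeth, pitch, facewidth):
    """Toy stand-ins for the constraints: a strength requirement
    d*F >= 6 and a facewidth-diametral pitch limit F*P <= 16."""
    d = n_teeth / pitch
    return d * facewidth >= 6.0 and facewidth * pitch <= 16.0

candidates = itertools.product(range(20, 41),    # number of gear teeth
                               (4, 6, 8, 10),    # diametral pitch
                               (1.5, 2.0, 2.5))  # facewidth
best = min((c for c in candidates if feasible(*c)), key=lambda c: weight(*c))
print(best, weight(*best))
```

A gradient-based feasible-directions routine would start from the initial design phase's point and move along directions that both reduce weight and stay inside the constraint set, rather than enumerating the grid.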
13

Gonzalez, Robert. "Optimal design, scheduling and operation of pipeless batch chemical plants." Thesis, Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/11102.

14

Chang, Min-Yung. "Active vibration control of composite structures." Diss., This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-09162005-115021/.

15

Snyman, M. F. "Numerical modelling of an offshore pipeline laid from a barge." Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/21804.

Abstract:
Bibliography: pages 81-85.
This thesis addresses some of the issues involved in using numerical methods to simulate the laying of an offshore pipeline, the objective being to contribute to the expertise of South African offshore technology. Of particular interest is the prediction of the stresses in the pipe during such an event. The thesis concentrates on the use and suitability of the finite element method to simulate the important aspects of the pipelaying problem. ABAQUS, a nonlinear general-purpose finite element code, was chosen as the numerical tool, and nonlinear effects such as geometry and drag, as well as contact and lift-off at the boundaries, are included in the models. The analysis is performed in two parts: in the static analysis the displaced equilibrium position of the pipeline under self-weight, buoyancy and barge tension is sought, whilst the response due to wave action and barge motion is of interest in the dynamic analysis. Numerical experiments show the suitability of ABAQUS to model the behaviour of slender structures under both static loads and dynamic excitations.
16

Qin, Hong. "Construction of uniform designs and usefulness of uniformity in fractional factorial designs." HKBU Institutional Repository, 2002. http://repository.hkbu.edu.hk/etd_ra/456.

17

Carey, Mara L. "An enhanced integrated-circuit implementation of muscular contraction." Thesis, Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/15507.

18

Hernandez, Gabriel. "Platform design for customizable products as a problem of access in a geometric space." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/16760.

19

Ramoneda, Igor M. "Force modeling in surface grinding based on the wheel topography analysis." Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/18845.

20

Mathis, Andrew Wiley. "Electromagnetic modeling of interconnects incorporating perforated ground planes." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/14822.

21

Kinsman, Roger Gordon. "Outlet discharge coefficients of ventilation ducts." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=59271.

Abstract:
Discharge coefficients are an important parameter in the prediction of the air displacement performance of ventilation outlets and in the design of ventilation ducts.
Discharge coefficients of a wooden ventilation duct 8.54 metres in length and of a constant 0.17 m² cross-sectional area were measured. Four different outlet shapes and three aperture ratios of each shape were tested. A split-plot experimental design was used to evaluate the effect of outlet shape, outlet size, and distance from the fan on the discharge coefficient. The relationship between duct performance characteristics and discharge coefficient was examined. A mathematical equation to predict the discharge coefficient was developed and tested.
Discharge coefficient values measured ranged from 0.19 to 1.25 depending on the aperture ratio and distance from the fan. Outlet shape had no significant effect. The apparent effects of aperture ratio and size are due to the effects of head ratio. The equation predicting the discharge coefficient had a maximum error of 5 percent for the aperture ratios of 0.5 and 1.0, and 15 percent at an aperture ratio of 1.5.
22

El, Moueddeb Khaled. "Principles of energy and momentum conservation to analyze and model air flow for perforated ventilation ducts." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=42024.

Abstract:
A theoretical model was developed to predict the air distribution pattern and thus to design perforated ventilation ducts equipped with a fan. The analysis of the air distribution pattern of such systems requires accurate measurement procedures. Several experimental methods were tested and compared. Accordingly, the piezometric flush taps and thermo-anemometer were selected to measure respectively the duct air pressure and the outlet air flow.
Based on the equations of energy and momentum conservation, a model was formulated to predict the air flow performance of perforated ventilation ducts and to evaluate the outlet discharge angle and the duct regain coefficients without evaluating frictional losses. The basic assumptions of the model were validated by experimentally proving the equivalence of the friction losses expressed in the two cited equations. When compared to experimental results measured from four wooden perforated ventilation ducts with aperture ratios of 0.5, 1.0, 1.5, and 2.0, the model predicted the outlet air flow along the full length of a perforated duct operated under turbulent flow conditions with a maximum error of 9%. The regain coefficient and the energy correction factor were equal to one, and the value of the discharge coefficient remained constant at 0.65 along the full length of the perforated duct. The outlet air jet discharge angle varied along the entire duct length, and was not influenced by friction losses for turbulent flow.
Assuming a common effective outlet area, the model was extended to match the performance of the fan and the perforated duct and to determine their balance operating point.
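A discharge coefficient like the constant 0.65 reported above enters the outlet-flow prediction through the usual orifice relation; the pressure difference, outlet area, and air density in this sketch are assumed for illustration, not measurements from the thesis:

```python
import math

def outlet_flow(cd, area, dp, rho=1.2):
    """Volumetric flow through a duct outlet, orifice relation:
    Q = Cd * A * sqrt(2 * dP / rho)  [m^3/s]."""
    return cd * area * math.sqrt(2.0 * dp / rho)

# Illustrative numbers: Cd = 0.65 (as in the abstract), 0.01 m^2 outlet,
# 60 Pa static pressure difference, air density 1.2 kg/m^3
q = outlet_flow(cd=0.65, area=0.01, dp=60.0)
print(round(q, 4))
```

Summing this relation over every outlet, with the duct static pressure updated by the momentum (regain) equation between outlets, reproduces the discharge pattern the model predicts.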
23

Husain, Sarhang Mustafa. "Computational investigation of skimming flow on stepped spillways using the smoothed particle hydrodynamics method." Thesis, Swansea University, 2013. https://cronfa.swan.ac.uk/Record/cronfa43038.

24

Snodgrass, Robert E. "Mitigation of hazards posed by explosions in underground electrical vaults." Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/19019.

25

Lam, Wai-yin, and 林慧賢. "Plate-reinforced composite coupling beams: experimental and numerical studies." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37311797.

26

Albert, Jacques. "Characterizations and design of planar optical waveguides and directional couplers by two-step K+ -Na+ ion-exchange in glass." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=75759.

Abstract:
Planar optical waveguides fabricated by K⁺-Na⁺ ion-exchange in soda-lime glass substrates are investigated.
Experimental characterizations of planar waveguides with respect to a wide range of fabrication conditions have been carried out, including detailed measurements of the refractive index anisotropy resulting from the large induced surface stresses.
Parallel to this, the non-linear diffusion process of ion-exchange was simulated numerically to provide, along with the results of the characterizations, a complete description of the refractive index profile from any set of fabrication conditions.
The magnitude of the maximum surface index change observed was shown theoretically to be almost entirely due to the induced stress at the surface of the substrate, arising from the presence of the larger potassium ions.
Finally, a novel class of single-mode channel waveguides, made by a "two-step" ion-exchange was analyzed. A simple model for these waveguides was developed and used in the design of two directional coupler structures which were fabricated and measured.
The two-step process was conceived because it relaxes the dimensional control required of the waveguides, yielding single-mode guides of larger size, better suited for low-loss connections to optical fibers. It also provides an additional degree of freedom to adjust device properties.
27

Shi, Pingnan. "Algebraic derivation of neural networks and its applications in image processing." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/31511.

Abstract:
Artificial neural networks are systems composed of interconnected simple computing units known as artificial neurons, which simulate some properties of their biological counterparts. They have been developed and studied for understanding how brains function, and for computational purposes. In order to use a neural network for computation, the network has to be designed in such a way that it performs a useful function. Currently, the most popular method of designing a network to perform a function is to adjust the parameters of a specified network until the network approximates the input-output behaviour of the function. Although some analytical knowledge about the function is sometimes available or obtainable, it is usually not used. Some neural network paradigms exist where such knowledge is utilized; however, there is no systematic method for doing so. The objective of this research is to develop such a method. A systematic method of neural network design, which we call the algebraic derivation methodology, is proposed and developed in this thesis. It is developed with an emphasis on designing neural networks to implement image processing algorithms. A key feature of this methodology is that neurons and neural networks are represented symbolically, such that a network can be algebraically derived from a given function and the resulting network can be simplified. By simplification we mean finding an equivalent network (i.e., one performing the same function) with fewer layers and fewer neurons. A type of neural network, which we call LQT networks, is chosen for implementing image processing algorithms. Theorems for simplifying such networks are developed. Procedures for deriving such networks to realize both single-input and multiple-input functions are given.
To show the merits of the algebraic derivation methodology, LQT networks implementing some well-known algorithms in image processing and other areas are developed using the above-mentioned theorems and procedures. Most of these networks are the first known neural network models of their kind; in the cases where other network models are known, our networks have the same or better performance in terms of computation time.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
28

He, Xin, and 何鑫. "Probabilistic quality-of-service constrained robust transceiver design in multiple antenna systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48199527.

Full text
Abstract:
In downlink multi-user multiple-input multiple-output (MU-MIMO) systems, different users, and even multiple data streams serving one user, might require different quality-of-service (QoS) levels. The transceiver should allocate resources to different users with the aim of satisfying their QoS requirements. In order to design the optimal transceiver, channel state information is necessary. In practice, channel state information has to be estimated, and estimation error is unavoidable. Therefore, robust transceiver design, which takes the channel estimation uncertainty into consideration, is important. Previous robust transceiver designs assumed bounded estimation errors or Gaussian estimation errors. However, if there exists unknown distributed interference, the distribution of the channel estimation error cannot be modeled accurately a priori. Therefore, in this thesis, we investigate the robust transceiver design problem in downlink MU-MIMO systems under probabilistic QoS constraints with arbitrarily distributed channel estimation error. To tackle the probabilistic QoS constraints under arbitrarily distributed channel estimation error, the transceiver design problem is expressed in terms of worst-case probabilistic constraints. Two methods are then proposed to solve the worst-case problem. The first is based on the Chebyshev inequality. After the worst-case probabilistic constraint is approximated by the Chebyshev inequality, an iteration between two convex subproblems is proposed to solve the approximated problem. The convergence of the iterative method is proved, and the implementation issues and computational complexity are discussed. Secondly, in order to handle the worst-case probabilistic constraint more accurately, a novel duality method is proposed.
After a series of reformulations based on duality and S-Lemma, the worst-case statistically constrained problem is transformed into a deterministic finite constrained problem, with strong duality guaranteed. The resulting problem is then solved by a convergence-guaranteed iteration between two subproblems. Although one of the subproblems is still nonconvex, it can be solved by a tight semidefinite relaxation (SDR). Simulation results show that, compared to the non-robust method, the QoS requirement is satisfied by both proposed algorithms. Furthermore, among the two proposed methods, the duality method shows a superior performance in transmit power, while the Chebyshev method demonstrates a lower computational complexity.
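For context, the Chebyshev inequality invoked by the first method bounds tail probabilities using only a distribution's mean and variance, which is what makes it distribution-free. The sketch below is illustrative only (it is not from the thesis, and the variable names are assumptions); it checks the bound empirically against a deliberately non-Gaussian sample:

```python
import random

def chebyshev_bound(k):
    """Chebyshev's inequality: P(|X - mu| >= k*sigma) <= 1/k**2 for any
    distribution with finite mean mu and standard deviation sigma > 0."""
    return 1.0 / (k * k)

# Empirical check against a non-Gaussian (exponential) sample.
random.seed(0)
sample = [random.expovariate(1.0) for _ in range(100_000)]
mu = sum(sample) / len(sample)
sigma = (sum((x - mu) ** 2 for x in sample) / len(sample)) ** 0.5

k = 2.0
empirical = sum(abs(x - mu) >= k * sigma for x in sample) / len(sample)
print(empirical <= chebyshev_bound(k))  # True: the bound holds without a Gaussian assumption
```

The bound is loose (here the empirical tail probability is roughly 0.05 against a bound of 0.25), which is why the thesis's second, duality-based method can satisfy the same worst-case constraint with less transmit power.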
published_or_final_version
Electrical and Electronic Engineering
Master
Master of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
29

Ko, Chun-Hung. "Systems reliability analysis of bridge superstructures." Thesis, Queensland University of Technology, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
30

Pinheiro, Helder Fleury 1967. "The application of Trefftz-FLAME to electromagnetic wave problems /." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=115703.

Full text
Abstract:
Numerical analysis of the electromagnetic fields in large, complex structures is very challenging due to the high computational overhead. Recently, it has been shown that a new method called Trefftz-FLAME (Flexible Local Approximation MEthod) is suitable for problems where there exist a large number of similar structures.
This thesis develops Trefftz-FLAME in two areas. First, a novel 2D Trefftz-FLAME method incorporates the modal analysis and port boundary condition that are essential to an accurate calculation of reflection and transmission coefficients for photonic crystal devices. The new technique outperforms existing methods in both accuracy and computational cost.
The second area pertains to the 3D, vector problem of electromagnetic wave scattering by aggregates of identical dielectric particles. A methodology for the development of local basis functions is introduced, applicable to particles of any shape and composition. Boundary conditions on the surface of the finite FLAME domain are described, capable of representing the incident wave and absorbing the outgoing radiation. A series of problems involving dielectric spheres is solved to validate the new method. Comparison with exact solutions is possible in some cases and shows that the method is able to produce accurate near-field results even when the computational grid spacing is equal to the radius of the spheres.
APA, Harvard, Vancouver, ISO, and other styles
31

Chan, Julius Koi Wah. "Dynamics and control of an orbiting space platform based mobile flexible manipulator." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29466.

Full text
Abstract:
This paper presents a Lagrangian formulation for studying the dynamics and control of the proposed Space Station based Mobile Servicing System (MSS) for a particular case of in-plane libration and maneuvers. The simplified case is purposely considered to help focus on the effects of the structural and joint flexibility parameters of the MSS on the complex interactions between the station and manipulator dynamics during slewing and translational maneuvers. The response results suggest that under critical combinations of parameters, the system can become unstable. During maneuvers, the deflection of the MSS can become excessive, leading to positioning error of the payload; at the same time, the libration error can also be significant. A linear quadratic regulator is designed to control the deflection of the manipulator and maintain the station at its operating configuration.
Applied Science, Faculty of
Mechanical Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
32

Arunajatesan, Srinivasan. "Numerical modeling of waste incineration in dump combustors." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/12332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Barrows, Richard James. "Two Dimensional Finite Element Modeling of Swift Delta Soil Nail Wall by "ABAQUS"." PDXScholar, 1994. https://pdxscholar.library.pdx.edu/open_access_etds/4741.

Full text
Abstract:
Soil nail walls are a form of mechanical earth stabilization for cut situations. They consist of the introduction of passive inclusions (nails) into soil cut lifts. These nailed lifts are then tied together with a structural facing (usually shotcrete). The wall lifts are constructed incrementally from the top of the cut down. Soil nail walls are being recognized as having potential for large cost savings over other alternatives. The increasing need to provide high-capacity roadways in restricted rights of way under structures such as bridges will require increasing use of techniques such as combined soil nail and piling walls. The Swift Delta soil nail wall required installing nails between some of the existing pipe piling on the Oregon Slough Bridge. This raised questions of whether the piling would undergo internal stress changes due to the nail wall construction. Thus, it was considered necessary to understand the soil nail wall's structural interaction with the existing pile-supported abutment. The purpose of this study was to investigate the Swift Delta Wall using finite element (FE) modeling techniques. Valuable data were available from the instrumentation of the Swift Delta Wall. These data were compared with the results of the FE modeling. This study attempts to answer the following two questions: 1. Is there potential for the introduction of new bending stresses to the existing piling? 2. Is the soil nail wall system influenced by the presence of the piling? A general-purpose FE code called ABAQUS was used to perform both linear and non-linear analyses. The analyses showed that the piling definitely underwent some stress changes. In addition, they indicated that the piling's influence resulted in lower nail stresses. Comparison of measured data to predicted behavior showed good agreement in wall face deflection but inconsistent agreement in nail stresses.
This demonstrated the difficulty of modeling a soil nail due to the many variables resulting from nail installation.
APA, Harvard, Vancouver, ISO, and other styles
34

Mueller, Ralph. "Specification and Automatic Generation of Simulation Models with Applications in Semiconductor Manufacturing." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16147.

Full text
Abstract:
The creation of large-scale simulation models is a difficult and time-consuming task. Yet simulation is one of the techniques most frequently used by practitioners in Operations Research and Industrial Engineering, as it is less limited by modeling assumptions than many analytical methods. The effective generation of simulation models is an important challenge. Due to the rapid increase in computing power, it is possible to simulate significantly larger systems than in the past. However, the verification and validation of these large-scale simulations is typically a very challenging task. This thesis introduces a simulation framework that can generate a large variety of manufacturing simulation models. These models have to be described with a simulation data specification. This specification is then used to generate a simulation model which is described as a Petri net. This approach reduces the effort of model verification. The proposed Petri net data structure has extensions for time and token priorities. Since it builds on existing theory for classical Petri nets, it is possible to make certain assertions about the behavior of the generated simulation model. The elements of the proposed framework and the simulation execution mechanism are described in detail. Measures of complexity for simulation models that are built with the framework are also developed. The applicability of the framework to real-world systems is demonstrated by means of a semiconductor manufacturing system simulation model.
APA, Harvard, Vancouver, ISO, and other styles
35

Baig, Saood Saeed. "A simple moving boundary technique and its application to supersonic inlet starting /." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=112555.

Full text
Abstract:
In this thesis, a simple moving boundary technique has been suggested, implemented and verified. The technique may be considered as a generalization of the well-known "ghost" cell approach for boundary condition implementation. According to the proposed idea, the moving body does not appear on the computational grid and is allowed to move over the grid. The impermeable wall boundary condition is enforced by assigning proper gasdynamic values at the grid nodes located inside the moving body close to its boundaries (ghost nodes). The reflection principle taking into account the velocity of the boundaries assigns values at the ghost nodes. The new method does not impose any particular restrictions on the geometry, deformation and law of motion of the moving body.
The developed technique is rather general and can be used with virtually any finite-volume or finite-difference scheme, since the modifications of the schemes themselves are not required. In the present study the proposed technique has been incorporated into a one-dimensional non-adaptive Euler code and a two-dimensional locally adaptive unstructured Euler code.
It is shown that the new approach is conservative with the order of approximation near the moving boundaries. To reduce the conservation error, it is beneficial to use the method in conjunction with local grid adaptation.
The technique is verified for a number of one and two dimensional test cases with analytical solutions. It is applied to the problem of supersonic inlet starting via variable geometry approach. At first, a classical starting technique of changing exit area by a moving wedge is numerically simulated. Then, the feasibility of some novel ideas such as a collapsing frontal body and "tractor-rocket" are explored.
APA, Harvard, Vancouver, ISO, and other styles
36

Fu, Yan. "Modelling of ducted ventilation system in agricultural structures." Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=60519.

Full text
Abstract:
Air distribution ducts are used in the environmental control of livestock and poultry buildings as well as in the conditioning of most agricultural produce.
In order to simplify the approach to the design of ventilation ducts, a mathematical equation has been derived to describe the average air velocity of a duct.
The primary objective of the research work was to test the goodness of fit of an equation describing the average air velocity of perforated ventilation ducts, under balanced as well as unbalanced air distribution: $V = H_o \frac{X}{L} + (V_L - H_o) \frac{X^2}{L^2}$.
This equation was successfully tested using data measured from 14 ducts of constant cross-sectional area, built of wood or polyethylene with outlets of various shapes and aperture ratios. Results indicated that aperture ratio and distance along the duct are the two most significant factors influencing the average duct air velocity values, but material and outlet shape had little effect.
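As a quick sketch (not from the thesis; the parameter names $H_o$ and $V_L$ follow the equation above, and the numerical values are illustrative only), the fitted equation can be evaluated directly to see how it behaves at the duct ends:

```python
def duct_velocity(x, L, H_o, V_L):
    """Average air velocity at distance x along a perforated duct of length L,
    per the fitted model V = H_o*(x/L) + (V_L - H_o)*(x/L)**2."""
    r = x / L
    return H_o * r + (V_L - H_o) * r ** 2

# The fitted form gives zero at x = 0 and exactly V_L at x = L,
# regardless of the value of H_o.
print(duct_velocity(0.0, 10.0, H_o=2.5, V_L=4.0))   # 0.0
print(duct_velocity(10.0, 10.0, H_o=2.5, V_L=4.0))  # 4.0
```

The quadratic term is what lets the model capture the accelerating flow toward the fan end of the duct that the thesis attributes to aperture ratio and distance along the duct.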
APA, Harvard, Vancouver, ISO, and other styles
37

MacKinnon, Ian R. (Ian Roderick) 1964. "Air distribution from ventilation ducts." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=59655.

Full text
Abstract:
A wooden, perforated, uniform cross-section duct was examined to determine the optimum levels of aperture ratio and fan speed with respect to uniformity of discharge. The optimum aperture ratio for the 8.54 m long duct was 1.0, with a uniformity coefficient of 90.28%. The fan speed had little effect on the uniformity of discharge. The friction factor was experimentally determined to be 0.048 for a non-perforated duct, and this value was assumed to be the same for a perforated duct of similar construction. A kinetic energy correction factor was used to analyze the flow in the duct; values for this correction factor were determined from experimental data. Values of the coefficient of discharge and the total duct energy were calculated. A mathematical model was proposed based on the conservation of momentum and Bernoulli's equation. The model responded favourably: it predicted the duct velocity nearly perfectly and only slightly underestimated the total duct energy.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Qiang, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW. "A study of high performance twist drill design and the associated predictive force models." Awarded by: University of New South Wales. Mechanical and Manufacturing Engineering, 2007. http://handle.unsw.edu.au/1959.4/31220.

Full text
Abstract:
This thesis presents a detailed analysis of the plane rake faced drill design, its grinding method and grinding wheel geometry. A fundamental geometrical analysis has then been carried out on the major cutting edges of the modified drills according to the national and international standards. It has been shown that this new drill design results in a significant increase in the normal rake angle at the lips as well as point relieving at the chisel edge region. Geometrical models for the various drill point features have been established which uniquely define the drill point features of the modified drill design. A comprehensive experimental investigation has been carried out to study the drilling performance of the modified drills when drilling a high tensile steel, ASSAB 4340, with TiN coated high speed steel drills over a wide range of drilling conditions. Compared to the drilling performance of conventional twist drills under the corresponding conditions, it has been found that the modified drills can reduce the thrust force by as much as 46.9%, with an average of 23.8%; the reduction in drilling torque is also significant, with an average of 13.2% and a maximum of 24.9%. Similarly, the new drill design shows clear superiority over the conventional drills in terms of drill-life. In the drill-life tests, a few conventional drills broke, but all plane rake faced drills performed very well. In order to estimate the cutting performance in process planning on a mathematical and quantitative basis when drilling with the modified drills, predictive cutting force models have been developed based on the unified-generalized mechanics of cutting approach. The models have been assessed qualitatively and quantitatively and showed good agreement with the experimental thrust, torque and power. Empirical-type force equations have also been developed to provide simple alternatives for practical applications.
APA, Harvard, Vancouver, ISO, and other styles
39

Seepersad, Carolyn Conner. "A Robust Topological Preliminary Design Exploration Method with Materials Design Applications." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4868.

Full text
Abstract:
A paradigm shift is underway in which the classical materials selection approach in engineering design is being replaced by the design of material structure and processing paths on a hierarchy of length scales for specific multifunctional performance requirements. In this dissertation, the focus is on designing mesoscopic material and product topology, the geometric arrangement of solid phases and voids on length scales larger than microstructures but smaller than the characteristic dimensions of an overall product. Increasingly, manufacturing, rapid prototyping, and materials processing techniques facilitate tailoring topology with high levels of detail. Fully leveraging these capabilities requires not only computational models but also a systematic, efficient design method for exploring, refining, and evaluating product and material topology and other design parameters for targeted multifunctional performance that is robust with respect to potential manufacturing, design, and operating variations. In this dissertation, the Robust Topological Preliminary Design Exploration Method is presented for designing complex multi-scale products and materials by topologically and parametrically tailoring them for multifunctional performance that is superior to that of standard designs and less sensitive to variations. A comprehensive robust design method is established for topology design applications. It includes computational techniques, guidelines, and a multiobjective decision formulation for evaluating and minimizing the impact of topological and parametric variation on the performance of a preliminary topological design. A method is also established for multifunctional topology design, including thermal topology design techniques and multi-stage, distributed design methods for designing preliminary topologies with built-in flexibility for subsequent modification for enhanced performance in secondary functional domains.
Key aspects of the approach are demonstrated by designing linear cellular alloys, ordered metallic cellular materials with extended prismatic cells, in three applications. Heat exchangers are designed with increased heat dissipation and structural load bearing capabilities relative to conventional heat sinks for microprocessor applications. Cellular materials are designed with structural properties that are robust to dimensional and topological imperfections such as missing cell walls. Finally, combustor liners are designed to increase operating temperatures and efficiencies and reduce harmful emissions for next-generation turbine engines via active cooling and load bearing within topologically and parametrically customized cellular materials.
APA, Harvard, Vancouver, ISO, and other styles
40

Zaerr, Jon Benjamin 1963. "Development and evaluation of a dynamic phantom using four independently perfused in vitro kidneys as a tool for investigating hyperthermia systems." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/291341.

Full text
Abstract:
A dynamic phantom for use in investigating hyperthermia heating systems has been designed, constructed, and tested. A computer controlled the flow rate of 80% Ethanol to each of 4 preserved in vitro canine kidneys which acted as the phantom material. The flow rates were regulated with stepper motor controlled valves and measured with flow meters by the computer. This provided a flexible system for adjusting the perfusion as desired. The system was tested with step and ramp changes in perfusion under constant power ultrasound and with a temperature controlled perfusion algorithm, all of which yielded repeatable results. The dynamic phantom developed in this work shows potential for expediting investigations of hyperthermia controllers, temporal blood flow patterns, and inverse problems. Its computer based nature gives it great flexibility which would lend itself well to automated testing procedures.
APA, Harvard, Vancouver, ISO, and other styles
41

Bishop, Gregory Raymond H. ""On stochastic modelling of very large scale integrated circuits : an investigation into the timing behaviour of microelectronic systems" /." Title page, contents and abstract only, 1993. http://web4.library.adelaide.edu.au/theses/09PH/09phb6222.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Xinghua, and 刘兴华. "Power system operation integrating clean energy and environmental considerations." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43085866.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Tome, Leo D. "The development of an integrated effectiveness model for aerial targets." Thesis, Stellenbosch : University of Stellenbosch, 2007. http://hdl.handle.net/10019.1/2373.

Full text
Abstract:
Thesis (MScEng (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2007.
During the design or acquisition of missile systems the effectiveness of the system needs to be evaluated. Often actual testing is not possible, and therefore mathematical models need to be constructed and solved with the aid of software. The current simulation model is investigated and verified, and a mathematical model to aid in the design of the detonic payload is developed. The problem is confined to the end-game scenario, with the developed simulation model focusing on the last milliseconds before warhead detonation. The model, which makes use of the raytracing methodology, models the warhead explosion in the vicinity of a target and calculates the probability of kill for the specific warhead design against the target. Using the data generated by the simulation model, the warhead designer can make the necessary design changes to improve the design. A heuristic method that assists in this design process was developed and is discussed. There is, however, a large population of possible designs. Meta-heuristic methods may be employed to reduce this population and help confine the manual search to a considerably smaller search area. A fuze detection model, as well as the capability to generate truly random intercept scenarios, was developed to enable the employment of meta-heuristic search methods. The simulation model, as well as the design optimising technology, has successfully been incorporated into a Windows based software package known as EVA (The Effectiveness and Vulnerability Analyser).
APA, Harvard, Vancouver, ISO, and other styles
44

Dickinson, Alex. "Complexity management and modelling of VLSI systems." Title page, contents and abstract only, 1988. http://web4.library.adelaide.edu.au/theses/09PH/09phd553.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Johnson, Pamela Christine. "Bicycle Level of Service: Where are the Gaps in Bicycle Flow Measures?" PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1975.

Full text
Abstract:
Bicycle use is increasing in many parts of the U.S. Local and regional governments have set ambitious bicycle mode share goals as part of their strategy to curb greenhouse gas emissions and relieve traffic congestion. In particular, Portland, Oregon has set a 25% mode share goal for 2030 (PBOT 2010); currently, bicycle mode share in Portland is 6.1% of all trips. Other cities and regional planning organizations are also setting ambitious bicycle mode share goals and expanding bicycle facilities and programs to encourage bicycling. However, cities with higher-than-average bicycle mode share are beginning to experience locations with bicycle traffic congestion, especially during peak commute hours. Today, no established methods are used to describe or measure bicycle traffic flows. In the 1960s, the Highway Capacity Manual (HCM) introduced Level of Service (LOS) measurements to describe traffic flow and capacity of motor vehicles on highways using an A-to-F grading system; "A" describes free-flow traffic with no maneuvering constraints for the driver, and an "F" grade corresponds to over-capacity situations in which traffic flow breaks down or becomes "jammed". LOS metrics were expanded to highway and road facilities, operations and design. In the 1990s, the HCM introduced LOS measurements for transit, pedestrians, and bicycles. Today, there are many well-established and emerging bicycle level of service (BLOS) methods that measure the stress, comfort and perception of safety of bicycle facilities. However, it has been assumed that bicycle traffic volumes are low and do not warrant the use of a LOS measure for bicycle capacity and traffic flow. Few BLOS methods take bicycle flow into consideration, except in the case of separated bicycle and bicycle-pedestrian paths. This thesis investigated the state of BLOS capacity methods that use bicycle volumes as a variable.
The existing methods were applied to bicycle facility elements along a corridor that experiences high bicycle volumes in Portland, Oregon. Using data from the study corridor, BLOS was calculated and a sensitivity analysis was applied to each of the methods to determine how sensitive the models are to each of the variables used. An intercept survey was conducted to compare the BLOS capacity scores calculated for the corridor with the users' perception. In addition, 2030 bicycle mode share for the study corridor was estimated and the implications of increased future bicycle congestion were discussed. Gaps in the BLOS methods, limitations of the thesis study and future research were summarized. In general, the existing methods for BLOS capacity are intended for separated paths; they are not appropriate for existing high traffic flow facilities. Most of the BLOS traffic flow methods that have been developed are most sensitive to bicycle volumes. Some of these models may be a good starting point to improve BLOS capacity and traffic flow measures for high bicycle volume locations. Without the tools to measure and evaluate the patterns of bicycle capacity and traffic flow, it will be difficult to monitor and mitigate bicycle congestion and to plan for efficient bicycle facilities in the future. This report concludes that it is now time to develop new BLOS capacity measures that address bicycle traffic flow.
APA, Harvard, Vancouver, ISO, and other styles
46

Rallabhandi, Sriram Kishore. "Sonic Boom Minimization through Vehicle Shape Optimization and Probabilistic Acoustic Propagation." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6937.

Full text
Abstract:
Sonic boom annoyance is an important technical showstopper for commercial supersonic aircraft operations. It has been proposed that aircraft can be shaped to alleviate sonic boom. Choosing the right aircraft shape reflecting the design requirements is a fundamental and critically important step that is usually oversimplified in the conceptual stages of design by resorting to a qualitative selection of a baseline configuration based on historical designs and the designer's perspective. Final aircraft designs are attempted through minor shape modifications to this baseline configuration. This procedure may not yield large improvements in the objectives, especially when the baseline is chosen without a rigorous analysis procedure. Traditional analyses and implementations tend to have a complex algorithmic flow, tight coupling between the tools used, and computational limitations. Some of these shortcomings are overcome in this study, and a diverse mix of tools is seamlessly integrated to provide a simple, yet powerful and automatic, procedure for sonic boom minimization. A shape optimization procedure for supersonic aircraft design using better geometry generation and improved analysis tools has been successfully demonstrated. The geometry engine provides dynamic reconfiguration and efficient manipulation of various components to yield unstructured watertight geometries. The architecture supports an assimilation of different components and allows configuration changes to be made quickly and efficiently because changes are localized to each component. It also enables an automatic way to combine linear and non-linear analysis tools. It has been shown in this study that varying atmospheric conditions can have a huge impact on the sonic boom annoyance metrics, and a quick way of obtaining probability estimates of the relevant metrics was demonstrated.
The well-accepted theoretical sonic boom minimization equations are generalized to a new form and the relevant equations are derived to yield increased flexibility in aircraft design process. Optimum aircraft shapes are obtained in the conceptual design stages weighing in various conflicting objectives. The unique shape optimization procedure in conjunction with parallel genetic algorithms improves the computational time of the analysis and allows quick exploration of the vast design space. The salient features of the final designs are explained. Future research recommendations are made.
APA, Harvard, Vancouver, ISO, and other styles
47

Willis, Craig Robert. "Design of unreinforced masonry walls for out-of-plane loading / Craig Robert Willis." 2004. http://hdl.handle.net/2440/22133.

Full text
Abstract:
"November 2004"
Bibliography: p.167-179.
xi, 333 p. : ill., photos (col.) ; 30 cm.
Title page, contents and abstract only. The complete thesis in print form is available from the University Library.
Focuses on behavioural models of masonry walls with a view to improving their accuracy and extending their application. Results include a numerical model and mathematical expressions capable of predicting the key stages of the non-linear load-deflection behaviour of walls subjected to vertical bending and axial loading; new mathematical expressions for horizontal and diagonal bending moment capacities that are dimensionally consistent and account for the beneficial effects of compressive stress; and experimental test data for masonry sections subjected to horizontal and diagonal bending, which were used in the development and verification of the new mathematical expressions.
Thesis (Ph.D.)--University of Adelaide, School of Civil and Environmental Engineering, 2004
APA, Harvard, Vancouver, ISO, and other styles
48

Wangcharoenrung, Chayawee. "Development of adaptive transducer based on biological sensory mechanism." Thesis, 2005. http://hdl.handle.net/2152/1718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lingam, Naga Sasidhar. "Low power design techniques for high speed pipelined ADCs." Thesis, 2009. http://hdl.handle.net/1957/10294.

Full text
Abstract:
The real world is analog, but the processing of signals is best done in the digital domain, so the need for Analog to Digital Converters (ADCs) keeps rising as more and more applications emerge. With the advent of mobile technology, the power consumption of electronic equipment is being driven down to extend battery life. Because of their ubiquitous nature, ADCs are prime blocks in the signal chain in which power reduction is sought. In this thesis, four techniques to reduce power in high-speed pipelined ADCs are proposed. The first is a capacitor and opamp sharing technique that reduces the load on the first-stage opamp threefold. The second is a capacitor reset technique that allows the sample-and-hold block to be removed to save power. The third is a modified MDAC that can accept a rail-to-rail input swing to resolve an extra bit, eliminating a power-hungry opamp. The fourth is a hybrid architecture that uses an asynchronous SAR ADC as the backend of a pipelined ADC to save power. Measurement and simulation results that demonstrate the efficiency of the proposed techniques are presented.
Graduation date: 2009
APA, Harvard, Vancouver, ISO, and other styles
50

Hardas, Chinmaya S. "Component placement sequence optimization in printed circuit board assembly using genetic algorithms." Thesis, 2003. http://hdl.handle.net/1957/30048.

Full text
Abstract:
Over the last two decades, the assembly of printed circuit boards (PCBs) has generated a huge amount of industrial activity. One of the major developments in PCB assembly was the introduction of surface mount technology (SMT). SMT has displaced through-hole technology as the primary means of assembling PCBs over the last decade. It has also made it easy to automate the PCB assembly process. The component placement machine is probably the most important piece of manufacturing equipment on a surface mount assembly line. It is used for placing components reliably and accurately enough to meet throughput requirements in a cost-effective manner. Apart from being the most expensive equipment on the PCB manufacturing line, it is also often the bottleneck. There are quite a few areas for improvement on the machine, one of them being component placement sequencing. With the number of components placed on a PCB ranging into the hundreds, a placement sequence that requires near-minimum motion of the placement head can help optimize throughput rates. This research develops an application using a genetic algorithm (GA) to solve the component placement sequencing problem for a single-headed placement machine. Six different methods were employed, and the effects of two parameters critical to the execution of a GA were explored at different levels. The results obtained show that one of the methods performs significantly better than the others. The application developed in this research can also be modified in accordance with the problems or machines seen in industry to optimize throughput rates.
Graduation date: 2004
APA, Harvard, Vancouver, ISO, and other styles
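The single-head placement sequencing problem the abstract above describes is essentially a traveling-salesman-style ordering of pad coordinates to minimize head travel. As a rough illustration of the GA approach (not a reconstruction of the thesis's six methods), here is a minimal sketch using order crossover and swap mutation; all function names, coordinates, and parameter values are illustrative:

```python
import random

def tour_length(points, order):
    """Total straight-line head travel for a placement sequence."""
    return sum(
        ((points[order[i]][0] - points[order[i - 1]][0]) ** 2 +
         (points[order[i]][1] - points[order[i - 1]][1]) ** 2) ** 0.5
        for i in range(1, len(order))
    )

def order_crossover(p1, p2):
    """OX: copy a slice from parent 1, fill the rest in parent 2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    middle = p1[a:b]
    rest = [g for g in p2 if g not in middle]
    return rest[:a] + middle + rest[a:]

def ga_sequence(points, pop_size=40, generations=200, mutation_rate=0.2):
    """Evolve a placement order that roughly minimizes head travel."""
    n = len(points)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: tour_length(points, o))
        next_pop = pop[:pop_size // 5]           # elitism: keep the best fifth
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)
            child = order_crossover(p1, p2)
            if random.random() < mutation_rate:  # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=lambda o: tour_length(points, o))

random.seed(7)
pads = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(12)]
best = ga_sequence(pads)  # a permutation of pad indices
```

Real placement machines add feeder pickup trips and nozzle changes to the cost model, but the permutation encoding and fitness-driven selection shown here carry over directly.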
