Dissertations / Theses on the topic 'Design Space Analysis'

Consult the top 50 dissertations / theses for your research on the topic 'Design Space Analysis.'

1

Puri, Manpreet Singh. "Design and analysis of inflatable space structures." Thesis, University of Strathclyde, 2016. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=28497.

Full text
Abstract:
This thesis presents a conceptual study of the inflation of inflatable membrane space structures. There has been little study using software simulation; the majority of documented research is based on theoretical numerical calculations. This research advanced the prior understanding of wrinkling within inflated membranes by using complex structures subjected to deformation loads. Within this thesis, a computational framework is presented for the numerical analysis of the interaction between the forces acting on the membrane and the membrane's structural dynamics. Moreover, in the case of thin-membrane deformations, the synergy between membrane wrinkling and structural forces has to be examined. This correlation between the membrane structure and the acting forces results in a dynamic wrinkling problem, which can only be modelled easily and effectively by simulation software that can integrate each assumption and attribute within the analysis. In the structural simulation within the Abaqus FEA software, key consideration has to be given to modelling the geometrically non-linear behaviour of the membrane. The existing continuum expression for the virtual internal work in curvilinear coordinates is used to derive the modified fundamental formulation on which all subsequent analysis, and the initial equilibrium shape of the membrane, is established. A critical feature of the new formulation is the prospect of adding pre-stress forces to the membrane structure. The approach developed, based on an alteration of the material stiffness matrix to integrate the effects of wrinkling and deformation, can be utilized to calculate the behaviour of the membrane within a finite element simulation. In the wrinkling model, the state of the membrane element (taut, wrinkled or slack) is characterized by a mixed wrinkling criterion.
Once a membrane element has been identified as wrinkled, an iterative scheme finds the wrinkle orientation angle, and the precise stress distribution, containing only uni-axial tension in the wrinkle direction, is then derived. The wrinkling model has been verified and validated by contrasting the simulated results with documented results for the case of a time-independent isotropic membrane subjected to shear and axial loading. Utilizing the time integration method, a time-dependent pseudo-elastic stiffness matrix was represented; therefore, rather than calculating the convolution integral throughout the Abaqus simulation, the behaviour of a membrane structure can be calculated by superposition of a series of step-by-step increments in basic finite element software. The theoretical computations from the Abaqus/Explicit analysis were compared with documented results for shear and axial loading. The results agreed very well, assuming friction and any relativistic dynamic effects were excluded. The discrepancy between the Abaqus model and the documented model is 7% for the shear loading and only 5% for the axial loading. This discrepancy could result from energy dissipation due to visco-elastic behaviour during the loading and unloading of forces. For the Kapton HN membrane this result falls within an acceptable range, but to increase accuracy, the loading and unloading will be carried out at a steady amplitude to inhibit shock effects within the model. A three-dimensional finite element model which integrates wrinkling and frictionless contact has been developed to simulate the adaptive smart cell and cylindrical membrane structure. The loading of both structures is given by a non-uniform differential inflation pressure with a continuous gradient along the height.
The resultant solutions are computed using the Abaqus/Explicit software, with an integrated user-defined material subroutine to account for elastic wrinkling deformation that administers a combined stress-strain criterion. Frictionless contact within the membrane structure is prescribed for both complex structures (the Adaptive Smart Structures Model and the Inflatable Beam Model) in order to prevent penetration of the membrane structure through itself. Both complex inflatable membrane wrinkling models achieve excellent subgrid-scale performance in relation to accuracy, capability, computing hardware and software expense, complexity and model convergence rate. The numerical algorithm is created in a general context and is flexible for a large variety of material models. For a closed membrane structure, the skew-symmetric constraint parameters vanish, while the existing symmetric domain variables reflect the conservation of the system. This procedure does not demand discretization of the fluid (gas) domain or coupling between the fluid (gas) and the membrane; as a result, the computation is drastically simplified. The adaptive structures model introduces a novel approach to harvesting solar power for reuse on the ground as a stable source of power. The simulations were based on the space segment of the stiff structure composed of hexagonal membrane cells. Simulations are carried out in the Abaqus finite element analysis software for simplicity, and for validation purposes a comparison is made against an experimental inflatable cell within a vacuum chamber. It was shown that the final configuration could be achieved regardless of the packaging shape of the inflatable cell array. The inflatable beam model comprises two sections: the bending and buckling of the inflated beam, and the post-inflation behaviour of the bent and buckled beam.
Abaqus software was used to simulate the inflatable beam in each configuration, integrating a modified VUMAT subroutine. A comparison is presented demonstrating the importance of integrating the VUMAT subroutine within the Abaqus model.
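The mixed stress-strain wrinkling criterion described in this abstract, which classifies each membrane element as taut, wrinkled, or slack, can be sketched in a few lines. This is a generic Roddeman-type criterion written as an illustration; the function name, sign conventions, and inputs are assumptions, not the thesis's actual VUMAT implementation.

```python
def membrane_state(sigma1, sigma2, eps1):
    """Classify a membrane element with a mixed stress-strain
    wrinkling criterion (Roddeman-type sketch):
      taut     -- the minor principal stress is tensile
      slack    -- the major principal strain is non-positive
      wrinkled -- otherwise (uniaxial tension along the wrinkles)
    sigma1 >= sigma2 are the principal stresses; eps1 is the
    major principal strain."""
    if sigma2 > 0.0:
        return "taut"
    if eps1 <= 0.0:
        return "slack"
    return "wrinkled"
```

In a finite element loop, a "wrinkled" result would trigger the iterative search for the wrinkle orientation angle and the uniaxial stress state described above.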
APA, Harvard, Vancouver, ISO, and other styles
2

Faedi, Alberto. "Design and Analysis of Optical Links for Space Communications." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22859/.

Full text
Abstract:
Study and presentation of optical links in the deep-space environment. Implementation of C++ simulators for two different modulation and demodulation techniques. Study of the link budget between a relay satellite in orbit around the Earth and a satellite in orbit around solar-system planets (Venus, Mars, Uranus and Neptune). Study and research on the pointing problem for optical deep-space links.
3

Lacroix, Frédéric 1973. "Design, analysis and implementation of free-space optical interconnects." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=38072.

Full text
Abstract:
Optical interconnects represent an attractive alternative technology for the implementation of dense, high-speed interconnects, as they do not suffer from many of the problems plaguing electrical interconnects such as frequency-dependent crosstalk and attenuation.
However, optics has still not been accepted commercially as an interconnect technology. There is concern regarding the cost and complexity of the optomechanics needed to achieve the very fine alignments necessary to guarantee that the light emitted from the source actually falls on the receiver. The demonstration of a simple-to-assemble, dense and robust optical interconnect would constitute an important proof of the practicality of this technology. The photonic backplane demonstrator system presented in this thesis addresses these issues through a novel approach; the system uses slow Gaussian beams (f/16) and a clustered design to maximize misalignment tolerances. This in turn relaxes the positioning and packaging requirements for the components, thus simplifying assembly.
This thesis pursues two sets of complementary goals; the first set is concerned with the demonstration of some desirable optomechanical characteristics for optical interconnects such as passive alignment, repeatability and stability while the second set of goals is concerned with a verification of hypotheses often used in the design and implementation of optical interconnects. Such hypotheses are often used in practice to design optical interconnects despite the fact that little data exists in the literature to warrant their use. It therefore makes good sense to spend some time verifying the accuracy of these models. This will provide a solid engineering foundation for the design of future systems.
4

Rising, John M. (John Michael). "Safety-guided design & analysis of space launch vehicles." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118525.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 103-108).
The advent of commercial launch systems has brought about a new age of space launch vehicle design. In order to survive in a competitive market, space launch providers must design systems with new technologies in shorter development times. This changing nature of space launch vehicle design requires a new way to perform safety analysis. Traditional hazard analysis techniques do not deliver adequate insight early in the design process, when most of the safety-related decisions are made. Early design decisions are often made using "lessons learned" from previous launch systems, rather than interactive feedback from the new vehicle design actually being developed. Furthermore, traditional techniques use reliability theory as their foundation, resulting in the use of excessive design margin and redundancy as the "default" vehicle design choices. This conflation of safety with reliability may have made sense for simpler launch vehicles of the past, but most modern space launch vehicle accidents have resulted from incorrect software specifications, component interaction accidents, and other design errors independent of the reliability of individual components. The space launch industry needs safety analysis methods and design processes that identify and correct these hazards early in the vehicle design process, when modifications to correct safety issues are more effective and less costly. This work shows how Systems-Theoretic Process Analysis (STPA) can be used as a powerful tool to identify, mitigate, and possibly eliminate hazards throughout the entire space launch vehicle lifecycle. This work begins by reviewing traditional hazard analysis techniques and the changing nature of launch vehicle accidents. Next, it describes how STPA can be integrated into the space launch vehicle lifecycle to design safer systems. It then demonstrates the safety-guided design of a small-lift launch vehicle using STPA.
Finally, this work shows how STPA can be used to satisfy regulatory and range safety requirements. The thesis of this work is that integration of STPA into the design of space launch vehicles can make a significant contribution to reducing launch vehicle accidents.
by John M. Rising.
S.M. in Engineering and Management
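One STPA step mentioned above, the systematic identification of unsafe control actions, can be illustrated with a minimal bookkeeping sketch. The control action, context strings, and guideword phrasing below are invented examples for illustration, not the thesis's actual analysis.

```python
# STPA guidewords for how a control action can be unsafe
# (phrasing abbreviated for this sketch).
GUIDEWORDS = ["not provided", "provided",
              "too early/too late", "stopped too soon/applied too long"]

def enumerate_ucas(control_action, contexts):
    """Cross each guideword with each system context to produce
    candidate Unsafe Control Actions (UCAs); an analyst then keeps
    only the combinations that lead to a system-level hazard."""
    return [(control_action, g, c)
            for g in GUIDEWORDS for c in contexts]

ucas = enumerate_ucas("engine shutdown command",
                      ["during ascent", "after orbital insertion"])
```

The value of the method lies in the analyst's hazard judgement applied to each candidate, not in the enumeration itself; the sketch only shows how the candidate table is built.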
5

Moss, Vaughan. "Thermal design and analysis of the SKA SA MeerKAT Digitiser." Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29782.

Full text
Abstract:
The Square Kilometre Array Project is a multi-national venture attempting to build the world's largest radio telescope. Australia and South Africa (together with other African countries) will host the SKA sites. Both countries are building precursor radio telescopes to demonstrate their ability to successfully host the project. Square Kilometre Array South Africa (SKA SA) is currently constructing the MeerKAT Radio Telescope in the Karoo Desert. Radio telescopes are conventionally designed with the signal Digitiser located in the pedestal of the antenna structure, to shield the incoming radio signal from contamination by the electromagnetic interference (EMI)/radio frequency interference (RFI) noise created by the Digitiser electronics. However, if a Digitiser could be placed near the antenna feed, this would decrease the length of the signal path between the receiver and the Digitiser, which would decrease noise on the signal. The aim of this thesis is to present a viable thermal design for an externally, near-feed mounted, passively cooled Digitiser on the MeerKAT Radio Telescope. This has never been done before. Through calculation, simulation and design iteration this aim was achieved, resulting in an operational Digitiser system which is being used on the MeerKAT Radio Telescope and could potentially also be used in SKA Phase 1.
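A first-cut sizing check of the kind that underlies a passive thermal design can be sketched with a simple radiative balance. This radiation-only model and its numbers are illustrative assumptions; it ignores convection and solar loading, both of which matter for an externally mounted unit in air, and it is not the thesis's actual thermal model.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_temperature(q_watts, area_m2, emissivity,
                         t_ambient_k=300.0):
    """Steady-state temperature of a passive radiator rejecting
    q_watts to surroundings at t_ambient_k, from the balance
    q = eps * SIGMA * A * (T^4 - T_amb^4)."""
    t4 = q_watts / (emissivity * SIGMA * area_m2) + t_ambient_k ** 4
    return t4 ** 0.25

# Example: 100 W dissipated over 1 m^2 of high-emissivity surface.
t_hot = radiator_temperature(100.0, 1.0, 0.9)
```

Checks like this bound the achievable electronics temperature before detailed simulation and design iteration take over.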
6

Ball, Jeffrey Craig. "Design and analysis of multifunctional composite structures for nano-satellites." Thesis, Cape Peninsula University of Technology, 2017. http://hdl.handle.net/20.500.11838/2572.

Full text
Abstract:
Thesis (MTech (Mechanical Engineering))--Cape Peninsula University of Technology, 2017.
The aim of this thesis is to investigate the applications of multifunctional composite (MFC) technology to nano-satellite structures and to produce a working concept design which can be implemented on future Cube-Satellites (CubeSats). MFC technologies can be used to optimise the performance of the satellite structure in terms of mass, volume and the protection it provides. The optimisation of the structure will allow further room for other sub-systems to be expanded and a greater payload allowance. An extensive literature review of existing applications of MFC materials has been conducted, along with the analysis of an MFC CubeSat structural design accounting for the environmental conditions in space and well-known design practices used in the space industry. Numerical analysis data has been supported by empirical analysis that was done where possible on the concept material and structure. The findings indicate that the MFC technology shows an improvement over the conventional aluminium structures that are currently being used. Improvements in rigidity, mass and internal volume were observed. Additional functions that the MFC structure offers include electrical circuitry and connections through the material itself, as well as increased electromagnetic shielding capability through the use of carbon-fibre composite materials. Empirical data collected on the MFC samples also show good support for the numerical analysis results. The main conclusion to be drawn from this work is that multifunctional composite materials can indeed be used for nano-satellite structures and, in the same light, can be tailor-made to the specific mission requirements of the satellite. The technology is still in its infancy and has vast room for improvement and technological development beyond this work and well into the future. Further improvements and additional functions can be added through the inclusion of various other materials.
7

Brathwaite, Joy Danielle. "Value-informed space systems design and acquisition." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/43748.

Full text
Abstract:
Investments in space systems are substantial, indivisible, and irreversible, characteristics that make them high-risk, especially when coupled with an uncertain demand environment. Traditional approaches to system design and acquisition, derived from a performance- or cost-centric mindset, incorporate little information about the spacecraft in relation to its environment and its value to its stakeholders. These traditional approaches, while appropriate in stable environments, are ill-suited for the current, distinctly uncertain and rapidly changing technical and economic conditions; as such, they have to be revisited and adapted to the present context. This thesis proposes that in uncertain environments, decision-making with respect to space system design and acquisition should be value-based, or at a minimum value-informed. This research advances the value-centric paradigm by providing the theoretical basis, foundational frameworks, and supporting analytical tools for value assessment of priced and unpriced space systems. For priced systems, stochastic models of the market environment and financial models of stakeholder preferences are developed and integrated with a spacecraft-sizing tool to assess the system's net present value. The analytical framework is applied to a case study of a communications satellite, with market, financial, and technical data obtained from the satellite operator, Intelsat. The case study investigates the implications of the value-centric versus the cost-centric design and acquisition choices. Results identify the ways in which value-optimal spacecraft design choices are contingent on both technical and market conditions, and show that larger spacecraft, for example, which reap economies-of-scale benefits reflected in their decreasing cost-per-transponder, are not always the best (most valuable) choices.
Market conditions and technical constraints for which convergence occurs between design choices under a cost-centric and a value-centric approach are identified and discussed. In addition, an innovative approach for characterizing value uncertainty through partial moments, a technique used in finance, is adapted to an engineering context and applied to priced space systems. Partial moments disaggregate uncertainty into upside potential and downside risk, and as such, they provide the decision-maker with additional insights for value-uncertainty management in design and acquisition. For unpriced space systems, this research first posits that their value derives from, and can be assessed through, the value of information they provide. To this effect, a Bayesian framework is created to assess system value in which the system is viewed as an information provider and the stakeholder an information recipient. Information has value to stakeholders as it changes their rational beliefs enabling them to yield higher expected pay-offs. Based on this marginal increase in expected pay-offs, a new metric, Value-of-Design (VoD), is introduced to quantify the unpriced system's value. The Bayesian framework is applied to the case of an Earth Science satellite that provides hurricane information to oil rig operators using nested Monte Carlo modeling and simulation. Probability models of stakeholders' beliefs, and economic models of pay-offs are developed and integrated with a spacecraft payload generation tool. The case study investigates the information value generated by each payload, with results pointing to clusters of payload instruments that yielded higher information value, and minimum information thresholds below which it is difficult to justify the acquisition of the system. 
In addition, an analytical decision tool, probabilistic Pareto fronts, is developed in the Cost-VoD trade space to provide the decision-maker with additional insights into the coupling of a system's probable value generation and its associated cost risk.
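The partial moments used above to disaggregate value uncertainty into upside potential and downside risk can be sketched directly; the sample outcomes and target in the example are illustrative, not drawn from the Intelsat case study.

```python
def partial_moments(values, target, order=2):
    """Upper and lower partial moments of sampled value outcomes
    about a target: the upper partial moment aggregates upside
    potential (outcomes above the target), the lower partial
    moment aggregates downside risk (outcomes below it)."""
    n = len(values)
    upm = sum(max(v - target, 0.0) ** order for v in values) / n
    lpm = sum(max(target - v, 0.0) ** order for v in values) / n
    return upm, lpm
```

With order=1 these reduce to mean excess and mean shortfall; order=2 penalizes large deviations more heavily, which is why the second-order lower partial moment is a common downside-risk measure in finance.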
8

Trisno, Sugianto. "Design and analysis of advanced free space optical communication systems." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3400.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
9

Babilis, Lambros. "Micro-Formian for the analysis and design of space frames." Thesis, University of Surrey, 1988. http://epubs.surrey.ac.uk/842973/.

Full text
Abstract:
The analysis and design of a triactive frame involves the generation of a large amount of information describing the structure, in the form of numerical data. The generation of the required data is time-consuming and prone to error. The present work attempts to offer a convenient computer-assisted method for automated data generation. Micro-Formian, the computer program written for this purpose, is based on the ideas of formex algebra, which have been expanded in order to make better use of the available technology. The main features of Micro-Formian are its user-friendliness, its portability and its power in the generation of data. The user-friendliness is achieved by an on-screen manual and a continuous dialogue with the user. The data generation procedures are based on the principles of formex algebra and the concept of geometric potential. The portability is achieved by using ANSI standard Pascal and by expressing all the hardware-dependent variables in a parametric form. The research is, in essence, an investigation into the philosophy behind computer-aided structural analysis; it examines the practicality of an interactive computing system for the automatic handling of structural design. A new technique related to mesh generation is implemented in an attempt to investigate the degree of optimization that can be achieved through a semi-automated procedure.
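The flavour of formex-algebra data generation can be illustrated with a minimal replication function, loosely modelled on formex algebra's rin (replication) operation; the name, signature, and 2D coordinate representation here are illustrative assumptions, not Micro-Formian's actual interface.

```python
def rin(elements, count, dx, dy):
    """Replicate a set of line elements 'count' times with a
    translation step (dx, dy), producing member connectivity data
    in the spirit of formex algebra's replication functions.
    Each element is a pair of (x, y) node coordinates."""
    out = []
    for k in range(count):
        for (x1, y1), (x2, y2) in elements:
            out.append(((x1 + k * dx, y1 + k * dy),
                        (x2 + k * dx, y2 + k * dy)))
    return out

# A single horizontal member replicated into a run of four:
members = rin([((0, 0), (1, 0))], 4, 1, 0)
```

Composing a handful of such operations generates the member data for large regular space frames from a few primitive elements, which is exactly the tedium-and-error problem the abstract describes.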
10

Nakanishi, Hideyuki. "Design and Analysis of Social Interaction in Virtual Meeting Space." 京都大学 (Kyoto University), 2001. http://hdl.handle.net/2433/150615.

Full text
Abstract:
The full-text data is a PDF converted from image files produced through the National Diet Library's FY2010 digitization of doctoral dissertations.
Kyoto University (京都大学)
0048
New-system doctoral programme
Doctor of Informatics (博士(情報学))
甲第9064号
情博第35号
新制||情||8 (University Library)
UT51-2001-F394
Department of Social Informatics, Graduate School of Informatics, Kyoto University
(Chief examiner) Professor Toru Ishida (石田 亨), Professor Haruo Hayashi (林 春男), Professor Tetsuro Sakai (酒井 徹朗)
Eligible under Article 4, Paragraph 1 of the Degree Regulations
11

Suo, Xiaoyuan. "A Design and Analysis of Graphical Password." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_theses/27.

Full text
Abstract:
The most common computer authentication method is to use alphanumerical usernames and passwords. This method has been shown to have significant drawbacks. For example, users tend to pick passwords that can be easily guessed. On the other hand, if a password is hard to guess, then it is often hard to remember. To address this problem, some researchers have developed authentication methods that use pictures as passwords. In this paper, I conduct a comprehensive survey of the existing graphical password techniques. I classify these techniques into two categories: recognition-based and recall-based approaches. I discuss the strengths and limitations of each method and point out the future research directions in this area. I also developed three new techniques that address common problems existing in present graphical password techniques. In this thesis, the scheme of each new technique is proposed, the advantages of each technique are discussed, and future work is outlined.
12

Arney, Dale Curtis. "Rule-based graph theory to enable exploration of the space system architecture design space." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44840.

Full text
Abstract:
NASA's current plans for human spaceflight include an evolutionary series of missions based on a steady increase in capability to explore cis-lunar space, the Moon, near-Earth asteroids, and eventually Mars. Although the system architecture definition has the greatest impact on the eventual performance and cost of an exploration program, selecting an optimal architecture is a difficult task due to the lack of methods to adequately explore the architecture design space and the resource-intensive nature of architecture analysis. This research presents a modeling framework to mathematically represent and analyze the space system architecture design space using graph theory. The framework enables rapid exploration of the design space without the need to limit trade options or the need for user interaction during the exploration process. The architecture design space, which includes staging locations, vehicle implementation, and system functionality for each mission destination, is explored for three missions in a notional evolutionary exploration program. Using the relative net present value of various system architecture options, the design space exploration reveals that launch vehicle selection is the primary driver in reducing cost, and that other options, such as propellant type, staging location, and aggregation strategy, have less impact.
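The idea of rule-based exhaustive exploration can be illustrated with a simplified combinatorial stand-in for the graph-based framework; the option sets and the single compatibility rule below are invented for illustration and are not the thesis's actual trade space.

```python
from itertools import product

# Illustrative architecture options (invented, not from the thesis).
options = {
    "launch_vehicle": ["heavy", "medium"],
    "propellant": ["cryogenic", "storable"],
    "staging": ["LEO", "EML1", "direct"],
}

def feasible(arch):
    """Example compatibility rule: storable propellant is not
    staged at EML1 in this toy trade space."""
    return not (arch["propellant"] == "storable"
                and arch["staging"] == "EML1")

names = list(options)
architectures = [dict(zip(names, combo))
                 for combo in product(*options.values())
                 if feasible(dict(zip(names, combo)))]
```

Rules prune infeasible combinations before any costly evaluation, which is what makes exhaustive exploration of large architecture spaces tractable; the graph formulation in the thesis serves the same role with far richer connectivity constraints.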
13

McKerlie, Diane Lisa Humanski. "Explicit design knowledge : investigating design space analysis in practice and opportunities for its development." Thesis, London South Bank University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299910.

Full text
Abstract:
In the context of knowledge management, the challenge for organizations is to convert individual human knowledge into structural capital so that the knowledge becomes persistent in the organization, making it more accessible and hence more usable. How to codify the knowledge of a workforce, including the tacit knowledge of experts, and how to apply that codified knowledge with success are unresolved issues. The conversion of individual knowledge into structural capital is of particular relevance in the field of design. Design is a complex activity that creates valuable knowledge. However, that knowledge is often implicit, unstructured, and embedded in procedures, methods, documentation, design artifacts, and of course in the minds of designers and other project stakeholders. In addition, design teams are often multidisciplinary and include experts who apply tacit knowledge to arrive at solutions. Design projects extend over time so that the risk of losing design knowledge increases. Information in itself is not knowledge for the purposes of structural capital. A user interface (UI) design specification for example, does not capture the knowledge used to create that design. The specification tells us what the artifact should be, but it does not tell us how the design came to be or why it is the way it is. Design rationale (DR) is a field of study surrounding the reasoning behind design decisions and the reasoning process that leads to the design of an artifact. The objective of creating a design rationale is to make the reasons for design decisions explicit. Design space analysis (DSA) is one perspective on design rationale that explores alternative design solutions and the assessment of each against design objectives. The rationale behind design decisions provides insight about the design knowledge that was applied and is therefore, of interest to the structural capital of organizations. 
Moreover, the process of making the rationale explicit is of interest to the domain of user interface design. The challenge for UI designers, and the question addressed in this research, is how to make the design rationale explicit and use it to effectively support the design process. The proposed solution is to conduct design space analysis as part of the process of design. To test this solution it is important to explore the implications of generating design rationale in practice and to explore whether DSA reflects the knowledge that expert designers apply. The "DSA study" demonstrated and examined the use of design space analysis by UI experts in a long-term, practical design setting. The findings suggest that design space analysis supports communication and the reasoning process, and it provides context around past design decisions. It was also found that conducting design space analysis encourages designers to accumulate design ideas and develop an understanding of design problems in a systematic way. In addition, the study showed that designers are capable of producing and using the notation, but that the effort to conduct DSA is an obstacle to its use in practice. Conclusions are drawn that DSA can structure the reasoning aspect of design knowledge. The "design skills study" identified the skills that user interface experts apply in practice. The findings indicate that many of the skills of UI experts correspond to the skills that are emphasized by DSA. The study emphasized the pervasiveness and importance of the communication activity in design, as well as the role of reasoning in communication and decision making. The study also identified design activities that receive comparatively little attention from UI experts and design skills that may be comparatively poor. Conclusions are drawn that DSA reflects in part the knowledge that designers apply in practice.
Findings from the above studies point to two approaches that maximize the positive effects of DSA and minimize the effort to conduct a design space analysis. I describe these approaches as coaching and heuristics. Informal evaluations indicate that coaching and heuristics warrant further investigation. The findings from each of the studies have implications for design space analysis. These are discussed around several themes: the tension between the processes of designing and structuring design knowledge, the trade-off in effort between structuring design knowledge and interpreting unstructured design knowledge, design knowledge and the complementary roles of communication and documentation, and DSA as it pertains to expert and novice designers. It is inevitable that where there are new findings and solutions there are also new questions to be explored. Several interesting questions raised by these investigations suggest an agenda for future work.
14

Levander, Fredrik, and Per Sakari. "Design and Analysis of an All-optical Free-space Communication Link." Thesis, Linköping University, Department of Science and Technology, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1198.

Full text
Abstract:

Free Space Optics (FSO) has received a great deal of attention lately both in the military and civilian information society due to its potentially high capacity, rapid deployment, portability and high security from deception and jamming. The main issue is that severe weather can have a detrimental impact on the performance, which may result in an inadequate availability.

This report contains a feasibility study for an all-optical free-space link intended for short-range communication (200-500 m). Laboratory tests have been performed to evaluate the link design. Field tests were made to investigate availability and error performance under the influence of different weather conditions. Atmospheric impact due to turbulence-related effects has been studied in detail. The most crucial part of the link design turned out to be the receiver optics, and several design solutions were investigated. The main advantage of an all-optical design, compared to commercially available electro-optical FSO systems, is the potentially lower cost.
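A first-order link-budget check of the kind such a feasibility study performs can be sketched as follows; the powers, losses, and the fog attenuation figure are illustrative assumptions, not values from the report.

```python
def fso_link_margin_db(p_tx_dbm, sensitivity_dbm, range_m,
                       atten_db_per_km, geo_loss_db=0.0,
                       optics_loss_db=3.0):
    """Simplified free-space-optics link margin: transmit power
    minus receiver sensitivity, less atmospheric attenuation
    (quoted in dB/km, Beer-Lambert style), geometric spreading
    loss and fixed optics losses."""
    atmos_db = atten_db_per_km * range_m / 1000.0
    return (p_tx_dbm - sensitivity_dbm
            - atmos_db - geo_loss_db - optics_loss_db)

# Example: a 500 m link in light fog (~20 dB/km attenuation).
margin = fso_link_margin_db(10.0, -30.0, 500.0, 20.0, geo_loss_db=5.0)
```

Because fog attenuation can exceed 100 dB/km, the same calculation with heavy-fog figures quickly drives the margin negative, which is the availability problem the abstract highlights.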

APA, Harvard, Vancouver, ISO, and other styles
15

Eslami, Kiasari Abbas. "Performance Analysis and Design Space Exploration of On-Chip Interconnection Networks." Doctoral thesis, KTH, Elektroniksystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-136409.

Full text
Abstract:
The advance of semiconductor technology, which has led to more than one billion transistors on a single chip, has enabled designers to integrate dozens of IP (intellectual property) blocks together with large amounts of embedded memory. These advances, along with the fact that traditional communication architectures do not scale well, have led to significant changes in the architecture and design of integrated circuits. One solution to these problems is to implement such a complex system using an on-chip interconnection network or network-on-chip (NoC). The multiple concurrent connections of such networks mean that they have extremely high bandwidth. Regularity can lead to design modularity, providing a standard interface for easier component reuse and improved interoperability. The present thesis addresses the performance analysis and design space exploration of NoCs using analytical and simulation-based performance analysis approaches. First, we developed a simulator aimed at the performance analysis of interconnection networks. The simulator is then used to evaluate the performance of network topologies and routing algorithms, since their choice heavily affects the performance of NoCs. We then surveyed popular mathematical formalisms – queueing theory, network calculus, schedulability analysis, and dataflow analysis – and how they have been applied to the analysis of on-chip communication performance in NoCs. We also addressed research problems related to modelling and design space exploration of NoCs. In the next step, analytical router models were developed to analyse NoC performance. In addition to providing aggregate performance metrics such as latency and throughput, our approach also provides feedback about the network characteristics at a fine level of granularity. Our approach explicates the impact that various design parameters have on the performance, thereby providing invaluable insight into NoC design.
This makes it possible to use the proposed models as a powerful design and optimisation tool. We then used the proposed analytical models to address the design space exploration and optimisation problem. System-level frameworks to address application mapping and to design routing algorithms for NoCs were presented. We first formulated an optimisation problem of minimising average packet latency in the network, and then solved this problem using the simulated annealing heuristic. The proposed framework can also address other design space exploration problems such as topology selection and buffer dimensioning.
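The simulated-annealing mapping step described in this abstract can be sketched as a generic accept/reject loop. This is an illustrative stand-in only: the cost function below (traffic-weighted hop count on an assumed 4x4 mesh with XY routing) is hypothetical and is not the thesis's analytical router model.

```python
import math
import random

# Illustrative sketch only: simulated-annealing search over task-to-core
# mappings. The latency model (hop count on a 4x4 mesh) is an assumption.
MESH = 4

def hops(a, b):
    """Manhattan distance between cores a and b on the mesh (XY routing)."""
    return abs(a % MESH - b % MESH) + abs(a // MESH - b // MESH)

def avg_latency(mapping, traffic):
    """Traffic-weighted average hop count for a task-to-core mapping."""
    total = sum(vol * hops(mapping[s], mapping[d])
                for (s, d), vol in traffic.items())
    return total / sum(traffic.values())

def anneal(tasks, traffic, t0=10.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    mapping = dict(zip(tasks, rng.sample(range(MESH * MESH), len(tasks))))
    cost, temp = avg_latency(mapping, traffic), t0
    for _ in range(steps):
        a, b = rng.sample(tasks, 2)              # candidate move: swap two tasks
        mapping[a], mapping[b] = mapping[b], mapping[a]
        new_cost = avg_latency(mapping, traffic)
        if new_cost < cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost                      # accept (always, if better)
        else:
            mapping[a], mapping[b] = mapping[b], mapping[a]  # reject: undo swap
        temp *= cooling                          # geometric cooling schedule
    return mapping, cost

# Hypothetical task communication graph: (src, dst) -> traffic volume
traffic = {(0, 1): 4.0, (1, 2): 3.0, (2, 3): 2.0,
           (4, 5): 4.0, (5, 6): 1.0, (0, 7): 2.0}
best_map, best_cost = anneal(list(range(8)), traffic)
print(best_cost)
```

In a real framework the `avg_latency` stand-in would be replaced by the analytical latency model, which is exactly what makes an annealing search over thousands of candidate mappings affordable.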


APA, Harvard, Vancouver, ISO, and other styles
16

Gouw, Reza Raymond. "Nuclear design analysis of square-lattice honeycomb space nuclear rocket engine." [Florida] : State University System of Florida, 2000. http://etd.fcla.edu/etd/uf/2000/amt2440/master.pdf.

Full text
Abstract:
Thesis (M.E.)--University of Florida, 2000.
Title from first page of PDF file. Document formatted into pages; contains x, 69 p.; also contains graphics. Vita. Includes bibliographical references (p. 68).
APA, Harvard, Vancouver, ISO, and other styles
17

Lo, Kung-meng. "Design and analysis of optical layouts for free space optical switching." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2002. https://ro.ecu.edu.au/theses/1645.

Full text
Abstract:
The Intelligent Reconfigurable Optical Switch (IROS) is an N x N optical switch that uses two reflective Liquid Crystal (LC) Spatial Light Modulators (SLMs) in a "Z" configuration to steer the incoming beams from the input ports to the designated output ports. Currently, the maximum steering capacity of practical SLMs is limited to a few degrees. For a dense optical switch, this makes the optical path between the input and output fiber ports relatively long. When a Gaussian beam is switched from an input fiber port to an output fiber port in a long free-space interconnection topology, the beam undergoes significant divergence, which contributes to power loss and induces crosstalk on the output ports. A possible method to minimize these problems is to design an optical system to collimate divergent beams.
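The divergence problem described here follows from standard Gaussian-beam optics: the beam radius grows as w(z) = w0*sqrt(1 + (z/zR)^2) with Rayleigh range zR = pi*w0^2/lambda, so a fixed receiver aperture captures a shrinking fraction of the power. The wavelength, waist, and aperture values below are assumed for illustration and are not taken from the thesis.

```python
import math

# Standard Gaussian-beam estimates; all numeric values are assumptions.
wavelength = 1.55e-6   # m (common telecom wavelength)
w0 = 0.2e-3            # beam waist radius at the input port, m
zR = math.pi * w0**2 / wavelength   # Rayleigh range, m

def beam_radius(z):
    """Beam radius after propagating a distance z (m)."""
    return w0 * math.sqrt(1 + (z / zR) ** 2)

def captured_fraction(z, aperture_radius):
    """Fraction of beam power entering a circular aperture of given radius."""
    w = beam_radius(z)
    return 1 - math.exp(-2 * aperture_radius**2 / w**2)

# The beam is roughly 6x wider after 0.5 m of free-space propagation,
# so the same output-port aperture collects far less power.
print(round(beam_radius(0.5) / w0, 2))
print(round(captured_fraction(0.5, 0.5e-3), 3))
```

The uncaptured power is what spills onto neighbouring output ports as crosstalk, which is why collimating receiver optics are the natural countermeasure.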
APA, Harvard, Vancouver, ISO, and other styles
18

Zhao, Yueqin. "A Downtown Space Reservation System: Its Design and Evaluation." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/29021.

Full text
Abstract:
This research explores the feasibility of providing innovative and effective solutions for traffic congestion. The design of reservation systems is being considered as an alternative and/or complementary travel demand management (TDM) strategy. A reservation indicates that a user will follow a booking procedure defined by the reservation system before traveling so as to obtain the right to access a facility or resource. In this research, the reservation system is introduced for a cordon-based downtown road network, hereafter called the Downtown Space Reservation System (DSRS). The research is executed in three steps. In the first step, the DSRS is developed using classic optimization techniques in conjunction with an artificial intelligence technology. The development of this system is the foundation of the entire research, and the second and third steps build upon it. In the second step, traffic simulation models are executed so as to assess the impact of the DSRS on a hypothetical transportation road network. A simulation model provides various transportation measures and helps the decision maker analyze the system from a transportation perspective. In this step, multiple simulation runs (demand scenarios) are conducted and performance insights are generated. However, additional performance measurement and system design issues need to be addressed beyond the simulation paradigm. First, it is not the absolute representation of performance that matters, but the concept of relative performance that is important. Moreover, a simulation does not directly demonstrate how key performance measures interact with each other, which is critical when trying to understand a system structure. To address these issues, in the third step, a comprehensive performance measurement framework has been applied. An analytical technique for measuring the relative efficiency of organizational units (in this case, demand scenarios), network Data Envelopment Analysis (DEA), is used.
The network model combines the perspectives of the transportation service provider, the user and the community, who are the major stakeholders in the transportation system. This framework enables the decision maker to gain an in-depth appreciation of the system design and performance measurement issues.
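The efficiency-measurement idea behind DEA can be sketched with the classic single-stage CCR model; note the dissertation uses a more elaborate network DEA formulation, and the input/output data below (for four demand scenarios) is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative sketch only: input-oriented CCR DEA in multiplier form,
# not the network DEA model of the dissertation. Data is made up.
def ccr_efficiency(inputs, outputs, o):
    """CCR efficiency of decision-making unit (DMU) o."""
    n, m = inputs.shape          # n DMUs, m inputs
    s = outputs.shape[1]         # s outputs
    # Decision variables: [u (output weights), v (input weights)]
    c = np.concatenate([-outputs[o], np.zeros(m)])  # maximize u.y_o
    A_ub = np.hstack([outputs, -inputs])            # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), inputs[o]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

# Four demand scenarios: inputs (e.g. delay, operating cost),
# outputs (e.g. throughput, revenue) -- purely illustrative numbers.
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0], [5.0, 4.0]])
Y = np.array([[5.0, 4.0], [4.0, 5.0], [5.0, 5.0], [4.0, 3.0]])
scores = [round(ccr_efficiency(X, Y, o), 3) for o in range(len(X))]
print(scores)
```

Scenarios with a score of 1.0 lie on the efficient frontier; dominated scenarios score below 1, which is the "relative performance" notion the abstract emphasizes.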
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
19

Hamidi, Fatemeh. "REVITALISING URBAN SPACE, AN ANT-BASED ANALYSIS OF THE FUNCTIONING OF THREE REDESIGNED PUBLIC SPACES IN ROSENGÅRD." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23104.

Full text
Abstract:
Public spaces are essential for a functioning society because they can support social exchanges and the building of public life. This master thesis is a study of the public life that unfolds in the setting of three redesigned public spaces in Rosengård: Bokalerna, Rosens Röda Matta, and Rosengård Centrum. Drawing on a conceptual toolbox developed from territorial actor-network theory (ANT), I examine the socio-material exchanges that take place because of the redesigned materialities of space and explore their impact on the quality of the selected public places. I employ qualitative methods (visual ethnography and interviews) to address the questions of 1) how material topographies mediate social exchange and 2) what actors or events are important for assembling everyday sociality in the three selected public spaces. I made use of six operative concepts (anchors, base camps, multicore and monocore spaces, tickets and rides, ladders, and punctiform, linear and field seating) to explore their impact on the quality of the selected public places in terms of affording or hindering social exchanges. My field observations of the three sites and interviews indicate that Rosengård Centrum accommodates a more pronounced public life than the others, and is perhaps the most popular in the district. Its programmed materialities and multiple points of organised activities allow the space to facilitate heterogeneous clusterings of humans and non-human entities and the formation of a diverse collective.
Moreover, the organisation of a mixture of monocore and multicore space, in combination with sheltered anchor spots, appears to be essential for assembling and stabilising human collectives and everyday sociality in Rosengård. My findings suggest that, while many of the discussions in the literature concentrate on city centres or large metropolitan areas, much could still be learned from a thorough study of public spaces at a finer scale and neighbourhood level.
APA, Harvard, Vancouver, ISO, and other styles
20

Mukundan, Sudharsan. "Structural design and analysis of a lightweight composite sandwich space radiator panel." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/1613.

Full text
Abstract:
The goal of this study is to design and analyze a sandwich composite panel with a lightweight graphite foam core and carbon/epoxy face sheets that can function as a radiator for the given payload in a satellite. This arrangement provides a lightweight, structurally efficient structure to dissipate the heat from the electronics box to the surroundings. Three-dimensional finite element analysis with MSC Visual Nastran is undertaken for modal, dynamic and heat transfer analysis to design a radiator panel that can sustain a fundamental frequency greater than 100 Hz, dissipate 100 W/m2, and withstand launch loads of 10 g. The primary focus of this research is to evaluate the graphite foam newly introduced by Poco Graphite Inc. as a core in a sandwich structure that can satisfy structural and thermal design requirements. The panel is a rectangular plate with a cutout that can hold the antenna. The panel is fixed on all sides. The objective is not only to select an optimum design configuration for the face sheets and core but also to explore the potential of the Poco foam core in its heat transfer capacity. Furthermore, the effects of various parameters such as face sheet lay-up, orientation, thickness and material properties are studied through analytical models to validate the predictions of the finite element analysis. The optimum dimensions of the sandwich panel are determined, and the structural and thermal response of the Poco foam is compared with an existing aluminum honeycomb core.
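The "structurally efficient" claim rests on standard sandwich-beam theory: with thin faces and a compliant core, the bending rigidity per unit width is approximately D = E_f * t_f * d^2 / 2, where d is the distance between face-sheet centroids. The material values below are assumed for illustration, not taken from the thesis.

```python
# Back-of-envelope sandwich stiffness (thin-face approximation).
# All property values are assumptions, not the thesis's actual panel.
E_f = 70e9        # face-sheet modulus, Pa (quasi-isotropic carbon/epoxy, assumed)
t_f = 0.5e-3      # thickness of each face sheet, m
t_c = 12.7e-3     # core thickness, m
d = t_c + t_f     # distance between face-sheet centroids, m

# Sandwich bending rigidity per unit width (faces carry bending via the
# parallel-axis term; core stiffness neglected)
D_sandwich = E_f * t_f * d**2 / 2

# Same face-sheet material consolidated into a solid laminate of equal mass
t_solid = 2 * t_f
D_solid = E_f * t_solid**3 / 12

print(round(D_sandwich / D_solid))   # stiffness gain at (nearly) equal mass
```

Separating the face sheets with a low-density core multiplies the bending stiffness by hundreds at almost no mass penalty, which is why the sandwich layout can meet a 100 Hz fundamental-frequency requirement with a lightweight panel.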
APA, Harvard, Vancouver, ISO, and other styles
21

Neiswander, Michael John. "The urban cemetery : the paradigm of sacred space, an analysis and design." Thesis, Georgia Institute of Technology, 1986. http://hdl.handle.net/1853/22381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Garcia, Sandrine L. "Analysis of a space experimental design for high-Tc superconductive thermal bridges." Thesis, Virginia Tech, 1994. http://hdl.handle.net/10919/42752.

Full text
Abstract:

Infrared sensor satellites are used to monitor the conditions in the earth's upper atmosphere. In these systems, the electronic links connecting the cryogenically cooled infrared detectors to the significantly warmer amplification electronics act as thermal bridges and, consequently, the mission lifetimes of the satellites are limited due to cryogenic evaporation. High-temperature superconductor (HTS) materials have been proposed by researchers at the National Aeronautics and Space Administration Langley Research Center (NASA-LaRC) as an alternative to the currently used manganin wires for electrical connection. The potential for using HTS films as thermal bridges has provided the motivation for the design and the analysis of a spaceflight experiment to evaluate the performance of this superconductive technology in the space environment. The initial efforts were focused on the preliminary design of the experimental system, which allows for the quantitative comparison of superconductive leads with manganin leads, and on the thermal conduction modeling of the proposed system (Lee, 1994). Most of the HTS materials were indicated to be potential replacements for the manganin wires. In the continuation of this multi-year research, the objectives of this study were to evaluate the sources of heat transfer on the thermal bridges that had been neglected in the preliminary conductive model and then to develop a methodology for the estimation of the thermal conductivities of the HTS thermal bridges in space.

The Joule heating created by the electrical current through the manganin wires was incorporated as a volumetric heat source into the manganin conductive model. The radiative heat source on the HTS thermal bridges was determined by performing a separate radiant interchange analysis within a high-Tc superconductor housing area. Both heat sources indicated no significant contribution on the cryogenic heat load, which validates the results obtained in the preliminary conduction model.

A methodology was presented for the estimation of the thermal conductivities of the individual HTS thermal bridge materials and the effective thermal conductivities of the composite HTS thermal bridges as functions of temperature. This methodology included a sensitivity analysis and the demonstration of the estimation procedure using simulated data with added random errors. The thermal conductivities could not be estimated as functions of temperature; thus the effective thermal conductivities of the HTS thermal bridges were analyzed as constants.


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
23

Quintero, Francisco Javier. "Analysis and optimal design of a resonant switching converter for space applications." Diss., The University of Arizona, 1990. http://hdl.handle.net/10150/185003.

Full text
Abstract:
The design of converters for space applications is subject to a number of unusual constraints, such as low volume and weight, high-efficiency operation, minimum component stress, low noise interference and resistance to ionizing radiation. The diode-clamped series resonant converter (DCSRC) can be designed to satisfy some of these constraints. A new approach to the analysis of the DCSRC, and a systematic way of designing for high efficiency and minimum component stress, is presented. The direct relationship between the phase plane and the resonant wave shapes allows us to synthesize the closed-form solution and generate the output plane, which relates the normalized output current to the normalized output voltage for any load and any ratio of switching to resonant frequencies. The converter operation is optimized by superimposing the functions that describe the transistor stress and resonant tank component stress on the output plane. Experimental results are in good agreement with both the mathematical model and simulation. The effects of ionizing radiation on the converter performance under simulated space radiation conditions are also investigated.
APA, Harvard, Vancouver, ISO, and other styles
24

Romero, Ignacio. "Dynamic analysis and control system design of a deployable space robotic manipulator." Thesis, Cranfield University, 2001. http://dspace.lib.cranfield.ac.uk/handle/1826/13328.

Full text
Abstract:
This thesis presents a dynamic analysis and a control system for a flexible space manipulator, the Deployable Robotic Manipulator or DRM, which has a deployable/retractable link. The link extends (or retracts) from the containing slewing link of the manipulator to change the DRM's length and hence its workspace. This makes the system dynamics time varying and therefore any control strategy has to adapt to this fact. The aim of the control system developed is to slew the manipulator through a predetermined angle given a maximum angular acceleration, to reduce flexural vibrations of the manipulator and to have a certain degree of robustness, all of this while carrying a payload and while the length of the manipulator is changing. The control system consists of a slewing motor that rotates the manipulator using the open-loop assumed torque method and two reaction wheel actuators, one at the base and one at the tip of the manipulator, which are driven by a closed-loop damping control law. Two closed-loop control laws are developed, a linear control law and a Lyapunov based control law. The linear control law is based on collocated output feedback. The Lyapunov control law is developed for each of the actuators using Lyapunov stability theory to produce vibration control that can achieve the objectives stated above for different payloads, while the manipulator is rotating and deploying or retracting. The response of the system is investigated by computer simulation for two-dimensional vibrations of the deployable manipulator. Both the linear and Lyapunov based feedback control laws are found to eliminate vibrations for a range of payloads, and to increase the robustness of the slewing mechanism to deal with uncertain payload characteristics.
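The collocated damping idea behind the linear control law can be illustrated on a single flexible mode: velocity feedback u = -k*v adds modal damping and drains vibration energy. This is a toy model with assumed numbers, not the thesis's manipulator dynamics.

```python
import math

# Toy illustration only: collocated velocity feedback damping one flexible
# mode (unit modal mass). Frequency and gain values are assumptions.
omega = 2 * math.pi * 1.5     # modal frequency, rad/s
k = 2.0                       # velocity feedback gain -> damping ratio k/(2*omega)
dt, steps = 1e-3, 10000       # 10 s of simulated time

x, v = 1.0, 0.0               # initial modal displacement and velocity
energy0 = 0.5 * omega**2 * x**2
for _ in range(steps):
    a = -omega**2 * x - k * v  # closed-loop modal acceleration
    v += a * dt                # semi-implicit (symplectic) Euler step
    x += v * dt
energy = 0.5 * (omega**2 * x**2 + v**2)
print(energy / energy0)        # residual vibration energy fraction
```

Because the sensor and actuator are collocated, this feedback only ever removes energy, which is the intuition the Lyapunov-based law generalizes to the time-varying, deploying manipulator.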
APA, Harvard, Vancouver, ISO, and other styles
25

Hertel, Thorsten Walter. "Analysis and design of conical spiral antennas in free space and over ground." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/15018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Thelin, Christopher Murray. "Application and Evaluation of Full-Field Surrogate Models in Engineering Design Space Exploration." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/8625.

Full text
Abstract:
When designing an engineering part, better decisions are made by exploring the entire space of design variations. This design space exploration (DSE) may be accomplished manually or via optimization. In engineering, evaluating a design during DSE often consists of running expensive simulations, such as finite element analysis (FEA), in order to understand the structural response to design changes. The computational cost of these simulations can make thorough DSE infeasible, and only a relatively small subset of the designs are explored. Surrogate models have been used to make cheap predictions of certain simulation results. Commonly, these models only predict single values (SV) that are meant to represent an entire part's response, such as a maximum stress or average displacement. However, these single values cannot return a complete prediction of the detailed nodal results of these simulations. Recently, surrogate models have been developed that can predict the full field (FF) of nodal responses. These FF surrogate models have the potential to make thorough and detailed DSE much more feasible and introduce further design benefits. However, these FF surrogate models have not yet been applied to real engineering activities or been demonstrated in DSE contexts, nor have they been directly compared with SV surrogate models in terms of accuracy and benefits. This thesis seeks to build confidence in FF surrogate models for engineering work by applying them to real DSE and engineering activities and exploring their comparative benefits with SV surrogate models. A user experiment which explores the effects of FF surrogate models in simple DSE activities helps to validate previous claims that FF surrogate models can enable interactive DSE. FF surrogate models are used to create Goodman diagrams for fatigue analysis, and are found to be more accurate than SV surrogate models in predicting fatigue risk.
Mode shapes are predicted, and the accuracy of mode-comparison predictions is found to require a larger number of training samples when the data is highly nonlinear than SV surrogate models do. Finally, FF surrogate models enable spatially defined objectives and constraints in optimization routines that efficiently search a design space and improve designs. The studies in this work present many unique FF-enabled design benefits for real engineering work. These include predicting a complete (rather than a summary) response, enabling interactive DSE of complex simulations, new three-dimensional visualizations of analysis results, and increased accuracy.
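One common way to build a full-field surrogate (offered here as a generic sketch, not necessarily the thesis's method) is proper orthogonal decomposition: run the expensive simulation at a few sampled designs, extract a low-rank basis of the nodal fields by SVD, and regress the modal coefficients against the design parameter. The "simulation" below is a toy analytic function standing in for FEA.

```python
import numpy as np

# Illustrative POD/regression full-field surrogate. The simulate() function
# is a made-up stand-in for an expensive FEA solve.
nodes = np.linspace(0.0, 1.0, 200)   # nodal coordinates of the "mesh"

def simulate(p):
    """Stand-in for an expensive simulation: nodal field for design p."""
    return (p * np.sin(np.pi * nodes)
            + p**2 * np.sin(2 * np.pi * nodes)
            + np.log(p) * np.sin(3 * np.pi * nodes))

# Training: run the "simulation" at a handful of sampled designs
p_train = np.linspace(0.5, 2.0, 15)
snapshots = np.array([simulate(p) for p in p_train])   # (15 designs, 200 nodes)

# POD basis from the centered snapshot matrix
mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
basis = Vt[:3]                                         # keep 3 spatial modes

# Map design parameter -> modal coefficients with polynomial regression
coeffs = (snapshots - mean) @ basis.T                  # (15, 3)
reg = [np.polyfit(p_train, coeffs[:, k], 5) for k in range(3)]

def predict_field(p):
    """FF surrogate: reconstruct the entire nodal field at a new design p."""
    c = np.array([np.polyval(r, p) for r in reg])
    return mean + c @ basis

# Evaluate at an unseen design: the whole field, not just a summary value
p_new = 1.33
truth, pred = simulate(p_new), predict_field(p_new)
err = np.max(np.abs(pred - truth))
print(round(err, 4))
```

Because the surrogate returns the full nodal field, quantities like the stress at a specific hot spot, a Goodman margin per node, or a spatially defined constraint can all be evaluated at interactive rates; an SV surrogate would have to be retrained for each such quantity.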
APA, Harvard, Vancouver, ISO, and other styles
27

Kempf, Jean-Francois. "On computer-aided design-space exploration for multi-cores." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM110/document.

Full text
Abstract:
The growing complexity of embedded systems calls for modeling formalisms that can be simulated and analyzed to explore the space of design alternatives. This thesis describes the development of a modeling formalism and tools for design space exploration at early design stage.We extend the classical worst-case model checking for timed automata to stochastic analysis based on a refinement of temporal uncertainty intervals into delay distribution. On one hand we introduce the formalism of Duration Probabilistic Automata (DPA) supporting analysis as well as optimization. On the other hand we provide DESPEX (DEsign SPace EXplorer), a tool for performance evaluation of high-level models of applications running on multi-core platforms. We also show its usage on several case studies
APA, Harvard, Vancouver, ISO, and other styles
28

Lafleur, Jarret Marshall. "A Markovian state-space framework for integrating flexibility into space system design decisions." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/43749.

Full text
Abstract:
The past decades have seen the state of the art in aerospace system design progress from a scope of simple optimization to one including robustness, with the objective of permitting a single system to perform well even in off-nominal future environments. Integrating flexibility, or the capability to easily modify a system after it has been fielded in response to changing environments, into system design represents a further step forward. One challenge in accomplishing this rests in that the decision-maker must consider not only the present system design decision, but also sequential future design and operation decisions. Despite extensive interest in the topic, the state of the art in designing flexibility into aerospace systems, and particularly space systems, tends to be limited to analyses that are qualitative, deterministic, single-objective, and/or limited to consider a single future time period. To address these gaps, this thesis develops a stochastic, multi-objective, and multi-period framework for integrating flexibility into space system design decisions. Central to the framework are five steps. First, system configuration options are identified and costs of switching from one configuration to another are compiled into a cost transition matrix. Second, probabilities that demand on the system will transition from one mission to another are compiled into a mission demand Markov chain. Third, one performance matrix for each design objective is populated to describe how well the identified system configurations perform in each of the identified mission demand environments. The fourth step employs multi-period decision analysis techniques, including Markov decision processes (MDPs) from the field of operations research, to find efficient paths and policies a decision-maker may follow. The final step examines the implications of these paths and policies for the primary goal of informing initial system selection. 
Overall, this thesis unifies state-centric concepts of flexibility from economics and engineering literature with sequential decision-making techniques from operations research. The end objective of this thesis' framework and its supporting analytic and computational tools is to enable selection of the next-generation space systems today, tailored to decision-maker budget and performance preferences, that will be best able to adapt and perform in a future of changing environments and requirements. Following extensive theoretical development, the framework and its steps are applied to space system planning problems of (1) DARPA-motivated multiple- or distributed-payload satellite selection and (2) NASA human space exploration architecture selection.
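The MDP machinery of the fourth step can be sketched with a tiny two-configuration, two-mission example: states are (configuration, mission-demand) pairs, actions are configuration switches, and value iteration yields a switching policy. All matrices below are invented numbers, not the thesis's data.

```python
import numpy as np

# Hypothetical sketch of the framework's decision step. Everything here
# (costs, probabilities, performance values) is made up for illustration.
C = np.array([[0.0, 4.0],      # cost transition matrix: switch config i -> j
              [3.0, 0.0]])
P = np.array([[0.7, 0.3],      # mission demand Markov chain: P[m, m']
              [0.4, 0.6]])
R = np.array([[5.0, 1.0],      # performance of config j under mission m: R[j, m]
              [2.0, 6.0]])
gamma = 0.9                    # discount factor

n_cfg, n_mis = R.shape
V = np.zeros((n_cfg, n_mis))   # value of being in config i while mission is m
for _ in range(500):           # value iteration to a fixed point
    Q = np.empty((n_cfg, n_mis, n_cfg))
    for i in range(n_cfg):
        for m in range(n_mis):
            for j in range(n_cfg):
                # pay the switching cost, earn performance of the new
                # config, then the mission demand evolves via P
                Q[i, m, j] = -C[i, j] + R[j, m] + gamma * P[m] @ V[j]
    V_new = Q.max(axis=2)
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=2)      # best next configuration for each (config, mission)
print(policy)
```

The resulting policy tells the decision-maker, for every current configuration and demand environment, whether paying the switching cost is worth the discounted future performance; evaluating the policy from each possible initial configuration is what informs the initial system selection.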
APA, Harvard, Vancouver, ISO, and other styles
29

Goswami, Mohit. "Application of product family design for engineered systems in changing market space." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2008. http://scholarsmine.mst.edu/thesis/pdf/Goswami_09007dcc804feafa.pdf.

Full text
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2008.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed May 19, 2008). Includes bibliographical references (p. 33-35).
APA, Harvard, Vancouver, ISO, and other styles
30

Alemany, Kristina. "Design space pruning heuristics and global optimization method for conceptual design of low-thrust asteroid tour missions." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31821.

Full text
Abstract:
Thesis (Ph.D)--Aerospace Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Braun, Robert; Committee Member: Clarke, John-Paul; Committee Member: Russell, Ryan; Committee Member: Sims, Jon; Committee Member: Tsiotras, Panagiotis. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
31

Suciu, Floarea. "Performance analysis of timed Petri nets by decomposition of the state space." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0028/MQ34234.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Bhatia, Manav. "Design-oriented thermoelastic analysis, sensitivities, and approximations for shape optimization of aerospace vehicles /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/9970.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Anzalone, Evan John. "Agent and model-based simulation framework for deep space navigation analysis and design." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/52163.

Full text
Abstract:
As the number of spacecraft in simultaneous operation continues to grow, there is an increased dependency on ground-based navigation support. The current baseline system for deep space navigation utilizes Earth-based radiometric tracking, which requires long duration, often global, observations to perform orbit determination and generate a state update. The age, complexity, and high utilization of the assets that make up the Deep Space Network (DSN) pose a risk to spacecraft navigation performance. With increasingly complex mission operations, such as automated asteroid rendezvous or pinpoint planetary landing, the need for high accuracy and autonomous navigation capability is further reinforced. The Network-Based Navigation (NNAV) method developed in this research takes advantage of the growing inter-spacecraft communication network infrastructure to allow for autonomous state measurement. By embedding navigation headers into the data packets transmitted between nodes in the communication network, it is possible to provide an additional source of navigation capability. Simulation results indicate that as NNAV is implemented across the deep space network, the state estimation capability continues to improve, providing an embedded navigation network. To analyze the capabilities of NNAV, an analysis and simulation framework is designed that integrates navigation and communication analysis. Model-Based Systems Engineering (MBSE) and Agent-Based Modeling (ABM) techniques are utilized to foster a modular, expandable, and robust framework. This research has developed the Space Navigation Analysis and Performance Evaluation (SNAPE) framework. This framework allows for design, analysis, and optimization of deep space navigation and communication architectures. SNAPE captures high-level performance requirements and bridges them to specific functional requirements of the analytical implementation. 
The SNAPE framework is implemented in a representative prototype environment using the Python language and verified using industry standard packages. The capability of SNAPE is validated through a series of example test cases. These analyses focus on the performance of specific state measurements to state estimation performance, and demonstrate the core analytic functionality of the framework. Specific cases analyze the effects of initial error and measurement uncertainty on state estimation performance. The timing and frequency of state measurements are also investigated to show the need for frequent state measurements to minimize navigation errors. The dependence of navigation accuracy on timing stability and accuracy is also demonstrated. These test cases capture the functionality of the tool as well as validate its performance. The SNAPE framework is utilized to capture and analyze NNAV, both conceptually and analytically. Multiple evaluation cases are presented that focus on the Mars Science Laboratory's (MSL) Martian transfer mission phase. These evaluation cases validate NNAV and provide concrete evidence of its operational capability for this particular application. Improvement to onboard state estimation performance and reduced reliance on Earth-based assets is demonstrated through simulation of the MSL spacecraft utilizing NNAV processes and embedded packets within a limited network containing DSN and MRO. From the demonstrated state estimation performance, NNAV is shown to be a capable and viable method of deep space navigation. Through its implementation as a state augmentation method, the concept integrates with traditional measurements and reduces the dependence on Earth-based updates. Future development of this concept focuses on a growing network of assets and spacecraft, which allows for improved operational flexibility and accuracy in spacecraft state estimation capability and a growing solar system-wide navigation network.
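The observation that frequent state measurements are needed to keep navigation errors small can be illustrated with a scalar covariance-propagation sketch (a toy Kalman-filter model with assumed noise values, not SNAPE itself):

```python
# Toy illustration only: scalar random-walk Kalman covariance propagation.
# Process/measurement noise values are assumptions, not mission data.
def steady_state_variance(q, r, every_n, steps=10000):
    """Estimation variance when a measurement arrives every `every_n` steps."""
    P = 1.0
    for step in range(steps):
        P += q                       # predict: process noise accumulates
        if step % every_n == 0:      # update: a state measurement is available
            K = P / (P + r)          # Kalman gain
            P = (1 - K) * P          # posterior variance shrinks
    return P

q, r = 0.01, 0.5                     # process and measurement noise variances
p_frequent = steady_state_variance(q, r, every_n=1)
p_rare = steady_state_variance(q, r, every_n=50)
print(round(p_frequent, 3), round(p_rare, 3))
```

Between updates the uncertainty grows linearly with the process noise, so stretching the interval between embedded-packet measurements directly inflates the steady-state navigation error, matching the sensitivity the SNAPE test cases demonstrate.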
APA, Harvard, Vancouver, ISO, and other styles
34

Giroudot, Frédéric. "NoC-based Architectures for Real-Time Applications : Performance Analysis and Design Space Exploration." Thesis, Toulouse, INPT, 2019. https://oatao.univ-toulouse.fr/25921/1/Giroudot_Frederic.pdf.

Full text
Abstract:
Monoprocessor architectures have reached their limits with regard to the computing power they offer versus the needs of modern systems. Although multicore architectures partially mitigate this limitation and are commonly used nowadays, they usually rely on intrinsically non-scalable buses to interconnect the cores. The manycore paradigm was proposed to tackle the scalability issue of bus-based multicore processors. It can scale up to hundreds of processing elements (PEs) on a single chip by organizing them into computing tiles (holding one or several PEs). Inter-core communication is usually done using a Network-on-Chip (NoC) that consists of interconnected on-chip routers allowing communication between tiles. However, manycore architectures raise numerous challenges, particularly for real-time applications. First, NoC-based communication tends to generate complex blocking patterns when congestion occurs, which complicates the analysis, since computing accurate worst-case delays becomes difficult. Second, running many applications on large Systems-on-Chip such as manycore architectures makes system design particularly crucial and complex. On one hand, it complicates Design Space Exploration, as it multiplies the implementation alternatives that will guarantee the desired functionalities. On the other hand, once a hardware architecture is chosen, mapping the tasks of all applications onto the platform is a hard problem, and finding an optimal solution in a reasonable amount of time is not always possible. Therefore, our first contributions address the need for computing tight worst-case delay bounds in wormhole NoCs. We first propose a buffer-aware worst-case timing analysis (BATA) to derive upper bounds on the worst-case end-to-end delays of constant-bit-rate data flows transmitted over a NoC on a manycore architecture. We then extend BATA to cover a wider range of traffic types, including bursty traffic flows, and heterogeneous architectures. 
The introduced method is called G-BATA, for Graph-based BATA. In addition to covering a wider range of assumptions, G-BATA improves the computation time, thus increasing the scalability of the method. In the second part, we develop a method addressing design and mapping for applications with real-time constraints on manycore platforms. It combines model-based engineering tools (TTool) and simulation with our analytical verification technique (G-BATA) and tools (WoPANets) to provide an efficient design space exploration framework. Finally, we validate our contributions on (a) a series of experiments on a physical platform and (b) two case studies taken from the real world: an autonomous vehicle control application and a 5G signal decoder application.
APA, Harvard, Vancouver, ISO, and other styles
35

Parashar, Neha. "Design Space Analysis and a Novel Routing Algorithm for Unstructured Networks-on-Chip." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/89.

Full text
Abstract:
Traditionally, on-chip network communication was achieved with shared-medium networks, where devices shared the transmission medium with only one device driving the network at a time. To avoid performance losses, this required fast bus arbitration logic. However, a single shared bus has serious limitations with the heterogeneous and multi-core communication requirements of today's chip designs. Point-to-point or direct networks solved some of the scalability issues, but the use of routers and of rather complex algorithms to connect nodes during each cycle caused new bottlenecks. As technology scales, the on-chip physical interconnect presents an increasingly limiting factor for performance and energy consumption. The network-on-chip, an emerging interconnect paradigm, provides solutions to these interconnect and communication challenges. Motivated by future bottom-up self-assembled fabrication techniques, which are believed to produce largely unstructured interconnect fabrics in a very inexpensive way, the goal of this thesis is to explore the design trade-offs of such irregular, heterogeneous, and unreliable networks. The important measures for our complex on-chip network models are information transfer, congestion avoidance, throughput, and latency. We use two control parameters and a network model inspired by Watts and Strogatz's small-world network model to generate a large class of different networks. We then evaluate their cost and performance and introduce a function which allows us to systematically explore the trade-offs between cost and performance depending on the designer's requirements. We further evaluate these networks under different traffic conditions and introduce an adaptive and topology-agnostic ant routing algorithm that does not require any global control and avoids network congestion.
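The small-world construction referenced in this abstract can be illustrated with a minimal Watts-Strogatz-style sketch in pure Python; the node count, neighbour degree, and rewiring probability below are illustrative assumptions, not parameters from the thesis.

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Build a ring lattice of n nodes, each linked to its k nearest
    neighbours (k even), then rewire each edge with probability p."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add((i, (i + j) % n))  # each undirected edge stored once
    rewired = set()
    for (u, v) in edges:
        if rng.random() < p:
            # pick a new endpoint, avoiding self-loops and duplicate edges
            w = rng.randrange(n)
            while w == u or (u, w) in rewired or (w, u) in rewired:
                w = rng.randrange(n)
            rewired.add((u, w))
        else:
            rewired.add((u, v))
    return rewired

g = watts_strogatz(16, 4, 0.1)
print(len(g))  # edge count stays close to the lattice's n*k/2, up to rare collisions
```

Sweeping the rewiring probability `p` between 0 (regular lattice) and 1 (random graph) is what produces the family of topologies whose cost and performance such a study can compare.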
APA, Harvard, Vancouver, ISO, and other styles
36

Lekkakos, Spyridon-Damianos. "Failure record discounting in Bayesian analysis in Probabilistic Risk Assessment (PRA) : a space system application." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35113.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, 2006.
Includes bibliographical references (p. 83-85).
In estimating a system-specific binomial probability of failure on demand in Probabilistic Risk Assessment (PRA), the corresponding number of observed failures may not be directly applicable due to design or procedure changes that have been implemented in the system as a result of past failures. A methodology has been developed by NASA to account for partial applicability of past failures in Bayesian analysis by discounting the failure records. A series of sensitivity analyses on a specific case study showed that failure record discounting may result in failure distributions that are both optimistic and narrow. An alternative approach, which builds upon NASA's method, is proposed. This method combines an optimistic interpretation of the data, obtained with failure record discounting, with a pessimistic one, obtained with standard Bayesian updating without discounting, in a linear pooling fashion. The results in the proposed approach are interpreted in such a way that the epistemic uncertainties inherent in the data are displayed, providing a better basis for the decision maker to make a decision based on his or her risk attitude. A comparison of the two methods is made based on the case study.
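The linear-pooling idea described above can be sketched as a weighted combination of two conjugate Beta posteriors, one from discounted failure records and one from undiscounted records; the prior, counts, discount factor, and pooling weight below are illustrative assumptions, not values from the thesis.

```python
def beta_posterior(alpha, beta, failures, demands):
    """Conjugate Beta update for a binomial failure probability on demand."""
    return alpha + failures, beta + (demands - failures)

def beta_mean(a, b):
    return a / (a + b)

# Illustrative numbers: Jeffreys prior Beta(0.5, 0.5), 3 failures in 100
# demands, with 2 of the failures judged only 50% applicable after redesign.
prior = (0.5, 0.5)
n = 100
optimistic = beta_posterior(*prior, failures=1 + 2 * 0.5, demands=n)   # discounted records
pessimistic = beta_posterior(*prior, failures=3, demands=n)            # no discounting

w = 0.5  # pooling weight reflecting confidence in the discounting judgment
pooled_mean = w * beta_mean(*optimistic) + (1 - w) * beta_mean(*pessimistic)
print(round(pooled_mean, 4))
```

Because the pooled distribution is a mixture, its mean is the weighted mean of the two posteriors, and its spread reflects the disagreement between the optimistic and pessimistic readings of the data.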
by Spyridon-Damianos Lekkakos.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
37

Young, Leyland Gregory. "Geometrically exact modeling, analysis and design of high precision membranes /." free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p3099647.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Owojaiye, Gbenga Adetokunbo. "Design and performance analysis of distributed space time coding schemes for cooperative wireless networks." Thesis, University of Hertfordshire, 2012. http://hdl.handle.net/2299/8970.

Full text
Abstract:
In this thesis, space-time block codes originally developed for multiple-antenna systems are extended to cooperative multi-hop networks. The designs are applicable to any wireless network setting, especially cellular, ad hoc, and sensor networks where space limitations preclude the use of multiple antennas. The thesis first investigates the design of distributed orthogonal and quasi-orthogonal space-time block codes in cooperative networks with single and multiple antennas at the destination. Numerical and simulation results show that by employing multiple receive antennas the diversity performance of the network is further improved, at the expense of a slight modification of the detection scheme. The thesis then focuses on designing distributed space-time block codes for cooperative networks in which the source node participates in cooperation. Based on this, a source-assisting strategy is proposed for distributed orthogonal and quasi-orthogonal space-time block codes. Numerical and simulation results show that the source-assisting strategy exhibits improved diversity performance compared to the conventional distributed orthogonal and quasi-orthogonal designs. Motivated by the problem of channel state information acquisition in practical wireless network environments, the design of differential distributed space-time block codes is investigated. Specifically, a coefficient-vector-based differential encoding and decoding scheme is proposed for cooperative networks. The thesis then explores the concatenation of differential strategies with several distributed space-time block coding schemes, namely the Alamouti code, square real orthogonal codes, complex orthogonal codes, and quasi-orthogonal codes, using cooperative networks with different numbers of relay nodes. 
In order to cater for high-data-rate transmission in non-coherent cooperative networks, differential distributed quasi-orthogonal space-time block codes which are capable of achieving full code rate and full diversity are proposed. Simulation results demonstrate that the differential distributed quasi-orthogonal space-time block codes outperform existing distributed space-time block coding schemes in terms of code rate and bit-error-rate performance. A multi-differential distributed quasi-orthogonal space-time block coding scheme is also proposed to exploit the additional diversity path provided by the source-destination link. A major challenge is how to construct full-rate codes for non-coherent cooperative broadband networks with more than two relay nodes while exploiting the achievable spatial and frequency diversity. In this thesis, full-rate quasi-orthogonal codes are designed for non-coherent cooperative broadband networks where channel state information is unavailable. From this, a generalized differential distributed quasi-orthogonal space-frequency coding scheme is proposed for cooperative broadband networks. The proposed scheme is able to achieve full rate and full spatial and frequency diversity in cooperative networks with any number of relays. Through pairwise error probability analysis we show that the diversity gain of the proposed scheme can be improved by appropriate code construction and sub-carrier allocation. Based on this, sufficient conditions are derived for the proposed code structure at the source node and relay nodes to achieve full spatial and frequency diversity. In order to exploit the additional diversity paths provided by the source-destination link, a novel multi-differential distributed quasi-orthogonal space-frequency coding scheme is proposed. 
The overall objective of the new scheme is to improve the quality of the detected signal at the destination with a negligible increase in the computational complexity of the detector. Finally, a differential distributed quasi-orthogonal space-time-frequency coding scheme is proposed to cater for high-data-rate transmission and improve the performance of non-coherent cooperative broadband networks operating in highly mobile environments. The approach is to integrate the concept of distributed space-time-frequency coding with differential modulation, and to employ rotated-constellation quasi-orthogonal codes. From this, we design a scheme which is able to address the problem of performance degradation in highly selective fading environments while guaranteeing non-coherent signal recovery and full code rate in cooperative broadband networks. The coding scheme employed in this thesis relaxes the assumption of constant channel variation in the temporal and frequency dimensions over long symbol periods; thus performance degradation is reduced in frequency-selective and time-selective fading environments. Simulation results illustrate the performance of the proposed differential distributed quasi-orthogonal space-time-frequency coding scheme under different channel conditions.
APA, Harvard, Vancouver, ISO, and other styles
39

Siebert, Joseph R. "Design, hazard analysis, and system-level testing of a university propulsion system for spacecraft application." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2009. http://scholarsmine.mst.edu/thesis/pdf/Siebert_09007dcc8063c59e.pdf.

Full text
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2009.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed April 14, 2009). Includes bibliographical references (p. 201-203).
APA, Harvard, Vancouver, ISO, and other styles
40

Kozic, Nina. "Statistical shape space analysis based on level sets for optimization of orthopaedic implant design." [S.l.] : [s.n.], 2009. http://www.zb.unibe.ch/download/eldiss/09kozic_n.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Waswa, M. B. Peter (Peter Moses Bweya). "Spacecraft design-for-demise strategy, analysis and impact on low earth orbit space missions." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46797.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 102-106) and index.
Uncontrolled reentry into the Earth's atmosphere by LEO space missions while complying with stipulated NASA atmospheric reentry requirements is a vital endeavor for the space community to pursue. An uncontrolled reentry mission that completely ablates does not require a provision for integrated controlled-reentry capability. Consequently, not only will such a mission design be relatively simpler and cheaper, but the mission unavailability risk due to a controlled-reentry subsystem failure is also eliminated, which improves mission on-orbit reliability and robustness. Intentionally re-designing the mission such that the spacecraft components ablate (demise) during uncontrolled reentry post-mission disposal is referred to as Design-for-Demise (DfD). Re-designing spacecraft parts to demise guarantees adherence to NASA reentry requirements, which dictate that the risk of human casualty anywhere on Earth due to reentering debris with KE ≥ 15 J be less than 1:10,000 (0.0001). NASA-sanctioned missions have traditionally addressed this requirement by integrating a controlled-reentry provision. However, momentum is building for a new paradigm shift towards designing reentry missions to demise instead. Therefore, this thesis proposes a DfD decision-making methodology and a DfD implementation and execution strategy throughout the LEO mission life-cycle; scrutinizes reentry analysis software tools and uses the NASA Debris Analysis Software (DAS) to demonstrate the reentry demisability analysis process; proposes methods to identify and redesign hardware parts for demise; and finally considers the HETE-2 mission as a DfD demisability case study. Reentry analyses show the HETE-2 mission to be compliant with NASA uncontrolled atmospheric reentry requirements.
by Waswa M.B. Peter.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
42

Tseng, Chun-Hao. "Safety performance analyzer for constructed environments (SPACE)." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1148572816.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Tibert, Gunnar. "Deployable Tensegrity Structures for Space Applications." Doctoral thesis, KTH, Mekanik, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Callado, Jose Carlos Pereira Lucas. "Interactivity in housing design : a comparative analysis of the 'Avenidas Novas', 'Alvalade' and 'Olivais Norte' districts - Lisbon." Thesis, University of Newcastle Upon Tyne, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Rezaiesarlak, Reza. "Design and Detection Process in Chipless RFID Systems Based on a Space-Time-Frequency Technique." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/73506.

Full text
Abstract:
Recently, Radio Frequency Identification (RFID) technology has become commonplace in many applications. It is based on storing and remotely retrieving the data embedded on tags. The tag structure can be chipped or chipless. In chipped tags, an IC attached to the antenna is biased by an onboard battery or the interrogating signal. Compared to barcodes, chipped tags are expensive because of the chip, which is why chipless RFID tags are in demand as a cheap alternative to chipped RFID tags and barcodes. As the name suggests, the geometry of a chipless tag acts as both modulator and scatterer. As a modulator, it incorporates data into the received electric field launched from the reader antenna and reflects it back to the receiving antenna. The scattered signal from the tag is captured by the antenna and transferred to the reader for the detection process. By employing the singularity expansion method (SEM) and the characteristic mode theory (CMT), a systematic design process is introduced by which the resonant and radiation characteristics of the tag are monitored in the pole diagram versus structural parameters. The antenna is another component of the system. Taking advantage of ultra-wideband (UWB) technology, it is possible to study the time- and frequency-domain characteristics of the antenna used in a chipless RFID system. A new omni-directional antenna element useful in wideband and UWB systems is presented. Then, a new time-frequency technique, called the short-time matrix pencil method (STMPM), is introduced as an efficient approach for analyzing various scattering mechanisms in chipless RFID tags. By studying the performance of STMPM in the early-time and late-time responses of the scatterers, the detection process is improved in cases of multiple tags located close to each other. A space-time-frequency algorithm based on STMPM is introduced to detect, identify, and localize multiple multi-bit chipless RFID tags in the reader area. 
The proposed technique has applications in electromagnetic and acoustic-based detection of targets.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
46

Jones, Graham. "Thermal analysis and testing of a spaceborne passive cooler." Thesis, University of Oxford, 1994. http://ora.ox.ac.uk/objects/uuid:cef5bd18-c80d-442c-bb70-6414fdf29b61.

Full text
Abstract:
This thesis describes the thermal design and thermal testing of the development model radiative cooler for the Composite Infra-Red Spectrometer (CIRS) due for launch on the Cassini spacecraft in 1997. The radiative cooler is used to cool the instrument's Focal Plane Assembly (FPA) to approximately 80K. The FPA holds two arrays of HgCdTe detectors for the mid infra-red spectrometer of the instrument which covers the wavelength range 7 μm to 17 μm. The FPA is mounted from the optics on a titanium alloy tripod and is cooled conductively by the radiator via a flexible link and a cold finger. A range of thermal models of the system have been developed ranging from a simple, analytical model to a finite difference numerical model. A calorimeter was designed to perform heat leak measurements on samples of Multi-Layer Insulation (MLI) blankets to determine the number and type of shields required for the MLI blanket covering the back of the cooler radiator. A test facility incorporating a vacuum system, a space simulator target, and a simulator for the CIRS instrument was designed and constructed for testing the assembled cooler. Various configurations of the Development Model (DM) CIRS cooler were tested as components became available and the results obtained compared to the thermal model predictions. It was found that the cooler will attain a temperature of 80K in operation, but with less excess cooling power than predicted by the thermal models.
APA, Harvard, Vancouver, ISO, and other styles
47

Mittal, Shawn. "Computational Analysis and Design of the Electrothermal Energetic Plasma Source Concept." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/52705.

Full text
Abstract:
Electrothermal (ET) plasma technology has been used for many decades in a wide variety of scientific and industrial applications. Owing to its numerous applications and configurations, ET plasma sources can be used in everything from small-scale space propulsion thrusters to large-scale material deposition systems in manufacturing settings. Given the sheer number of different types of ET sources, there is always additional scientific research and characterization work that can be done to either explore new concepts or improve existing designs. The focus of this work is to explore a novel electrothermal energetic plasma source (ETEPS) that uses energetic gas as the working fluid in order to harness the combustion and ionization energy of the subsequently formed energetic plasma. The goal of the work is to use computer code and engineering methods to characterize the capabilities of the ETEPS concept and then to design a prototype for further study. This thesis details the background of ET plasma physics, the ETEPS concept physics, and the computational and design work done to demonstrate the feasibility of using the ETEPS source in two roles: space thrusters and electrothermal plasma guns.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
48

Robertson, Bradford E. "A hybrid probabilistic method to estimate design margin." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50375.

Full text
Abstract:
Weight growth has been a significant factor in nearly every space and launch vehicle development program. In order to account for weight growth, program managers allocate a design margin. However, methods of estimating design margin are not well suited for the task of assigning a design margin for a novel concept. In order to address this problem, a hybrid method of estimating margin is developed. This hybrid method utilizes range estimating, a well-developed method for conducting a bottom-up weight analysis, and a new forecasting technique known as executable morphological analysis. Executable morphological analysis extends morphological analysis in order to extract quantitative information from the morphological field. Specifically, the morphological field is extended by adding attributes (probability and mass impact) to each condition. This extended morphological field is populated with alternate baseline options with corresponding probabilities of occurrence and impact. The overall impact of alternate baseline options can then be estimated by running a Monte Carlo analysis over the extended morphological field. This methodology was applied to two sample problems. First, the historical design changes of the Space Shuttle Orbiter were evaluated utilizing original mass estimates. Additionally, the FAST reference flight system F served as the basis for a complete sample problem; both range estimating and executable morphological analysis were performed utilizing the work breakdown structure created during the conceptual design of this vehicle.
APA, Harvard, Vancouver, ISO, and other styles
49

Wood, Brandon C. (Brandon Charles) 1974. "An analysis method for conceptual design of complexity and autonomy in complex space system architectures." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/82209.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics; and, (S.M.)--Massachusetts Institute of Technology, Technology and Policy Program, 2001.
Includes bibliographical references (p. 97-99).
by Brandon C. Wood.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
50

Bird, Gregory David. "Linear and Nonlinear Dimensionality-Reduction-Based Surrogate Models for Real-Time Design Space Exploration of Structural Responses." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8653.

Full text
Abstract:
Design space exploration (DSE) is a tool used to evaluate and compare designs as part of the design selection process. While evaluating every possible design in a design space is infeasible, understanding design behavior and response throughout the design space may be accomplished by evaluating a subset of designs and interpolating between them using surrogate models. Surrogate modeling is a technique that uses low-cost calculations to approximate the outcome of more computationally expensive calculations or analyses, such as finite element analysis (FEA). While surrogates make quick predictions, accuracy is not guaranteed and must be considered. This research addressed the need to improve the accuracy of surrogate predictions in order to improve DSE of structural responses. This was accomplished by performing comparative analyses of linear and nonlinear dimensionality-reduction-based radial basis function (RBF) surrogate models for emulating various FEA nodal results. A total of four dimensionality reduction methods were investigated, namely principal component analysis (PCA), kernel principal component analysis (KPCA), isometric feature mapping (ISOMAP), and locally linear embedding (LLE). These methods were used in conjunction with surrogate modeling to predict nodal stresses and coordinates of a compressor blade. The research showed that using an ISOMAP-based dual-RBF surrogate model for predicting nodal stresses decreased the estimated mean error of the surrogate by 35.7% compared to PCA. Using nonlinear dimensionality-reduction-based surrogates did not reduce surrogate error for predicting nodal coordinates. A new metric, the manifold distance ratio (MDR), was introduced to measure the nonlinearity of the data manifolds. When applied to the stress and coordinate data, the stress space was found to be more nonlinear than the coordinate space for this application. 
The upfront training cost of the nonlinear dimensionality-reduction-based surrogates was larger than that of their linear counterparts but small enough to remain feasible. After training, all the dual-RBF surrogates were capable of making real-time predictions. This same process was repeated for a separate application involving the nodal displacements of mode shapes obtained from a FEA modal analysis. The modal assurance criterion (MAC) calculation was used to compare the predicted mode shapes, as well as their corresponding true mode shapes obtained from FEA, to a set of reference modes. The research showed that two nonlinear techniques, namely LLE and KPCA, resulted in lower surrogate error in the more complex design spaces. Using a RBF kernel, KPCA achieved the largest average reduction in error of 13.57%. The results also showed that surrogate error was greatly affected by mode shape reversal. Four different approaches of identifying reversed mode shapes were explored, all of which resulted in varying amounts of surrogate error. Together, the methods explored in this research were shown to decrease surrogate error when performing DSE of a turbomachine compressor blade. As surrogate accuracy increases, so does the ability to correctly make engineering decisions and judgements throughout the design process. Ultimately, this will help engineers design better turbomachines.
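The modal assurance criterion (MAC) used above to compare predicted and true mode shapes has a standard closed form, MAC = |φ₁ᵀφ₂|² / ((φ₁ᵀφ₁)(φ₂ᵀφ₂)); the mode-shape vectors below are illustrative, not data from the thesis. The sketch also shows why mode-shape reversal complicates surrogate training: MAC itself is insensitive to sign.

```python
def mac(phi1, phi2):
    """Modal assurance criterion between two real mode-shape vectors:
    MAC = |phi1 . phi2|^2 / ((phi1 . phi1) * (phi2 . phi2))."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return dot(phi1, phi2) ** 2 / (dot(phi1, phi1) * dot(phi2, phi2))

mode_a = [0.0, 0.5, 1.0, 0.5, 0.0]
mode_b = [0.0, -0.5, -1.0, -0.5, 0.0]   # same shape, reversed sign
mode_c = [0.0, 1.0, 0.0, -1.0, 0.0]     # a different (orthogonal) shape

print(mac(mode_a, mode_b))  # 1.0: MAC cannot distinguish a reversed shape
print(mac(mode_a, mode_c))  # 0.0: unrelated shapes
```

Because MAC equals 1 for both a mode and its negation, a reversal-detection step (such as the four approaches the abstract mentions) is needed before nodal displacements are fed to a surrogate, or the sign flips appear as large training error.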
APA, Harvard, Vancouver, ISO, and other styles
