Dissertations / Theses on the topic 'Space exploration systems'

To see the other types of publications on this topic, follow the link: Space exploration systems.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Space exploration systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Künzli, Simon. "Efficient design space exploration for embedded systems /." Aachen : Shaker Verlag, 2006. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16589.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Özlük, Ali Cemal. "Design Space Exploration for Building Automation Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-130600.

Full text
Abstract:
In the building automation domain, there are gaps between the various tasks involved in design engineering. As a result, completed system designs must be adapted afterwards to the given requirements on system functionality, which incurs higher costs and more engineering effort than planned. Standards therefore provide guidelines and unified artifacts to coordinate these tasks. Moreover, a huge variety of prefabricated devices that realize building automation functions through preprogrammed software components is offered on the market by different manufacturers. Current design methods do not consider this variety, and design solutions are limited to the product lines of a few manufacturers and the expertise of individual system integrators, which restricts their quality. There is thus great potential to optimize both the quality of design solutions and the coordination of design-engineering tasks. For given design requirements, the large number of devices that realize the required functions leads to a combinatorial explosion of design alternatives at different price and quality levels. Finding optimal design alternatives is a hard problem, for which a new solution method based on heuristic approaches is proposed. By integrating problem-specific knowledge into the heuristics, a promisingly high optimization performance is achieved. The optimization algorithms are further conceived to consider a set of flexibly defined quality criteria specified by users and to deliver system designs of high quality. To realize this idea, this thesis proposes optimization algorithms based on goal-oriented operations that achieve a balanced convergence and exploration behavior when searching the design space under different strategies. In addition, a component model is proposed that enables a seamless integration of design-engineering tasks according to the related standards and the application of the optimization algorithms.
APA, Harvard, Vancouver, ISO, and other styles
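The abstract above describes a goal-oriented heuristic search over a combinatorial space of prefabricated devices. Below is a minimal sketch of that idea, hill-climbing over device selections under a weighted price/quality objective; the device catalogue, weights, and move rule are hypothetical, not taken from the thesis.

```python
import random

# Hypothetical device catalogue: for each required function, candidate devices
# at different (price, quality) levels.
catalogue = {
    "temp_sensing":  [("dev_a", 40, 0.60), ("dev_b", 70, 0.90)],
    "light_control": [("dev_c", 25, 0.50), ("dev_d", 55, 0.80), ("dev_e", 90, 0.95)],
    "hvac_control":  [("dev_f", 120, 0.70), ("dev_g", 200, 0.90)],
}

def score(design, w_price=0.5, w_quality=0.5):
    """Weighted objective to maximize: low total price, high mean quality."""
    price = sum(catalogue[f][i][1] for f, i in design.items())
    quality = sum(catalogue[f][i][2] for f, i in design.items()) / len(design)
    return w_quality * quality - w_price * price / 100.0

def hill_climb(steps=200, seed=1):
    rng = random.Random(seed)
    design = {f: rng.randrange(len(opts)) for f, opts in catalogue.items()}
    for _ in range(steps):
        f = rng.choice(list(catalogue))           # goal-oriented move: swap one device
        neighbour = dict(design, **{f: rng.randrange(len(catalogue[f]))})
        if score(neighbour) >= score(design):     # accept non-worsening moves
            design = neighbour
    return design

best = hill_climb()
print({f: catalogue[f][i][0] for f, i in best.items()}, round(score(best), 3))
```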
3

Arney, Dale Curtis. "Rule-based graph theory to enable exploration of the space system architecture design space." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44840.

Full text
Abstract:
NASA's current plans for human spaceflight include an evolutionary series of missions based on a steady increase in capability to explore cis-lunar space, the Moon, near-Earth asteroids, and eventually Mars. Although the system architecture definition has the greatest impact on the eventual performance and cost of an exploration program, selecting an optimal architecture is a difficult task due to the lack of methods to adequately explore the architecture design space and the resource-intensive nature of architecture analysis. This research presents a modeling framework to mathematically represent and analyze the space system architecture design space using graph theory. The framework enables rapid exploration of the design space without the need to limit trade options or to involve the user during the exploration process. The architecture design space, which includes staging locations, vehicle implementation, and system functionality for each mission destination, is explored for three missions in a notional evolutionary exploration program. Using the relative net present value of various system architecture options, the design space exploration reveals that launch vehicle selection is the primary driver in reducing cost, while other options, such as propellant type, staging location, and aggregation strategy, have less impact.
APA, Harvard, Vancouver, ISO, and other styles
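As a toy illustration of the rule-based graph idea, the sketch below encodes staging locations as nodes and feasible transfers as directed edges, then enumerates acyclic routes under a simple pruning rule. The locations, edges, and rule are hypothetical stand-ins, not the thesis's actual model.

```python
# Staging locations as nodes, feasible transfers as directed edges (invented).
transfers = {
    "Earth": ["LEO"],
    "LEO": ["EML1", "LLO"],
    "EML1": ["LLO", "Mars orbit"],
    "LLO": ["Moon surface"],
    "Mars orbit": ["Mars surface"],
}

def routes(graph, node, goal, path=()):
    """Depth-first enumeration of acyclic routes; rule-based filters
    (e.g., vehicle or propellant constraints) would prune branches here."""
    path = path + (node,)
    if node == goal:
        yield path
        return
    for nxt in graph.get(node, []):
        if nxt not in path:              # rule: no revisiting a location
            yield from routes(graph, nxt, goal, path)

for p in routes(transfers, "Earth", "Moon surface"):
    print(" -> ".join(p))
```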
4

Watkinson, Emily Jane. "Space nuclear power systems : enabling innovative space science and exploration missions." Thesis, University of Leicester, 2017. http://hdl.handle.net/2381/40461.

Full text
Abstract:
The European Space Agency’s (ESA’s) 241Am radioisotope power systems (RPSs) research and development programme is ongoing. The chemical form of the americium oxide ‘fuel’ has yet to be decided. The fuel powder will need to be sintered. The size and shape of the oxide powder particles are expected to influence sintering. The current chemical flow-sheet creates lath-shaped AmO2. Investigations with surrogates help to minimise the work with radioactive americium. This study has proposed that certain cubic Ce1-xNdxO2-(x/2) oxides (Ia-3 crystal structures with 0.5 < x < 0.7) could be potential surrogates for some cubic AmO2-(x/2) phases. A new wet-chemical-synthesis-based process for fabricating Ce1-xNdxO2-(x/2) with targeted x-values has been demonstrated. It uses a continuous oxalate coprecipitation and calcination route. An x of 0.6 was nominally targeted. Powder X-ray diffraction (PXRD) and Raman spectroscopy confirmed its Ia-3 structure. An increase in precipitation temperature (25 °C to 60 °C) caused an increase in the median size of the oxalate particles. Lath/plate-shaped particles were precipitated. The Ce-Nd oxide PXRD data were Rietveld-refined to precisely determine the lattice parameter. These data will be essential for future sintering trials with the oxide, in which variations in its crystal structure during sintering will be investigated. Sintering investigations with micrometric CeO2 and Nd2O3 have been conducted to understand how AmO2 and Am2O3 may sinter. This is the first reported spark plasma sintering (SPS) investigation of pure Nd2O3. A comparative study of the SPS and cold-press-and-sinter of CeO2 has been conducted. This is the first study to report the sintering of lath-shaped CeO2 particles. Differences in particle sizes and specific surface areas affected powder cold-pressing and caused variations in the relative density and Vickers hardness of cold-pressed-and-sintered CeO2. The targeted density range (85-90%) was met using both sintering techniques. The cold-press-and-sinter method created intact CeO2 discs with reproducible geometry and superior Vickers hardness compared to those made by SPS.
APA, Harvard, Vancouver, ISO, and other styles
5

Xypolitidis, Benard, and Rudin Shabani. "Architectural Design Space Exploration of Heterogeneous Manycores." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-29528.

Full text
Abstract:
Exploring the benefits of heterogeneous architectures is becoming more desirable due to the migration from single-core to manycore architectural systems. A fast way to explore this heterogeneity is through an architectural design space exploration (ADSE) tool, which gives the designer the option to explore design alternatives before the actual implementation. Heracles Designer is an ADSE tool that allows the user to modify large aspects of the architecture. At present, Heracles Designer is equipped with a single type of processing core, a MIPS CPU. We have extended the Heracles System in order to enable it to model heterogeneity. Our system is called the Heterogeneous Heracles System (HHS), in which a different type of processing core, the OpenRISC CPU, is interfaced into the Heracles System. Test programs were executed on both the MIPS and OpenRISC CPUs and provided promising results. In order to give the designer the option to modify the system architecture without changing the source code, a GUI named ADSET was created. ADSET allows the designer to modify the core settings, memory system configuration, and network topology configuration. In the HHS the MIPS core can only execute basic instructions, while the OpenRISC can execute more advanced instructions, giving the designer the option to explore the effects of heterogeneity based on the big.LITTLE architectural concept. The results of our work provide an infrastructure for integrating different types of processing cores into the HHS.
APA, Harvard, Vancouver, ISO, and other styles
6

Joshi, Prachi. "Design Space Exploration for Embedded Systems in Automotives." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82839.

Full text
Abstract:
As the content of today's automotive systems (safety, driver assistance, infotainment, etc.) increasingly relies on electronics and software, the supporting architecture has become a complex set of heterogeneous data networks. A modern automobile contains up to 100 ECUs and several heterogeneous communication buses (such as CAN and FlexRay) exchanging thousands of signals. Automotive Original Equipment Manufacturers (OEMs) and suppliers face a number of challenges, such as reliability, safety, and cost, in incorporating this growing functionality into vehicles. One of the important challenges in automotive design is the efficient and reliable transmission of signals over communication networks such as CAN and CAN-FD. With the growing feature set, OEMs already face the saturation of bus bandwidth, which hinders the reliability of communication and the inclusion of additional features. In this dissertation, we study the problem of optimizing bandwidth utilization (BU) over CAN-FD networks. Signals are transmitted over the CAN/CAN-FD bus in entities called frames. Signal-to-frame packing has been studied in the literature and is comparable to the bin-packing problem, which is known to be NP-hard. By carefully optimizing signal-to-frame packing, the CAN-FD BU can be reduced. In Chapter 3, we propose a method for assigning offsets to signals and show its importance in improving BU. One of our contributions in an industrial setting is a modest BU improvement of about 2.3%; even this modest improvement could extend the architecture's lifetime by several product cycles, which may translate to saving millions of dollars for the OEM. The optimization of signal-to-frame packing in CAN-FD is therefore the major focus of this dissertation. Another challenge addressed in this dissertation is the reliable mapping of a task model onto a given architecture such that the end-to-end latency requirements are satisfied, avoiding costly redesign and redevelopment due to system design errors.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
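The abstract frames signal-to-frame packing as bin packing. Here is a minimal sketch using the standard first-fit-decreasing heuristic on hypothetical signal sizes; the thesis's actual method also handles signal periods and offset assignment, which this ignores.

```python
# First-fit-decreasing packing of signals into CAN-FD frames.
FRAME_PAYLOAD = 64  # CAN-FD maximum payload, bytes

signals = [("wheel_speed", 4), ("yaw_rate", 4), ("radar_track", 32),
           ("camera_flag", 1), ("steer_angle", 2), ("lidar_blob", 48)]

def pack_signals(signals, capacity=FRAME_PAYLOAD):
    frames = []  # each frame is [used_bytes, [signal names]]
    for name, size in sorted(signals, key=lambda s: -s[1]):  # decreasing size
        for frame in frames:
            if frame[0] + size <= capacity:   # first existing frame that fits
                frame[0] += size
                frame[1].append(name)
                break
        else:
            frames.append([size, [name]])     # otherwise open a new frame
    return frames

for used, names in pack_signals(signals):
    print(f"{used:2d}/{FRAME_PAYLOAD} bytes:", names)
```

Fewer, fuller frames mean fewer frame headers on the bus, which is the mechanism by which better packing lowers bandwidth utilization.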
7

Sanchez, Net Marc. "Support of latency-sensitive space exploration applications in future space communication systems." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112458.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 283-300).
Latency, understood as the total time it takes for data acquired by a remote platform (e.g. satellite, rover, astronaut) to be delivered to the final user in an actionable format, is a primary requirement for several near-Earth and deep space exploration activities. Some applications, such as real-time voice and videoconferencing, can only be satisfied by providing continuous communications links to the remote platform and enforcing hard latency requirements on the system. In contrast, other space exploration applications set latency requirements because their data's scientific value depends on the timeliness with which it is delivered to the final user. These applications, henceforth termed latency-sensitive, are the main focus of this thesis, as they typically require large amounts of data to be returned to Earth in a timely manner. To understand how current space communication systems induce latency, the concept of network centrality is first introduced. It provides a systematic process for quantifying the relative importance of heterogeneous latency contributors, ranking them, and rapidly identifying bottlenecks when parts of the communication infrastructure are modified. Then, a custom-designed centrality measure is integrated within the system architecture synthesis process. It serves as a heuristic function that prioritizes parts of the system for further in-depth analysis and renders the problem of analyzing end-to-end latency requirements manageable. The thesis includes two primary case studies to demonstrate the usefulness of the proposed approach. The first focuses on the return of satellite-based observations for accurate weather forecasting, particularly how latency limits the amount of data available for assimilation at weather prediction centers. The second case study explores how human science operations on the surface of Mars dictate the end-to-end latency requirement that the infrastructure between Mars and Earth has to satisfy. In the first case study, the return of satellite observations for weather prediction during the 2020-2030 decade is analyzed based on future weather satellite programs. Recommendations on how to implement their ground segment are also presented as a function of cost, risk, and weather prediction spatial resolution. This case study also serves as a proof of concept for the proposed centrality measure, as the ranking of latency contributors and network implementations can be compared to current and proposed systems such as JPSS' Common Ground Infrastructure and NPOESS' SafetyNet. The second case study focuses on supporting human science exploration activities on the surface of Mars during the 2040s. It includes astronaut activity modeling, quantification of Mars proximity and Mars-to-Earth link bandwidth requirements, Mars relay sizing and ground infrastructure costing as a function of latency requirements, as well as benchmarking of new technologies such as optical communications over deep space links. Results indicate that levying tight latency requirements on the network that supports human exploration activities at Mars is unnecessary for conducting effective science and incurs significant cost for the Mars Relay Network, especially when no optical technology is present in the system. When optical communications are present, mass savings for the relay system are also possible, although trading latency against infrastructure cost is less effective and highly dependent on the performance of the deep space optical link.
by Marc Sanchez Net.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
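The thesis defines a custom centrality measure for latency contributors; as a simple stand-in, the sketch below ranks hypothetical contributors by their share of a serial end-to-end latency budget, which conveys the bottleneck-identification idea without reproducing the actual measure.

```python
# Serial chain from acquisition to delivery; latencies in minutes (invented).
contributors = {
    "onboard buffering / contact wait": 45.0,
    "downlink transmission": 5.0,
    "ground network transport": 2.0,
    "level-0/1 product processing": 15.0,
    "dissemination to user": 3.0,
}

total = sum(contributors.values())
for name, lat in sorted(contributors.items(), key=lambda kv: -kv[1]):
    print(f"{name:34s} {lat:5.1f} min  {100 * lat / total:5.1f}% of end-to-end")
# The top-ranked contributors are the bottlenecks worth detailed analysis first;
# re-running after an infrastructure change shows how the ranking shifts.
```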
8

Rabbah, Rodric Michel. "Design Space Exploration and Optimization of Embedded Memory Systems." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11605.

Full text
Abstract:
Recent years have witnessed the emergence of microprocessors that are embedded within a plethora of devices used in everyday life. Embedded architectures are customized through a meticulous and time-consuming design process to satisfy stringent constraints with respect to performance, area, power, and cost. In embedded systems, cost limits the memory hierarchy's ability to play as central a role as it does in general-purpose processors: stringent constraints fundamentally limit the physical size and complexity of the memory system. Ultimately, application developers and system engineers are charged with the heavy burden of reducing the memory requirements of an application. This thesis offers the intriguing possibility that compilers can play a significant role in the automatic design space exploration and optimization of embedded memory systems. This insight is founded upon a new analytical model and novel compiler optimizations that are specifically designed to increase the synergy between the processor and the memory system. The analytical models serve to characterize intrinsic program properties, quantify the impact of compiler optimizations on the memory system, and provide deep insight into the trade-offs that affect memory system design.
APA, Harvard, Vancouver, ISO, and other styles
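One classical analytical model that characterizes intrinsic program properties for memory-system exploration is the LRU stack-distance profile: a reference misses in a fully associative LRU cache of C lines exactly when its stack distance is at least C, so one trace pass predicts miss rates for every capacity. A self-contained sketch with a synthetic trace (illustrative of the technique, not the thesis's specific model):

```python
def stack_distances(trace):
    """LRU stack (reuse) distances for an address trace; inf on first touch."""
    stack, dists = [], []
    for addr in trace:
        if addr in stack:
            dists.append(len(stack) - 1 - stack.index(addr))
            stack.remove(addr)
        else:
            dists.append(float("inf"))
        stack.append(addr)          # most-recently-used at the end of the list
    return dists

trace = [0, 1, 2, 0, 1, 3, 0, 1, 2, 4, 0, 1] * 50   # synthetic reference trace

dists = stack_distances(trace)
for cache_lines in (1, 2, 4, 8):
    # Fully associative LRU: a reference misses iff its distance >= capacity.
    misses = sum(1 for d in dists if d >= cache_lines)
    print(f"{cache_lines} lines -> miss rate {misses / len(dists):.2f}")
```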
9

Künzli, Simon [Verfasser]. "Efficient Design Space Exploration for Embedded Systems / Simon Künzli." Aachen : Shaker, 2006. http://d-nb.info/1170533213/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sharma, Jonathan. "STASE: set theory-influenced architecture space exploration." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52330.

Full text
Abstract:
The first of NASA's high-level strategic goals is to extend and sustain human activities across the solar system. As the United States moves into the post-Shuttle era, meeting this goal is more challenging than ever. There are several desired outcomes for this goal, including development of an integrated architecture and capabilities for safe crewed and cargo missions beyond low Earth orbit. NASA's Flexible Path for the future human exploration of space provides the guidelines to achieve this outcome. Designing space system architectures to satisfy the Flexible Path starts early in design, when a downselection process works to reduce the broad spectrum of feasible system architectures into a more refined set containing a handful of alternatives to be considered and studied further in the detailed design phases. This downselection process is supported by what is referred to as architecture space exploration (ASE). ASE is a systems engineering process which generates the design knowledge necessary to enable informed decision-making. The broad spectrum of potential system architectures can be impractical to evaluate: as the system architecture becomes more complex in its structure and decomposition, its space encounters a factorial growth in the number of alternatives to be considered. This effect is known in the literature as combinatorial explosion. For the Flexible Path, the development of new space system architectures can occur over a period of a decade or more. During this time, a variety of changes can occur which lead to new requirements that necessitate the development of new technologies, or changes in budget and schedule. Developing comprehensive and quantitative design knowledge early during design helps to address these challenges. Current methods focus on a small number of system architecture alternatives, from which a series of 'one-off' trade studies is performed to refine and generate more design knowledge. These small-scale studies are unable to adequately capture the broad spectrum of possible architectures and typically rely on qualitative knowledge. The focus of this research is to develop a systems engineering method for system-level ASE during pre-phase A design that is rapid, exhaustive, flexible, traceable, and quantitative. A review of the literature found no current method able to achieve this research objective. This led to the development of the Set Theory-Influenced Architecture Space Exploration (STASE) methodology. The downselection process is modeled as a decision-making process with STASE serving as a supporting systems engineering method. STASE is comprised of two main phases: system decomposition and system synthesis. During system decomposition, the problem is broken down into three system spaces. The architecture space consists of the categorical parameters and decisions that uniquely define an architecture, such as the physical and functional aspects. The design space contains the design parameters that uniquely define individual point designs for a given architecture. The objective space holds the objectives that are used in comparing alternatives. The application of set theory across the system spaces enables an alternative form of representing system alternatives. This novel application of set theory allows the STASE method to mitigate the problem of combinatorial explosion. The fundamental definitions and theorems of set theory form the mathematical basis for the STASE method.
A series of hypotheses was formed to develop STASE in a scientific way. These hypotheses are confirmed by experiments using a proof of concept over a subset of the Flexible Path. The STASE method results are compared against baseline results found using the traditional process of representing individual architectures as the system alternatives. The comparisons highlight many advantages of the STASE method. The greatest advantage is that STASE explores the architecture space comprehensively and more rapidly than the baseline, because the set theory-influenced representation of alternatives grows as a summation, rather than a product, with system complexity in the architecture space. The resultant option subsets provide additional design knowledge that enables new ways of visualizing results and comparing alternatives during early design. The option subsets can also account for changes in some requirements and constraints, so that new analysis of system alternatives is not required. An example decision-making process was performed for the proof of concept. This notional example starts from the entire architecture space with the goal of minimizing total cost and the number of launches. Several decisions are made for different architecture parameters using the developed data visualization and manipulation techniques until a complete architecture is determined. The example serves as a use case that walks through the implementation of the STASE method, the techniques for analyzing the results, and the steps towards making meaningful architecture decisions.
APA, Harvard, Vancouver, ISO, and other styles
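The claimed advantage, that a set-based representation grows as a summation while full enumeration grows as a product, can be made concrete in a few lines over hypothetical architecture parameters:

```python
from itertools import product

# Hypothetical architecture parameters and their options.
parameters = {
    "launch_vehicle": ["LV_A", "LV_B", "LV_C"],
    "propellant": ["LOX/LH2", "LOX/CH4", "storable"],
    "staging_location": ["LEO", "EML1", "LLO"],
    "crew_module": ["capsule", "hab"],
    "lander": ["1-stage", "2-stage", "reusable"],
}

# Enumerating whole architectures grows multiplicatively (combinatorial explosion):
n_architectures = 1
for opts in parameters.values():
    n_architectures *= len(opts)

# Reasoning over option subsets per parameter grows only additively:
n_options = sum(len(opts) for opts in parameters.values())

print(f"full enumeration: {n_architectures} architectures")   # 3*3*3*2*3 = 162
print(f"set-based view:   {n_options} options")               # 3+3+3+2+3 = 14

# itertools.product still generates point designs lazily when they are needed:
first = next(product(*parameters.values()))
print("example architecture:", dict(zip(parameters, first)))
```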
11

Kernstine, Kemp H. "Design space exploration of stochastic system-of-systems simulations using adaptive sequential experiments." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44799.

Full text
Abstract:
The complexities of our surrounding environments are becoming increasingly diverse, more integrated, and continuously more difficult to predict and characterize. These modeling complexities are ever more prevalent in System-of-Systems (SoS) simulations, where computational times can surpass real time and are often dictated by stochastic processes and non-continuous emergent behaviors. As the number of connections in modeling environments continues to increase and the number of external noise variables continues to multiply, these SoS simulations can no longer be explored by traditional means without significantly wasting computational resources. This research develops and tests an adaptive sequential design of experiments to reduce the computational expense of exploring these complex design spaces. Prior to developing the algorithm, the defining statistical attributes of these spaces are researched and identified. Following this identification, various techniques capable of capturing these features are compared and an algorithm is synthesized. The final algorithm is shown to improve the exploration of stochastic simulations over existing methods by increasing global accuracy and computational speed, while reducing the number of simulations required to learn these spaces.
APA, Harvard, Vancouver, ISO, and other styles
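A minimal sketch of an adaptive sequential experiment for a stochastic simulation: start with a coarse design, then spend the remaining run budget where the local sample variance of replicates is highest. The stand-in simulator, noise structure, and tuning constants are all hypothetical, chosen only to make the mechanism visible.

```python
import math, random

rng = random.Random(0)

def simulate(x):
    """Stand-in for an expensive stochastic SoS simulation run (invented)."""
    return math.sin(3 * x) + rng.gauss(0, 0.3 if x < 0.5 else 0.05)

# Coarse initial design: three replicates at five evenly spaced settings.
points = [(x, simulate(x)) for x in (0.0, 0.25, 0.5, 0.75, 1.0) for _ in range(3)]

def local_variance(x0, h=0.15):
    """Sample variance of responses within distance h of candidate x0."""
    ys = [y for x, y in points if abs(x - x0) <= h]
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys) / max(len(ys) - 1, 1)

for _ in range(20):                                  # sequential run budget
    x_next = max((i / 50 for i in range(51)), key=local_variance)
    points.append((x_next, simulate(x_next)))        # sample the noisiest region

counts = {}
for x, _ in points:
    counts[x] = counts.get(x, 0) + 1
print("extra runs concentrated near:",
      sorted(counts, key=counts.get, reverse=True)[:3])  # expect the noisy x < 0.5 side
```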
12

Wessen, Randii. "Market-based systems for solving space exploration resource allocation problems." Thesis, University of South Wales, 2002. https://pure.southwales.ac.uk/en/studentthesis/marketbased-systems-for-solving-space-exploration-resource-allocation-problems(074a1185-dcb1-4e7d-b771-988b34529722).html.

Full text
Abstract:
The very nature of space exploration implies "doing that which has never been done before." As such, the resources needed to meet the objectives of such a grand endeavor are extremely scarce, when they are available at all. Every mission since the birth of the space programme has been resource-limited. To overcome these scarcity issues, principles developed in economics can be used. Such economic systems, referred to as market-based systems, are based on the laws of supply and demand. Knowledge of supply and demand reveals true information about users' needs for resources; this information removes the need to appeal to a higher authority or to hold multiple meetings to resolve oversubscription issues. This research programme applied a market-based system to a varied set of planetary exploration resource allocation problems. In the past, resource-constrained problems were solved through the use of many engineers and a large number of "working" meetings. The approach was successful but exceedingly time-consuming, labor-intensive, and very expensive. The questions addressed in this work were: (1) could a market-based approach solve space exploration allocation problems, and (2) what were the limits of the type of problems that could be solved? Prior to this research, only one attempt had been made to apply a market-based system to a space exploration problem: work performed in 1991 to solve the oversubscription of mission requests for Deep Space Network (DSN) antennas [1]. That work was never approved to move from the experimental phase into an operational environment. The research described in this overview builds on the DSN attempt and extends it to many different types of problems. This overview discusses the application of the technique to four projects: (1) development of the instrument payload for the Cassini mission to Saturn; (2) manifesting of Space Shuttle secondary payloads; (3) allocation of spacecraft time for RADAR observations during Earth orbital operations; and (4) manifesting of Space Shuttles destined for the International Space Station. Results from this research show that market-based systems can solve resource oversubscription issues faced during the development and operation of planetary spacecraft missions. In addition, the application of economic principles represents a unique and innovative approach to solving spacecraft resource issues and has been incorporated into the set of management tools available to solve such issues more quickly and cheaply.
APA, Harvard, Vancouver, ISO, and other styles
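A minimal sketch of the market-based idea: missions bid credits per antenna hour, and a fixed supply is cleared in descending bid order. Missions, bids, and capacity are invented for illustration; the thesis's mechanisms are richer than this single-round auction.

```python
# Hypothetical oversubscription scenario: missions bid 'credits' for DSN
# antenna hours; bids, not committee meetings, decide who gets tracking time.
capacity_hours = 100
bids = [  # (mission, hours requested, credits offered per hour)
    ("mission_a", 40, 9.0),
    ("mission_b", 50, 7.5),
    ("mission_c", 30, 6.0),
    ("mission_d", 25, 3.0),
]

remaining = capacity_hours
for mission, hours, price in sorted(bids, key=lambda b: -b[2]):
    granted = min(hours, remaining)     # clear highest willingness-to-pay first
    remaining -= granted
    print(f"{mission:10s} bid {price:4.1f}/h -> granted {granted:3d}/{hours} h")
# Willingness to pay reveals each user's true valuation; the lowest cleared
# bid acts as the market price of an antenna hour.
```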
13

Hamann, Arne. "Iterative design space exploration and robustness optimization for embedded systems." Göttingen : Cuvillier, 2008. http://d-nb.info/992301777/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Tierno, Antonio. "Automatic Design Space Exploration of Fault-tolerant Embedded Systems Architectures." Doctoral thesis, Università degli studi di Trento, 2023. https://hdl.handle.net/11572/364571.

Full text
Abstract:
Embedded systems may have competing design objectives, such as maximizing reliability, increasing functional safety, minimizing product cost, and minimizing energy consumption. Architectures must therefore be configured to meet varied requirements and multiple design objectives. In particular, reliability and safety are receiving increasing attention; consequently, the configuration of fault-tolerance mechanisms is a critical design decision. This work proposes a method for the automatic selection of appropriate fault-tolerant design patterns that optimizes multiple objective functions simultaneously. First, we present an exact method that leverages the power of Satisfiability Modulo Theories to encode the problem symbolically, based on a novel assessment of reliability that is part of the evaluation of alternative designs. We then empirically evaluate the performance of a near-optimal approximate variant that allows us to solve the problem even when the instance size makes the exact method intractable in terms of computing resources. The efficiency and scalability of the method are validated with a series of experiments of different sizes and characteristics, and by comparing it with existing methods on a test problem that is widely used in the reliability-optimization literature.
APA, Harvard, Vancouver, ISO, and other styles
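A minimal sketch of the SMT-based selection idea using the z3 solver (pip install z3-solver): one fault-tolerance pattern per component, a linear cost objective, and a scaled additive score standing in for the reliability constraint. The components, patterns, and numbers are illustrative, not the thesis's encoding.

```python
from z3 import Optimize, Bool, If, Sum, is_true, sat

# Per component: candidate patterns -> (cost, scaled log-reliability score).
patterns = {
    "sensor":   {"simplex": (1, 90), "duplex": (2, 97), "tmr": (3, 99)},
    "computer": {"simplex": (4, 92), "duplex": (8, 98), "tmr": (12, 99)},
    "actuator": {"simplex": (2, 91), "tmr": (6, 99)},
}

opt = Optimize()
pick = {(c, p): Bool(f"{c}_{p}") for c in patterns for p in patterns[c]}

for c in patterns:  # exactly one pattern per component
    opt.add(Sum([If(pick[c, p], 1, 0) for p in patterns[c]]) == 1)

cost = Sum([If(v, patterns[c][p][0], 0) for (c, p), v in pick.items()])
rel  = Sum([If(v, patterns[c][p][1], 0) for (c, p), v in pick.items()])

opt.add(rel >= 290)   # system-level reliability floor (additive surrogate)
opt.minimize(cost)    # cheapest configuration meeting the floor

if opt.check() == sat:
    m = opt.model()
    print([f"{c}: {p}" for (c, p), v in pick.items() if is_true(m[v])])
```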
15

La Tour, Paul A. (Paul Alexis). "Combining tradespace exploration with system dynamics to explore future space architectures." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106593.

Full text
Abstract:
Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2016.
Some pages printed landscape orientation. Cataloged from PDF version of thesis.
Includes bibliographical references (pages 342-351).
This work proposes a merger of tradespace exploration with system dynamics modeling techniques in a complementary approach. It tests the value of this mixed method for modeling the multiplicity of inputs and complexity of feedback loops that affect the cost, schedule, and performance of satellite constellations within the Department of Defense. The resulting simulation enables direct comparison of the effects of changing architectural design points and policy choices with respect to satellite acquisitions and fielding. A generation-over-generation examination of policy choices is made possible through the application of soft systems modeling of experience and learning effects. The resulting model enables examination of possible futures given variations in assumptions about both internal and external forces on a satellite production pipeline. This thesis performs a policy analysis examining the current path of the Global Positioning System acquisition and compares it to equivalent position, navigation, and timing capability delivered through a variety of disaggregated options while varying design lives, production quantities, non-recurring engineering, and time between generations. The extensibility of the technique is investigated by adapting the model to the mission area of Weather and Climate Sensing: the thesis performs a policy analysis examining different disaggregated approaches for the Joint Polar Satellite System, focusing on the impact of complexity. Discussion of factors such as design choices, context variables, tuning variables, and model execution and construction is also included.
by Paul A. La Tour.
Ph. D. in Engineering Systems
APA, Harvard, Vancouver, ISO, and other styles
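A minimal stock-and-flow sketch of the kind of system dynamics model the thesis couples to tradespace exploration: a production pipeline with a learning curve feeding an on-orbit fleet with first-order retirement. All rates and parameters are hypothetical.

```python
import math

years, dt = 20.0, 0.25
in_production, on_orbit, produced_total = 4.0, 6.0, 0.0
start_rate = 1.0        # satellites entering production per year
base_build_time = 3.0   # years to build the first units
design_life = 7.5       # mean on-orbit life, years
learning = 0.9          # 90% learning curve per doubling of cumulative output

t = 0.0
while t < years:
    build_time = base_build_time * learning ** math.log2(1 + produced_total)
    completion = in_production / build_time   # outflow of the production stock
    retirement = on_orbit / design_life       # first-order decay of the fleet
    in_production += (start_rate - completion) * dt
    on_orbit += (completion - retirement) * dt
    produced_total += completion * dt
    t += dt

print(f"after {years:.0f} yr: {on_orbit:.1f} satellites on orbit, "
      f"unit build time down to {build_time:.2f} yr")
```

Re-running the loop with different design lives or production rates is the system dynamics half of the comparison; sweeping those inputs over a grid of architectures is the tradespace half.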
16

Jones, Adam T. (Adam Thomas). "Design space exploration and optimization using modern ship design tools." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92124.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Engineering Systems Division, 2014.
Thesis: Nav. E., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 163-164).
Modern naval architects use a variety of computer design tools to explore feasible options for clean-sheet ship designs. Under the Naval Sea Systems Command (NAVSEA), the Naval Surface Warfare Center, Carderock Division (NSWCCD) has created computer tools for ship design and analysis purposes. This thesis presents an overview of some of these tools, specifically the Advanced Ship and Submarine Evaluation Tool (ASSET) version 6.3 and the Integrated Hull Design Environment (IHDE). It provides a detailed explanation of a ship design using these advanced tools and presents methods for optimizing the performance of the hullform, the selection of engines for fuel efficiency, and the loading of engines for fuel efficiency. The detailed ship design explores the design space given a set of specific requirements for a cruiser-type naval vessel. The hullform optimization technique reduces a ship's residual resistance by using both ASSET and IHDE in a Design of Experiments (DoE) approach to reach an optimum solution. A detailed example shows a 12% reduction in total ship drag from implementing this technique on a previously designed hullform; the reduction in drag yields a proportional reduction in the amount of fuel used to push the ship through the water. The engine selection optimization technique uses MATLAB to calculate the ideal engines to use for fuel minimization. For a given speed-time or power-time profile, the code evaluates hundreds of combinations of engines and provides the optimum engine combination and engine loading for minimizing total fuel consumption. This optimization has the potential to reduce the fuel consumption of current naval warships by upwards of 30%.
by Adam T. Jones.
S.M.
Nav. E.
APA, Harvard, Vancouver, ISO, and other styles
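The engine-loading optimization is a small exhaustive search. Below is a sketch in Python rather than MATLAB, with an invented plant and a toy specific-fuel-consumption curve, enumerating engine subsets for each leg of a speed-time profile; none of the numbers come from the thesis.

```python
from itertools import combinations

engines = {"gt_main": 25.0, "diesel_1": 9.0, "diesel_2": 9.0}  # max power, MW

def sfc(load_fraction):
    """Toy specific-fuel-consumption curve (t/MWh): worse at part load."""
    return 0.20 + 0.15 * (1.0 - load_fraction) ** 2

profile = [(12.0, 2000), (30.0, 500), (6.0, 3000)]  # (required MW, hours)

def best_lineup(power):
    """Cheapest engine subset able to supply `power`, sharing load equally."""
    best = (float("inf"), None)
    for r in range(1, len(engines) + 1):
        for names in combinations(engines, r):
            cap = sum(engines[n] for n in names)
            if cap < power:
                continue                      # lineup cannot meet the demand
            lf = power / cap                  # common load fraction
            best = min(best, (power * sfc(lf), names))
    return best

total = 0.0
for power, hours in profile:
    rate, names = best_lineup(power)
    total += rate * hours
    print(f"{power:5.1f} MW for {hours:4d} h -> {names}, {rate:.2f} t/h")
print(f"total fuel: {total:,.0f} t")
```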
17

Oliveira, Marcio Ferreira da Silva. "Model driven engineering methodology for design space exploration of embedded systems." Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/102694.

Full text
Abstract:
Nowadays we are surrounded by devices containing hardware and software components. These devices support a wide spectrum of different domains, such as telecommunications, avionics, automotive, and others. They are found everywhere, and so they are called embedded systems: information processing systems embedded into enclosing products, where the processing system is not the main functionality of the product. The ever-growing complexity of modern embedded systems requires the utilization of more components to implement the functions of a single system. Such increasing functionality leads to growth in design complexity, which must be managed properly because, besides stringent requirements regarding power, performance, and cost, time-to-market pressure also constrains the design of embedded systems. Design space exploration (DSE) is the systematic generation and evaluation of design alternatives in order to optimize system properties and fulfill requirements. In embedded system development, specifically in Platform-Based Design (PBD), current DSE methodologies are challenged by the increasing number of design decisions at multiple abstraction levels, which leads to an explosion of combinations of alternatives. However, only a reduced number of these alternatives leads to feasible designs that fulfill the non-functional requirements. Moreover, each design decision influences subsequent decisions and system properties; there are inter-dependencies between design decisions, and the order in which decisions are made matters to the final system implementation. Furthermore, there is a trade-off between heuristics for a specific DSE, which improve the optimization results, and global optimizers, which are flexible enough to be applied in different DSE scenarios. In order to overcome the identified challenges, an MDE methodology for DSE is proposed. For this methodology a DSE domain metamodel is proposed to represent relevant DSE concepts such as design space, design alternatives, evaluation method, constraints, and others. Moreover, this metamodel represents different DSE problems, improving the flexibility of the proposed framework. Model transformations are used to implement DSE rules, which constrain, guide, and generate design candidates. Focusing on the mapping between layers in a PBD approach, a novel design space abstraction is provided to represent the multiple design decisions involved in the mapping as a single DSE problem. This abstraction is based on the categorical graph product, decoupling the exploration algorithm from the design space and being well suited to implementation in automatic exploration tools. Upon this abstraction, the DSE method can benefit from the MDE methodology, opening new optimization opportunities and improving the integration of DSE into the development process and the specification of DSE scenarios.
APA, Harvard, Vancouver, ISO, and other styles
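The mapping step in platform-based design that the thesis abstracts can be pictured as enumerating task-to-resource assignments and filtering them by rules; the sketch below does this naively (the categorical-graph-product abstraction exists precisely to avoid such brute force at scale). Tasks, resources, and the rule are hypothetical.

```python
from itertools import product

tasks = ["sense", "filter", "control", "log"]
resources = {"cpu0": 2, "cpu1": 2, "dsp": 1}     # capacity in tasks

def feasible(mapping):
    load = {r: 0 for r in resources}
    for r in mapping.values():
        load[r] += 1
    if any(load[r] > cap for r, cap in resources.items()):
        return False                              # capacity rule
    return mapping["filter"] == "dsp"             # rule: filter needs the DSP

candidates = []
for combo in product(resources, repeat=len(tasks)):   # full mapping space
    mapping = dict(zip(tasks, combo))
    if feasible(mapping):
        candidates.append(mapping)

print(len(candidates), "feasible of", len(resources) ** len(tasks), "mappings")
print(candidates[0])
```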
18

Holschuh, Bradley Thomas. "Space exploration challenges : characterization and enhancement of space suit mobility and planetary protection policy analysis." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62036.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics; and, (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2010.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 189-193).
This thesis addresses two challenges associated with advanced space and planetary exploration: characterizing and improving the mobility of current and future gas pressurized space suits; and developing effective domestic Planetary Protection policies for the emerging private space industry. Gas-pressurized space suits are known to be highly resistive to astronaut movement. As NASA seeks to return to planetary exploration, there is a critical need to improve full body space suit mobility for planetary exploration. Volume effects (the torque required to displace gas due to internal volume change during movement) and structural effects (the additional torque required to bend the suit materials in their pressurized state) are cited as the primary contributors to suit rigidity. Constant volume soft joints have become the design goal of space suit engineers, and simple joints like the elbow are believed to have nearly achieved such performance. However, more complex joints like the shoulder and waist have not yet achieved comparable optimization. As a result, it is hypothesized that joints like the shoulder and waist introduce a third, and not well studied, contributor to space suit rigidity: pressure effects (the additional work required to compress gas in the closed operating volume of the suit during movement). This thesis quantifies the individual contributors to space suit rigidity through modeling and experimentation. An Extravehicular Mobility Unit (EMU) space suit arm was mounted in a -30kPa hypobaric chamber, and both volume and torque measurements were taken versus elbow angle. The arm was tested with both open and closed operating volumes to determine the contribution of pressure effects to total elbow rigidity. These tests were then repeated using a full EMU volume to determine the actual impact of elbow pressure effects on rigidity when connected to the full suit. In both cases, structural and volume effects were found to be primary contributors to elbow joint rigidity, with structural effects dominating at low flexion angles and volume effects dominating at high flexion angles; pressure effects were detected in the tests that used only the volume of the arm, but were found to be a secondary contributor to total rigidity (on average < 5%). These pressure effects were not detected in the tests that used the volume representative of a full EMU. Unexpected structural effects behavior was also measured at high (> 75°) flexion angles, suggesting that the underlying mechanisms of these effects are not yet fully understood, and that current models predicting structural effects behavior do not fully represent the actual mechanisms at work. The detection of pressure effects in the well-optimized elbow joint, even if only in a limited volume, suggests that these effects may prove significant for sub-optimized, larger, multi-axis space suit joints. A novel, fast-acting pressure control system, developed in response to these findings, was found to be capable of mitigating pressure spikes due to volume change (and thus, pressure effects). Implementation of a similar system in future space suit designs could lead to improvements in overall suit mobility. A second study, which focused on the implications of the development of the US private space industry on domestic Planetary Protection policy, is also presented. 
As a signatory of the 1967 Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space (commonly known as the Outer Space Treaty), the United States is responsible for implementing Planetary Protection procedures designed to prevent biological contamination of the Solar System, as well as contamination of the Earth by any samples returned from extra-terrestrial bodies. NASA has established policies and procedures to comply with this treaty, and has successfully policed itself independently and autonomously since the signing of the treaty. However, for the first time in the history of the American space program, private entities outside of NASA have developed the capability and interest to send objects into space and beyond Earth orbit, and no current protocol exists to guarantee that these profit-minded entities comply with US Planetary Protection obligations (a costly and time-consuming process). This thesis presents a review of US Planetary Protection obligations, including NASA's procedures and infrastructure related to Planetary Protection, and based on these current protocols provides policy architecture recommendations for the emerging commercial spaceflight industry. It was determined that the most effective policy architecture for ensuring public and private compliance with Planetary Protection places NASA in control of all domestic Planetary Protection matters; in this role NASA is charged with overseeing, supporting, and regulating the private spaceflight industry. The underlying analysis and architecture tradeoffs that led to this recommendation are presented and discussed.
by Bradley Thomas Holschuh.
S.M.in Technology and Policy
S.M.
APA, Harvard, Vancouver, ISO, and other styles
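The 'pressure effect' has a simple isothermal back-of-envelope form: compressing the closed gas volume raises internal pressure per Boyle's law, so the same joint motion matters far less against a full-suit volume than against an isolated arm. All numbers below are illustrative assumptions, not measurements from the thesis.

```python
P0 = 29.6e3       # suit operating pressure ~4.3 psi, Pa (absolute, vs. vacuum)
V_ARM = 2.0e-3    # closed operating volume of an isolated arm, m^3 (assumed)
V_SUIT = 60.0e-3  # representative full-EMU gas volume, m^3 (assumed)
DV = 0.05e-3      # gas volume displaced by elbow flexion, m^3 (assumed)

for label, vol in (("arm only", V_ARM), ("full suit", V_SUIT)):
    p1 = P0 * vol / (vol - DV)   # isothermal compression: P0*V = P1*(V - dV)
    rise = p1 - P0
    print(f"{label:9s}: pressure rise {rise:6.1f} Pa "
          f"({100 * rise / P0:.2f}% of suit pressure)")
# With these assumptions the same flexion produces a ~30x smaller rise against
# the full-suit volume, consistent with pressure effects being detectable in
# the arm-only tests but not in the full-EMU tests described above.
```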
19

Erbaş, Çaǧkan. "System-level modeling and design space exploration for multiprocessor embedded system-on-chip architectures." Amsterdam : Vossiuspers ; Universiteit van Amsterdam, 2006. http://dare.uva.nl/document/38007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Silva, Jeferson Santiago da. "Architectural exploration of digital systems design for FPGAs using C/C++/SystemC specification languages." Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/119082.

Full text
Abstract:
The increasing demand for high computational performance and massive data processing has driven the development of systems-on-chip. One implementation target for complex digital systems is the FPGA (Field-Programmable Gate Array), heavily used for prototyping systems and for the rapid development of complex electronic products with short time-to-market. Certain inefficiencies of FPGA devices relate to performance and power degradation with respect to custom hardware design. In this context, this master's thesis proposes a survey of FPGA optimization techniques. The work presents a literature review of methods for power and area reduction applied to FPGA designs. Techniques for increasing performance and accelerating design are presented, based on classic and state-of-the-art academic works. The main focus of this work is to discuss high-level design techniques and to present the results obtained in synthesis examples we developed, comparing them with hand-coded HDL (Hardware Description Language) designs. We present a methodology for fast digital design development using High-Level Synthesis (HLS) environments. Our methods include efficient high-level code partitioning for proper exploration of synthesis directives in HLS tools. However, a non-guided HLS flow showed poor synthesis results when compared to hand-coded HDL designs. To fill this gap, we developed an iterative design space exploration method aimed at improving the area results. Our method is described in a high-level script language and is compatible with the Xilinx VivadoTM HLS compiler. It is capable of detecting optimization checkpoints, automatically inserting synthesis directives, and checking the results with the aim of reducing area consumption. Our Design Space Exploration (DSE) experimental results proved more efficient than a non-guided HLS design flow by at least 50% for a VLIW (Very Long Instruction Word) processor and 62% for a 12th-order FIR (Finite Impulse Response) filter implementation. Our area results in terms of flip-flops were up to 4X lower compared to a non-guided HLS flow, while the performance overhead was around 38% for the VLIW processor. In the FIR filter example, the flip-flop reduction was up to 3X, with no relevant LUT or performance overhead.
APA, Harvard, Vancouver, ISO, and other styles
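A schematic of the iterative directive-exploration loop: the thesis drives the Vivado HLS compiler via scripts, whereas here run_synthesis is a hypothetical stub so that the control flow is runnable on its own; directive names and cost numbers are invented.

```python
import random

DIRECTIVES = ["pipeline", "unroll_2", "unroll_4", "array_partition", "inline"]

def run_synthesis(directives):
    """Stub standing in for a real flow that would emit a Tcl script, invoke
    the HLS tool, and parse its report. Returns (flip_flops, latency)."""
    rng = random.Random(",".join(sorted(directives)))
    ff = 4000 - 600 * ("array_partition" in directives) + rng.randrange(500)
    lat = 120 - 40 * ("pipeline" in directives) + rng.randrange(20)
    return ff, lat

rng = random.Random(7)
best_dirs, (best_ff, best_lat) = set(), run_synthesis(set())
for _ in range(10):                           # iterative refinement budget
    trial = set(rng.sample(DIRECTIVES, 2))    # candidate directive set
    ff, lat = run_synthesis(trial)
    if ff < best_ff:                          # objective: reduce flip-flop count
        best_dirs, best_ff, best_lat = trial, ff, lat
print("best directives:", sorted(best_dirs),
      "FF:", best_ff, "latency:", best_lat)
```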
21

Briao, Eduardo Wenzel. "Métodos de Exploração de Espaço de Projeto em Tempo de Execução em Sistemas Embarcados de Tempo Real Soft baseados em Redes-Em-Chip." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/13157.

Full text
Abstract:
The complexity of electronic systems design has been increasing due to the technological evolution, which now allows the inclusion of a complete system on a single chip (SoC – System-on-Chip). In order to cope with the corresponding design complexity and reduce design costs and time-to-market, systems are built by assembling pre-designed and pre-verificated functional modules, called IP (Intellectual Property) cores. IP cores can be reused from previous designs or acquired from third-party vendors. However, an adequate communication architecture is required to interconnect these IP cores. Current communication architectures (busses) are unsuitable for the communication requirements of future SoCs (sharing of bandwidth, lack of scalability). Networks-on-Chip (NoC) arise as one of the solutions to fulfill these requirements. While developing NoC-based embedded systems, the NoC customization is mandatory to fulfill design constraints. This design space exploration (DSE), according to most approaches in the literature, is achieved at compile-time (off-line DSE), assuming the profiles of the tasks that will be executed in the embedded system are known a priori. However, nowadays, embedded systems are becoming more and more similar to generic processing devices (such as palmtops), where the tasks to be executed are not completely known a priori. Due to the dynamic modification of the workload of the embedded system, the fulfillment of requirements can be accomplished by using adaptive mechanisms that implement dynamically the DSE (run-time DSE or on-line DSE). In the scope of this work, DSE is on-line. In other words, when the system is running, adaptive mechanisms will be executed to fulfill the requirements of the system. Consequently, on-line DSE can achieve better results than off-line DSE alone, especially considering embedded systems with tight constraints. It is thus possible to maximize the lifetime of the battery that feeds an embedded system, or even to decrease the deadline miss ratio in a soft real-time system, for example by relocating tasks dynamically in order to generate less communication among the processors, provided that the system runs for enough execution time in order to amortize the migration overhead.In this work, a combination of allocation heuristics from the domain of Distributed Computing Systems is applied, for instance bin-packing and linear clustering algorithms. Results shows that applying task reallocation using the Worst-Fit and Linear Clustering combination reduces the energy consumption and deadline miss ratio by 17% and 37%, respectively, using the copy task migration model.
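As an illustration of the heuristic combination named in this abstract (Linear Clustering to group heavily communicating tasks, Worst-Fit to place the groups), here is a minimal Python sketch; all task weights, communication volumes, and the clustering threshold are invented for the example, and the thesis's actual cost models and migration mechanics are far more detailed.

```python
def linear_clusters(tasks, comm, threshold):
    """Greedily merge task pairs whose communication volume exceeds
    `threshold` into clusters (a crude linear-clustering pass)."""
    cluster_of = {t: {t} for t in tasks}
    for (a, b), volume in sorted(comm.items(), key=lambda kv: -kv[1]):
        if volume < threshold or cluster_of[a] is cluster_of[b]:
            continue
        merged = cluster_of[a] | cluster_of[b]
        for t in merged:
            cluster_of[t] = merged
    # Deduplicate: each distinct set is one cluster.
    return {frozenset(c) for c in cluster_of.values()}

def worst_fit(clusters, loads, cost):
    """Place each cluster on the currently least-loaded processor."""
    placement = {}
    for cluster in sorted(clusters, key=lambda c: -sum(cost[t] for t in c)):
        proc = min(loads, key=loads.get)   # Worst-Fit: most free capacity
        loads[proc] += sum(cost[t] for t in cluster)
        placement[cluster] = proc
    return placement

# Toy example: four tasks, two heavily communicating pairs, two processors.
tasks = ["t1", "t2", "t3", "t4"]
comm = {("t1", "t2"): 90, ("t3", "t4"): 80, ("t2", "t3"): 5}
cost = {"t1": 3, "t2": 2, "t3": 4, "t4": 1}
clusters = linear_clusters(tasks, comm, threshold=50)
print(worst_fit(clusters, {"p0": 0.0, "p1": 0.0}, cost))
```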
APA, Harvard, Vancouver, ISO, and other styles
22

Lafleur, Jarret Marshall. "A Markovian state-space framework for integrating flexibility into space system design decisions." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/43749.

Full text
Abstract:
The past decades have seen the state of the art in aerospace system design progress from a scope of simple optimization to one including robustness, with the objective of permitting a single system to perform well even in off-nominal future environments. Integrating flexibility, or the capability to easily modify a system after it has been fielded in response to changing environments, into system design represents a further step forward. One challenge in accomplishing this lies in the fact that the decision-maker must consider not only the present system design decision but also sequential future design and operation decisions. Despite extensive interest in the topic, the state of the art in designing flexibility into aerospace systems, and particularly space systems, tends to be limited to analyses that are qualitative, deterministic, single-objective, and/or limited to a single future time period. To address these gaps, this thesis develops a stochastic, multi-objective, and multi-period framework for integrating flexibility into space system design decisions. Central to the framework are five steps. First, system configuration options are identified and the costs of switching from one configuration to another are compiled into a cost transition matrix. Second, probabilities that demand on the system will transition from one mission to another are compiled into a mission demand Markov chain. Third, one performance matrix for each design objective is populated to describe how well the identified system configurations perform in each of the identified mission demand environments. The fourth step employs multi-period decision analysis techniques, including Markov decision processes (MDPs) from the field of operations research, to find efficient paths and policies a decision-maker may follow. The final step examines the implications of these paths and policies for the primary goal of informing initial system selection. Overall, this thesis unifies state-centric concepts of flexibility from the economics and engineering literature with sequential decision-making techniques from operations research. The end objective of this thesis's framework and its supporting analytic and computational tools is to enable selection of the next-generation space systems today, tailored to decision-maker budget and performance preferences, that will be best able to adapt and perform in a future of changing environments and requirements. Following extensive theoretical development, the framework and its steps are applied to the space system planning problems of (1) DARPA-motivated multiple- or distributed-payload satellite selection and (2) NASA human space exploration architecture selection.
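The step-four machinery can be illustrated with a small value-iteration routine over the three artifacts the framework builds: the cost transition matrix, the mission demand Markov chain, and a performance matrix. The sketch below is a minimal single-objective illustration with invented matrices, not the thesis's actual data.

```python
import numpy as np

# Toy data: 2 configurations, 2 mission-demand states (all values invented).
switch_cost = np.array([[0.0, 5.0],
                        [4.0, 0.0]])   # cost of moving config i -> j
demand_P = np.array([[0.8, 0.2],
                     [0.3, 0.7]])      # mission demand Markov chain
performance = np.array([[10.0, 2.0],
                        [3.0, 9.0]])   # reward of config j in demand d
gamma = 0.9                            # discount factor

n_cfg, n_dem = performance.shape
V = np.zeros((n_cfg, n_dem))
for _ in range(500):                   # value iteration to a fixed point
    Q = np.empty((n_cfg, n_dem, n_cfg))
    for i in range(n_cfg):
        for d in range(n_dem):
            for j in range(n_cfg):
                expected_next = demand_P[d] @ V[j]   # E over next demand
                Q[i, d, j] = (performance[j, d] - switch_cost[i, j]
                              + gamma * expected_next)
    V_new = Q.max(axis=2)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

policy = Q.argmax(axis=2)   # best next configuration per (config, demand)
print(policy)
```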
APA, Harvard, Vancouver, ISO, and other styles
23

Zentner, John Marc. "A Design Space Exploration Process for Large Scale, Multi-Objective Computer Simulations." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11572.

Full text
Abstract:
The primary contributions of this thesis are associated with the development of a new method for exploring the relationships between inputs and outputs for large-scale computer simulations. Primarily, the proposed design space exploration procedure uses a hierarchical partitioning method to help mitigate the curse of dimensionality often associated with the analysis of large-scale systems. Closely coupled with the use of a partitioning approach is the problem of how to partition the system. This thesis also introduces and discusses a quantitative method developed to aid the user in finding a set of good partitions for creating partitioned metamodels of large-scale systems. The new hierarchically partitioned metamodeling scheme, the lumped parameter model (LPM), was developed to address two primary limitations of current partitioning methods for large-scale metamodeling. First, the LPM was formulated to negate the need to rely on variable redundancies between partitions to account for potentially important interactions. By using a hierarchical structure, the LPM addresses the impact of neglected direct interactions by indirectly accounting for them via the interactions that occur between the lumped parameters in intermediate- to top-level mappings. Second, the LPM was developed to allow for hierarchical modeling of black-box analyses that do not have available intermediaries around which to partition the system. The second contribution of this thesis is a graph-based partitioning method for large-scale, black-box systems. The graph-based partitioning method combines the graph and sparse-matrix decomposition methods used by the electrical engineering community with the results of a screening test to create a quantitative method for partitioning large-scale, black-box systems. An ANOVA analysis of the results of a screening test can be used to determine the sparse nature of the large-scale system. With this information known, the sparse-matrix and graph-theoretic partitioning schemes can then be used to create potential sets of partitions to use with the lumped parameter model.
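The graph-based partitioning idea (use a screening test to discover which inputs interact, then cut the resulting sparse interaction graph) can be sketched as follows; the interaction strengths stand in for ANOVA effect estimates and are purely illustrative.

```python
from collections import defaultdict

def interaction_graph(effects, cutoff):
    """Build an adjacency list keeping only input pairs whose screening
    (e.g., ANOVA) interaction effect exceeds `cutoff`."""
    adj = defaultdict(set)
    for (a, b), effect in effects.items():
        if effect >= cutoff:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def connected_partitions(variables, adj):
    """Connected components of the interaction graph = candidate partitions."""
    seen, parts = set(), []
    for v in variables:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        parts.append(comp)
    return parts

# Invented screening results for six inputs of a black-box simulation.
effects = {("x1", "x2"): 0.9, ("x2", "x3"): 0.7, ("x4", "x5"): 0.8,
           ("x3", "x4"): 0.05, ("x5", "x6"): 0.6}
adj = interaction_graph(effects, cutoff=0.3)
print(connected_partitions(["x1", "x2", "x3", "x4", "x5", "x6"], adj))
# -> [{'x1', 'x2', 'x3'}, {'x4', 'x5', 'x6'}]
```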
APA, Harvard, Vancouver, ISO, and other styles
24

Papavramidis, Konstantinos. "Evaluation of Potential Propulsion Systems for a Commercial Micro Moon Lander." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263954.

Full text
Abstract:
In the advent of the Space 4.0 era, with the commercialization and increased accessibility of space, a requirement analysis, trade-off options, development status, and critical areas of a propulsion system for a commercial micro Moon lander are examined. A suitable system for the mission is investigated in the frame of the ASTRI project of OHB System AG and Blue Horizon. Main trajectory strategies are investigated and simulations are performed to extract the ∆V requirements. Top-level requirements are derived, giving the first input for the propulsion design. An evaluation of the propulsion requirements is carried out, outlining the factors that most strongly drive the propulsion design. The evaluation uses a pairwise comparison of the requirements from which weighting factors are extracted, yielding the main drivers of the propulsion system design. A trade-off analysis is performed for various types of propulsion systems, and a preliminary selection of a propulsion system suitable for the mission is described. A first-iteration architecture of the propulsion, ADCS, and GNC subsystems is presented, together with a component list. A first approach to the landing phase is described and the required thrust is estimated. A unified bipropellant propulsion system is proposed, which fulfills most of the mission requirements. The analysis shows, however, that the total mass of the lander, including all margins, slightly exceeds the mass limit, though not the volume limit. The results show that a reduced payload capacity or a different trajectory strategy can lower the mass below the limit. In addition, further iterations of the lander concept, yielding a more detailed design and thus removing extra margins, can drive the mass below the limit. Finally, the results are discussed, addressing the limitations and the important factors that must be considered for the mission. The viability of the mission, given its commercial aspect, is questioned, and further investigation of the "micro" lander concept is suggested.
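The ∆V-to-propellant bookkeeping behind such a mass budget can be illustrated with the Tsiolkovsky rocket equation; the dry mass, Isp, and ∆V segments below are invented placeholders, not figures from the thesis.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(dry_mass_kg, isp_s, dv_segments_mps):
    """Total propellant for a sequence of delta-V burns (Tsiolkovsky
    equation), accumulated from the last burn backwards so each earlier
    burn also has to push the propellant still on board."""
    mass = dry_mass_kg
    for dv in reversed(dv_segments_mps):
        mass *= math.exp(dv / (isp_s * G0))   # mass before this burn
    return mass - dry_mass_kg

# Invented example: 120 kg dry lander, storable bipropellant (~320 s Isp),
# delta-V split into lunar orbit insertion and powered descent.
print(round(propellant_mass(120.0, 320.0, [850.0, 1900.0]), 1), "kg")
```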
APA, Harvard, Vancouver, ISO, and other styles
25

Oliveira, Marcio Ferreira da Silva [Verfasser]. "Model-driven engineering methodology for design space exploration of embedded systems / Marcio Ferreira da Silva Oliveira." Paderborn : Universitätsbibliothek, 2014. http://d-nb.info/1051024463/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Noll, Jochen [Verfasser]. "Conceptual Design of Modular Space Transportation and Infrastructure Systems for Future Human Exploration Missions / Jochen Noll." München : Verlag Dr. Hut, 2012. http://d-nb.info/1025821092/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Aksaray, Derya. "Formulation of control strategies for requirement definition of multi-agent surveillance systems." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53121.

Full text
Abstract:
In a multi-agent system (MAS), the overall performance is greatly influenced by both the design and the control of the agents. The physical design determines the agent capabilities, and the control strategies drive the agents to pursue their objectives using the available capabilities. The objective of this thesis is to incorporate control strategies into the early conceptual design of an MAS. As such, this thesis proposes a methodology that explores the interdependency between the design variables of the agents and the control strategies used by the agents. The output of the proposed methodology, i.e., the interdependency between the design variables and the control strategies, can be utilized in the requirement analysis as well as in later design stages to optimize the overall system through higher-fidelity analyses. In this thesis, the proposed methodology is applied to a persistent multi-UAV surveillance problem, whose objective is to increase the situational awareness of a base that receives instantaneous monitoring information from a group of UAVs. Each UAV has a limited energy capacity and a limited communication range. Accordingly, the connectivity of the communication network becomes essential for the information flow from the UAVs to the base. In long-duration missions, the UAVs need to return to the base for refueling with certain frequencies depending on their endurance. Whenever a UAV leaves the surveillance area, the remaining UAVs may need relocation to mitigate the impact of its absence. In the control part of this thesis, a set of energy-aware control strategies is developed for efficient multi-UAV surveillance operations. To this end, this thesis first proposes a decentralized strategy to recover the connectivity of the communication network. Second, it presents two return policies for UAVs to achieve energy-aware persistent surveillance. In the design part of this thesis, a design space exploration is performed to investigate the overall performance by varying a set of design variables and the candidate control strategies. Overall, it is shown that the control strategy used by an MAS affects the influence of the design variables on the mission performance. Furthermore, the proposed methodology identifies preferable pairs of design variables and control strategies through low-fidelity analysis in the early design stages.
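A core primitive for the connectivity-recovery strategy described above is deciding whether the UAV communication network is still connected under a limited radio range. The following is a minimal sketch with invented positions and range; the thesis's decentralized strategy is, of course, more involved than this centralized check.

```python
import math

def comm_graph(positions, radio_range):
    """Undirected adjacency: two nodes are linked if within radio range."""
    ids = list(positions)
    return {a: {b for b in ids if b != a and
                math.dist(positions[a], positions[b]) <= radio_range}
            for a in ids}

def is_connected(adj, base):
    """BFS from the base: the network is usable only if every UAV
    can reach the base (possibly over multiple hops)."""
    seen, frontier = {base}, [base]
    while frontier:
        nxt = [v for u in frontier for v in adj[u] if v not in seen]
        seen.update(nxt)
        frontier = nxt
    return len(seen) == len(adj)

# Invented deployment: a base and three UAVs with a 60 m radio range.
positions = {"base": (0, 0), "u1": (50, 0), "u2": (100, 10), "u3": (140, 40)}
adj = comm_graph(positions, radio_range=60)
print(is_connected(adj, "base"))   # True: the relay chain base-u1-u2-u3 holds
```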
APA, Harvard, Vancouver, ISO, and other styles
28

Mukherjee, Madhubanti. "Algorithms for Coupling Circuit and Physical Synthesis with High-Level Design-Space Exploration for 2D and 3D Systems." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1112670784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Mattos, Julio Carlos Balzano de. "Design space exploration of SW and HW IP based on object oriented methodology for embedded system applications." Biblioteca Digital de Teses e Dissertações da UFRGS, 2007. http://hdl.handle.net/10183/13484.

Full text
Abstract:
Software is increasingly becoming the major cost factor for embedded devices. Nowadays, with the growing complexity of embedded systems, it is necessary to use techniques and methodologies that can, at the same time, increase software productivity and handle embedded systems constraints such as memory footprint, real-time behavior, performance, and energy. Object-oriented modeling and design is a widely known methodology in software engineering. This paradigm may satisfy software portability and maintainability requirements, but it presents overhead in terms of memory, performance, and code size. This thesis introduces a methodology and a set of tools that can deal, at the same time, with object orientation and different embedded systems requirements. To achieve this goal, the thesis presents a methodology to explore object-oriented embedded software, improving different levels of the software design based on different implementations of the same processor. The results of the methodology are presented based on an MP3 player application.
APA, Harvard, Vancouver, ISO, and other styles
30

Mukherjee, Madhubanti. "Algorithms for coupling circuit and physical synthesis with high-level design-space exploration of 2D and 3D systems." Cincinnati, Ohio : University of Cincinnati, 2004. http://www.ohiolink.edu/etd/view.cgi?acc%5Fnum=ucin1112670784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Özlük, Ali Cemal [Verfasser], Klaus [Akademischer Betreuer] Kabitzsch, and Alexander [Akademischer Betreuer] Fay. "Design Space Exploration for Building Automation Systems / Ali Cemal Özlük. Gutachter: Klaus Kabitzsch ; Alexander Fay. Betreuer: Klaus Kabitzsch." Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://d-nb.info/1068154799/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Giroudot, Frédéric. "NoC-based Architectures for Real-Time Applications : Performance Analysis and Design Space Exploration." Thesis, Toulouse, INPT, 2019. https://oatao.univ-toulouse.fr/25921/1/Giroudot_Frederic.pdf.

Full text
Abstract:
Monoprocessor architectures have reached their limits with regard to the computing power they offer versus the needs of modern systems. Although multicore architectures partially mitigate this limitation and are commonly used nowadays, they usually rely on intrinsically non-scalable buses to interconnect the cores. The manycore paradigm was proposed to tackle the scalability issue of bus-based multicore processors. It can scale up to hundreds of processing elements (PEs) on a single chip, by organizing them into computing tiles (holding one or several PEs). Inter-core communication is usually done using a Network-on-Chip (NoC) that consists of interconnected on-chip routers allowing communication between tiles. However, manycore architectures raise numerous challenges, particularly for real-time applications. First, NoC-based communication tends to generate complex blocking patterns when congestion occurs, which complicates the analysis, since computing accurate worst-case delays becomes difficult. Second, running many applications on large Systems-on-Chip such as manycore architectures makes system design particularly crucial and complex. On one hand, it complicates design space exploration, as it multiplies the implementation alternatives that will guarantee the desired functionalities. On the other hand, once a hardware architecture is chosen, mapping the tasks of all applications onto the platform is a hard problem, and finding an optimal solution in a reasonable amount of time is not always possible. Therefore, our first contributions address the need for computing tight worst-case delay bounds in wormhole NoCs. We first propose a buffer-aware worst-case timing analysis (BATA) to derive upper bounds on the worst-case end-to-end delays of constant-bit-rate data flows transmitted over a NoC on a manycore architecture. We then extend BATA to cover a wider range of traffic types, including bursty traffic flows, and heterogeneous architectures. The introduced method is called G-BATA, for Graph-based BATA. In addition to covering a wider range of assumptions, G-BATA improves the computation time and thus increases the scalability of the method. In a second part, we develop a method addressing design and mapping for applications with real-time constraints on manycore platforms. It combines model-based engineering tools (TTool) and simulation with our analytical verification technique (G-BATA) and tools (WoPANets) to provide an efficient design space exploration framework. Finally, we validate our contributions on (a) a series of experiments on a physical platform and (b) two case studies taken from the real world: an autonomous vehicle control application and a 5G signal decoder application.
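The "graph of possible interferences" at the heart of such an analysis can be sketched very simply: model each flow as the set of NoC links on its route and connect two flows whenever their routes share a link (direct interference); indirect blocking then propagates through neighbors of neighbors. The routes below are invented.

```python
from itertools import combinations

# Invented flows on a 2x2 mesh: each flow is the set of links on its route.
flows = {
    "f1": {("r0", "r1"), ("r1", "r3")},
    "f2": {("r2", "r3"), ("r1", "r3")},   # shares link (r1, r3) with f1
    "f3": {("r0", "r2")},                 # disjoint route
}

# Direct interference: two flows contend if their routes share a link.
interference = {f: set() for f in flows}
for a, b in combinations(flows, 2):
    if flows[a] & flows[b]:
        interference[a].add(b)
        interference[b].add(a)

# A flow's worst-case delay analysis must then account for its direct
# interferers and, transitively, for flows blocking those interferers.
print(interference)   # {'f1': {'f2'}, 'f2': {'f1'}, 'f3': set()}
```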
APA, Harvard, Vancouver, ISO, and other styles
33

Oliveira, Marcio Ferreira da Silva. "Exploração do espaço de projeto em sistemas embarcados baseados em plataformas através de estimativas extraídas de modelos UML." Biblioteca Digital de Teses e Dissertações da UFRGS, 2006. http://hdl.handle.net/10183/8303.

Full text
Abstract:
In order to quickly implement an embedded system that is mainly based on software, two orthogonal approaches have been proposed: platform-based design, which maximizes the reuse of components, and model-driven development, which raises the abstraction level by using object-oriented concepts and UML for modeling an application. However, with this increase in abstraction level, software engineers do not have an exact idea of the impact of their modeling decisions on important issues such as performance, energy, and memory footprint for a given embedded platform. This work proposes to estimate data and program memory, performance, and energy directly from UML model specifications, in order to explore the design space in the early steps of the development process. Experimental results show a very small estimation error when platform components are reused and their costs on the target platform are already known. Real-life applications are modeled in different ways and demonstrate the effectiveness of the estimates in an early design space exploration, allowing the designer to evaluate and compare different modeling solutions. The estimated values used in the design space exploration can achieve errors below 5%.
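The estimation idea, summing pre-characterized per-component platform costs over usage counts extracted from the UML model, can be pictured as below; the cost table and counts are invented placeholders, not the dissertation's calibrated values.

```python
# Pre-characterized platform costs per primitive operation (invented numbers):
# energy in nJ, cycles, and bytes of program memory per occurrence.
PLATFORM_COSTS = {
    "method_call":  {"energy": 12.0, "cycles": 40,  "code_bytes": 16},
    "object_alloc": {"energy": 55.0, "cycles": 210, "code_bytes": 24},
    "field_access": {"energy": 2.5,  "cycles": 6,   "code_bytes": 4},
}

def estimate(usage_counts):
    """Aggregate platform costs over operation counts extracted
    from a UML model (e.g., from sequence and class diagrams)."""
    totals = {"energy": 0.0, "cycles": 0, "code_bytes": 0}
    for op, count in usage_counts.items():
        for metric, unit_cost in PLATFORM_COSTS[op].items():
            totals[metric] += unit_cost * count
    return totals

# Invented counts for one candidate modeling solution of an application.
print(estimate({"method_call": 1200, "object_alloc": 40, "field_access": 5000}))
```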
APA, Harvard, Vancouver, ISO, and other styles
34

Blocher, Andrew Gene. "Alternative Mission Concepts for the Exploration of Outer Planets Using Small Satellite Swarms." DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1820.

Full text
Abstract:
Interplanetary space exploration has thus far consisted of single, expensive spacecraft missions. Mission costs are particularly high for missions to the outer planets, and, while such missions are invaluable, finite budgets limit our ability to perform extensive and frequent investigations of the planets. Planetary systems such as Jupiter and Saturn provide extremely complex exploration environments with numerous targets of interest. Exploring these targets in addition to the main planet requires multiple fly-bys and long mission timelines. In LEO, CubeSats have changed the exploration paradigm, offering a fast and low-cost alternative to traditional space vehicles. This new mission development philosophy has the potential to significantly change the economics of interplanetary exploration, and a number of missions are being developed to utilize CubeSat-class spacecraft beyond Earth orbit (e.g., NEAScout, Lunar Ice Cube, MarCO, and BioSentinel). This paper takes the CubeSat philosophical approach one step further by investigating the potential for small satellite swarms to provide extensive studies of the Saturn system. To do this, an architecture was developed to best replicate the Cassini Primary Mission science objectives using swarms of CubeSats. Cassini was chosen because of its complexity and because it defines a well-understood baseline for comparison. The paper outlines the overall mission architecture developed and provides a feasible initial design for the spacecraft in the architecture. The number of swarms needed, the number of CubeSats per swarm, the size of the CubeSats, the overall science output, and the estimated mission cost are all presented. Additional science objectives beyond Cassini's capabilities are also proposed. Significant scientific returns can be achieved by the swarm-based architecture and the risk tolerance afforded by the utilization of large numbers of low-cost sensor carriers. This study found a potential architecture that could reduce the cost of replicating Cassini by as much as 63%. The results of this investigation are not constrained to Saturn and can be easily translated to other targets such as Uranus, Neptune, or the asteroid belt.
APA, Harvard, Vancouver, ISO, and other styles
35

Cota, Erika Fernandes. "Reuse-based test planning for core-based systems-on-chip." Biblioteca Digital de Teses e Dissertações da UFRGS, 2003. http://hdl.handle.net/10183/4180.

Full text
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for the reduction of design complexity, but brings new challenges for the test of the final circuit. The access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that must be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and integration of the test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes an efficient, yet fine-grained, search possible in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for the test of the cores embedded into the systems that use this communication platform. A power-aware test scheduling algorithm aiming at exploiting the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated considering a number of system configurations, such as different positions of the cores in the network, power consumption constraints, and the number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, whereas area and pin overhead are strongly minimized. In this manuscript, the main problems of the test of core-based systems are first identified and the current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the ITC'02 SoC Test Benchmarks, and further compared to other test planning methods from the literature. This comparison confirms the efficiency of the proposed methods.
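A power-aware test schedule of the kind described above can be sketched as a greedy list scheduler: at each step, start the longest pending core test whose power, added to the tests already running, stays under the chip's power cap. Test lengths and power figures below are invented.

```python
import heapq

def power_aware_schedule(tests, power_cap):
    """Greedy power-constrained scheduling of core tests.

    `tests` maps core -> (duration, power). Returns per-core start times
    and the overall test time (makespan).
    """
    pending = sorted(tests, key=lambda c: -tests[c][0])  # longest first
    running = []          # heap of (finish_time, power)
    now, used, start = 0.0, 0.0, {}
    while pending or running:
        started = False
        for core in list(pending):
            dur, pwr = tests[core]
            if used + pwr <= power_cap:
                heapq.heappush(running, (now + dur, pwr))
                used += pwr
                start[core] = now
                pending.remove(core)
                started = True
        if not started or not pending:
            if not running:
                break
            finish, pwr = heapq.heappop(running)  # advance to next completion
            now, used = finish, used - pwr
    return start, now

# Invented core tests: duration (cycles) and peak test power (mW).
tests = {"cpu": (400, 60), "dsp": (300, 50), "mem": (250, 40), "io": (100, 20)}
print(power_aware_schedule(tests, power_cap=100))
```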
APA, Harvard, Vancouver, ISO, and other styles
36

Bunuan, Paul F. "FIDOE: A Proof-of-concept Martian Robotic Support Cart." Digital WPI, 1999. https://digitalcommons.wpi.edu/etd-theses/906.

Full text
Abstract:
"The National Aeronautics and Space Administration (NASA) plans to send a human exploration team to Mars within the next 25 years. In support of this effort Hamilton Standard Space Systems International (HSSSI), current manufacturers of the Space Shuttle spacesuit, began exploring alternative solutions for supporting an astronaut during a Martian surface exploration. A design concept was developed by HSSSI to integrate a minimally equipped Martian spacesuit with a robotic support cart capable of providing life support assistance, communications, and independent navigational functions. To promote NASA's visionary efforts and increase university relations, HSSSI partnered with Worcester Polytechnic Institute (WPI) to develop a proof-of-concept robotic support cart system, FIDOE - Fully Independent Delivery of Expendables. As a proof-of-concept system, the primary goal of this project was to demonstrate the feasibility of current technologies utilized by FIDOE's communication and controls system for future Martian surface explorations. The primary objective of this project was to procure selected commercial-off-the-shelf components and configure these components into a functional robotic support cart. The design constraints for this project, in addition to the constraints imposed by the Martian environment and HSSSI's Martian spacesuit, were a one-year time frame and a $20,000 budget for component procurement. This project was also constrained by the protocols defined by the NASA demonstration test environment. The final design configuration comprised of 37 major commercial off-the-shelf components and three individual software packages that integrated together to provide FIDOE's communications and control capabilities. Power distribution was internally handled through a combination of a main power source and dedicated power supplies. FIDOE also provided a stowage area for handling assisted life support systems and geological equipment. The proof-of-concept FIDOE system proved that the current technologies represented by the selected components are feasible applications for a Mars effort. Specifically, the FIDOE system demonstrated that the chosen technologies can be integrated to perform assisted life support and independent functions. While some technologies represented by the proof-of-concept system may not adequately address the robustness issues pertaining to the Mars effort, e.g., voice recognition and power management, technology trends indicate that these forms of technology will soon become viable solutions to assisting an astronaut on a Martian surface exploration."
APA, Harvard, Vancouver, ISO, and other styles
37

Medardoni, Simone. "Driving the Network-on-Chip Revolution to Remove the Interconnect Bottleneck in Nanoscale Multi-Processor Systems-on-Chip." Doctoral thesis, Università degli studi di Ferrara, 2009. http://hdl.handle.net/11392/2389197.

Full text
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet-switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best trade-off among performance, features, and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate, and optimize the NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
This dissertation performs a design space exploration of network-on-chip architectures, in order to point out the trade-offs associated with the design of each individual network building block and with the design of the network topology overall. The design space exploration is preceded by a comparative analysis of state-of-the-art interconnect fabrics with themselves and with early network-on-chip prototypes. The ultimate objective is to point out the key advantages that NoC realizations provide with respect to state-of-the-art communication infrastructures and to point out the challenges that lie ahead in order to make this new interconnect technology come true. Among the latter, technology-related challenges are emerging that call for dedicated design techniques at all levels of the design hierarchy: in particular, leakage power dissipation and the containment of process variations and their effects. The achievement of the above objectives was enabled by means of a NoC simulation environment for cycle-accurate modelling and simulation and by means of a back-end facility for the study of NoC physical implementation effects. Overall, all the results provided by this work have been validated on actual silicon layout.
APA, Harvard, Vancouver, ISO, and other styles
38

Sakai, Tadashi. "A Study of Variable Thrust, Variable Specific Impulse Trajectories for Solar System Exploration." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4904.

Full text
Abstract:
A study has been performed to determine the advantages and disadvantages of variable thrust and variable specific impulse (Isp) trajectories for solar system exploration. There have been several numerical research efforts on variable thrust, variable Isp, power-limited trajectory optimization problems. All of these results conclude that variable thrust, variable Isp (VSI) engines are superior to constant thrust, constant Isp (CSI) engines. However, most of these research efforts assume a mission from Earth to Mars, and some of them further assume that these planets are circular and coplanar. Hence they still lack generality. This research has been conducted to answer the following questions:
- Is a VSI engine always better than a CSI engine or a high-thrust engine for any mission to any planet with any time of flight, considering lower propellant mass as the sole criterion?
- If a planetary swing-by is used for a VSI trajectory, is the fuel savings of a VSI swing-by trajectory better than that of a CSI swing-by or high-thrust swing-by trajectory?
To support this research, a unique, new computer-based interplanetary trajectory calculation program has been created. This program utilizes a calculus of variations algorithm to perform overall optimization of thrust, Isp, and thrust vector direction along a trajectory that minimizes fuel consumption for interplanetary travel. It is assumed that the propulsion system is power-limited, and thus the compromise between thrust and Isp is a variable to be optimized along the flight path. This program is capable of optimizing not only variable thrust trajectories but also constant thrust trajectories in 3-D space using a planetary ephemeris database. It is also capable of conducting planetary swing-bys. Using this program, various Earth-originating trajectories have been investigated and the optimized results have been compared to traditional CSI and high-thrust trajectory solutions. Results show that VSI rocket engines reduce fuel requirements for any mission compared to CSI rocket engines. Fuel can be saved by applying swing-by maneuvers for VSI engines, but the effects of swing-bys with VSI engines are smaller than with CSI or high-thrust engines.
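For a power-limited thruster, jet power ties thrust and Isp together (P_jet = T·ve/2 with ve = Isp·g0), which is exactly the trade the optimizer adjusts along the flight path. A minimal sketch with an invented input power, efficiency, and Isp sweep:

```python
G0 = 9.80665  # standard gravity, m/s^2

def thrust_and_mdot(power_w, isp_s, efficiency=0.6):
    """Thrust and propellant mass flow of a power-limited thruster.

    Jet power: efficiency * P_in = 0.5 * T * ve, with ve = Isp * g0.
    Raising Isp at fixed power lowers thrust but also lowers mass flow.
    """
    ve = isp_s * G0
    thrust = 2.0 * efficiency * power_w / ve      # N
    mdot = thrust / ve                            # kg/s
    return thrust, mdot

# Invented example: 50 kW input power, sweeping the Isp setting of a VSI engine.
for isp in (1500, 3000, 6000):
    t, m = thrust_and_mdot(50e3, isp)
    print(f"Isp={isp:5d} s  thrust={t:6.3f} N  mdot={m*1e6:7.2f} mg/s")
```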
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Letitia. "Approche orientée modèles pour la sûreté et la sécurité des systèmes embarqués." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT002/document.

Full text
Abstract:
The presence of communicating embedded systems/IoT devices in our daily lives has brought a myriad of benefits, from adding convenience and entertainment to improving the safety of our commutes and health care. However, the flaws and vulnerabilities in these devices expose their users to risks of property damage, monetary losses, and personal injury. For example, consumer vehicles, both connected and conventional, have succumbed to a variety of design flaws resulting in injuries and death. At the same time, as vehicles are increasingly connected (and, in the near future, autonomous), researchers have demonstrated possible hacks on their sensors or internal control systems, including direct injection of messages on the CAN bus. Ensuring the safety of users and bystanders involves considering multiple factors. Conventional safety suggests that a system should not contain software and hardware flaws that prevent it from functioning correctly. 'Safety of the intended function' involves avoiding situations that the system or its components cannot handle, such as adverse extreme environmental conditions. Timing can be critical for certain real-time systems, as the system needs to respond to certain events, such as obstacle avoidance, within a set period to avoid dangerous situations. Finally, the safety of a system depends on its security. An attacker who can send custom commands or modify the software of the system may change its behavior and send it into various unsafe situations. Various safety and security countermeasures for embedded systems, especially connected vehicles, have been proposed. Placing these countermeasures correctly requires methods of analyzing and verifying that the system meets all safety, security, and performance requirements, preferably in the early design phases, to minimize costly re-work after production. This thesis discusses the safety and security considerations for embedded systems in the context of Institut Vedecom's autonomous vehicle. Among the proposed approaches to ensure safety and security in embedded systems, model-driven engineering is one that covers the full design process, from elicitation of requirements, through design of hardware and software and simulation/formal verification, to final code generation. This thesis proposes a modeling-based methodology for safe and secure design, based on the SysML-Sec methodology, which involves new modeling and verification methods. Security modeling is generally performed in the last phases of design; however, security impacts early architecture/mapping decisions, and HW/SW partitioning decisions should be made based on the ability of the architecture to satisfy security requirements. This thesis proposes how to model security mechanisms and the impact of an attacker as relevant to the HW/SW partitioning phase. As security protocols negatively impact performance, it becomes important to measure both the usage of hardware components and the response times of the system. Overloaded components can result in unpredictable performance and undesired delays. This thesis also discusses latency measurements of safety-critical events, focusing on one critical to autonomous vehicles: braking after obstacle detection. Together, these additions support the safe and secure design of embedded systems.
APA, Harvard, Vancouver, ISO, and other styles
40

Scannell, Peter. "Three-dimensional Information Space : An Exploration of a World Wide Web-based, Three-dimensional, Hierarchical Information Retrieval Interface Using Virtual Reality Modeling Language." Thesis, University of North Texas, 1997. https://digital.library.unt.edu/ark:/67531/metadc278715/.

Full text
Abstract:
This study examined the differences between a 3-D, VRML search interface, similar to Cone Trees, used as a front-end to Yahoo on the World Wide Web, and a conventional text-based, 1-D interface to the same database. The study sought to determine how quickly users could find information using both interfaces, their degree of satisfaction with both search interfaces, and which interface they preferred.
APA, Harvard, Vancouver, ISO, and other styles
41

Phan, Leon L. "A methodology for the efficient integration of transient constraints in the design of aircraft dynamic systems." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34750.

Full text
Abstract:
Transient regimes experienced by dynamic systems may have severe impacts on the operation of the aircraft. They are often regulated by dynamic constraints, requiring the dynamic signals to remain within bounds whose values vary with time. The verification of these peculiar types of constraints, which generally requires high-fidelity time-domain simulation, intervenes late in the system development process, thus potentially causing costly design iterations. The research objective of this thesis is to develop a methodology that integrates the verification of dynamic constraints into the early specification of dynamic systems. In order to circumvent the inefficiencies of time-domain simulation, multivariate dynamic surrogate models of the original time-domain simulation models are generated using wavelet neural networks (or wavenets). Concurrently, an alternate approach is formulated, in which the envelope of the dynamic response, extracted via a wavelet-based multiresolution analysis scheme, is subject to transient constraints. Dynamic surrogate models using sigmoid-based neural networks are generated to emulate the transient behavior of the envelope of the time-domain response. The run-time efficiency of the resulting dynamic surrogate models enables the implementation of a data-farming approach, in which the full design space is sampled through a Monte Carlo simulation. An interactive visualization environment, enabling what-if analyses, is developed; the user can thereby instantaneously comprehend the transient response of the system (or its envelope) and its sensitivities to design and operation variables, as well as filter the design space to show only the design scenarios verifying the dynamic constraints. The proposed methodology, along with its foundational hypotheses, is tested on the design and optimization of a 350 VDC network, where a generator and its control system are concurrently designed in order to minimize the electrical losses, while ensuring that the transient undervoltage induced by peak demands in the consumption of a motor does not violate transient power quality constraints.
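Checking a transient (time-varying) constraint amounts to comparing a simulated or surrogate-predicted response against a bound that changes with time; below is a minimal sketch with an invented undervoltage envelope and sampled response.

```python
# Transient constraint check: the response must stay above a time-varying
# lower bound (e.g., an allowed undervoltage profile after a load step).
def violations(times, response, bound):
    """Return the instants where the signal breaks the transient bound.

    `bound` maps time -> minimum allowed value at that time.
    """
    return [t for t, y in zip(times, response) if y < bound(t)]

def undervoltage_bound(t):
    # Invented power-quality envelope: a dip to 250 V is tolerated for
    # the first 40 ms after the disturbance, then 270 V is required.
    return 250.0 if t < 0.040 else 270.0

# Invented bus-voltage transient sampled every 10 ms after a peak demand.
times = [i * 0.010 for i in range(10)]
response = [280, 255, 252, 258, 265, 268, 272, 275, 278, 279]
print(violations(times, response, undervoltage_bound))   # ~[0.04, 0.05]
```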
APA, Harvard, Vancouver, ISO, and other styles
42

Anil, Vijay Sankar. "Mission-based Design Space Exploration and Traffic-in-the-Loop Simulation for a Range-Extended Plug-in Hybrid Delivery Vehicle." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587663664531601.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Klicker, Laura. "A Method for Standardization within the Payload Interface Definition of a Service-Oriented Spacecraft using a Modified Interface Control Document." Thesis, KTH, Rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217971.

Full text
Abstract:
With a big-picture view of increasing the accessibility of space, standardization is applied within a service-oriented space program. The development of standardized spacecraft interfaces for numerous and varied payloads is examined through the lens of the creation of an Interface Control Document (ICD) within the Peregrine Lunar Lander project of Astrobotic Technologies, Inc. The procedure is simple, transparent, and adaptable; its applicability to other similar projects is assessed.
APA, Harvard, Vancouver, ISO, and other styles
44

Specht, Emilena. "An approach for embedded software generation based in declarative alloy models." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/22812.

Full text
Abstract:
This work proposes a new approach for embedded software development by combining the abstraction and model-verification properties of the Alloy declarative language with the broad industrial acceptance of Java. The approach comes into play because software automation in the embedded domain has become a major need, as most of the development time is currently spent designing software for such hard-constrained products. Design automation tools for embedded systems must meet the demand for productivity and maintainability, but constraints such as memory, power, and performance must still be considered. Design automation tools deal with productivity and maintainability by allowing high-level specifications, which is hard to accomplish in the embedded domain due to the mixed-behavior nature of many embedded applications. Approaches that provide means for formal verification are also attractive, but their usage is usually not straightforward, and for this reason they are not very helpful in dealing with time-to-market constraints. By using Alloy, based on first-order logic, it is possible to obtain high-level specifications and formal model verification with a single language. This work shows the powerful abstraction provided by the Alloy language for embedded applications, as well as rules for automatically obtaining Java code from Alloy models. The generation of Java source code from Alloy models, combined with an estimation tool, provides design space exploration to match tight embedded software design constraints, which is usually not taken into account by standard software engineering techniques.
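As a toy illustration of model-to-code generation of this kind, the sketch below turns an Alloy-like signature into a Java class. The input format and the single rule are simplifications invented for this summary, not the translation rules defined in the thesis:

# Toy sketch of a declarative-model-to-Java rule: an Alloy-like signature
# with typed fields becomes a Java class with private fields and getters.
# The signature format and the rule itself are illustrative inventions.

def alloy_sig_to_java(sig_name, fields):
    """fields: list of (field_name, java_type) pairs."""
    lines = [f"public class {sig_name} {{"]
    for name, jtype in fields:
        lines.append(f"    private {jtype} {name};")
    for name, jtype in fields:
        getter = "get" + name[0].upper() + name[1:]
        lines.append(f"    public {jtype} {getter}() {{ return {name}; }}")
    lines.append("}")
    return "\n".join(lines)

print(alloy_sig_to_java("SensorReading", [("timestamp", "long"),
                                          ("value", "double")]))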
APA, Harvard, Vancouver, ISO, and other styles
45

Alcantara, de Lima Otavio Junior. "Emulation platform synthesis and NoC evaluation for embedded systems : towards next generation networks." Thesis, Saint-Etienne, 2015. http://www.theses.fr/2015STET4001/document.

Full text
Abstract:
The ever-increasing complexity of many-core embedded system applications demands a flexible communication structure capable of supporting different traffic requirements at run-time. Networks-on-Chip (NoCs) have emerged as the most promising communication technology for modern many-core SoCs (Systems-on-Chip), since they scale better than other solutions such as buses and point-to-point connections. As NoCs become the de facto standard for on-chip communication, NoC performance evaluation tools become critical to SoC design. FPGA-based emulation platforms accelerate NoC benchmarking as well as design space exploration; they offer high accuracy and low execution time compared to NoC simulators. An FPGA-based emulation platform is composed of tens or hundreds of distributed components, which must be managed in a timely manner in order to execute an evaluation scenario. There is a lack of standard protocols to drive FPGA-based NoC emulators: such protocols could ease the integration of emulation components developed by different designers, enable the configuration of emulation nodes without FPGA re-synthesis, and support the extraction of emulation results. NoC hardware emulation is quite challenging, and it is important to validate new NoC architectures with realistic workloads, because these provide much more accurate results. The generation of application traffic patterns is a key concern for NoC emulation. Dependency-aware traces are an appealing solution for the generation of realistic traffic workloads: because they contain packet-dependency information, they are more accurate than ordinary traces for a broad range of NoC architectures. However, they tend to be larger than the original traces, which demands more FPGA resources. This thesis targets the synthesis of FPGA-based NoC emulation platforms for future multi-core embedded systems. We investigate strategies to generate realistic traffic patterns for NoCs emulated on FPGAs, as well as the management of the emulation platform using standard protocols inspired by computer network protocols. One contribution of this thesis is a trace-analysis framework which addresses the packet-dependency extraction problem. The proposed framework analyzes traces from a message-passing application in order to build a Model of Computation (MoC) that reproduces the communicative behavior of an application node. A dependency-aware Traffic Generator (TG) is created from the proposed MoC; this TG reproduces the application traffic pattern during an FPGA-based NoC emulation. Another contribution is a lightweight version of SNMP (Simple Network Management Protocol) for managing an FPGA-based NoC emulation platform. An FPGA-based emulation platform architecture is proposed based on the principles of the SNMP protocol. This platform offers a high-level interface to the emulation components provided by that protocol, which also eases the integration of emulation components created by different designers. The capabilities of the emulation platform and the protocol are evaluated during a task-mapping and mesh-topology design space exploration. A prospective analysis of future NoC architectures is also a contribution of this thesis. In this analysis, a conceptual architecture of a future multi-core embedded system is used as a model to extract the requirements of these networks, and several networking mechanisms are proposed.

The first mechanism is a congestion-aware routing algorithm: an adaptive routing algorithm that selects the output path for a given packet based on a simple prioritized scheme of rule sets. A congestion-control mechanism is also proposed for the vertical links interconnecting the layers of a 3D NoC; this mechanism is based on the diffusion of congestion information by a piggyback protocol. Finally, the requirements that such future NoCs impose on FPGA-based emulation tools are analyzed.
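To make the dependency-aware traffic-generation idea concrete, here is a minimal software sketch of trace replay in which a packet is injected only after its timestamp has passed and all packets it depends on have been delivered. The packet fields, fixed latency, and single-injector structure are simplifying assumptions; an actual TG runs in FPGA hardware:

# Minimal sketch of dependency-aware trace replay: a packet is injected
# only once all packets it depends on have been received, mimicking the
# blocking behaviour of the traced application. All fields are hypothetical.

from collections import deque

# (packet_id, earliest_injection_cycle, depends_on)
TRACE = [
    (0, 5,  []),
    (1, 8,  [0]),
    (2, 9,  [0]),
    (3, 12, [1, 2]),
]

def replay(trace, latency=4):
    completed = set()
    pending = deque(trace)
    in_flight = {}          # packet_id -> arrival cycle
    cycle = 0
    while pending or in_flight:
        cycle += 1
        # Retire packets whose network traversal has finished.
        for pid, arrival in list(in_flight.items()):
            if arrival <= cycle:
                completed.add(pid)
                del in_flight[pid]
        # Inject the next packet once its timestamp and dependencies allow.
        if pending:
            pid, ready, deps = pending[0]
            if cycle >= ready and all(d in completed for d in deps):
                pending.popleft()
                in_flight[pid] = cycle + latency
                print(f"cycle {cycle:3d}: inject packet {pid}")
    return cycle

print(f"replay finished at cycle {replay(TRACE)}")

An ordinary trace would inject packets 1-3 purely by timestamp; the dependency check is what keeps the replayed pattern accurate when the emulated NoC has different latencies than the system the trace came from.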
APA, Harvard, Vancouver, ISO, and other styles
46

Souza, Junior Adao Antonio de. "Digital approach for the design of statistical analog data acquisition on SoCs." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2005. http://hdl.handle.net/10183/11491.

Full text
Abstract:
With the current demand for mixed-signal SoCs, an increasing number of designers are looking for ADC architectures that can be easily implemented over digital substrates. Since ADC performance is strongly dependent upon physical and electrical features, it becomes more difficult for ADCs to benefit from more recent technologies, where these features are more variable. As a result, analog signal acquisition cannot follow an evolutionary trend compatible with Moore's Law; in fact, the trend is expected to get worse, since newer technologies will have even more variable characteristics. Also, as a matter of economy of scale, a mixed-signal SoC often presents a good amount of idle processing power. In such systems it is advantageous to employ more costly digital signal processing, provided that it allows a reduction in the analog area demanded or the use of less expensive analog blocks able to cope with process variations and uncertainty. Besides these technological concerns, other factors that impact design cost also advise transferring problems from the analog to the digital domain whenever possible: design automation and self-test requirements, for instance. Recent surveys indicate that the total cost in designer hours for the analog blocks of a mixed-signal system can be up to three times that of the digital ones. This manuscript explores the concept of bottom-up analog acquisition design, using statistical sampling as a way to reduce the analog area demanded in the design of ADCs within mixed-signal systems. More particularly, it investigates the possibility of using digital modeling and digital compensation of non-idealities to ease the design of ADCs. The work is developed around three axes: the definition of target applications, the development of digital compensation algorithms, and the exploration of architectural possibilities. New methods and architectures are defined and validated. The main notions behind the proposal are analyzed, and the approach is shown to be feasible, opening new paths for future research.
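A minimal sketch of the digital-compensation idea: a known reference signal is passed through a deliberately imperfect ADC model, an inverse polynomial is fitted entirely in the digital domain, and run-time correction is then a cheap polynomial evaluation. The cubic distortion and fifth-order correction are assumptions for illustration, not the architectures studied in the thesis:

import numpy as np

rng = np.random.default_rng(1)

def imperfect_adc(v):
    """Toy ADC with gain error, offset, and cubic non-linearity."""
    return 0.93 * v + 0.04 + 0.05 * v**3

# Calibration: sample a known reference ramp through the imperfect ADC,
# then fit an inverse polynomial in the digital domain.
v_ref = np.linspace(-1.0, 1.0, 256)
codes = imperfect_adc(v_ref) + rng.normal(0.0, 1e-3, v_ref.size)
inverse = np.polynomial.Polynomial.fit(codes, v_ref, deg=5)

# At run time, the digital correction is just a polynomial evaluation.
v_true = rng.uniform(-1.0, 1.0, 8)
v_corr = inverse(imperfect_adc(v_true))
print("max residual error:", np.max(np.abs(v_corr - v_true)))

The point of the sketch is the division of labor: the analog block is allowed to be cheap and non-ideal, while idle digital processing power absorbs the correction.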
APA, Harvard, Vancouver, ISO, and other styles
47

Marcus, Ventovaara, and Hasanbegović Arman. "A Method for Optimised Allocation of System Architectures with Real-time Constraints." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39492.

Full text
Abstract:
Optimised allocation of system architectures is a well-researched area, as it can greatly reduce the development cost of systems and increase performance and reliability in their respective applications. In conjunction with the recent shift from federated to integrated architectures in automotive, and the increasing complexity of computer systems in terms of both software and hardware, the applications of design space exploration and optimised allocation of system architectures are of great interest. This thesis proposes a method to derive architectures and their allocations for systems with real-time constraints. The method implements integer linear programming to solve for an optimised allocation of system architectures according to a set of linear constraints, while taking resource requirements, communication dependencies, and manual design choices into account. Additionally, this thesis describes and evaluates an industrial use case in which the timing characteristics of a system were evaluated and the method was applied to simultaneously derive a system architecture and an optimised allocation of that architecture. This thesis presents evidence and validations that suggest the viability of the method and its use case in an industrial setting. The work in this thesis sets a precedent for future research and development, as well as future applications of the method in both industry and academia.
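The allocation problem can be made concrete with a small sketch. For brevity it uses exhaustive search over a toy instance rather than the integer linear programming of the thesis, and the task demands, node capacities, and message sizes are invented:

# Toy allocation: map tasks to nodes, respect capacity, minimise the
# cost of messages that cross node boundaries. All numbers are invented.

from itertools import product

TASKS = {"sense": 3, "fuse": 5, "plan": 4}            # CPU demand per task
NODES = {"ecu1": 8, "ecu2": 8}                        # CPU capacity per node
COMMS = {("sense", "fuse"): 4, ("fuse", "plan"): 2}   # message sizes

def cost(alloc):
    """Communication cost: only messages crossing nodes cost anything."""
    return sum(size for (a, b), size in COMMS.items() if alloc[a] != alloc[b])

def feasible(alloc):
    return all(
        sum(d for t, d in TASKS.items() if alloc[t] == node) <= cap
        for node, cap in NODES.items()
    )

best = min(
    (dict(zip(TASKS, assignment))
     for assignment in product(NODES, repeat=len(TASKS))),
    key=lambda a: cost(a) if feasible(a) else float("inf"),
)
print("best allocation:", best, "cost:", cost(best))

An ILP formulation expresses the same capacity and crossing conditions as linear constraints over 0/1 placement variables, which is what lets a solver handle industrial problem sizes where enumeration is hopeless.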
APA, Harvard, Vancouver, ISO, and other styles
48

Svensson, August. "Range-based Wireless Sensor Network Localization for Planetary Rovers." Thesis, Luleå tekniska universitet, Rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-83213.

Full text
Abstract:
Obstacles faced in planetary surface exploration require innovation in many areas, primarily that of robotics. To be able to study interesting areas that are by current means hard to reach, such as steep slopes, ravines, caves and lava tubes, the surface vehicles of today need to be modified or augmented. One augmentation with such a goal is PHALANX (Projectile Hordes for Advanced Long-term and Networked eXploration), a prototype system being developed at the NASA Ames Research Center. PHALANX uses remote deployment of expendable sensor nodes from a lander or rover vehicle. This enables in-situ measurements in hard-to-reach areas with reduced risk to the rover. The deployed sensor nodes are equipped with capabilities to transmit data wirelessly back to the rover and to form a network with the rover and other nodes. Knowledge of the location of deployed sensor nodes and the momentary location of the rover is greatly desired. PHALANX can be of aid in this aspect as well. With the addition of inter-node and rover-to-node range measurements, a range-based network SLAM (Simultaneous Localization and Mapping) system can be implemented for the rover to use while it is driving within the network. The resulting SLAM system in PHALANX shares characteristics with others in the SLAM literature, but with some additions that make it unique. One crucial addition is that the rover itself deploys the nodes. Another is the ability of the rover to more accurately localize deployed nodes by external sensing, such as by utilizing the rover cameras. In this thesis, the SLAM of PHALANX is studied by means of computer simulation. The simulation software is created using real mission values and values resulting from testing of the PHALANX prototype hardware. An overview of issues that a SLAM solution has to face, as present in the literature, is given in the context of the PHALANX SLAM system, such as poor connectivity and highly collinear placements of nodes. The system performance and sensitivities are then investigated for the described issues, using predicted typical PHALANX application scenarios. The results are presented as errors in the estimated positions of the sensor nodes and in the estimated position of the rover. I find that there are relative sensitivities to the investigated parameters, but that in general SLAM in PHALANX is fairly insensitive. This gives mission planners and operators greater flexibility to prioritize other aspects important to the mission at hand. The simulation software developed in this thesis work also has the potential to be expanded as a tool for mission planners to prepare for specific mission scenarios using PHALANX.
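The range-based building block of such a system can be sketched as a Gauss-Newton fit of a node position to noisy rover-to-node range measurements. The positions, noise level, and iteration count below are invented, and the thesis treats the full SLAM problem rather than this single step:

# Localize one sensor node from noisy range measurements taken at
# several rover poses, via Gauss-Newton least squares. Values invented.

import numpy as np

rng = np.random.default_rng(2)

node_true = np.array([4.0, -2.0])                  # unknown node position, m
rover_fixes = np.array([[0.0, 0.0], [3.0, 1.0],    # rover poses where
                        [6.0, 0.5], [5.0, -4.0]])  # ranges were measured

ranges = np.linalg.norm(rover_fixes - node_true, axis=1)
ranges += rng.normal(0.0, 0.05, ranges.size)       # ranging noise, m

est = np.array([1.0, 1.0])                         # rough initial guess
for _ in range(10):
    diffs = est - rover_fixes
    dists = np.linalg.norm(diffs, axis=1)
    residuals = dists - ranges                     # predicted minus measured
    J = diffs / dists[:, None]                     # Jacobian of the ranges
    step, *_ = np.linalg.lstsq(J, residuals, rcond=None)
    est = est - step
print("estimated node position:", est)

Highly collinear rover fixes make the Jacobian nearly rank-deficient, which is exactly the geometry sensitivity the simulation study investigates.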
APA, Harvard, Vancouver, ISO, and other styles
49

Martins, Luiz Gustavo Almeida. "Exploração de sequências de otimização do compilador baseada em técnicas hibridas de mineração de dados complexos." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-28032016-160827/.

Full text
Abstract:
Due to the large number of optimizations provided in modern compilers and the wide range of possible orderings of these transformations, a Design Space Exploration (DSE) is necessary to search for the best sequence of compiler optimizations for a given code fragment (e.g., a function). As this exploration is a complex and time-consuming task, we present new DSE strategies that reduce the exploration time while still selecting optimization sequences able to improve the performance of each function. The DSE is based on a clustering approach which groups functions with similarities and then explores the reduced search space provided by the optimizations previously suggested for the functions in each group. The identification of similarities between functions uses a data-mining method applied to a symbolic representation of the source code (its "DNA"); the clustering employs three data-mining techniques: normalized compression distance, the Neighbor-Joining phylogenetic-tree-reconstruction algorithm, and group identification by ambiguity. The DSE strategies use the reduced optimization set identified by clustering in two ways: as the design space or as the initial configuration of the algorithm. In both cases, the adoption of a pre-selection based on clustering allows the use of simple and fast DSE algorithms. Several experiments evaluate the effectiveness of the proposed approach on the exploration of compiler optimization sequences, and we investigate the impact of each technique and component employed in the selection process. Experimental results reveal that the new clustering-based DSE approach achieves a significant reduction in the total exploration time of the search space, while obtaining performance speedups close to those of a traditional, more expensive genetic-algorithm-based approach.
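The normalized compression distance used in the clustering step has a standard closed form, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C is a compressor. It is sketched below with zlib standing in for C; the two toy "DNA" strings are invented:

# Normalized compression distance between two symbolic code
# representations, using zlib as the compressor. Strings are invented.

import zlib

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy symbolic encodings of two functions' code structure.
f1 = b"loop{load;mul;add;store}loop{load;add;store}"
f2 = b"loop{load;mul;add;store}branch{call;ret}"
print(f"NCD(f1, f2) = {ncd(f1, f2):.3f}")

Values near 0 indicate that the two functions share most of their structure (so optimization sequences that helped one are good candidates for the other), while values near 1 indicate unrelated code.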
APA, Harvard, Vancouver, ISO, and other styles
50

Tuzov, Ilya. "Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems." Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/159883.

Full text
Abstract:
Embedded systems are steadily extending their application areas, dealing with increasing requirements in performance, power consumption, and area (PPA). Whenever embedded systems are used in safety-critical applications, they must also meet rigorous dependability requirements to guarantee their correct operation during an extended period of time. Meeting these requirements is especially challenging for systems based on Field Programmable Gate Arrays (FPGAs), since they are very susceptible to Single Event Upsets. This leads to increased dependability threats, especially in harsh environments. Dependability should therefore be considered one of the primary criteria for decision making throughout the whole design flow, complemented by several dependability-driven processes. First, dependability assessment quantifies the robustness of hardware designs against faults and identifies their weak points. Second, dependability-driven verification ensures the correctness and efficiency of fault-mitigation mechanisms. Third, dependability benchmarking allows designers to select (from a dependability perspective) the most suitable IP cores, implementation technologies, and electronic design automation (EDA) tools. Finally, dependability-aware design space exploration (DSE) allows the selected IP cores and EDA tools to be configured optimally, improving as much as possible the dependability and PPA features of the resulting implementations. The aforementioned processes rely on fault injection testing to quantify the robustness of the designed systems. Although a wide variety of fault injection solutions exists nowadays, several important problems must still be addressed to better cover the needs of a dependability-driven design flow. In particular, simulation-based fault injection (SBFI) should be adapted to implementation-level HDL models to take into account the architecture of diverse logic primitives, while keeping the injection procedures generic and low-intrusive. Likewise, the granularity of FPGA-based fault injection (FFI) should be refined to enable the accurate identification of weak points in FPGA-based designs. Another important challenge that dependability-driven processes face in practice is the reduction of SBFI and FFI experimental effort: the high complexity of modern designs raises the experimental effort beyond the available time budgets, even in simple dependability assessment scenarios, and it becomes prohibitive in the presence of alternative design configurations. Finally, dependability-driven processes lack instrumental support covering the semicustom design flow in all its variety of description languages, implementation technologies, and EDA tools; existing fault injection tools only partially cover the individual stages of the design flow, being usually specific to a particular design representation level and implementation technology. This work addresses the aforementioned challenges by efficiently integrating dependability-driven processes into the design flow. First, it proposes new SBFI and FFI approaches that enable an accurate and detailed dependability assessment at different levels of the design flow. Second, it improves the performance of dependability-driven processes by defining new techniques for accelerating SBFI and FFI experiments. Third, it defines two DSE strategies that enable the optimal dependability-aware tuning of IP cores and EDA tools, while reducing the robustness evaluation effort as much as possible. Fourth, it proposes a new toolkit (DAVOS) that automates and seamlessly integrates the aforementioned dependability-driven processes into the semicustom design flow. Finally, it illustrates the usefulness and efficiency of these proposals through a case study consisting of three soft-core embedded processors implemented on a Xilinx 7-series SoC FPGA.
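The core loop of a fault-injection campaign can be condensed into a few lines. The "design" below is a toy software stand-in for an HDL model (the thesis injects faults into implementation-level simulations and real FPGAs), chosen so that some single bit-flips are logically masked:

# Toy simulation-based fault-injection campaign: flip one random state
# bit per experiment and compare against the golden run. All invented.

import random

random.seed(3)

def design(x, state_bits):
    """Toy stand-in for a circuit: only the low nibble of the state
    register influences the output, so some faults are masked."""
    return (x + (state_bits & 0x0F)) & 0xFF

def run_campaign(n_experiments=1000):
    failures = 0
    for _ in range(n_experiments):
        x, state = random.randrange(256), random.randrange(256)
        golden = design(x, state)                     # fault-free reference
        flipped = state ^ (1 << random.randrange(8))  # single bit-flip (SEU)
        if design(x, flipped) != golden:
            failures += 1
    return failures / n_experiments

print(f"observed failure rate under single bit-flips: {run_campaign():.2%}")

Here roughly half of the injected flips land in masked bits and cause no failure; distinguishing masked from unmasked state is precisely what finer injection granularity and weak-point identification are about, and the campaign's sheer repetition count is why acceleration techniques matter.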
Tuzov, I. (2020). Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159883
APA, Harvard, Vancouver, ISO, and other styles