Dissertations / Theses on the topic 'SW Engineering'

To see the other types of publications on this topic, follow the link: SW Engineering.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 19 dissertations / theses for your research on the topic 'SW Engineering.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Egerton, David. "Automated generation of SW design constructs from MESA source code /." Online version of thesis, 1993. http://hdl.handle.net/1850/12144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Suljevic, Benjamin. "Mapping HW resource usage towards SW performance." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44176.

Full text
Abstract:
As software applications increase in complexity, an accurate description of the hardware becomes increasingly relevant. To ensure quality of service for specific applications, it is imperative to have insight into hardware resources. Cache memory stores data needed for quick access closer to the processor and improves the quality of service of applications. A description of cache memory usually consists of the size of the different cache levels, the set associativity, or the line size, but software applications would benefit from a more detailed model. In this thesis, we offer a way of describing cache-memory behavior that benefits software performance. Several performance events are measured, including L1 cache misses, L2 cache misses, and L3 cache misses. With the collected information, we develop performance models of cache-memory behavior, test their goodness of fit, and use them to predict the behavior of the cache memory during future runs of the same application. Our experiments show that L1 cache misses can be modeled well enough to predict future runs; the L2 cache-miss model is less accurate but still usable for prediction, and the L3 cache-miss model is the least accurate and is not feasible for predicting the behavior of future runs.
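The abstract above describes fitting performance models to measured cache-miss counts, checking goodness of fit, and using the models for prediction. As a generic illustration only (not the thesis's actual model; the interval numbers and miss counts below are hypothetical), a minimal least-squares line fit with an R² goodness-of-fit check might look like:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y ~ intercept + slope * x over paired samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

def r_squared(xs, ys, intercept, slope):
    """Goodness of fit: 1 - SSE/SST (1.0 means a perfect fit)."""
    mean_y = sum(ys) / len(ys)
    sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    sst = sum((y - mean_y) ** 2 for y in ys)
    return 1 - sse / sst

# Hypothetical L1 miss counts sampled over five execution intervals.
intervals = [1, 2, 3, 4, 5]
l1_misses = [110, 205, 310, 395, 505]
a, b = fit_line(intervals, l1_misses)
predicted_next = a + b * 6  # predict misses for the next interval
```

A model of this shape would then be judged usable or not by its goodness of fit, mirroring the thesis's finding that L1 behaved most predictably.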
APA, Harvard, Vancouver, ISO, and other styles
3

Pacourek, Kryštof. "Využití workshopů při analýze požadavků softwarového projektu." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-197824.

Full text
Abstract:
This thesis addresses the still poorly mastered area of requirements analysis in software development, where incorrectly defined requirements are one of the major causes of software-project failure. It deals with techniques for gathering requirements, in particular requirements workshops: an approach in which a number of stakeholders, under the guidance of a facilitator, create a defined output and collectively shape the requirements for the software. Organizing such workshops is an intensive activity in which many aspects must be considered; these are discussed in the theoretical part of the thesis. The practical outcome of this work is a methodological framework intended as a guideline for organizing a specific workshop focused on understanding the problem areas and solutions and on identifying the stakeholders and the scope of the project, where the problem is viewed from the perspective of the overall context and with detachment, which is essential in the early stages of a software project. The workshop was designed and implemented during the university course Software Project and adapted the MMSP methodology used in the course.
APA, Harvard, Vancouver, ISO, and other styles
4

Chee, Kenneth W. "APPLIED HW/SW CO-DESIGN: Using the Kendall Tau Algorithm for Adaptive Pacing." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/1038.

Full text
Abstract:
Microcontrollers, the brains of embedded systems, have found their way into every aspect of our lives, including medical devices such as pacemakers. Pacemakers provide life-supporting functions, so it is critical for these devices to meet their timing constraints. This thesis examines the use of hardware co-processing to accelerate the calculation time of a pacemaker's critical tasks. In particular, we use an FPGA to accelerate a microcontroller's calculation of the Kendall Tau Rank Correlation Coefficient, a statistical measure that determines the pacemaker's voltage level for heart stimulation. This thesis explores three different hardware distributions of this algorithm between an FPGA and a pacemaker's microcontroller. The first implementation uses the microcontroller alone to establish the baseline performance of the system. The next executes the entire Kendall Tau algorithm on an FPGA with varying degrees of parallelism. The final implementation splits the computational requirements between the microcontroller and the FPGA. These implementations are used to compare system-level issues such as power consumption and other tradeoffs that arise when using an FPGA for co-processing.
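The Kendall Tau Rank Correlation Coefficient named in this abstract is a standard statistical measure; a minimal software sketch of the pairwise tau-a computation (not the thesis's FPGA or microcontroller implementation, and using made-up sample sequences) could be:

```python
def kendall_tau(x, y):
    """Tau-a: (concordant - discordant) pairs over all n*(n-1)/2 pairs.

    Assumes no ties in either sequence (a simplification for this sketch).
    """
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1   # pair ordered the same way in both sequences
            elif s < 0:
                discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

# Perfectly agreeing ranks give tau = 1.0; perfectly reversed ranks give -1.0.
tau_same = kendall_tau([1, 2, 3, 4], [10, 20, 30, 40])
tau_reversed = kendall_tau([1, 2, 3, 4], [40, 30, 20, 10])
```

The O(n²) pairwise loop is exactly the kind of computation that motivates offloading to parallel FPGA hardware, as the thesis investigates.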
APA, Harvard, Vancouver, ISO, and other styles
5

Seidi, Nahid. "Document-Based Databases In Platform SW Architecture For Safety Related Embedded System." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3122.

Full text
Abstract:
This project investigates document-based databases, their evaluation criteria, and use cases regarding requirements management, SW architecture, and test management for setting up an Embedded Systems Lifecycle Management (ESLM) tool. The database currently used in the ESLM is a graph database, Neo4j, which meets the needs of the current system. The study of document databases led to the decision not to use a document database for the system; instead, given the requirements, a combination of a graph database and a document database could be the practical solution in the future.
APA, Harvard, Vancouver, ISO, and other styles
6

Nowicki, Lisa Ann. "Engineering geology considerations for realignment of interstate 70/76 across the landslide at New Baltimore, Somerset County, SW Pennsylvania." Kent State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=kent1302399443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Renbi, Abdelghani. "Power and Energy Efficiency Evaluation for HW and SW Implementation of nxn Matrix Multiplication on Altera FPGAs." Thesis, Jönköping University, JTH, Computer and Electrical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-10545.

Full text
Abstract:

In addition to performance, low-power design has become an important issue in the design process of mobile embedded systems. Feature-rich mobile electronics often involve complex computation and intensive processing, which shorten battery lifetime, particularly when low-power design is not taken into consideration. Beyond mobile computers, thermal design also calls for low-power techniques to avoid component overheating, especially with VLSI technology. In this thesis we examine several techniques to achieve low-power design for FPGAs, ASICs, and processors, where ASICs offer the most flexibility for exploiting hardware-oriented low-power techniques. We survey several power-estimation methodologies, each of which suffers from at least one disadvantage. We also compare and analyze the power and energy consumption of three different designs that perform matrix multiplication on an Altera platform using a state-of-the-art FPGA device. We conclude that NIOS II/e is not an energy-efficient alternative for multiplying nxn matrices compared with hardware matrix multipliers on FPGAs, and that configware has enormous potential to reduce energy-consumption costs.
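The nxn matrix multiplication compared across hardware and software implementations in this abstract is, in its software form, the classic triple loop; a minimal reference sketch (illustrative only, unrelated to the NIOS II/e or FPGA designs measured in the thesis) is:

```python
def matmul(a, b):
    """Multiply two n x n matrices given as lists of row lists."""
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):          # i-k-j loop order keeps b accessed row-wise
            aik = a[i][k]
            for j in range(n):
                c[i][j] += aik * b[k][j]
    return c

product = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # [[19, 22], [43, 50]]
```

The O(n³) multiply-accumulate structure is what makes this kernel a natural candidate for the hardware multipliers the thesis evaluates for energy efficiency.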

APA, Harvard, Vancouver, ISO, and other styles
8

Tiejun, Hu, and Di Wu. "Design of Single Scalar DSP based H.264/AVC Decoder." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2812.

Full text
Abstract:

H.264/AVC is a new video-compression standard designed for future broadband networks. Compared with earlier video-coding standards such as MPEG-2 and MPEG-4 Part 2, it saves up to 40% in bit rate and provides important characteristics such as error resilience and stream switching. However, the improvement in performance also increases computational complexity, which requires more powerful hardware. At the same time, several image and video coding standards, such as JPEG and MPEG-4, are currently in use. Although an ASIC design meets the performance requirement, it lacks flexibility across heterogeneous standards. Hence a reconfigurable DSP processor is more suitable for media processing, since it provides both real-time performance and flexibility.

Several single-scalar DSP processors are currently on the market. Compared with media processors, which are generally SIMD or VLIW, a single-scalar DSP is cheaper and has a smaller area, while its performance for video processing is limited. In this thesis, a method to improve the performance of a single-scalar DSP by attaching hardware accelerators is proposed. The bottleneck for performance improvement is investigated, and the upper limit of acceleration of a given single-scalar DSP for H.264/AVC decoding is presented.

A behavioral model of the H.264/AVC decoder was first realized in pure software. Although real-time performance cannot be achieved with a pure-software implementation, the computational complexity of its different parts was investigated, and the critical path in decoding was exposed by analyzing this first software design. Both functional acceleration and addressing acceleration were then investigated and designed to achieve real-time decoding within an available clock frequency of 200 MHz.

APA, Harvard, Vancouver, ISO, and other styles
9

Nilsson, Per. "Hardware / Software co-design for JPEG2000." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5796.

Full text
Abstract:

Demanding applications, for example image or video processing, may involve computations that are not well suited to digital signal processors. While a DSP processor is appropriate for some tasks, its instruction set can be extended to achieve higher performance on the tasks such a processor is not normally designed for. The platform used in this project is flexible in the sense that new hardware can be designed to speed up certain computations.

This thesis analyzes the computationally complex parts of JPEG2000. To achieve sufficient performance for JPEG2000, hardware acceleration may be needed.

First, a JPEG2000 decoder was implemented for a DSP processor in assembler. Once the firmware had been written, the cycle consumption of its parts was measured and estimated, and from this analysis the bottlenecks of the system were identified. Furthermore, new processor instructions that could be implemented for this system are proposed, and the resulting performance improvements are estimated.
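The workflow this abstract describes (measure per-part cycle consumption, then select bottlenecks as candidates for acceleration) can be sketched generically; the stage names, counts, and threshold below are hypothetical, not the thesis's measurements:

```python
def find_bottlenecks(cycle_counts, threshold=0.2):
    """Return stages consuming at least `threshold` of total cycles, largest first."""
    total = sum(cycle_counts.values())
    ranked = sorted(cycle_counts.items(), key=lambda kv: kv[1], reverse=True)
    return [stage for stage, cycles in ranked if cycles / total >= threshold]

# Hypothetical per-stage cycle counts for a JPEG2000 decoder.
profile = {"entropy_decode": 600_000, "inverse_dwt": 300_000, "parsing": 100_000}
hotspots = find_bottlenecks(profile)  # candidates for new instructions / accelerators
```

A ranking like this is the usual input to the hardware/software partitioning decision: only the stages that dominate cycle consumption justify custom instructions.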

APA, Harvard, Vancouver, ISO, and other styles
10

Andersson, Mikael, and Per Karlström. "Parallel JPEG Processing with a Hardware Accelerated DSP Processor." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2615.

Full text
Abstract:

This thesis describes the design of fast JPEG processing accelerators for a DSP processor.

Certain computation tasks are moved from the DSP processor to hardware accelerators. The accelerators are slave co-processing machines controlled via a new instruction set. Clock-cycle and power consumption are reduced by utilizing the custom-built hardware, which can perform the tasks in fewer clock cycles and run several tasks in parallel, reducing the total number of clock cycles needed.

First, a decoder and an encoder were implemented in DSP assembler. The cycle consumption of their parts was measured, and from this the hardware/software partitioning was done. Behavioral models of the accelerators were then written in C++, and the assembly code was modified to work with the new hardware. Finally, the accelerators were implemented in Verilog.

The accelerator instruction-set extension was developed following a custom design flow.

APA, Harvard, Vancouver, ISO, and other styles
11

Pizzoleto, Alessandro Viola [UNESP]. "Ontologia empresarial do modelo de referência MPS para software (MR-MPS-SW) com foco nos níveis G e F." Universidade Estadual Paulista (UNESP), 2013. http://hdl.handle.net/11449/98685.

Full text
Abstract:
This work presents a proposal that aims to contribute to the understanding of the MPS Reference Model for Software (MR-MPS-SW), facilitating its deployment, especially in micro, small, and medium software-development enterprises (MSME). Another goal is to contribute to standardizing knowledge of the MR-MPS-SW among the stakeholders involved in implanting, consulting on, and evaluating the model. The MR-MPS-SW has seven maturity levels, from A (highest) to G (lowest). The proposal is a new way of organizing knowledge of the MR-MPS-SW through the definition of an enterprise ontology implemented in OWL for levels G and F. These levels pose great challenges in changing organizational culture, as well as in project management, quality assurance, and measurement. Terminology and concepts from the PMBOK (Project Management Body of Knowledge) were associated with the ontology to support the user with standardized project-management terms. Indicators from the BSC (Balanced Scorecard) model were integrated into the MR-MPS-SW model to facilitate future alignment with the company's strategic planning and business model. This work also provided a systematic evaluation of an alpha release of the ontology using usability-testing techniques from Software Engineering. The evaluation showed how the ontology facilitated understanding by users with different levels of knowledge of the MR-MPS-SW, and it yielded recommendations for improving the ontology. A beta version was made available in free ontology repositories to be evaluated by MSMEs and people interested in the MPS-SW model.
APA, Harvard, Vancouver, ISO, and other styles
12

Pizzoleto, Alessandro Viola. "Ontologia empresarial do modelo de referência MPS para software (MR-MPS-SW) com foco nos níveis G e F /." São José do Rio Preto, 2013. http://hdl.handle.net/11449/98685.

Full text
Abstract:
Advisor: Hilda Carvalho de Oliveira
Committee: Kechi Hirama
Committee: João Porto
Abstract: This work presents a proposal that aims to contribute to the understanding of the MPS Reference Model for Software (MR-MPS-SW), facilitating its deployment, especially in micro, small, and medium software-development enterprises (MSME). Another goal is to contribute to standardizing knowledge of the MR-MPS-SW among the stakeholders involved in implanting, consulting on, and evaluating the model. The MR-MPS-SW has seven maturity levels, from A (highest) to G (lowest). The proposal is a new way of organizing knowledge of the MR-MPS-SW through the definition of an enterprise ontology implemented in OWL for levels G and F. These levels pose great challenges in changing organizational culture, as well as in project management, quality assurance, and measurement. Terminology and concepts from the PMBOK (Project Management Body of Knowledge) were associated with the ontology to support the user with standardized project-management terms. Indicators from the BSC (Balanced Scorecard) model were integrated into the MR-MPS-SW model to facilitate future alignment with the company's strategic planning and business model. This work also provided a systematic evaluation of an alpha release of the ontology using usability-testing techniques from Software Engineering. The evaluation showed how the ontology facilitated understanding by users with different levels of knowledge of the MR-MPS-SW, and it yielded recommendations for improving the ontology. A beta version was made available in free ontology repositories to be evaluated by MSMEs and people interested in the MPS-SW model.
Master's
APA, Harvard, Vancouver, ISO, and other styles
13

Pasca, Vladimir. "Développement d'architectures HW/SW tolérantes aux fautes et auto-calibrantes pour les technologies Intégrées 3D." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00838677.

Full text
Abstract:
Despite the advantages of 3D integration, the test, yield, and reliability of Through-Silicon Vias (TSVs) remain among the greatest challenges for 3D systems based on Networks-on-Chip (NoC). In this thesis, an off-line test strategy is proposed for the TSV interconnects of the inter-die links of 3D NoCs. For the TSV Interconnect Built-In Self-Test (TSV-IBIST), we propose a new strategy for generating test vectors that enables the detection of structural faults (opens and shorts) and parametric faults (delay faults). Strategies for correcting transient and permanent TSV faults are also proposed at several abstraction levels: data link and network. At the data-link level, techniques based on error-correcting codes (ECC) and retransmission are used to protect the vertical links. Error-correcting codes are also used for protection at the network level. Manufacturing defects and aging of TSVs are repaired at the data-link level with redundancy- and serialization-based strategies. In the network, faulty inter-die links are not usable, and a fault-tolerant routing algorithm is proposed. Fault-tolerance techniques can be implemented at several levels. The results showed that a multi-level strategy achieves very high reliability at lower cost. Unfortunately, there is no single solution, and each strategy has its advantages and limitations. It is very difficult to evaluate the costs and the impact on performance early in the design flow. Therefore, a fault-resilience exploration methodology is proposed for 3D mesh NoCs.
APA, Harvard, Vancouver, ISO, and other styles
14

Aburawi, Abdulrahman, and Sarija Salic. "Emergency Communication." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20568.

Full text
Abstract:
Even in the 21st century, modern communication technology is still affected by natural disasters and political turmoil, which threaten people's lives and make the internet or mobile-phone networks unavailable. This work applies systems theory to produce a proof-of-concept system that uses shortwave radio technology to provide one-way communication. A message a user writes on a smartphone connected to a small pre-set transmitter is sent to a receiver in another part of the world, where the message can then be posted on the internet. The system is a cheaper alternative to other shortwave radio transmitters and has potential for improvement.
APA, Harvard, Vancouver, ISO, and other styles
15

Khlif, Manel. "Analyse de diagnosticabilité d'architecture de fonctions embarquées - Application aux architectures automobiles." Phd thesis, Université de Technologie de Compiègne, 2010. http://tel.archives-ouvertes.fr/tel-00801608.

Full text
Abstract:
An embedded system can be defined as an autonomous electronic and computing system, dedicated to a well-defined task and subject to constraints. Failures of embedded systems are increasingly difficult to predict, understand, and repair. Work on dependability has produced verification techniques and design recommendations for managing risks. At the same time, other work has sought to improve the reliability of these systems by renewing design methodologies. Diagnosis methods, in turn, have evolved to improve embedded systems' fault tolerance and their ability to self-diagnose. Thus the field of "diagnosability" analysis was born. Today, before building or manufacturing a system, its designer must ensure that it is diagnosable, that is, that the faults that may appear in it are identifiable. Diagnosability-analysis methods focus on what we call "functional diagnosability", where the system's hardware architecture is not directly considered. This thesis contributes to the analysis of the impact of function-architecture interaction on the diagnosability of an embedded system. The approach we designed can be integrated into the embedded-system design cycle; it begins with the diagnosability analysis of discrete-event systems (as presented in the literature). Our method then requires the verification of a set of properties that we have defined and called "functional-architectural diagnosability properties".
Property verification is carried out in two steps: the first is the verification of the architecture description (performed in AADL), and the second is the verification of the function-architecture interaction (performed in SystemC-Simulink). For the analysis of the interaction of the functions with the architecture, performed in SystemC-Simulink, we developed a prototype tool, COSITA, based on the analysis of co-simulation traces of the co-model. We compared the results of the co-simulation trace analysis with results obtained through emulation on a physical automotive platform in the Heudiasyc laboratory. Finally, through this thesis we developed an original diagnosability-analysis methodology that takes the constraints of the system's hardware architecture into account.
APA, Harvard, Vancouver, ISO, and other styles
16

BICCHIERAI, IRENE. "An Ontological Approach Supporting the Development of Safety-Critical Software." Doctoral thesis, 2014. http://hdl.handle.net/2158/851497.

Full text
Abstract:
In several application domains, the development of safety-critical software is subject to certification standards that prescribe activities depending on information from different stages of development. The data needed in these activities reflects concepts from three different perspectives: i) structural elements of design and implementation; ii) functional requirements and quality attributes; iii) organization of the overall process. Integrating these concepts may considerably improve the trade-off between the reward and the effort spent in verification and quality-driven activities. This dissertation proposes a systematic approach for the efficient management of the concepts and data involved in the development process of safety-critical systems, illustrating how the activities performed during the life cycle can be integrated in a common framework. The thesis exploits ontological modeling and semantic technologies to support cohesion across different stages of the development life cycle, attaching machine-readable semantics to concepts belonging to the structural, functional, and process perspectives. The formalized conceptualization enables the implementation of a tool, built on well-established technologies, that aids the accomplishment of crucial and effort-intensive activities.
APA, Harvard, Vancouver, ISO, and other styles
17

Kandasamy, Santheeban. "Dynamic HW/SW Partitioning: Configuration Scheduling and Design Space Exploration." Thesis, 2007. http://hdl.handle.net/10012/3042.

Full text
Abstract:
Hardware/software partitioning is a process that occurs frequently in embedded-system design: the procedure of determining whether a part of a system should be implemented in software or in hardware. This dissertation studies hardware/software partitioning and the use of scheduling algorithms to improve the performance of dynamically reconfigurable computing devices, that is, devices that are adaptable at the logic level to solve specific problems [Tes05]. One example of a reconfigurable computing device is the field-programmable gate array (FPGA). The emergence of dynamically reconfigurable FPGAs made it possible to configure FPGAs at runtime. Most current approaches use a simple on-demand configuration-scheduling algorithm, which reconfigures the FPGA at runtime whenever a needed configuration is found not to be loaded. The problem with this approach to dynamic reconfiguration is the reconfiguration time overhead: the time it takes to load a new configuration onto the FPGA at runtime. Configuration caches and partial configuration have been proposed as possible solutions, but these techniques suffer from various limitations. Dynamically reconfigurable FPGAs also made it possible to perform dynamic hardware/software partitioning (DHSP), the procedure of determining at runtime whether a computation should be performed using its software or its hardware implementation. The drawback of performing DHSP with configurations generated at runtime is that profiling and dynamic generation of configurations require access to a profiling tool and a synthesis tool at runtime.
This study proposes that configuration-scheduling algorithms which perform DHSP using statically generated configurations can be developed to combine the advantages, and reduce the major disadvantages, of current approaches. A case study compares and evaluates the tradeoffs between the existing approach to dynamic reconfiguration and the DHSP configuration-scheduling approach proposed in the study. A simulation model is developed to examine the performance of the various configuration-scheduling algorithms. First, the difference in execution time between the approaches is analyzed. Then other important design criteria, such as power consumption, energy consumption, area requirements, and unit cost, are analyzed and estimated, along with business and marketing considerations such as time to market and development cost. The study illustrates how different types of DHSP configuration-scheduling algorithms can be implemented and how their performance can be evaluated using a variety of software applications. It also shows how to determine when each approach is more advantageous by analyzing the tradeoffs between them, and it identifies and analyzes the underlying factors that determine which design alternative is preferable. The study shows that configuration-scheduling algorithms performing DHSP with statically generated configurations can combine the advantages and reduce some major disadvantages of current approaches, and that there are situations in which they are more advantageous than the alternatives.
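The on-demand scheduling baseline described in this abstract (reconfigure only when a needed configuration is absent, paying a reconfiguration-time overhead each time) can be simulated in a few lines. The FIFO eviction policy, the cost values, and the task trace here are illustrative assumptions, not the dissertation's simulation model:

```python
from collections import deque

def on_demand_schedule(task_trace, slots, reconfig_cost, exec_cost):
    """Total time for a trace of configuration IDs under on-demand loading.

    `slots` is how many configurations the fabric holds at once;
    eviction is FIFO (an assumption made for this sketch).
    """
    loaded = deque()
    total_time = 0
    for cfg in task_trace:
        if cfg not in loaded:
            total_time += reconfig_cost        # pay the reconfiguration overhead
            if len(loaded) == slots:
                loaded.popleft()               # evict the oldest configuration
            loaded.append(cfg)
        total_time += exec_cost
    return total_time

# Reusing 'a' while it is still loaded costs no reconfiguration; 'b' triggers one.
t = on_demand_schedule(["a", "a", "b"], slots=2, reconfig_cost=10, exec_cost=1)  # 23
```

A smarter scheduler of the kind the study proposes would aim to cut the reconfiguration terms in this total, which is exactly where the overhead dominates when `reconfig_cost` is large relative to `exec_cost`.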
APA, Harvard, Vancouver, ISO, and other styles
18

Trommelen, Michelle Suzanne. "Quaternary stratigraphy and glacial history of the Fort Nelson (southeast) and Fontas River (southwest) map areas (NTS 094J/SE and 0941/SW), northeastern British Columbia." Thesis, 2006. http://hdl.handle.net/1828/2009.

Full text
Abstract:
The study area in northeast British Columbia extends from the Rocky Mountains in the west to the Fort Nelson Lowland in the east, and includes the westernmost extent of the Laurentide Ice Sheet (LIS) and the easternmost extent of the Cordilleran Ice Sheet (CIS) in the Late Pleistocene. Surficial mapping conducted over portions of the Fontas and Fort Nelson map areas (NTS 094I/SW and 094J/SE, respectively) provides information on sediment distribution and characteristics as well as glacial history. This information has direct implications for geotechnical investigations, aggregate resources and diamond exploration in the region. Non-glacial pre-Late Wisconsinan sediments occur at multiple sites along the Prophet River, providing a pre-glacial or interglacial history for the area. Geochemical analysis and clast lithologies were used to differentiate between sediments derived from the LIS to the east, and Montane/CIS glaciers to the west. The Quaternary stratigraphy of the Prophet River valley indicates the presence of a paleo-Prophet River valley system. Non-glacial deposits in the paleovalley include overbank fines and floodplain sediments interbedded with fluvial gravels. Macrofossils within horizontally laminated organic-rich black clay and silt are interpreted to indicate deposition in the floodplain of the paleo-Prophet River within oxbow lakes and possibly also sag ponds. The climate is interpreted to have been similar to present, with a dominantly spruce forest. Wood found at one site provided a radiocarbon date of 49 300 ± 2000 BP, while wood obtained from five other sites provided non-finite radiocarbon ages. In the Late Wisconsinan, the LIS advanced west-southwest into the study area, blocking existing east-flowing regional drainage, and forming an ice-dammed proglacial lake in the Prophet River valley. Ice overrode these sediments and deposited clast-poor clayey-silt till over the entire region. Thicknesses range from less than one metre to greater than twenty metres in the Prophet River valley. In river-cut sections near the Rocky Mountains in the Fort Nelson and Tuchodi Lakes map areas, potassium-feldspar-rich granitoid and gneissic clasts, derived from the Canadian Shield, are generally found only east of the foothills, except along the Tetsa River valley. Near the mountain front, in the Tuchodi River valley, outwash from Montane/Cordilleran glacial meltwaters was deposited before the LIS advanced and ponded the valley in the Late Wisconsinan. The geochemistry of 303 till samples collected throughout the study area is used to evaluate the regional glacial history inferred from stratigraphic and geomorphic data. Three different geochemical populations are recognized and corroborated by clast lithology (relative percent) from 56 till and glaciofluvial samples. One population, covering the northeast portion of the study area, was likely deposited by the LIS where it extended west into the Rocky Mountain front during the Late Wisconsinan. The second population suggests that the eastern extent of Montane/Cordilleran ice during the Late Wisconsinan reached at least the Rocky Mountain Foothills; however, its easternmost position remains unknown. The third population can be attributed to the Late Wisconsinan LIS reworking sediment deposited on the Interior Plains by the CIS, either in the Late Wisconsinan or earlier. During early deglaciation, the ice retreated to the east-northeast, impounding local drainage at the ice margin and forming Glacial Lake Prophet in the Fort Nelson map area and Glacial Lake Hay in the Fontas map area. Glacial lakes followed the retreating ice margin and drained through a variety of meltwater channels. The exposed glaciolacustrine plain became a source for sand dunes oriented southeast, indicating katabatic paleowinds from the northwest (NTS 094I/SE).
APA, Harvard, Vancouver, ISO, and other styles
19

Juliato, Marcio. "Fault Tolerant Cryptographic Primitives for Space Applications." Thesis, 2011. http://hdl.handle.net/10012/5876.

Full text
Abstract:
Spacecraft are extensively used by public and private sectors to support a variety of services. Considering the cost and the strategic importance of these spacecraft, there has been an increasing demand to utilize strong cryptographic primitives to assure their security. Moreover, it is of utmost importance to consider fault tolerance in their designs due to the harsh environment found in space, while keeping low area and power consumption. The problem of recovering spacecraft from failures or attacks, and bringing them back to an operational and safe state, is crucial for reliability. Despite the recent interest in incorporating on-board security, there is limited research in this area. This research proposes a trusted hardware module approach for recovering a spacecraft's subsystems and cryptographic capabilities after an attack or a major failure has occurred. The proposed fault tolerant trusted modules are capable of performing platform restoration as well as recovering the cryptographic capabilities of the spacecraft. This research also proposes efficient fault tolerant architectures for the secure hash (SHA-2) and message authentication code (HMAC) algorithms. The proposed architectures are the first in the literature to detect and correct errors by using Hamming codes to protect the main registers. Furthermore, a quantitative analysis of the probability of failure of the proposed fault tolerance mechanisms is introduced. Based on an extensive set of experimental results, together with the probability-of-failure analysis, it is shown that the proposed fault tolerant scheme based on information redundancy leads to a better implementation and provides better SEU resistance than the traditional Triple Modular Redundancy (TMR). The fault tolerant cryptographic primitives introduced in this research are of crucial importance for the implementation of on-board security in spacecraft.
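The abstract's core idea, protecting registers with Hamming codes rather than triplicating them as in TMR, can be illustrated with a classic Hamming(7,4) single-error-correcting code. This is a generic sketch of the technique, not the dissertation's hardware design: the encoder adds three parity bits to four data bits, and the decoder computes a syndrome that directly names the position of a single flipped bit (e.g. one caused by a single-event upset, SEU) so it can be flipped back.

```python
# Illustrative Hamming(7,4) single-error correction, the kind of
# information redundancy described for protecting main registers.
# Codeword positions are 1..7; positions 1, 2, 4 carry parity bits.

def hamming74_encode(data):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4          # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(word):
    """Correct up to one flipped bit; return (data_bits, syndrome)."""
    w = list(word)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit
    if syndrome:
        w[syndrome - 1] ^= 1          # flip the upset bit back
    return [w[2], w[4], w[5], w[6]], syndrome

# Simulate a single-event upset flipping one stored codeword bit.
codeword = hamming74_encode([1, 0, 1, 1])
upset = list(codeword)
upset[4] ^= 1                          # bit flip at position 5
data, syndrome = hamming74_correct(upset)
assert data == [1, 0, 1, 1] and syndrome == 5
```

The contrast with TMR is the overhead: TMR stores three full copies of a register (200% redundancy) and votes, while a Hamming code needs only a logarithmic number of check bits, which is why information redundancy can yield smaller implementations with comparable single-upset resistance.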
APA, Harvard, Vancouver, ISO, and other styles
