Dissertations / Theses on the topic 'Legacy Systems'

Consult the top 50 dissertations / theses for your research on the topic 'Legacy Systems.'

1

Kuipers, Tobias. "Techniques for understanding legacy software systems." [S.l. : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2002. http://dare.uva.nl/document/65858.

2

Newton, Todd A., Myron L. Moodie, Ryan J. Thibodeaux, and Maria S. Araujo. "Network System Integration: Migrating Legacy Systems into Network-Based Architectures." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/604308.

Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
The direction of future data acquisition systems is rapidly moving toward a network-based architecture. There is a handful of these network-based flight test systems already operating, and the current trend is catching on all over the flight test community. As vendors are churning out a whole new product line for networking capabilities, system engineers are left asking, "What do I do with all of this non-networked, legacy equipment?" Before overhauling an entire test system, one should look for a way to incorporate the legacy system components into the modern network architecture. Finding a way to integrate the two generations of systems can provide substantial savings in both cost and application development time. This paper discusses the advantages of integrating legacy equipment into a network-based architecture with examples from systems where this approach was utilized.
3

Jakobsson, Rikard, and Jakob Molin. "Tidsfördelning vid vidareutveckling av "legacy" system." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279097.

Abstract:
Working with legacy systems is a common task for programmers, and the development of these requires great effort, but data regarding the distribution of this effort is scarce. Such data would be valuable when evaluating the cost of continued development of a system compared to a rewrite or migration. To rectify this, we aim to provide a data point regarding the effort distribution for the migration of a small legacy system. Our question is “How is the cost in time distributed when a legacy system is evaluated and rebuilt?”. The data presented in this thesis comes from the development of a legacy system developed by students at KTH. The system needed an update since it had ceased to function. There was a great amount of documentation with regard to requirement specifications and application design which could be used when redeveloping the system. The code, however, lacked any substantial documentation and structure, so it was decided early on that rewriting the system according to the existing documentation was going to be more efficient than working with the code of the current system. A scientific case study built on quantitative methods was used to collect data. To measure effort, the time spent on each predefined activity was counted in minutes, and this was used to calculate the distribution of effort. The result of this thesis is a table of data and a review of the distribution of effort when working on a small legacy system with clear requirements. The data produced in this thesis is based on the effort spent on rewriting a system that was not worth updating. The conclusion is that most of the effort will be spent on implementing the code when a clearly defined system is rewritten from the ground up.
4

Beijert, Lotte. "Designating Legacy Status to IT Systems : A framework in relation to a future-oriented perspective on legacy systems." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Informatik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-30337.

Abstract:
Organizations that have come to depend on legacy systems face quite a paradoxical problem. Maintaining the system might prove ineffective in accommodating necessary changes, but a system migration project is expensive and incurs a high amount of risk. Organizations are therefore hesitant to respond to the legacy system problem by undertaking action. Legacy systems often cause their organization no problems at present, but a focus on the future with regard to the legacy system problem is lacking. This results in IT systems reaching an end-of-life state. The research therefore set out to explore a future-oriented perspective on legacy systems by means of observation, a literature review and a survey. The researcher found the key concept of a future-oriented perspective to be that any system that is limiting an organization's ability to grow and innovate can be regarded as a legacy system. A framework to designate legacy status to IT systems is proposed in order to guide practitioners in acknowledging a problematic IT system and facilitating an appropriate response at the right time. In relation to a future-oriented perspective, when to designate legacy status is best determined according to the system’s flexibility towards change and the alignment of the system with the business. In that regard, IT systems are end-of-life systems when they are too inflexible to change, and as a result become unaligned with either current operations or a future business opportunity or need.
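Reduced to its essentials, the designation rule keys on two attributes: flexibility towards change and business alignment. A toy sketch of such a decision rule follows; the numeric scores and the 0.5 threshold are illustrative assumptions, not values from the thesis.

```python
def legacy_status(flexibility: float, alignment: float,
                  threshold: float = 0.5) -> str:
    """Classify an IT system on a future-oriented view: a system too
    inflexible to change, or unaligned with current or future business
    needs, is limiting growth and counts as legacy (scores in [0, 1])."""
    if flexibility < threshold and alignment < threshold:
        return "end-of-life"   # cannot change and no longer fits the business
    if flexibility < threshold or alignment < threshold:
        return "legacy"        # limiting growth or innovation
    return "healthy"

print(legacy_status(flexibility=0.2, alignment=0.8))  # legacy
```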
5

Narang, Sonia. "Integrating legacy systems with enterprise data store." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0022/MQ51775.pdf.

6

O’Connell, Tim. "PACKET-BASED TELEMETRY NETWORKS OVER LEGACY SYSTEMS." International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/604806.

Abstract:
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada
The telemetry industry anticipates the tremendous potential value of adding full networking capability to telemetry systems. However, much of this potential can be realized while working with legacy equipment. By adding modules that interface transparently to existing equipment, continuous telemetry data can be encapsulated in discrete packets for over the air transmission. Packet fields can include header, sequence number and bytes for error detection and correction. The RF packet is transmitted without gaps through a standard serial interface and rate adjusted for the packet overhead – effectively making packetization transparent to a legacy system. The receiver unit performs packet synchronization, error correction, extraction of stream quality metrics and re-encapsulation of the payload data into an internet protocol (IP) packet. These standard packets can then be sent over the existing network transport system to the range control center. At the range control center, the extracted stream quality metrics are used to select the best telemetry source in real-time. This paper provides a general discussion of the path to a fully realized, packet-based telemetry network and a brief but comprehensive overview of the Hypernet system architecture as a case study.
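The encapsulation scheme the abstract describes (a header, a sequence number, and error-check bytes wrapped around a chunk of continuous telemetry) is easy to illustrate. A minimal sketch follows; the field layout, the sync byte and the CRC-32 check are assumptions for illustration, not the Hypernet format.

```python
import struct
import zlib

SYNC = 0xFA  # hypothetical sync byte; a real system defines its own framing

def encapsulate(payload: bytes, seq: int) -> bytes:
    """Wrap a chunk of continuous telemetry in a discrete packet:
    header (sync + length), sequence number, payload, CRC-32 trailer."""
    header = struct.pack(">BHI", SYNC, len(payload), seq)
    crc = zlib.crc32(header + payload)
    return header + payload + struct.pack(">I", crc)

def decapsulate(packet: bytes):
    """Receiver side: verify the CRC, recover sequence number and payload.
    A real receiver also resynchronizes on SYNC and corrects errors."""
    header, payload, trailer = packet[:7], packet[7:-4], packet[-4:]
    sync, length, seq = struct.unpack(">BHI", header)
    (crc,) = struct.unpack(">I", trailer)
    ok = (sync == SYNC and length == len(payload)
          and crc == zlib.crc32(header + payload))
    return ok, seq, payload

# A continuous stream split into fixed-size packets for over-the-air transmission:
stream = bytes(range(256)) * 4
packets = [encapsulate(stream[i:i + 128], seq)
           for seq, i in enumerate(range(0, len(stream), 128))]
assert all(decapsulate(p)[0] for p in packets)
```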
7

Burstein, Leah. "Legacy Student Information System: Replacement or Enhancement?" Digital Commons at Loyola Marymount University and Loyola Law School, 2016. https://digitalcommons.lmu.edu/etd/376.

8

Mickens, Leah M. "Rescuing the legacy project." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28239.

Abstract:
Thesis (M. S.)--Digital Media, Georgia Institute of Technology, 2009.
Committee Chair: Knoespel, Kenneth; Committee Member: Burnett, Rebecca; Committee Member: Harrell, Fox; Committee Member: Herrington, TyAnna.
9

Onyino, Walter. "Rearchitecting legacy information systems : a service based method." Thesis, Lancaster University, 2001. http://eprints.lancs.ac.uk/11941/.

10

Tigerström, Viktor, and John Fahlgren. "Migrering av transaktionstunga legacy-system : En fallstudie hos Handelsbanken." Thesis, Uppsala universitet, Institutionen för informatik och media, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-226766.

11

Gee, Karen M. "An architectural framework for integrating COTS/GOTS/legacy systems." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA381221.

12

Marburger, André. "Reverse Engineering of Complex Legacy Telecommunication Systems." Aachen: Shaker, 2005. http://d-nb.info/1186580232/34.

13

Wampler, Douglas R. "Legacy systems migration in the small liberal arts educational institution." Virtual Press, 2003. http://liblink.bsu.edu/uhtbin/catkey/1273274.

Abstract:
With new technologies arriving at an ever-increasing rate, legacy systems migration has become a growing research area. However, there are few if any studies that analyze a comprehensive actual migration in progress. Legacy system migrations commonly fail, and typically fail in the planning phase unbeknownst to their project managers. Possessing information on other successes and problems would aid in mitigating these failures.

The Value and Significance of the Problem: The purpose of this research is to document an actual legacy systems migration for a small liberal arts educational institution and analyze the successes and failures to identify their underlying causes, in order to enforce or discourage certain practices. There are very few software/hardware migration studies, if any, that are based on actual data. For this reason this study provides the academic community with an important data point for analysis.

The Method: The migration under consideration is an Administrative Systems Upgrade at DePauw University. The current system, which has had only minor upgrades, is approximately twenty years old. The planned migration has a three-year scope. The planning phase started in March 2002 and finished in March 2003. The documents analyzed to form a basis for the study are a white paper entitled Employment of Technology to Improve Administrative Operations: Assessment and Recommendations, produced by an external consultant; the resulting actual migration plan; and discussions with the project manager and related technical staff involved with the migration.

Risks: Since the research will conclude before the completion of the DePauw Administrative Systems Upgrade, this study will be limited to the tasks completed within the timeframe. As with studying any real system, success of the study may be affected by: completion of assigned milestones within the project itself; input from end users in the form of interviews or surveys; and input from IT staff involved in the upgrade in the form of interviews or surveys. The portion of the DePauw Administrative Systems Upgrade that falls outside the scope of this research may be the topic of future research.
Department of Computer Science
14

Gora, Michael Arthur. "Securing Software Intellectual Property on Commodity and Legacy Embedded Systems." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/33473.

Abstract:
The proliferation of embedded systems into nearly every aspect of modern infrastructure and society has seen their deployment in such diverse roles as monitoring the power grid and processing commercial payments. Software intellectual property (SWIP) is a critical component of these increasingly complex systems and represents a significant investment to its developers. However, deeply immersed in their environment, embedded systems are difficult to secure. As a result, developers want to ensure that their SWIP is protected from being reverse engineered or stolen by unauthorized parties. Many techniques have been proposed to address the issue of SWIP protection for embedded systems. These range from secure memory components to complete shifts in processor architectures. While powerful, these approaches often require the development of systems from the ground up or the application of specialized and often expensive hardware components. As a result they are poorly suited to address the security concerns of legacy embedded systems or systems based on commodity components. This work explores the protection of SWIP on heavily constrained, legacy and commodity embedded systems. We accomplish this by evaluating a generic embedded system to identify the security concerns in the context of SWIP protection. The evaluation is applied to determine the limitations of a software only approach on a real world legacy embedded system that lacks any specialized security hardware features. We improve upon this system by developing a prototype system using only commodity components. Finally we propose a Portable Embedded Software Intellectual Property Security (PESIPS) system that can easily be deployed as a framework on both legacy and commodity systems.
Master of Science
15

Nahas, Mousaab. "Investigation of different techniques to upgrade legacy WDM communication systems." Thesis, Aston University, 2006. http://publications.aston.ac.uk/15361/.

Abstract:
This thesis presents an experimental investigation of different effects and techniques that can be used to upgrade legacy WDM communication systems. The main issue in upgrading legacy systems is that the fundamental setup, including component settings such as EDFA gains, should not be altered; the improvement must therefore be carried out at the network terminal. A general introduction to optical fibre communications is given at the beginning, including optical communication components and system impairments. Experimental techniques for performing laboratory optical transmission experiments are presented before the experimental work of this thesis. These techniques include optical transmitter and receiver designs as well as the design and operation of the recirculating loop. The main experimental work includes three different studies. The first study involves the development of line monitoring equipment that can be reliably used to monitor the performance of optically amplified long-haul undersea systems. This equipment can instantly locate faults along the legacy communication link, which in turn enables rapid repair, hence upgrading the legacy system. The second study investigates the effect of changing the number of transmitted 1s and 0s on the performance of a WDM system. This effect can, in reality, be seen in some coding systems, e.g. the forward-error correction (FEC) technique, where the proportions of 1s and 0s are changed at the transmitter by adding extra bits to the original bit sequence. The final study presents transmission results after all-optical format conversion from NRZ to CSRZ and from RZ to CSRZ using a semiconductor optical amplifier in a nonlinear optical loop mirror (SOA-NOLM). This study is mainly based on the fact that all-optical processing, including format conversion, has become attractive for future data networks that are proposed to be all-optical. The feasibility of the SOA-NOLM device for converting single and WDM signals is described. The optical conversion bandwidth and its limitations for WDM conversion are also investigated. All studies in this thesis employ 10 Gbit/s single or WDM signals transmitted over a dispersion-managed fibre span in the recirculating loop. The fibre span is composed of single-mode fibres (SMF) whose losses and dispersion are compensated using erbium-doped fibre amplifiers (EDFAs) and dispersion compensating fibres (DCFs), respectively. Different configurations of the fibre span are presented in different parts.
16

Al-Qasem, Mohammad. "An agent-based approach to handle interoperability in legacy information systems." Thesis, Keele University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287966.

17

Waters, Robert Lee. "Obtaining Architectural Descriptions from Legacy Systems: The Architectural Synthesis Process (ASP)." Diss., Georgia Institute of Technology, 2004. http://etd.gatech.edu/theses/available/etd-10272004-160115/unrestricted/waters%5Frobert%5Fl%5F200412%5Fphd.pdf.

Abstract:
Thesis (Ph. D.)--Computing, Georgia Institute of Technology, 2005.
Rick Kazman, Committee Member ; Colin Potts, Committee Member ; Mike McCracken, Committee Member ; Gregory Abowd, Committee Chair ; Spencer Rugaber, Committee Member. Includes bibliographical references.
18

Rosa, Otávio Araujo Leitão. "Test-Driven Maintenance: An Approach for the Maintenance of Legacy Systems." Pontifícia Universidade Católica do Rio de Janeiro, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=18385@1.

Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
Test-Driven Development is a software development technique based on quick cycles that switch between writing tests and implementing a solution that makes the tests pass. Test-Driven Development has produced excellent results in various aspects of building new software systems. Increased maintainability, improved design, reduced defect density, better documentation and increased code test coverage are reported as advantages that contribute to reducing the cost of development and, consequently, to increasing return on investment. All these benefits have contributed to Test-Driven Development becoming an increasingly relevant practice in software development. When evaluating Test-Driven Development from the perspective of maintaining legacy systems, we face a clear mismatch when trying to adopt this technique. Test-Driven Development is based on the premise that tests should be written before coding, but when working with legacy code we already have thousands of lines written and running. Considering this context, we discuss in this dissertation a technique, which we call Test-Driven Maintenance, that results from adapting Test-Driven Development to the needs of maintaining legacy systems. We describe how we performed the adaptation that led us to this new technique. Afterwards, we evaluate the efficacy of the technique by applying it to a realistic project. To obtain realistic evaluation results, we performed an empirical study while introducing the technique in a maintenance team working on a legacy system that is in constant evolution and use by an enterprise in Rio de Janeiro.
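The inversion at the heart of Test-Driven Maintenance is that tests are written to pin down what legacy code already does before it is changed. A minimal sketch of that characterization-test step, with a hypothetical `legacy_discount` function standing in for real legacy code:

```python
import unittest

def legacy_discount(total: float, customer: str) -> float:
    """Stand-in for undocumented legacy code whose behaviour must be preserved."""
    if customer == "gold":
        return total * 0.9
    return total

class CharacterizationTests(unittest.TestCase):
    """Step 1: record what the code does *today*, whether or not that
    behaviour was ever specified, so changes are made under protection."""
    def test_gold_customers_get_ten_percent_off(self):
        self.assertEqual(legacy_discount(100.0, "gold"), 90.0)

    def test_unknown_customers_pay_full_price(self):
        self.assertEqual(legacy_discount(100.0, "unknown"), 100.0)

# Step 2: only once these pass is the maintenance change made, with a
# conventional TDD test written first for the new requirement.

if __name__ == "__main__":
    unittest.main()
```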
19

You, Danyu. "Re-engineering the Legacy Software Systems by using Object-Oriented Technologies." Ohio University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1386167451.

20

Sinclair, Robert, Russell Beech, Kevin Jones, Scott Mundon, and Charles H. Jones. "LEGACY SYSTEMS’ SENSORS BECOME PLUG-N-PLAY WITH IEEE P1451.3 TEDS." International Foundation for Telemetering, 2002. http://hdl.handle.net/10150/605608.

Abstract:
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California
Replacing and maintaining sensors in existing legacy systems is costly and time-consuming, since no information beyond voltage or current is supplied by these sensors. When a sensor is replaced or added, information for that sensor has to be incorporated by the software programmer into the main system software – a costly and time-consuming process. A method has been developed to give these old sensors the intelligence to meet the requirements of the proposed IEEE P1451.3 standard. This is accomplished with no changes to the legacy hardware and a minor, one-time change to the legacy main system software.
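The premise of a Transducer Electronic Data Sheet (TEDS) is that the sensor carries its own description, so adding or replacing a sensor no longer requires editing the main system software. A minimal sketch of that idea follows; the field names and the linear calibration are illustrative assumptions, not the IEEE P1451.3 binary layout.

```python
from dataclasses import dataclass

@dataclass
class Teds:
    """Self-describing record attached to a legacy analog sensor."""
    manufacturer: str
    model: str
    serial: int
    units: str          # engineering units of the converted reading
    min_volts: float    # electrical output range of the legacy sensor
    max_volts: float
    min_value: float    # physical range the electrical range maps to
    max_value: float

    def convert(self, volts: float) -> float:
        """Map a raw voltage to engineering units (linear calibration assumed)."""
        span = (volts - self.min_volts) / (self.max_volts - self.min_volts)
        return self.min_value + span * (self.max_value - self.min_value)

# Plug-and-play: the acquisition system reads the TEDS and needs no code change.
pressure = Teds("Acme", "P-100", 42, "kPa", 0.0, 5.0, 0.0, 700.0)
print(pressure.convert(2.5), pressure.units)  # 350.0 kPa
```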
21

Tahlawi, Lubna. "Combining Legacy Modernization Approaches For OO and SOA." Thesis, Fredericton: University of New Brunswick, 2012. http://hdl.handle.net/1882/44595.

Abstract:
Organizations with older legacy systems face a number of challenges, including obsolescent technologies, brittle software, integration with modern applications, and rarity of properly skilled human resources. An increasingly common strategy for addressing such challenges is application modernization, which transforms legacy applications into (a) newer object-oriented programming languages and (b) modern Service-Oriented Architecture (SOA). Published approaches to legacy application modernization focus either on technology transformation or SOA transformation, but not both. Given that both types of transformation are desirable, it is valuable to explore how to combine existing approaches to perform both transformation types within a single project. This thesis proposes principles for combining such approaches, and demonstrates how these principles can be applied through an example of a combined approach along with a simulated application of this example. The results of this simulated application leave us with considerable confidence that both transformations can be successfully incorporated into a combined project.
22

De, Vergnes Matthieu (Matthieu Arthur). "Impact of Middle East emerging carriers on US and EU legacy airlines." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111244.

Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 129-130).
Airlines in the Middle East have captured significant attention from governments, media and consumers over the past decade. By building large networks that facilitate international connections at their hubs, Middle East carriers are able to compete in a wide range of origin-destination markets around the globe. Three of these carriers stand out with their recent expansion to European, US and Asian destinations: Emirates, Etihad Airways and Qatar Airways, also known as the ME3 carriers. From a capacity perspective, ME3 airlines have grown very rapidly on routes where they compete with US and European airlines. Over the 2010-2015 period, from Europe to the ME, ME3 airlines increased their seat capacity by 97% against a 1% reduction by European legacy carriers. At the same time, ME3 carriers increased the number of seats from the US by 181% while, as of 2017, US carriers have cut all flights to the Middle East, with the exception of Israel. In addition, ME3 capacity to Asia, and in particular to India, grew significantly. From a traffic perspective, ME3 carriers have had a significant impact in markets beyond the Middle East. Passenger traffic in the EU-India and US-India markets grew by 14% and 26% respectively since 2010. Most of the growth was driven by ME3 carriers, allowing them to reach 26% and 37% market share in these markets in 2015. The ME3 capacity growth likely stimulated the overall demand in markets to India but has also caused some diversion of traffic away from non-ME3 carriers. In a two-way fixed effect econometric model, we estimated that the presence of ME3 carriers in average EU-India and US-India markets diverted, respectively, 20% and 32% of non-ME3 traffic to ME3 carriers. The growing influence of ME3 carriers has led to significant controversy over claims of subsidies and unfair competition from both US and ME3 airlines. Based on a brief review of the various claims, we found that both sides have received government backing. It is difficult to determine whether either of the parties has violated established competition rules while benefiting from this support. Nonetheless, the dispute is likely to continue, if not for legal purposes then at least for public relations and political purposes.
by Matthieu de Vergnes.
S.M. in Technology and Policy
23

Mohan, Naveen. "Architecting Safe Automated Driving with Legacy Platforms." Licentiate thesis, KTH, Mekatronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-223687.

Abstract:
Modern vehicles have electrical architectures whose complexity grows year after year due to feature growth corresponding to customer expectations. The latest of these expectations, automation of the dynamic driving task, is poised to bring about some of the largest changes seen so far. In one fell swoop, the functionality required for automated driving not only drastically increases the system complexity, it also removes the fall-back of the human driver, who is usually relied upon to handle unanticipated failures after the fact. Architecting thus requires a greater rigour than ever before to maintain the level of safety that has been associated with the automotive industry. The work in this thesis has been conducted, in close collaboration with our industrial partner Scania CV AB, within the Vinnova FFI funded project ARCHER. The thesis aims to provide a methodology for architecting during the concept phase of development, using industrial practices and principles including those from safety standards such as ISO 26262. The main contributions of the thesis fall in two areas. The first, Part A, contributes (i) an analysis of the challenges of architecting automated driving, which motivates the approach taken in the rest of the thesis. The second, Part B, contributes (ii) a definition of a viewpoint for functional safety according to the definitions of ISO 42010, (iii) a method to systematically extract information from legacy components, and (iv) a process to use legacy information and architect in the presence of uncertainty to provide a work product, the Preliminary Architectural Assumptions (PAA), as required by ISO 26262. The contributions of Part B together comprise a methodology to architect the PAA. A significant challenge in working with industry is finding the right fit between idealized principles and practical utility. The methodology in Part B has been judged fit for purpose by different parts of the organization at Scania, and multiple case studies have been conducted with senior architects to assess its usefulness. The methodology proved conducive both to generating PAAs of a quality deemed suitable by the organization and to finding inadequacies in the architecture that had not been found with the previous non-systematic methods. These benefits have led to the commissioning of a prototype tool supporting the methodology, which has begun to be used in automation-related projects at Scania. The methodology will be refined as the projects progress towards completion, using the experiences gained. A further impact of the work is seen in two patent filings that originated from the case studies in Part B. Emanating from needs discovered during the application of the methods, these filed patents (with no prior publications) outline future directions of research into reference architectures augmented with safety policies that are safe in the presence of detectable faults and failures. To aid verification of these ideas, work has begun on identifying critical scenarios and their elements in automated driving, and a flexible simulation platform is being designed and developed at KTH to test the chosen critical scenarios.
Vinnova-FFI funded Project ARCHER
24

Leandersson, Carl, and Carl Nordefjäll. "Migrering av legacy-system i finanssektorn : En fallstudie hos ett försäkringsbolag." Thesis, Uppsala universitet, Institutionen för informatik och media, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-354732.

25

Al-Khasib, Tariq. "Resource allocation and optimization for multiple-user legacy and cognitive radio systems." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/24588.

Abstract:
The rapid transition towards user mobility and the increased demand it carries for bandwidth and data rates have been the drivers of significant advancements in research and development in wireless communications in the last decade. These advancements materialized through enhancements to well-established legacy systems and through conceptual innovations with great potential. In this thesis, we consider a diverse set of tools and techniques that facilitate efficient utilization of system resources in legacy and Cognitive Radio (CR) systems without hindering the integrity and robustness of the system design. First, we introduce the concept of service differentiation at the receiver, which can be realized by means of a new multiple-user Multiple-Input Multiple-Output (MIMO) detector based on the well-known V-BLAST algorithm. We devise the DiffSIC algorithm, which is able to differentiate between users in service based on their priorities or imminent needs. DiffSIC achieves its goal by determining the optimal order of detection, at the receiver, that best fits the users' profiles. Second, we propose a channel allocation technique for the transmitter of MIMO multiple-user access systems which enhances the system capacity without aggravating the complexity of the receiver. In particular, we allow users to share resources to take full advantage of the degrees of freedom available in the system. Moreover, we show how to realize these enhancements using simple, yet powerful, modulation and detection techniques. Next, we propose new robust system designs for MIMO CR systems under the inevitable reality of imperfect channel state information at the CR transmitter. We apply innovative tools from optimization theory to efficiently and effectively solve design problems that involve multiple secondary users operating over multiple frequency carriers. Finally, we observe the effect of primary users' activity on the stability of, and quality of service provided by, CR systems sharing the same frequency resource with the primary system. We propose admission control mechanisms to limit the effect of primary users' activity on the frequency of system outages at the CR system. We also devise pragmatic eviction control measures to overcome periods of system infeasibility with a minimally disruptive approach.
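The ordering idea behind DiffSIC can be sketched compactly: successive interference cancellation in which the detection order is dictated by user priorities rather than the classic best-channel rule. The following zero-forcing sketch over a flat-fading BPSK MIMO model is an illustration under those assumptions; the thesis's actual criterion for trading priorities against channel conditions is richer.

```python
import numpy as np

def sic_detect(y, H, priorities, constellation=np.array([-1.0, 1.0])):
    """Successive interference cancellation (V-BLAST style) where the
    detection order is fixed by user priority (highest first) rather
    than by the usual best-SNR rule -- the differentiation idea."""
    y = y.astype(float).copy()
    active = list(range(H.shape[1]))          # users not yet detected
    symbols = np.zeros(H.shape[1])
    for k in np.argsort(priorities)[::-1]:    # high-priority users first
        W = np.linalg.pinv(H[:, active])      # zero-forcing nulling of the rest
        est = W[active.index(k)] @ y
        symbols[k] = constellation[np.argmin(np.abs(constellation - est))]
        y -= H[:, k] * symbols[k]             # cancel the detected user
        active.remove(k)
    return symbols

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))                   # 4 rx antennas, 3 users
x = rng.choice([-1.0, 1.0], size=3)           # BPSK symbols
y = H @ x + 0.01 * rng.normal(size=4)
print(sic_detect(y, H, priorities=[2, 0, 1]), x)  # detected vs. transmitted
```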
26

Baldin, Daniel. "Reconfiguration of legacy software artifacts in resource constraint embedded systems." Paderborn: Universitätsbibliothek, 2014. http://d-nb.info/1048616053/34.

27

Nguyen, Thomas M. "Commercial Off-The-Shelf (COTS)/Legacy systems integration architectural design and analysis." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA383533.

28

Zhang, Zhuo. "A planning approach to migrating domain-specific legacy systems into service oriented architecture." Thesis, De Montfort University, 2012. http://hdl.handle.net/2086/9020.

Abstract:
The planning work prior to implementing an SOA migration project is very important for its success. Up to now, most of this work has been done manually. An SOA migration planning approach based on intelligent information processing methods is proposed to semi-automate this manual work. This thesis investigates the principal research question: 'How can we obtain SOA migration planning schemas (semi-)automatically, instead of by traditional manual work, in order to determine if legacy software systems should be migrated to an SOA computing environment?'. The controlled experiment research method has been adopted for directing research throughout the thesis. Data mining methods are used to analyse the SOA migration source and migration targets; the mined information supplements traditional analysis results. Text similarity measurement methods are used to measure the matching relationship between migration sources and migration targets, implementing a quantitative analysis of matching relationships instead of the common qualitative analysis. Concretely, association rule and sequence pattern mining algorithms are proposed to analyse legacy assets and domain logics for establishing a Service model and a Component model. These two algorithms can mine all motifs with any min-support number without assuming any ordering, which makes them better suited than existing algorithms for establishing Service models and Component models in SOA migration situations. Two matching strategies, based on the keyword and superficial-semantic levels, are described; they can calculate the degree of similarity between legacy components and domain services effectively. Two decision-making methods, based on similarity matrices and hybrid information, are investigated for creating SOA migration planning schemas. Finally, a simple evaluation method is depicted. Two case studies on migrating e-learning legacy systems to SOA have been explored. The results show the proposed approach is encouraging and applicable. Therefore, SOA migration planning schemas can be created semi-automatically, instead of by traditional manual work, using data mining and text similarity measurement methods.
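The keyword-level matching step can be illustrated with a small similarity computation. A minimal sketch using bag-of-words cosine similarity between hypothetical legacy-component and domain-service descriptions (the superficial-semantic level and the mining algorithms are not shown):

```python
from collections import Counter
from math import sqrt

def cosine(a: str, b: str) -> float:
    """Keyword-level similarity between two textual descriptions."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[t] * wb[t] for t in wa)
    norm = (sqrt(sum(v * v for v in wa.values()))
            * sqrt(sum(v * v for v in wb.values())))
    return dot / norm if norm else 0.0

# Invented descriptions of legacy components and target domain services.
components = {"C1": "enrol student register course grade",
              "C2": "render lecture video stream player"}
services = {"S1": "course registration service for students",
            "S2": "video streaming service for lectures"}

# Similarity matrix: rows are legacy components, columns are domain services;
# the strongest entries suggest which component should realize which service.
for c, ctext in components.items():
    print(c, {s: round(cosine(ctext, stext), 2) for s, stext in services.items()})
```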
29

Cosentino, Valerio. "A model-based approach for extracting business rules out of legacy information systems." Phd thesis, Ecole des Mines de Nantes, 2013. http://tel.archives-ouvertes.fr/tel-00984763.

Abstract:
Today's business world is very dynamic and organizations have to quickly adjust their internal policies to follow the market changes. Such adjustments must be propagated to the business logic embedded in the organization's information systems, which are often legacy applications not designed to represent and operationalize the business logic independently from the technical aspects of the programming language employed. Consequently, the business logic buried in the system must be discovered and understood before being modified. Unfortunately, such activities slow down the adaptation of the system to new requirements settled in the organization policies and threaten the consistency and coherency of the organization's business. In order to simplify these activities, we provide a model-based approach to extract and represent the business logic, expressed as a set of business rules, from the behavioral and structural parts of information systems. We implement this approach for Java, COBOL and relational database management systems. The proposed approach is based on Model Driven Engineering, which provides a generic and modular solution adaptable to different languages by offering an abstract and homogeneous representation of the system.
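The gist of rule discovery, walking a model of the program and keeping the conditionals that govern business variables, can be sketched. The following Python analogue uses the `ast` module as a stand-in for the program model (the thesis itself targets Java, COBOL and relational databases through MDE); the loan example and the business-term list are invented.

```python
import ast

SOURCE = """
def approve_loan(amount, score, employed):
    if score < 600:
        return False          # business rule: minimum credit score
    if amount > 50000 and not employed:
        return False          # business rule: cap for unemployed applicants
    return True
"""

def extract_rules(source: str, business_terms: set) -> list:
    """Keep every conditional whose test mentions a business variable;
    each surviving condition is a candidate business rule."""
    rules = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.If):
            names = {n.id for n in ast.walk(node.test) if isinstance(n, ast.Name)}
            if names & business_terms:
                rules.append(ast.unparse(node.test))  # Python 3.9+
    return rules

print(extract_rules(SOURCE, {"score", "amount", "employed"}))
# ['score < 600', 'amount > 50000 and not employed']
```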
30

Inam, Rafia. "Hierarchical scheduling for predictable execution of real-time software components and legacy systems." Doctoral thesis, Mälardalens högskola, Inbyggda system, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-26548.

Abstract:
This dissertation presents techniques to achieve predictable execution of coarse-grained software components and to preserve the temporal properties of components during their integration and reuse. The dissertation presents a novel concept, the runnable virtual node (RVN), whose interaction with the environment is bounded by both a functional and a temporal interface, and whose internal temporal behaviour remains valid when integrated with other components or when reused in a new environment. The realization of the RVN exploits techniques for hierarchical scheduling to achieve temporal isolation, and principles from component-based software engineering to achieve functional isolation. Proof-of-concept case studies executed on a micro-controller demonstrate the preservation of real-time properties within software components for predictable integration and reusability in a new environment, in both hierarchical-scheduling and RVN contexts. Further, a multi-resource server (MRS) is proposed and implemented to enable predictable execution when composing multiple real-time components on a COTS multicore platform. The MRS uses resource reservation for both CPU bandwidth and memory-bus bandwidth to bound the interference between tasks running on the same core, as well as between tasks running on different cores. The latter could, without the MRS, interfere with each other due to contention on a shared memory-bus and memory. The results indicate that the MRS can be used to "encapsulate" legacy systems and to give them enough resources to fulfill their purpose. The dissertation also provides the compositional schedulability analysis for the MRS, and an experimental study is performed to bring insight into the correlation between the server budgets. We believe that the proposed approaches enable faster software integration and support legacy reuse, and that this work transcends the boundaries of software engineering and real-time systems.
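The reservation idea behind the MRS can be shown in miniature: a server holds periodically replenished budgets for both CPU time and memory-bus accesses, and its tasks are suspended when either budget is exhausted. A schematic sketch with invented numbers (the real MRS sits inside a hierarchical scheduler with a compositional schedulability analysis):

```python
from dataclasses import dataclass

@dataclass
class MultiResourceServer:
    """Per-period reservations for two resources, as in the MRS concept."""
    cpu_budget_ms: float        # CPU time allowed per period
    bus_budget_accesses: int    # memory-bus accesses allowed per period
    cpu_used_ms: float = 0.0
    bus_used_accesses: int = 0

    def replenish(self):
        """Called at every period boundary by the global scheduler."""
        self.cpu_used_ms = 0.0
        self.bus_used_accesses = 0

    def may_run(self) -> bool:
        """Tasks of this server run only while *both* budgets remain,
        bounding interference on other cores that share the memory bus."""
        return (self.cpu_used_ms < self.cpu_budget_ms
                and self.bus_used_accesses < self.bus_budget_accesses)

    def account(self, cpu_ms: float, accesses: int):
        self.cpu_used_ms += cpu_ms
        self.bus_used_accesses += accesses

legacy = MultiResourceServer(cpu_budget_ms=4.0, bus_budget_accesses=1000)
legacy.account(cpu_ms=3.5, accesses=1000)
print(legacy.may_run())   # False: bus budget exhausted before CPU budget
```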
PPMSched
PROGRESS
31

Reeder, Joshua A. "Improving legacy aircraft systems through Condition-Based Maintenance: an H-60 case study." Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/43985.

Abstract:
Approved for public release; distribution is unlimited
Condition-Based Maintenance (CBM) has been the focus of Department of Defense efforts to reduce the cost of maintaining weapons systems for nearly two decades. Through an investigation of the MH-60S helicopter, this paper uses a gap analysis framework to determine the value of increasing CBM usage. The Naval Aviation Maintenance Program (NAMP) has used scheduled inspections as the backbone of aviation maintenance since 1959. The most significant of these inspections is the phase cycle, which provides inspection of aircraft components based on flight hours. This study uses the MH-60S to conduct a capability gap analysis for CBM in naval aviation. Through the use of a JCIDS Capabilities Based Assessment, the capability gap between the CBM enabling IMD-HUMS and the NAMP phase cycle is determined. From this gap analysis, Earned Value Management (EVM) tools determine the value of closing the CBM capability gap between the phase maintenance and IMD-HUMS in terms of cost and safety. Finally, an alternative phase maintenance structure is proposed for MH-60S maintenance which leverages the CBM capabilities of the IMD-HUMS to reduce total lifecycle costs.
32

Li, Jianzhi. "A novel approach to evolving legacy software systems into a grid computing environment." Thesis, De Montfort University, 2006. http://hdl.handle.net/2086/4103.

33

Henriksson, David, and Eliaz Sundberg. "Evaluation of the NESizer2 method as a means of wrapping embedded legacy systems." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247909.

Abstract:
Legacy computer systems are systems where several of the main hardware and software components date back several decades. Modernizing these systems is often considered a large monetary and temporal investment with high risk, and maintaining them usually becomes more and more difficult over time, which is why legacy systems are still in use to this day in many industry sectors. One solution is to integrate the legacy system components into modern systems, and there are several ways of achieving this. This bachelor thesis project analyzes one such approach, known as "wrapping". More specifically, it analyzes the NESizer2 method, which uses relatively simple hardware and software interfaces to control the Ricoh RP2A03 processor found in the Nintendo Entertainment System from an Atmega328 microprocessor. During the design and development phases of the project a literature study was conducted, and an experimental research method was used. The testing and experimental phases focused on examining how identified key variables behaved when certain parameters in the system were modified. While we were able to produce some valid data, the results proved somewhat inconclusive, as certain operations such as memory operations did not work, leading to the conclusion that our circuit contained a faulty component.
34

Alsalamah, Shada. "Achieving a secure collaborative environment in patient-centred healthcare with legacy information systems." Thesis, Cardiff University, 2015. http://orca.cf.ac.uk/72127/.

Abstract:
Modern healthcare has been shifting from a traditional fragmented disease-centred delivery approach towards a more integrated Patient-Centred (PC) one to support comorbidities, when the patient suffers from more than one condition or disease. In PC delivery the patient is at the heart of its services which are tailored to meet an individual’s needs holistically. Enabling PC care requires the flow of medical information with the patient between different healthcare providers supporting the patient’s treatment plan, and sharing of information across healthcare organisations so that the Care Team (CT) can seamlessly access relevant medical information held in different information systems. In many countries this PC movement is taking an evolutionary approach that involves Legacy Information Systems (LIS) as they are the backbone of the healthcare organisation’s information. However, this collaboration reveals weaknesses in LIS in this role, as they may block a CT from accessing information, as they cannot comply with the information security policies for shared information that is needed in this collaborative environment to support PC. This is mainly because each of these LIS was designed as an autonomous discrete information system that enforces an organisation-driven information security policy protecting only local information resources through an Access Control (AC) model. This creates a single local point-of-control, limited by the system’s physical perimeter, to meet local information sharing and security contexts. This means PC adoption may require incorporation of multiple autonomous discrete information systems which presents four challenges – inconsistent policies, perimeter-bounded AC models, multiple points-of-controls, and heterogeneous LIS. First, such collaborative environments lack collaboration-driven information security policies that best meet the protection needs in the collaboration sharing and security contexts. Second, they deploy incompatible AC models that are not perimeter-transparent, and thus, unable to stretch across the discrete information systems to cover the whole collaborative environment. Third, these environments do not deploy a single obvious point-of-control with authority for policy enforcement. Finally, they need to access heterogeneous LIS that are not compatible with each other, and thus, it is essential that solutions can be integrated and coupled with these LIS to facilitate the utilisation of information stored in these systems. Current solutions addressing this situation fall short of meeting these challenges in establishing secure collaborative environments with LIS because they lack a comprehensive information security approach to meet the information sharing and security contexts driven by the collaboration. This research introduces a roadmap towards achieving a Secure Collaborative Environment (SCE) in collaborative environments using LIS from diverse organisations that addresses the above challenges, and meets the collaboration information sharing and security contexts without interrupting the local contexts of these LIS. An empirical study is used to determine how to create an SCE in modern healthcare which addresses the problems raised by incorporating LIS. This meets the collaborative information sharing context by creating an information layer that manages the information flow between healthcare providers based on treatment points. 
It also meets the information security context in the treatment pathways by controlling access to information in each treatment point using a Patient-Centred Access Control (PCAC) model. This model creates a PC-driven information security policy at the collaboration level that meets the overall care goal, enforces this balance in a neutral security domain with a single authority point-of-control that stretches across organisations anywhere within the collaboration environment, while retaining the local medical information security of shared information among the CT. Using domain analysis, observations, and interviews, the PC-driven balance of information security in cancer care, threats in LIS currently used in cancer care to attain that balance, and eight information security controls are identified. These controls manage information through an information layer and control access to the information through the novel PCAC model needed by these systems to attain that balance and address the problem. Using Workflow Technology (WfT), a prototype system implementing these controls to achieve a Secure Healthcare collaborative Environment (SHarE) has been fully studied, developed, and assessed. SHarE constructs an independent information layer that is based on treatment and lies on top of the interface of the currently used LISs to formalise and manage a unique treatment journey, while the PCAC model enforces access rules as the patient progresses along their treatment journey. This layer is designed as a loosely coupled wrapper based system with LIS to embrace the local organisation-centred access controls without interruption and sustain the balance of information security. Finally, using interviews, SHarE was assessed based on three criteria: usefulness and acceptance, setup and integration, and information governance. Results show that all interviewees agree that currently information does not always flow with the patient as they go along their treatment journey and nine different causes for this were suggested. All interviewees with no exception agreed that SHarE addresses this problem and helps the information flow with the patient between healthcare providers, and that it would be possible for SHarE to be adopted by a CT in cancer care. Over half the interviewees agreed that it is an easy to use system, useful, and helps locate information. The results also show there is an opportunity for SHarE to be integrated with CaNISC as some interviewees thought it is a much simpler system. However, multiple patient identifiers for a patient, as each system can have its own identifier, is predicted to be the biggest integration challenge. Results also show that SHarE and its controls attain the right balance of information security defined by the Caldicott Guardian and comply with the six Principles of the Caldicott Guardian. Although the assessment of SHarE highlighted a number of challenges and limitations that may hinder its adoption and integration if not carefully considered in the future, this proposal allowed the achievement of creating an SCE required to adopt PC care and attain the security balance necessary to support PC care systems.
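The core of the PCAC model, granting access by position in the patient's treatment journey rather than by home organisation, can be sketched as follows. The journey stages, roles and rules here are invented for illustration; the thesis defines the actual model and its enforcement.

```python
# Hypothetical illustration of Patient-Centred Access Control (PCAC):
# access follows the patient's treatment journey, not organisational walls.
journey = ["referral", "diagnosis", "surgery", "oncology-follow-up"]

# Which care-team roles are active at each treatment point (assumed data).
care_team = {
    "diagnosis": {"gp", "radiologist"},
    "surgery": {"surgeon", "anaesthetist"},
    "oncology-follow-up": {"oncologist", "gp"},
}

def may_access(role: str, record_point: str, current_point: str) -> bool:
    """A clinician sees a record only if their role is on the care team at
    the patient's current treatment point, and the record belongs to the
    journey so far (no access beyond the current point)."""
    reached = journey[:journey.index(current_point) + 1]
    return record_point in reached and role in care_team.get(current_point, set())

print(may_access("gp", "diagnosis", "oncology-follow-up"))      # True
print(may_access("surgeon", "diagnosis", "oncology-follow-up")) # False: not on current team
```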
APA, Harvard, Vancouver, ISO, and other styles
35

Gulotta, Rebecca. "Digital Systems and the Experience of Legacy: Supporting Meaningful Interactions with Multigenerational Data." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/778.

Full text
Abstract:
People generate vast quantities of digital information as a product of their interactions with digital systems and with other people. As this information grows in scale and becomes increasingly distributed across different accounts, identities, and services, researchers have studied how best to develop tools that help people manage and derive meaning from it. Looking forward, these issues acquire new complexity when considered in the context of the information generated across one's life or across generations. The long-term lens of a multigenerational timeframe raises new questions about how people can engage with these heterogeneous collections of information and how future generations will manage and make sense of the information left behind by their ancestors. My prior work has examined how people perceive the role that systems will play in the long-term availability, management, and interpretation of digital information. This work demonstrates that while people certainly ascribe meaning to aspects of their digital information and believe that there is value held in their largely uncurated digital materials, it is not clear how or whether that digital information will be transmitted, interpreted, or maintained by future generations. Furthermore, it illustrates a tension between the use of digital systems as ways of archiving content and sharing aspects of one's life, and an uncertainty about the long-term availability of the information shared through those services. Finally, it shows that existing systems meet neither the needs of current users who are developing archives of their own digital information nor those of future users who might try to derive meaning from information left behind by other people. Building on that earlier work, my dissertation investigates how we can develop systems that foster engagement with lifetimes or generations of digital information in ways that are sensitive to how people define and communicate their identity and how they reflect on their life and experiences. For this work, I built a website that uses people's Facebook data to ask them to reflect on the ways their life has changed over time. Participants' experiences using this website illustrate the types of information that are and are not captured by digital systems. In addition, this work highlights the ways in which people engage with memories, artifacts, and experiences of people who have passed away, and considers how digital systems and information can support those practices. I also interviewed participants about their experiences researching their family history, the ways in which they remember people who have passed away, and unresolved questions they have about the past. The findings from this aspect of the work contribute a better understanding of how digital systems, and the digital information people create over the course of their lives, intersect with the processes of death, dying, and remembrance.
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Xiaodong. "Abstraction : a notion for reverse engineering." Thesis, De Montfort University, 1999. http://hdl.handle.net/2086/4214.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Tetewsky, Avram, Jeff Ross, Arnold Soltz, Norman Vaughn, Jan Anszperger, Chris O'Brien, Dave Graham, Doug Craig, and Jeff Lozow. "Making sense of inter-signal corrections : accounting for GPS satellite calibration parameters in legacy and modernized ionosphere correction algorithms /." [Eugene, Ore. : Gibbons Media & Research], 2009. http://www.insidegnss.com/auto/julyaug09-tetewsky-final.pdf.

Full text
Abstract:
"Author biographies are available in the expanded on-line version of this article [http://www.insidegnss.com/auto/julyaug09-tetewsky-final.pdf]"
"July/August 2009." Web site title: Making Sense of GPS Inter-Signal Corrections : Satellite Calibration Parameters in Legacy and Modernized Ionosphere Correction Algorithms.
APA, Harvard, Vancouver, ISO, and other styles
38

Marburger, André. "Reverse engineering of complex legacy telecommunication systems : [problem domain, concepts, solutions, and tool support] /." Aachen : Shaker, 2005. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=014186048&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

COUTO, RODNEI SILVA. "A META-TOOL FOR GENERATING DIAGRAMS USED IN THE REVERSE ENGINEERING OF LEGACY SYSTEMS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14920@1.

Full text
Abstract:
The recovery of documentation describing the structure of a legacy system aims at supporting its understanding and maintenance. Based on diagrams that describe the structure of the system as implemented, it becomes easier to understand the system and to analyse the impact of change requests. This dissertation introduces a meta-tool that uses metadata for its instantiation, targeting specific representations. To evaluate the meta-tool and the reverse engineering process it supports, an experimental study was conducted on the recovery of the models of a legacy system implemented in PL/SQL.
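The abstract does not specify the metadata schema used for instantiation, so the sketch below only illustrates the meta-tool idea: a generic diagram builder configured by a metadata description of the target representation. The schema, the node and edge kinds, and the PL/SQL fact triples are hypothetical.

# Hypothetical metadata describing one diagram type; a meta-tool would be
# instantiated with one such description per target representation.
CALL_GRAPH_METADATA = {
    "node_kinds": ["package", "procedure", "function"],
    "edge_kinds": [("calls", "procedure", "procedure")],
}

def instantiate_extractor(metadata):
    """Return a diagram builder restricted to the kinds the metadata names."""
    edge_kinds = {kind for kind, _, _ in metadata["edge_kinds"]}

    def build(facts):
        # 'facts' are (kind, source, target) triples mined from legacy code.
        nodes, edges = set(), set()
        for kind, src, dst in facts:
            if kind in edge_kinds:
                nodes.update((src, dst))
                edges.add((src, dst))
        return nodes, edges

    return build

# Usage: instantiate once per representation, then feed it extracted facts.
build_call_graph = instantiate_extractor(CALL_GRAPH_METADATA)
nodes, edges = build_call_graph([("calls", "billing.post", "audit.log")])
print(nodes, edges)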
APA, Harvard, Vancouver, ISO, and other styles
40

Bodinier, Quentin. "Coexistence of communication systems based on enhanced multi-carrier waveforms with legacy OFDM Networks." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S091/document.

Full text
Abstract:
Future wireless networks are envisioned to accommodate the heterogeneous needs of entirely different systems. New services obeying various constraints will coexist with legacy cellular users in the same frequency band. This coexistence is hardly achievable with OFDM, the physical layer used by current systems, because of its poor spectral containment. Consequently, a myriad of multi-carrier waveforms with enhanced spectral localization have been proposed for future wireless devices. In this thesis, we investigate the coexistence of new systems based on these waveforms with legacy OFDM users. We provide the first theoretical and experimental analysis of the inter-system interference that arises in those scenarios. We then apply this analysis to evaluate the merits of different enhanced waveforms, and finally investigate the performance achievable by a network composed of legacy OFDM cellular users and D2D pairs using one of the studied enhanced waveforms.
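The abstract does not detail the interference model itself. A common first-order baseline in coexistence studies estimates the power an interfering transmitter injects into a legacy OFDM subcarrier by integrating the interferer's power spectral density (PSD) over the victim band; the thesis's theoretical and experimental analysis is more refined than this. A minimal sketch of that baseline, with purely illustrative numbers:

import numpy as np

def psd_overlap_interference(psd, freqs, victim_center, victim_bw):
    """First-order estimate: integrate the interferer's PSD (linear scale,
    sampled on a uniform frequency grid in Hz) over the victim band."""
    lo, hi = victim_center - victim_bw / 2, victim_center + victim_bw / 2
    band = (freqs >= lo) & (freqs <= hi)
    df = freqs[1] - freqs[0]
    return psd[band].sum() * df

# Toy example: a sinc^2-shaped OFDM-like interferer centred at 0 Hz versus a
# 15 kHz-wide victim subcarrier three subcarrier spacings away. All numbers
# are illustrative, not taken from the thesis.
f = np.linspace(-300e3, 300e3, 60_001)
delta_f = 15e3                        # subcarrier spacing (Hz)
psd = np.sinc(f / delta_f) ** 2       # normalised mainlobe at 0 Hz
leak = psd_overlap_interference(psd, f, victim_center=3 * delta_f, victim_bw=delta_f)
print(f"leaked power (relative to mainlobe band): {leak / delta_f:.3e}")

The poor sidelobe decay of the sinc-shaped OFDM spectrum is exactly what makes such leakage significant, and what the enhanced waveforms studied in the thesis aim to suppress.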
APA, Harvard, Vancouver, ISO, and other styles
41

Dharmapurikar, Abhishek V. "Impact-Driven Regression Test Selection for Mainframe Business Systems." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1366294612.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Sanjeepan, Vivekananthan. "A service-oriented, scalable, secure framework for Grid-enabling legacy scientific applications." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0013276.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Saad, Ibrahim Mohamed Mohamed. "Extracting Parallelism from Legacy Sequential Code Using Transactional Memory." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/71861.

Full text
Abstract:
Increasing the number of processors has become the mainstream approach in modern chip design. However, most applications are designed or written for single-core processors, so they do not benefit from the numerous underlying computation resources. Moreover, there exists a large base of legacy software which would require an immense effort and cost of rewriting and re-engineering to be made parallel. In the past decades, there has been a growing interest in automatic parallelization. This is to relieve programmers from the painful and error-prone manual parallelization process, and to cope with the architectural trend of multi-core and many-core CPUs. Automatic parallelization techniques vary in properties such as: the level of parallelism (e.g., instructions, loops, traces, tasks); the need for custom hardware support; using optimistic execution or relying on conservative decisions; operating online, offline, or both; and the level of source code exposure. Transactional Memory (TM) has emerged as a powerful concurrency control abstraction: TM simplifies parallel programming to the level of coarse-grained locking while achieving fine-grained locking performance. This dissertation exploits TM as an optimistic execution approach for transforming a sequential application into a parallel one. The design and implementation of two frameworks that support automatic parallelization, Lerna and HydraVM, are proposed, along with a number of algorithmic optimizations to make the parallelization effective. HydraVM is a virtual machine that automatically extracts parallelism from legacy sequential code (at the bytecode level) through a set of techniques including code profiling, data dependency analysis, and execution analysis. HydraVM is built by extending the Jikes RVM and modifying its baseline compiler. Correctness of the program is preserved by exploiting Software Transactional Memory (STM) to manage concurrent and out-of-order memory accesses. Our experiments show that HydraVM achieves speedups between 2× and 5× on a set of benchmark applications. Lerna is a compiler framework that automatically and transparently detects and extracts parallelism from sequential code through a set of techniques including code profiling, instrumentation, and adaptive execution. Lerna is cross-platform and independent of the programming language. The parallel execution exploits memory transactions to manage concurrent and out-of-order memory accesses, which makes Lerna very effective for sequential applications with data sharing. This thesis introduces the general conditions for embedding any transactional memory algorithm into Lerna. In addition, ordered versions of four state-of-the-art algorithms have been integrated and evaluated using multiple benchmarks, including the RSTM micro benchmarks, STAMP, and PARSEC. Lerna showed strong results, with an average 2.7× (and up to 18×) speedup over the original (sequential) code. While prior research shows that transactions must commit in order to preserve program semantics, enforcing this ordering constrains scalability at large numbers of cores. In this dissertation, we eliminate the need to commit transactions sequentially without affecting program consistency. This is achieved by building a cooperation mechanism in which transactions can forward some changes safely, which eliminates some false conflicts and increases the concurrency level of the parallel application. This thesis proposes a set of commit-order algorithms that follow this approach.
Using the proposed commit-order algorithms, the peak gain over the sequential non-instrumented execution is 10× in the RSTM micro benchmarks and 16.5× in STAMP. Another main contribution is to enhance the concurrency and performance of TM in general, and its usage for parallelization in particular, by extending TM primitives. The extended TM primitives capture embedded low-level application semantics without affecting the TM abstraction. Furthermore, as the proposed extensions capture common code patterns, they can be applied automatically during compilation; in this work, that was done by modifying the GCC compiler to support our TM extensions. Results showed speedups of up to 4× on different applications, including micro benchmarks and STAMP. Our final contribution is supporting the commit order through Hardware Transactional Memory (HTM). The HTM contention manager cannot be modified because it is implemented inside the hardware; given this constraint, we exploit HTM to reduce the transactional execution overhead by proposing two novel commit-order algorithms and a hybrid reduced-hardware algorithm. The use of HTM improves performance by up to a 20% speedup.
Ph. D.
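The ordered-commit idea at the core of this work, running loop iterations speculatively as memory transactions and committing their write-sets in program order with retry on conflict, can be sketched compactly. The toy write-buffer STM below is our illustration only (Lerna and HydraVM operate on bytecode and compiled code with profiling and dependency analysis); all names are hypothetical.

import threading

class Transaction:
    """Toy write-buffer transaction: writes are buffered and only become
    visible to other iterations when the transaction commits."""
    def __init__(self, shared):
        self.shared, self.reads, self.writes = shared, {}, {}

    def read(self, key):
        if key in self.writes:               # read-your-own-write
            return self.writes[key]
        self.reads.setdefault(key, self.shared.get(key))
        return self.reads[key]

    def write(self, key, value):
        self.writes[key] = value

def run_ordered(shared, n_iters, body):
    """Run iterations speculatively in parallel, but commit their write-sets
    in program order, re-executing on read-set invalidation."""
    cond = threading.Condition()
    next_commit = [0]

    def worker(i):
        while True:
            tx = Transaction(shared)
            body(tx, i)                       # speculative execution
            with cond:
                while next_commit[0] != i:    # wait for our commit turn
                    cond.wait()
                if all(shared.get(k) == v for k, v in tx.reads.items()):
                    shared.update(tx.writes)  # commit in program order
                    next_commit[0] += 1
                    cond.notify_all()
                    return
            # a read was stale: retry the iteration with fresh values

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_iters)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

shared = {"total": 0}
run_ordered(shared, 8, lambda tx, i: tx.write("total", tx.read("total") + i))
print(shared["total"])  # 28: conflicting iterations were detected and retried

The next_commit ticket is exactly the ordering bottleneck the abstract describes; the dissertation's cooperative forwarding of changes relaxes this turn-taking so that non-conflicting transactions do not serialize on it.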
APA, Harvard, Vancouver, ISO, and other styles
44

Huselius, Joel. "Reverse Engineering of Legacy Real-Time Systems : An Automated Approach Based on Execution-Time Recording." Doctoral thesis, Västerås : Department of Computer Science and Electronics, Mälardalen University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Vlizko, Nataliya. "Challenges and success factors in the migration of legacy systems to Service Oriented Architecture (SOA)." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20712.

Full text
Abstract:
Service-Oriented Architecture (SOA) provides a standards-based conceptual framework for flexible and adaptive systems and has, for that reason, become widely used in recent years. A number of legacy systems have already been migrated to this platform. As many systems are still under consideration for such a migration, we found it relevant to share the existing experience of SOA migration and highlight the challenges companies meet while adopting SOA. As not all of these migrations were successful, we also look into the factors that influence the success of SOA projects. The research is based on two methods: a literature review and a survey. The results of the thesis include the identification and quantitative analysis of the challenges and success factors of SOA projects. We also compare the survey results for different samples (based on company industry, work area, and size, and on the respondents' experience with SOA and job positions). In total, 13 SOA challenges and 18 SOA success factors were identified, analysed, and discussed in this thesis. Based on the survey results, three SOA challenges received the highest importance scores: "Communicating SOA Vision", "Focus on business perspective, and not only IT perspective", and "SOA Governance". The highest-scored SOA success factor is "Business Process of Company". When comparing different samples of the survey results, the most obvious differences appear between the results from people in development-related job positions and those in business-related job positions, and between the results from companies of different sizes.
APA, Harvard, Vancouver, ISO, and other styles
46

Perrotta, S., V. D'Odorico, J. X. Prochaska, S. Cristiani, G. Cupani, S. Ellison, S. López, et al. "Nature and statistical properties of quasar associated absorption systems in the XQ-100 Legacy Survey." OXFORD UNIV PRESS, 2016. http://hdl.handle.net/10150/621733.

Full text
Abstract:
We statistically study the physical properties of a sample of narrow absorption line (NAL) systems, looking for empirical evidence to distinguish between intrinsic and intervening NALs without assuming any a priori definition or velocity cut-off. We analyse the spectra of 100 quasars with 3.5 < z_em < 4.5, observed with X-shooter/Very Large Telescope in the context of the XQ-100 Legacy Survey. We detect a ~8σ excess in the C IV number density within 10,000 km s^-1 of the quasar emission redshift with respect to the random occurrence of NALs. This excess does not show a dependence on the quasar bolometric luminosity and is not due to the redshift evolution of NALs. It extends far beyond the standard 5,000 km s^-1 cut-off traditionally defined for associated absorption lines. We propose to modify this definition, extending the threshold to 10,000 km s^-1 when weak absorbers (equivalent width < 0.2 Å) are also considered. We infer that N V is the ion that best traces the effects of the quasar ionization field, offering the best statistical tool to identify intrinsic systems. Following this criterion, we estimate that the fraction of quasars in our sample hosting an intrinsic NAL system is 33 per cent. Lastly, we compare the properties of the material along the quasar line of sight, derived from our sample, with results based on close quasar pairs investigating the transverse direction. We find a deficiency of cool gas (traced by C II) along the line of sight connected to the quasar host galaxy, in contrast with what is observed in the transverse direction.
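For reference, the velocity windows quoted above (5,000 and 10,000 km s^-1) are conventionally obtained from the relativistic Doppler relation between the absorption and emission redshifts; this standard formula (not specific to this survey) is, in LaTeX:

\[
  \frac{v}{c} \;=\; \beta \;=\;
  \frac{(1+z_{\mathrm{em}})^{2} - (1+z_{\mathrm{abs}})^{2}}
       {(1+z_{\mathrm{em}})^{2} + (1+z_{\mathrm{abs}})^{2}},
\]

so that, under the extended criterion, a NAL is classified as associated when v = \beta c \le 10{,}000 km s^-1 (together with the weak-absorber equivalent-width condition above).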
APA, Harvard, Vancouver, ISO, and other styles
47

Adeyinka, Oluwaseyi. "Service Oriented Architecture & Web Services : Guidelines for Migrating from Legacy Systems and Financial Consideration." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1297.

Full text
Abstract:
The purpose of this study is to present guidelines that can be followed when introducing Service-Oriented Architecture through the use of Web services. These guidelines will be especially useful for organizations migrating from existing legacy systems, where the need also arises to consider the financial implications of such an investment, whether it is worthwhile or not. The proposed implementation guide aims at increasing the chances that IT departments in organizations ensure a successful integration of SOA into their systems and secure strong financial commitment from executive management. Service-oriented architecture is a new concept, a new way of looking at a system, which has emerged in the IT world and can be implemented by several methods, of which Web services is one platform. Since it is a developing technology, organizations need to be cautious about how they implement it to obtain maximum benefits. Though a well-designed service-oriented environment can simplify and streamline many aspects of information technology and business, achieving this state is not an easy task. Traditionally, management finds it very difficult to justify the considerable cost of modernization, let alone shoulder the risk, without achieving some benefits in terms of business value. The study identifies some common best practices for implementing SOA and the use of Web services, and steps to successfully migrate from legacy systems to componentized or service-enabled systems. The study also identifies how to present financial return on investment and business benefits to management in order to secure the necessary funds. This master thesis is based on an academic literature study, professional research journals and publications, and interviews with business organizations currently working on service-oriented architecture. I present guidelines that can assist in migrating from legacy systems to service-oriented architecture, based on the analysis and comparison of the information sources mentioned above.
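One recurring step in such migration guidelines is to wrap an existing legacy routine behind a service interface instead of rewriting it. The sketch below is a generic illustration of that wrapping step using only the Python standard library; the legacy routine, endpoint, and data are invented for the example and are not from this thesis.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def legacy_account_balance(account_id: str) -> float:
    """Stand-in for an existing legacy routine (e.g. a mainframe/DB call)."""
    return {"ACC-1001": 2450.75}.get(account_id, 0.0)

class BalanceService(BaseHTTPRequestHandler):
    """Thin service wrapper: the legacy logic is reused, not rewritten."""
    def do_GET(self):
        # e.g. GET /balance/ACC-1001 -> {"account": ..., "balance": ...}
        account_id = self.path.rsplit("/", 1)[-1]
        body = json.dumps({"account": account_id,
                           "balance": legacy_account_balance(account_id)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), BalanceService).serve_forever()

A SOAP/WSDL facade generated by an integration platform plays the same role in practice; the point is that consumers see a service contract while the legacy implementation stays untouched.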
APA, Harvard, Vancouver, ISO, and other styles
48

Rowe, Arthur T. "An analysis of electronic commerce acquisition systems : comparison of a new pure electronic purchasing and exchange system (electronic storefront) and other legacy on-line purchasing systems." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02Dec%5FRowe.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Hameed, Qaisar. "A Behavioral Test Strategy For Board Level Systems." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/31445.

Full text
Abstract:
A digital board typically contains a heterogeneous mixture of chips: microprocessors, memory, control, and I/O logic. Different testing techniques are needed for each of these components, and to test the whole board these techniques must be integrated into an overall testing strategy. In this thesis, we apply a behavioral testing scheme to the board. Each component chip is tested by observing the behavior of the system in response to test code, i.e., the component under test is not isolated from the rest of the circuit during the test. This obviates the extra hardware for isolating chips that structural testing requires, at the cost of reduced fault location, although fault detection is still adequate. We apply the start-small approach to behavioral testing: we start by testing a small core of functions, and then only functions already tested are used to test the remaining behavior. The grand goal is testing the whole board. This is divided into goals for testing each of the individual chips, which are further subdivided into sub-goals for each of the sub-functions of the board or sub-goals for testing for the most common faults in a component. Each component is tested in turn; once a component passes, it is placed in a passed-items set and can then be used in testing the remaining components. Using the start-small approach helps isolate faults to the chip level and thus gives better fault location than a simple behavioral testing scheme with no concept of a passed-items set. As an example, this testing approach is applied to a microcontroller-based temperature sensor board. The test code is run on the VHDL model of the system, and then also on the actual system. For modeling the system in VHDL, Synopsys SmartModel library components are used. Faults are injected into the system and the performance of the strategy is evaluated. The strategy is found to be very effective in detecting internal faults of a chip and locating them to the chip level. Interconnection faults are more difficult to locate, although they are detected in most cases. Different scenarios for incorporating this scheme in legacy systems are also discussed.
Master of Science
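The start-small discipline described in the abstract amounts to scheduling component tests so that each test is interpreted only through components already in the passed-items set. A minimal sketch of such a scheduler follows, with hypothetical component names and dependencies.

def start_small_schedule(components, test_requires):
    """Order component tests so each uses only already-passed components.

    test_requires[c] is the set of components that must be trusted before
    the behavioral test for c can be interpreted (start-small approach).
    """
    passed, schedule = set(), []
    remaining = set(components)
    while remaining:
        ready = [c for c in remaining if test_requires[c] <= passed]
        if not ready:
            raise ValueError(f"cannot isolate faults among: {remaining}")
        for c in sorted(ready):
            schedule.append(c)   # run the behavioral test for c here
            passed.add(c)        # c joins the passed-items set
            remaining.remove(c)
    return schedule

# Hypothetical board: verify the CPU core first, then use it for the rest.
deps = {
    "cpu_core": set(),
    "rom": {"cpu_core"},
    "ram": {"cpu_core", "rom"},
    "adc": {"cpu_core", "rom", "ram"},
}
print(start_small_schedule(deps, deps))  # ['cpu_core', 'rom', 'ram', 'adc']

If a test fails, the failure is attributed to the newest component, since everything it depends on has already passed; this is the fault-isolation benefit the abstract credits to the approach.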
APA, Harvard, Vancouver, ISO, and other styles
50

Irvin, Dana, John Otranto, Kirill Lokshin, and Amit Puri. "AUTOMATING VERIFICATION FOR LEGACY SYSTEMS: A CASE STUDY OF TECHNOLOGY SUSTAINMENT WITHIN THE NASA SPACE NETWORK." International Foundation for Telemetering, 2017. http://hdl.handle.net/10150/626958.

Full text
Abstract:
The NASA Space Network (SN), which consists of the geosynchronous Tracking and Data Relay Satellite (TDRS) constellation and its associated ground elements, is a critical national space asset that provides near-continuous, high-bandwidth telemetry, command, and communications services for numerous spacecraft and launch vehicles. The Space Network includes several key ground system elements, one of which is the White Sands Complex Data Interface Service Capability (WDISC). The WDISC has undergone multiple cycles of modification and technology refresh over its lifetime, making test automation an attractive option for reducing system verification and validation cost. This paper considers the implementation of automated testing for the WDISC as a case study in technology sustainment, discusses the principal benefits and challenges of implementing test automation for a legacy system, and presents findings that demonstrate the effectiveness of such automation models.
APA, Harvard, Vancouver, ISO, and other styles
