
Dissertations / Theses on the topic 'Software systems'


Consult the top 50 dissertations / theses for your research on the topic 'Software systems.'


Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Rodrigues, Filho Roberto Vito. "Emergent software systems." Thesis, Lancaster University, 2018. http://eprints.lancs.ac.uk/126944/.

Abstract:
Contemporary software systems often have millions of lines of code that interact over complex infrastructures. The development of such systems is very challenging due to the increasing complexity of services and the high level of dynamism of current operating environments. To support the development and management of such systems, autonomic computing concepts have gained significant importance. The majority of autonomic computing approaches show significant levels of expert dependency in designing adaptive solutions: they usually rely on human-made models and policies to support and guide software adaptation at runtime. These approaches mainly suffer from i) the significant upfront effort demanded to create such solutions, which adds to the complexity of creating autonomous systems, and ii) unreliability, given the high levels of uncertainty in current operating environments, leading the system to degraded performance and error states when subjected to unpredicted operating conditions and unexpected software interactions. Motivated by the problems and limitations of state-of-the-art autonomic computing solutions, this thesis introduces the concept of Emergent Software Systems. These systems are autonomously composed at runtime from discovered components and are autonomously optimised for the prevailing operating conditions, building their own understanding of their environment and constituent parts. This thesis defines Emergent Software Systems, presents the challenges of implementing such an approach, and demonstrates the concept with a fully functioning emergent systems framework applied to real-world, datacentre-based software.
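One common way to realise the kind of runtime optimisation described above is online learning over interchangeable component variants. The minimal epsilon-greedy sketch below is only a generic illustration of that idea, not the framework developed in the thesis; the variant names and simulated latencies are invented.

    import random

    class VariantSelector:
        """Pick among interchangeable component variants by observed latency."""

        def __init__(self, variants, epsilon=0.1):
            self.variants = list(variants)
            self.epsilon = epsilon                      # share of exploratory choices
            self.samples = {v: [] for v in self.variants}

        def choose(self):
            untried = [v for v in self.variants if not self.samples[v]]
            if untried:
                return untried[0]                       # measure every variant once
            if random.random() < self.epsilon:
                return random.choice(self.variants)     # keep exploring occasionally
            return min(self.variants,                   # otherwise exploit the best one
                       key=lambda v: sum(self.samples[v]) / len(self.samples[v]))

        def record(self, variant, latency_ms):
            self.samples[variant].append(latency_ms)

    # Hypothetical cache-component variants with simulated response times.
    selector = VariantSelector(["cache_lru", "cache_fifo", "cache_none"])
    for _ in range(200):
        v = selector.choose()
        base = {"cache_lru": 4.0, "cache_fifo": 5.5, "cache_none": 9.0}[v]
        selector.record(v, base + random.random())      # noisy measurement
    print("preferred composition:", selector.choose())  # usually 'cache_lru'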
2

Nasir, Muhammad-Iftikhar, and Rizwan Iqbal. "Evolvability of Software Systems." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4053.

Abstract:
Software evolvability, the ability of a software system to accommodate future customer requirements, is one of the emerging challenges facing the software industry. Studies have shown that evolvability brings large economic benefits, but it is also difficult to assess, and many methods have been derived over time to assess it. Evolvability depends on various characteristics of the software system. In this thesis we discuss the characteristics on which software evolvability depends, examine a hierarchy of these characteristics based on their role in evolvability, and investigate what level of qualification is appropriate for an expert assessing the evolvability of a software system.
Software evolvability plays an important role in the software life cycle: it is the ease with which a software system can be modified to meet future requirements. There are three main approaches to assessing it: structural measures, expert assessment, and a combined approach. The structural approach focuses on class-level measures such as inheritance, modularity and coupling, whereas expert assessment draws on experts' opinions of how evolvable a software system is; the combined approach merges the two. According to David E. Peercy, software evolvability depends on six factors: modularity, descriptiveness, consistency, simplicity, expandability and instrumentation. David A. Sunday, however, considered five factors: modularity, descriptiveness, consistency, testability and changeability. Other factors also influence evolvability, such as the skills and qualifications of the maintainer, organisational support for evolvability, and the characteristics of the maintenance methods being used. The importance of research methodology cannot be neglected, since it shapes the research before it starts and helps clarify the structure of the work and the research procedure. Our methodology for studying the evolvability of software systems consists of a few steps: a literature review, informal discussions, and the development of a questionnaire; the questionnaire was then distributed to the subjects and conclusions were drawn from their feedback and from analysis of the results. We visited several software houses and discussed all the factors related to the survey. Experienced and qualified professionals were selected as subjects, and follow-up phone calls, email reminders and personal meetings yielded a high survey response rate of 75%. The questionnaire was designed in three parts: personal information, characteristics of software evolvability, and the qualifications required of an expert. A pre-test was conducted to ensure that the survey questions were properly defined and that participants had no difficulty understanding them. Participants included software developers, team leads, software testers and research students, and special consideration was given to ethical issues in the design and conduct of the survey. We analysed the participants' response behaviour and the collected data using measures such as the mean, median, mode, standard deviation and variance of the survey results. The first part of the analysis concerns which characteristics of the software affect evolvability and their priority: we identified eleven characteristics in total, of which design and architecture has the highest priority, while technical platform and comments have the lowest. In the second part we concluded that technical training and quality-assurance management experience are the most important criteria for an expert, while development experience and testing experience are the least important. The last part of the thesis discusses the research work, the validity assessment of the results, and the answers to the research questions.
We used Lincoln and Guba's criteria for validity assessment to support the validity of the results; validity is judged by four aspects: credibility, transferability, dependability and confirmability.
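The descriptive statistics used in the survey analysis (mean, median, mode, standard deviation and variance) can be reproduced with Python's standard statistics module; the ratings below are invented sample data, not the thesis's survey results.

    import statistics

    # Hypothetical 1-5 ratings for one evolvability characteristic.
    ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

    print("mean    ", statistics.mean(ratings))
    print("median  ", statistics.median(ratings))
    print("mode    ", statistics.mode(ratings))
    print("stdev   ", round(statistics.stdev(ratings), 2))
    print("variance", round(statistics.variance(ratings), 2))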
3

Jong, Hayco Alexander de. "Flexible heterogeneous software systems." [S.l : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2007. http://dare.uva.nl/document/39606.

4

Caffall, Dale Scott. "Developing dependable software for a system-of-systems." Diss., Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Mar%5FCaffall.pdf.

5

Caffall, Dale Scott. "Conceptual framework approach for system-of-systems software developments." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Mar%5FCaffall.pdf.

Abstract:
Thesis (M.S. in Software Engineering)--Naval Postgraduate School, March 2003.
Thesis advisor(s): James Bret Michael, Man-Tak Shing. Includes bibliographical references (p. 83-84). Also available online.
6

Saks, Craig Sheldon. "Expanding software process improvement models beyond the software process itself." Master's thesis, University of Cape Town, 1999. http://hdl.handle.net/11427/16844.

Abstract:
Bibliography: pages 182-188.
The problems besetting software development and maintenance are well recorded and numerous strategies have been adopted over the years to overcome the so-called "software crisis". One increasingly popular strategy focuses on managing the processes by which software is built, maintained and managed. As such, many software organisations see software process improvement initiatives as an important strategy to help them improve their software development and maintenance performance. Two of the more popular software process improvement (SPI) models used by the software industry to help them in this endeavour are the Capability Maturity Model for Software (SW-CMM) from the Software Engineering Institute and the Software Process Improvement and Capability determination (SPICE) model from the International Standards Organisation. This research begins with the supposition that, although these SPI models have added significant value to many organisations, they have a potential shortcoming in that they tend to focus almost exclusively on the software process itself and seem to neglect other organisational aspects that could contribute to improved software development and maintenance performance. This research is concerned with exploring this potential shortcoming and identifying complementary improvement areas that the SW-CMM and SPICE models fail to address adequately. A theoretical framework for extending the SW-CMM and SPICE models is proposed. Thereafter complementary improvement areas are identified and integrated with the SW-CMM and SPICE models to develop an Extended SPI Model. This Extended SPI Model adopts a systemic view of software process and IS organisational improvement by addressing a wide range of complementary improvement considerations. A case study of an SPI project is described, with the specific objective of testing and refining the Extended SPI Model. The results seem to indicate that the framework and Extended SPI Model are largely valid, although a few changes were made in light of the findings of the case study. Finally, the implications of the research for both theory and practice are discussed.
7

Inada, Kenichiro. "Analysis of Japanese Software Business." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/59244.

Abstract:
Thesis (S.M. in System Design and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 94-96).
Today, our society is surrounded by information systems, computers, and software. It is no exaggeration to say that our daily life depends on software and its functions. Accordingly, the software business has grown remarkably over the last two decades and is playing a significant role in various industries. In line with the growing business need for effective software and information systems, firms in many countries have entered the software business seeking prosperity. Some have succeeded, some have failed. What distinguishes these firms is their ability to manage and deliver quality products on demand, on time, and at low cost. To achieve this goal, software firms have devised different methods and tools in striving to establish their practice. Nevertheless, many software firms around the globe are struggling to satisfy their clients and achieve business success. Japanese software firms are no exception and face difficulties in managing software projects. While their ability to deliver high-quality products is well acknowledged in the software industry, their high cost structure and schedule delays are regarded as serious problems. Moreover, some of the transitions in the industry are forcing Japanese software firms to seek new opportunities. It is therefore important for Japanese software firms to establish more productive ways of developing software products and effective business strategies. The primary objective of this thesis is to analyze the present conditions of Japanese software firms and to derive recommendations that could improve their current situation. It also discusses software development practices in US and Indian firms to better understand the strengths and weaknesses of Japanese firms and to capture important concepts that can be applied to improve current practice.
by Kenichiro Inada.
S.M. in System Design and Management
8

Scott, Randall C. "Reengineering real-time software systems." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA273408.

9

Frid, Jonas. "Security Critical Systems in Software." Thesis, Linköpings universitet, Informationskodning, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-61588.

Abstract:
Sectra Communications is today developing cryptographic products for high-assurance environments with rigorous requirements on the separation between encrypted and un-encrypted data. This separation has traditionally been achieved through the use of physically distinct hardware components, leading to larger products which require more power and cost more to produce than systems where lower assurance is required. An alternative to hardware separation has emerged thanks to a new class of operating systems based on the "separation kernel" concept, which offers verifiable separation between software components running on the same processor, comparable to that of physical separation. The purpose of this thesis was to investigate the feasibility of developing a product based on a separation kernel and to identify the possibilities and problems that would arise with security evaluation. A literature study was performed covering publications on the separation kernel from a historical and technical perspective, as well as the development and current status of software security evaluation. Additionally, a software crypto demonstrator was partly implemented on the separation-kernel-based Green Hills Integrity operating system. The thesis shows that the separation kernel concept has matured significantly and that it is indeed feasible to begin using this class of operating systems within the near future. Aside from the obvious advantages of a smaller amount of hardware, it would give greater flexibility in development and the potential for a more fine-grained division of functions. On the other hand, it puts new demands on developers, and additional research is needed on some evaluation aspects, failure resistance and performance.
10

Schmidgall, Ralf. "Automotive embedded systems software reprogramming." Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/7070.

Abstract:
The exponential growth of computing power is no longer limited to stand-alone computing systems but applies to all areas of commercial embedded computing. The ongoing rapid growth in intelligent embedded systems is visible in the commercial automotive area, where a modern car today implements up to 80 different electronic control units (ECUs) and their total memory size has grown to several hundred megabytes. This growth in commercial mass production has led to new challenges, not only within the automotive industry but also in other business areas where cost pressure is high. The need to drive cost down means that every cent spent on recurring engineering costs needs to be justified, and there is a conflict between functional requirements (functionality, system reliability, production and manufacturing aspects, etc.) and testing and maintainability aspects. Software reprogramming, a key topic within the automotive industry, has partly resolved that conflict in the past: reprogramming for in-field service and maintenance in the after-sales market provides a strong method for fixing previously unidentified software errors. However, increasing software sizes, and therefore increasing reprogramming times, will reduce these benefits, especially if ECU software grows faster than the vehicle's on-board infrastructure can be adjusted. The result of this thesis enables cost prediction of embedded systems' software reprogramming by providing an effective and reliable model of reprogramming time for different existing and new technologies. This model and additional research results contribute to a timeline of short-term, mid-term and long-term solutions to the current problems as well as future challenges, especially for the automotive industry but also for other business areas where cost pressure is high and software reprogramming is a key issue during the product life cycle.
11

Visscher, Bart-Floris. "Exploring complexity in software systems." Thesis, University of Portsmouth, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419024.

12

Harmer, T. J. "Pictorial animation of software systems." Thesis, Queen's University Belfast, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233954.

13

Kendall, Richard A. "Unique Systems Through Reusable Software." International Foundation for Telemetering, 1989. http://hdl.handle.net/10150/614672.

Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California
Computer Sciences Corporation, Realtime Data Systems Center has developed, integrated, tested, and delivered several large telemetry systems to various ranges over the past eight years. One key to the success of these systems has been the ability to build on a software base to meet unique range processing requirements for aircraft, missiles, and related weapons systems. Reusable software means reduced procurement and life cycle costs. The ability to successfully reuse software for new systems with new requirements lies not only in the fundamentals of modular system design, but in the ability of the people to comprehend the design, and adapt the software to new requirements. As advanced telemetry processing needs meet reduced budgets, the successful systems integrator will be relying more and more on an ability to adapt existing systems to meet new challenges.
14

Drach, T. O., and O. E. Goloskokov. "Research and development of software and software components of the information system of situational management in the enterprise." Thesis, NTU "KhPI", 2017. http://repository.kpi.kharkov.ua/handle/KhPI-Press/38216.

15

Weiss, Karen L. "Integrating middleware software into open-system client/server systems." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA304230.

Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, September 1995.
"September 1995." Thesis advisor(s): Barry Frew, S. Sridhar. Bibliography: p. 91-93. Also available online.
16

Chronaki, Kallia. "Exploiting asymmetric multi-core systems with flexible system software." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/664032.

Abstract:
Asymmetric multi-cores (AMCs) are a successful architectural solution for both mobile devices and supercomputers. These architectures combine different types of processing cores designed at different performance and power optimization points, thus exposing a performance-power trade-off. By maintaining two types of cores, AMCs are able to provide high performance under the facility power budget. However, there are significant challenges when using AMCs, such as scheduling and load balancing. This thesis initially explores the potential of AMCs when executing current HPC applications and searches for the most appropriate execution model. Specifically, we evaluate several execution models on an Arm big.LITTLE AMC using the PARSEC benchmark suite, which includes representative HPC applications. We compare schedulers at the user, OS and runtime system levels, using both static and dynamic options and multiple configurations, and assess the impact of these options on the well-known problem of balancing the load across AMCs. Our results demonstrate that scheduling is more effective when it takes place in the runtime system, as it improves user-level scheduling by 23%, while the heterogeneity-aware OS scheduling solution improves user-level scheduling by 10%. Following this outcome, the thesis focuses on increasing the performance of AMC systems by improving scheduling at the runtime system level. Scheduling at this level is provided by task-based parallel programming models, which offer programming flexibility as they consist of an interface and a runtime system that manages the underlying resources and threads. In this thesis we improve scheduling in task-based programming models by providing three novel task schedulers for AMCs. These dynamic scheduling policies reduce total execution time by detecting either the longest or the critical path of the application's dynamic task dependency graph. They use dynamic scheduling and information discoverable during execution, which makes them implementable and functional without the need for off-line profiling. In our evaluation we compare these scheduling approaches with an existing state-of-the-art heterogeneous scheduler and track their improvement over a FIFO baseline scheduler. We show that the heterogeneous schedulers improve the baseline by up to 1.45x on a real 8-core AMC and up to 2.1x on a simulated 32-core AMC. Another enhancement we provide in task-based programming models is adaptability to fine-grained parallelism. The increasing number of cores on modern CMPs is pushing research towards the use of fine-grained workloads, which is an important challenge for task-based programming models. Our study makes the observation that task creation becomes a bottleneck when executing fine-grained workloads with task-based programming models: as the number of cores increases, the time spent generating tasks becomes more critical to the entire execution. To overcome this issue, we propose TaskGenX. TaskGenX minimizes task creation overheads and relies on both the runtime system and dedicated hardware. On the runtime system side, TaskGenX decouples task creation from the other runtime activities and then transfers this part of the runtime to specialized hardware. From our evaluation using 11 HPC workloads on both symmetric and AMC systems, we obtain performance improvements of up to 15x, averaging 3.1x over the baseline.
Finally, this thesis presents a showcase for a real-time CPU scheduler whose goal is to increase the frames per second (FPS) of game-play on mobile devices with AMC systems. We design and implement the RTS scheduler in the Android framework. RTS provides an efficient scheduling policy that takes the current temperature of the system into account when performing task migration. The RTS solution increases the median FPS of the baseline mechanisms by up to 7.5% while keeping the temperature stable.
17

Ramos, Marcelo Augusto. "Bridging software engineering gaps towards system of systems development." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-13082014-103931/.

Abstract:
While there is growing recognition of the importance of Systems of Systems (SoS), there is still little agreement on just what they are or by what principles they should be constructed. Indeed, there are numerous SoS definitions in the literature. The difficulty of specifying what the constituent systems are, what they are supposed to do, and how they are going to do it frequently leads SoS initiatives to complete failure. Guided by a sample SoS that comprises all the distinguishing SoS characteristics and by a generic SoS Engineering (SoSE) process, this thesis explores SoS development from different Software Engineering (SE) perspectives, including requirements, analysis, design, and reengineering. For Requirements Engineering (RE), we propose a scene-based RE approach that describes the SoS progressively as an arrangement of elementary but meaningful related behaviors named scenes; the objective is to make the description and understanding of the SoS and its dynamism easier. For analysis, we propose extensions to statecharts to visually improve the modeling of system interactions. These are symbolic notations that result from an analogy with multi-layer Printed Circuit Boards (PCB), and the resulting diagrams are named PCB-statecharts. For design, we propose an extension to the conventional SPLE process such that software product lines (SPL) can become a natural source of SoS members: domain engineering is extended to deliver components able to share abilities in SoS environments, so application engineers can design families of products that comply with different SoS requirements and still improve their products using the abilities of other SoS members. For reengineering, we propose an approach extension to evolve legacy systems into SPL and then into SoS members. We demonstrate that when legacy systems are reengineered properly, they can share useful abilities, work cooperatively, and compose an SoS.
18

Lin, Chia-en. "Performance Engineering of Software Web Services and Distributed Software Systems." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc500103/.

Abstract:
The promise of service-oriented computing and the availability of Web services promote the delivery and creation of new services based on existing services, in order to meet new demands and new markets. As Web and internet-based services move into Clouds, the inter-dependency of services and their complexity will increase substantially. There are standards and frameworks for specifying and composing Web services based on functional properties, but mechanisms to individually address non-functional properties of services and their compositions have not been well established. Furthermore, the Cloud ontology depicts service layers from a high level, such as Application and Software, to a low level, such as Infrastructure and Platform. Each component that resides in one layer can be useful to another layer as a service, which hints at the amount of complexity resulting from not only horizontal but also vertical integration in building and deploying a composite service. To meet these requirements and facilitate using Web services, we first propose a WSDL extension to permit the specification of non-functional, or Quality of Service (QoS), properties. On this foundation, a QoS-aware framework is established that adapts publicly available tools for Web services, augmented by ontology management tools and tools for performance modeling, to exemplify how non-functional properties such as response time, throughput, or utilization of services can be addressed in the service acquisition and composition process. To support Web service composition standards, we extend the framework with additional qualitative information in the service descriptions using the Business Process Execution Language (BPEL). Engineers can use BPEL to explore design options and have the QoS properties analyzed for the composite service. The main issue in our research is performance evaluation in software systems and engineering: the first half of this dissertation addresses Web service computation, and the second half addresses performance antipattern detection and elimination. Performance analysis of software systems is complex due to the large number of components and the interactions among them; without the knowledge of experienced experts, it is difficult to diagnose performance anomalies and pinpoint the root causes of problems. Software performance antipatterns are similar to design patterns in that they describe what to avoid and how to fix performance problems when they appear. Although the idea of applying antipatterns is promising, there are gaps in matching the symptoms and generating feedback solutions for redesign. In this work, we analyze performance antipatterns to extract detectable features, influential factors, and resource involvement so that we can lay the foundation for detecting their presence. We propose a system abstraction layering model and suggestive profiling methods for performance antipattern detection and elimination. The proposed solutions can be used during the refactoring phase and can be included in the software development life cycle. The proposed tools and utilities are implemented and their use is demonstrated with the RUBiS benchmark.
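The performance models referred to above can be as simple as an M/M/1 queue per service, whose closed-form utilization and response time are then summed along a sequential composition. This is only a generic sketch with invented arrival and service rates, not the dissertation's QoS framework.

    def mm1(arrival_rate, service_rate):
        """Steady-state utilization and response time of an M/M/1 queue."""
        if arrival_rate >= service_rate:
            raise ValueError("unstable queue: arrival rate >= service rate")
        utilization = arrival_rate / service_rate
        response_time = 1.0 / (service_rate - arrival_rate)   # seconds per request
        return utilization, response_time

    # Hypothetical sequential composition of three services, each seeing 40 req/s.
    services = {"auth": 120.0, "inventory": 80.0, "billing": 60.0}  # service rates
    total = 0.0
    for name, mu in services.items():
        rho, r = mm1(40.0, mu)
        total += r
        print(f"{name:9s} utilization={rho:.2f}  response={r * 1000:.1f} ms")
    print(f"composite response time = {total * 1000:.1f} ms")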
19

Wong, Ken Chi Ho. "Platform leadership in open source software." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100313.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 125-134).
Industry platforms in the software sector are increasingly being developed in open source. Firms seeking to position themselves as platform leaders with such technologies must find ways of operating within the unique constraints of open source development. This thesis aims to understand those challenges by analyzing the Android and Hadoop ecosystems through an augmented version of Porter's Five Forces framework proposed by Intel's Andrew Grove. The analysis finds that platform contenders in open source behave differently depending on whether they focus on competing against alternative platforms or against alternative providers of the same platform. This focus informs key decisions that the firm takes, including how it interacts with complementors and its approach to innovation. Because open source vendors tend to lack unilateral authority over technology decisions, they can only seek to influence the behavior of the ecosystem by securing key relationships in the value network; in particular, they must secure the right engineering talent, access to key complements, and superior paths to the customer. The research highlights some of the factors and tactics platform contenders in Hadoop and Android considered in acquiring these relationships. The open nature of FOSS (Free and Open Source Software) also allows new technologies to emerge and change the definition of the platform's boundaries, creating a further strategic challenge for open source platform contenders. Keywords: platform strategy, platform leadership, open source software, Hadoop, Android.
by Ken Chi Ho Wong.
S.M. in Engineering and Management
20

Wiklander, Jimmie. "Component-based software design of embedded real-time systems." Licentiate thesis, Luleå : Luleå University of Technology, 2009. http://pure.ltu.se/ws/fbspretrieve/3318285.

21

Bihari, Jeevan Jyoti. "Software emulation of networking components." Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/935942.

Abstract:
Software emulation of local area and wide area networks provides an alternative method for designing such networks and analyzing their performance. Emulation of the bridges and routers that link networks together may provide valuable information regarding network congestion, network storms and the like before expensive hardware is put into place. Such an emulation also enables students taking a networking course to develop their own client-server applications and to visualize the basic functioning of the UDP/IP and RIP protocols. This thesis builds on the emulated local area network, Metanet, created by a previous graduate student. It adds the capability of attaching routers and bridges to multiple local and non-local emulated networks so that data may be transferred between two hosts on different segments of the same LAN (via an emulated bridge) or on two different networks altogether (via an emulated router). The machines running the Metanet software must run UNIX with Berkeley's socket interface, since emulated networks on different physical machines use this interface to communicate. A comparison of the new networking capabilities of Metanet with other experimental systems such as XINU and MINIX is also presented.
Department of Computer Science
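The client-server exercises mentioned in the abstract come down to small Berkeley-socket programs; the self-contained UDP echo example below is a generic Python illustration (it talks over the loopback interface rather than the Metanet emulation, and the host, port and message are arbitrary).

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 9999          # hypothetical endpoint for the exercise
    ready = threading.Event()

    def udp_echo_server():
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
            srv.bind((HOST, PORT))
            ready.set()                      # tell the client we are listening
            data, addr = srv.recvfrom(1024)  # handle one datagram, then exit
            srv.sendto(data.upper(), addr)   # echo it back, upper-cased

    threading.Thread(target=udp_echo_server, daemon=True).start()
    ready.wait()

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as cli:
        cli.settimeout(2.0)
        cli.sendto(b"hello metanet", (HOST, PORT))
        reply, _ = cli.recvfrom(1024)
        print(reply.decode())                # prints: HELLO METANET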
22

Quezada, Gomez Juan Manuel. "Model-based guidelines for automotive electronic systems software development." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100383.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 96-98).
The automobile has transformed the human lifestyle ever since its introduction to the public, and over the last one hundred years incumbent technologies have been adopted to improve its performance characteristics. Yet we need a holistic approach to understand that automobiles have shifted from being a mere assembly of mechanical parts to the multidisciplinary system that forms the modern automobile. Thanks to the increased use of electronics and software in automobiles, consumers benefit from better gas mileage and more amenities and features, such as comfort, driving assistance, and entertainment. At the same time, the stability and performance of automobiles as systems have been deteriorating, and vehicle owners eventually find that features and functions become inoperative over time, causing frustration and loss of time and money. Reports of problems experienced by vehicle owners stem from causal factors of system defects that model-based systems engineering can reduce or eliminate. This research presents a model-based systems engineering approach to automobile electronic system design. The work is founded on a comprehensive OPM model and engineering guidelines for electronic control module software design. The purpose of the framework developed in this study is to support the development of complex vehicle software that allows flexibility for changing features and creating new ones, and that enables software developers to pinpoint systemic faults more quickly and at earlier lifecycle phases, reducing rework, increasing safety, and providing for more effective resolution of such problems.
by Juan Manuel Quezada Gomez.
S.M. in Engineering and Management
23

Webster, David D. "Hardware, software, firmware allocation of functions in systems development." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/49907.

Abstract:
The top-down development methodology is, for the most part, a well-defined subject. There is, however, one area of top-down development that lacks structure and definition: the allocation of functions to hardware, software, and firmware. This research addresses this deficiency in top-down system development. The key objective is to restructure the hardware, software, and firmware allocation process from a subjective, qualitative decision process into a structured, quantitative one. Factors that affect the allocation process are identified, and qualitative data on the influence of these factors are systematized into quantitative information. This information is used to develop a model that recommends whether to implement a function in hardware, software, or firmware. The model applies three analytical methods: 1) the analytic hierarchy process, 2) the general linear model, and 3) the second-order regression technique. These three methods are applied to the quantified information of the hardware, software, and firmware allocation process. A computer-based software tool is developed by this research to aid in the evaluation of the allocation process. The software support tool assists in data collection, and future application of the tool will enable the capture and documentation of expert knowledge on the hardware, software, and firmware allocation process. The improved knowledge base can be used to improve the model, which in turn will improve the system development process and the resulting system.
Ph. D.
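The analytic hierarchy process named in the abstract derives priority weights for the allocation criteria from a pairwise comparison matrix via its principal eigenvector. The NumPy sketch below uses invented criteria and judgements, not the dissertation's data.

    import numpy as np

    # Pairwise comparisons on Saaty's 1-9 scale (illustrative judgements only).
    criteria = ["performance", "flexibility", "development cost"]
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                 # priority weight of each criterion

    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
    cr = ci / 0.58                           # random index RI = 0.58 for n = 3

    for c, w in zip(criteria, weights):
        print(f"{c:17s} {w:.3f}")
    print(f"consistency ratio = {cr:.3f}")   # < 0.10 is conventionally acceptable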
24

Cao, Lan. "Modeling Dynamics in Agile Software Development." Digital Archive @ GSU, 2005. http://digitalarchive.gsu.edu/cis_diss/4.

Abstract:
Agile software development challenges the traditional way of software development and project management. In rapidly changing environments, changing requirements and tight schedule constraints require software developers to take a different approach toward the process of software development. However, beyond a few case studies, surveys and studies focused on specific practices such as pair programming, the effectiveness and applicability of agile methods have not been established adequately. The objective of my research is to improve the understanding of and gain insights into these issues. For this purpose, I develop a system dynamics simulation model that considers the complex interdependencies among the variety of practices used in agile development. The model is developed on the basis of an extensive review of the literature as well as quantitative and qualitative data collected from real projects in seven organizations. The development of the model was guided by dynamic hypotheses on customer involvement, refactoring and quality of design, and the model was refined and validated using data from independent projects. The model helps answer important questions on the impact of customer behavior, the cost of making changes and the economics of pair programming. Experimentation with the model suggests that the cost of change is not constant; instead, its value changes cyclically and increases towards the later phases of development. The results of simulation also show that without pair programming, fewer tasks are delivered and delivering a task costs more than with pair programming. Further, customer behavior has a major impact on project performance: the quality of customer feedback is found to be critical to the success of an agile software development project. The primary contribution of this research is the simulation model of agile software development, which can be used as a tool to examine the impact of agile practices and management policies on critical project variables including project scope, schedule, and cost. This research provides a mechanism to study agile development as a dynamic system of practices rather than from a static view and in isolation. The results from this study are expected to be of significant interest to practitioners of agile methods by providing them a simulation environment in which to examine the impact of their practices, procedures and management policies.
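A system dynamics model of this kind is built from stocks, flows and feedback loops integrated over time. The toy sketch below, with a task backlog and a rework loop, shows the style only; every parameter value is invented and it is far simpler than the calibrated model described above.

    # Euler-integrated stock-and-flow sketch: work leaves the backlog, and a
    # fraction of completed tasks returns as rework discovered by testing.
    dt = 0.25                  # weeks per simulation step
    backlog, completed = 200.0, 0.0
    velocity = 8.0             # tasks completed per week
    defect_fraction = 0.15     # share of completed work that comes back as rework

    week = 0.0
    while backlog > 1.0 and week < 80:
        completion = min(velocity, backlog / dt)   # outflow from the backlog
        rework = defect_fraction * completion      # feedback flow into the backlog
        backlog += (rework - completion) * dt
        completed += (completion - rework) * dt
        week += dt

    print(f"schedule = {week:.1f} weeks, delivered = {completed:.0f} tasks")

Raising defect_fraction lengthens the schedule non-linearly, which is the kind of feedback effect such models are built to expose.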
25

Rivera, Joey. "Software system architecture modeling methodology for naval gun weapon systems." Monterey, California. Naval Postgraduate School, 2010. http://hdl.handle.net/10945/10504.

Abstract:
This dissertation describes the development of an architectural modeling methodology that supports the Navy's requirement to evaluate potential changes to gun weapon systems in order to identify potential software safety risks. The methodology includes a tool (Eagle6) that is based on the Monterey Phoenix (MP) modeling methodology and has the capability to create and verify MP models and to execute formal assertions via pre-defined macro commands, as well as a visualization tool that generates graphical representations of model scenarios. The Eagle6 toolset has two scenario-generation modes: exhaustive search, for model verification within scope, and random trace generation, for statistical estimates of non-functional properties such as performance. The dissertation demonstrates how the Eagle6 tool may improve the SSSTRP evaluation process by including a methodology that uses formal assertions to test for software states that are considered unsafe.
26

Oliveira, Rafael Alves Paes de. "Test oracles for systems with complex outputs: the case of TTS systems." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-13092017-085208/.

Abstract:
Software testing is one of the most important software engineering processes, being the primary activity for checking the conformance between the software requirements and the software's actual behavior. The automation of software testing activities is essential to ensure productivity and effectiveness in such activities. Test automation allows testing activities to be conducted under systematic and accurate criteria, raising the chance that testers reveal faults or inconsistencies. Test oracles are elementary components of software testing automation, being the mechanism responsible for indicating the correctness of software outputs. In testing environments, test oracles can be effectively implemented based on several sources of information about the Software Under Test (SUT): software specifications, assertions, formal methods (finite state machines, formal specifications, etc.), machine-learning methods, and metamorphic relations. Regardless of the implementation strategy, test oracles are vulnerable to false positive/negative verdicts, constituting what the literature describes as the oracle problem. Test oracles are therefore a non-trivial and challenging object of study in the software engineering research area. SUT outputs in unusual formats make the oracle problem harder: audio, images, three-dimensional objects, virtual reality environments, complex statistical compositions, etc., are examples of non-trivial output formats. In the software testing context, SUTs with unusual outputs can be called complex-output systems. In this doctoral dissertation, we propose and evaluate a novel test oracle approach for complex-output systems called feature-based test oracles. The purpose of feature-based test oracles is the appropriation of an image processing technique called Content-Based Image Retrieval (CBIR) to collect information from features extracted from the SUT's outputs to compose test oracles. Given a query image, CBIR combines feature extraction and similarity functions to alleviate the problem of searching for digital images in large databases. In previous research, we integrated CBIR concepts into a testing framework to support the automation of testing activities in image processing systems and systems with Graphical User Interfaces (GUIs). In this dissertation, we extend that framework and its concepts to general complex-output systems, addressing the feature-based test oracle approach. We use Text-To-Speech (TTS) systems to empirically validate our test oracle technique. The results of five empirical analyses, three of them conducted on problems of a real-world industrial TTS system, show that the proposed technique is a valuable instrument for automating testing activities and alleviating practitioners' efforts in testing complex-output systems. We conclude that the proposed test oracles are effective because they systematically evaluate the SUT's sensorial output rather than produce verdicts based on subjective specifications. As future work, we plan to conduct investigations towards the reduction of false positives/negatives and the association of the test oracles with machine learning techniques and metamorphic relations.
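In essence, a feature-based oracle extracts a feature vector from the actual output, compares it with a reference via a similarity function, and applies a threshold. The sketch below does this with crude spectral-energy features over synthetic audio-like signals; the feature choice, threshold and signals are assumptions made for the example, not the framework implemented in the thesis.

    import numpy as np

    def features(signal, n_bands=32):
        """Spectral-energy feature vector of a 1-D, audio-like signal."""
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        bands = np.array_split(spectrum, n_bands)
        vec = np.array([band.sum() for band in bands])
        return vec / (vec.sum() + 1e-12)        # keep only the spectral shape

    def oracle_verdict(actual, reference, threshold=0.98):
        """Pass iff the two outputs' feature vectors are cosine-similar enough."""
        a, r = features(actual), features(reference)
        cos = float(a @ r / (np.linalg.norm(a) * np.linalg.norm(r) + 1e-12))
        return cos >= threshold, cos

    t = np.linspace(0, 1, 16000, endpoint=False)
    reference = np.sin(2 * np.pi * 440 * t)                   # expected output
    slightly_noisy = reference + 0.01 * np.random.randn(t.size)
    wrong_output = np.sin(2 * np.pi * 880 * t)                # clearly different

    print(oracle_verdict(slightly_noisy, reference))          # (True, ~1.0)
    print(oracle_verdict(wrong_output, reference))            # (False, ~0.0)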
APA, Harvard, Vancouver, ISO, and other styles
27

Manaf, Afwarman 1962. "Constraint-based software for broadband networks planning : a software framework for planning with the holistic approach." Monash University, Dept. of Electrical and Computer Systems Engineering, 2000. http://arrow.monash.edu.au/hdl/1959.1/8163.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Benini, Enrico. "Functional Programming In Modern Software Systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13264/.

Full text
Abstract:
This thesis studies how the functional paradigm, although less widely used than its imperative and object-oriented counterparts, is influencing the software industry. The discussion analyses the motivations behind this trend and illustrates, through simple examples, the main abstractions underlying the functional style. The results of the analyses show that the functional style is by now essential knowledge and a mature, concrete alternative for software production. This argument is further supported by the growing importance of software properties such as concurrency, scalability, correctness and maintainability. Finally, in light of these considerations, a simple domain-specific language is presented that is extensible and can be integrated with existing applications. It incorporates a subset of the concepts discussed and a precise architecture, with the aim of abstracting away from existing technologies and making these topics accessible.
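As a brief, generic illustration of the functional abstractions the abstract refers to (higher-order functions, composition, and folds over immutable data), here is a minimal sketch; it is not the domain-specific language developed in the thesis.

```python
from functools import reduce

# Function composition as a higher-order function.
def compose(f, g):
    return lambda x: f(g(x))

square = lambda n: n * n
increment = lambda n: n + 1
square_then_increment = compose(increment, square)

numbers = (1, 2, 3, 4)                               # immutable tuple
squares = tuple(map(square, numbers))                # map
total = reduce(lambda acc, n: acc + n, squares, 0)   # fold / reduce

print(square_then_increment(3), squares, total)      # 10 (1, 4, 9, 16) 30
```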
APA, Harvard, Vancouver, ISO, and other styles
29

Endresen, Vegard Haugen. "Hardware-software intercommunication in reconfigurable systems." Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10762.

Full text
Abstract:

In this thesis hardware-software intercommunication in a reconfigurable system has been investigated based on a framework for run time reconfiguration. The goal has been to develop a fast and flexible link between applications running on an embedded processor and reconfigurable accelerator hardware in the form of a Xilinx Virtex device. As a start the link was broken down into hardware and software components based on constraints from earlier work and a general literature search. A register architecture for reconfigurable modules, a reconfigurable interface and a backend bridge linking reconfigurable hardware with the system bus were identified as the main hardware components, whereas device drivers and a hardware operating system were identified as software components. These components were developed in a bottom-up approach, then deployed, tested and evaluated. Synthesis and simulation results from this thesis suggest that a hybrid register architecture, a mix of shift-based and addressable register architecture, might be a good solution for a reconfigurable module. Such an architecture enables a reconfigurable interface with full duplex capability and an initially small area overhead compared to a full scale RAM implementation. Although the hybrid architecture might not be very suitable for all types of reconfigurable modules, it can be a nice compromise when attempting to achieve a uniform reconfigurable interface. Backend bridge solutions were developed assuming the above hybrid reconfigurable interface. Three main types were researched: a software register backend, a data cache backend, and an instruction and data cache backend. Performance evaluation shows that the instruction and data cache backend outperforms the other two with an average acceleration ratio of roughly 5-10. Surprisingly the data cache backend performs worst of all due to latency ratios and design choices. Aside from the BRAM component required for the cache backends, resource consumption was shown to be only marginally larger than a traditional software register solution. Caching using a controller in the backend bridge can thus provide good speedup for little cost as long as BRAM resources are not scarce. A software-to-hardware interface has been created through a Linux character device driver and a hardware operating system (HWOS) daemon. While the device drivers provide a middleware layer for hardware access, the HWOS separates applications from system management through a message queue interface. Performance testing shows a large increase in delay when involving the Linux device drivers and the HWOS compared to calls directly from the kernel. Although this is natural, the software components are very important for providing a high performance platform. As additional work, specialized cell handling for reconfigurable modules has been addressed in the context of an MPEG-4 decoder. Some light has also been shed on the design of reconfigurable modules in Xilinx ISE, which can radically improve development time and decrease complexity compared to a Xilinx Platform Studio flow. In the process of demonstrating run time reconfiguration it was discovered that a clock signal will resist being piped through bus macros. Broken functionality has also been shown when applying run time reconfiguration to synchronous designs using the framework for self reconfiguration.

APA, Harvard, Vancouver, ISO, and other styles
30

Axelsson, Erik. "Debugging Software for Multi-core Systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-42441.

Full text
Abstract:
The world of computer science has seen a big change in recent years. The physical limitations of simply increasing frequency when trying to increase processor speed have led to a new era - multi-core. Systems based on multi-core processors bring a much higher level of flexibility to the designer, with possibilities to experiment with different frequencies and voltages on a single chip. Unfortunately this flexibility also leads to a more complex system, hard to monitor and debug. The software implemented in multi-core processors needs to be parallelized and distributed very efficiently to take advantage of the architecture of the processor. The way information is exchanged between units in the processors and how the complex memory architecture, often with several levels of cache, is accessed are essential factors for the performance. It is often the case that minor changes in the software lead to big differences in performance. To be able to analyze the software when it is running on the chip it is of utmost importance to have a system that monitors the chip. One drawback with multi-core processors is that the integration of more logic into one chip decreases the external observability of the system. Hardware manufacturers have been trying to develop solutions for this problem and nowadays many processors come with an integrated system whose only purpose is to support debugging and monitoring of the chip. The debugging system can be seen as a separate layer integrated on top of the system, only running in the background without affecting the target system. In the hunt for higher performance and at the same time higher visibility this solution can be of big interest for software tool vendors and software designers. This master's thesis is divided into two parts where the first one gives an overview of the concept of multi-core processors and the problems of developing efficient software for them. It also addresses why a hardware based debugging and analyzing system can be beneficial during software development. In the second part a design is developed for a hardware based debugging system, implemented in a state of the art multi-core processor from Freescale. The parallel software running on the multi-core processor is executed on top of Enea's real time operating system OSE.
APA, Harvard, Vancouver, ISO, and other styles
31

Wang, Liqun. "Animated exploring of huge software systems." Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/26410.

Full text
Abstract:
There are many software visualization tools available today to help software engineers to explore software systems. However, when a system is huge, some of these tools do not satisfy the exploration requirements. The big problem is that the techniques the tools use do not provide an effective display and access mechanism to handle huge information spaces within the limitations imposed by available screen space. To alleviate the problem, this thesis describes methods that help users to explore huge software systems. In particular, we apply dynamic browsing incorporating such details as an extra result box mechanism, plus pattern based searching to help users to handle large query results. Then the thesis introduces the algorithms we apply to generate the layouts. We propose the radial angle model to visualize the internal structures of rooted trees. Also we apply the spring model to visualize the external structures among rooted trees. Next, the thesis describes various animation methods we use to smooth the transitions, track the focus of exploration, clarify unexpected results, and illustrate complex operations. In addition, we modify traditional camera animation, and propose an animation timing scheme 'slow-in fast-out' to exaggerate the reality. Next, the thesis describes a series of experiments we conducted to assess the effectiveness of the browsing, layout algorithm and animation techniques we implemented. Finally the thesis describes how we use the analysis of the experiment results to guide our future research.
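The spring (force-directed) model mentioned above can be sketched as follows; this is a generic textbook-style implementation, not the tool described in the thesis, and the constants and example graph are arbitrary assumptions.

```python
import numpy as np

def spring_layout(adjacency: np.ndarray, iterations: int = 200,
                  k: float = 1.0, step: float = 0.05) -> np.ndarray:
    """Minimal force-directed ("spring model") layout sketch:
    connected nodes attract, all node pairs repel."""
    n = adjacency.shape[0]
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, size=(n, 2))
    for _ in range(iterations):
        disp = np.zeros_like(pos)
        for i in range(n):
            delta = pos[i] - pos                      # vectors from every node to i
            dist = np.linalg.norm(delta, axis=1) + 1e-9
            unit = delta / dist[:, None]
            repulse = (k * k / dist**2)[:, None] * unit
            attract = (adjacency[i] * dist / k)[:, None] * (-unit)
            disp[i] = (repulse + attract).sum(axis=0)
        pos += step * disp
    return pos

# Usage: lay out a small graph made of two linked rooted trees.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(spring_layout(A).round(2))
```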
APA, Harvard, Vancouver, ISO, and other styles
32

Dam, Khanh Hoa, and s3007289@student rmit edu au. "Supporting Software Evolution in Agent Systems." RMIT University. Computer Science and Information Technology, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090319.143847.

Full text
Abstract:
Software maintenance and evolution is arguably a lengthy and expensive phase in the life cycle of a software system. A critical issue at this phase is change propagation: given a set of primary changes that have been made to software, what additional secondary changes are needed to maintain consistency between software artefacts? Although many approaches have been proposed, automated change propagation is still a significant technical challenge in software maintenance and evolution. Our objective is to provide tool support for assisting designers in propagating changes during the process of maintaining and evolving models. We propose a novel, agent-oriented, approach that works by repairing violations of desired consistency rules in a design model. Such consistency constraints are specified using the Object Constraint Language (OCL) and the Unified Modelling Language (UML) metamodel, which form the key inputs to our change propagation framework. The underlying change propagation mechanism of our framework is based on the well-known Belief-Desire-Intention (BDI) agent architecture. Our approach represents change options for repairing inconsistencies using event-triggered plans, as is done in BDI agent platforms. This naturally reflects the cascading nature of change propagation, where each change (primary or secondary) can require further changes to be made. We also propose a new method for generating repair plans from OCL consistency constraints. Furthermore, a given inconsistency will typically have a number of repair plans that could be used to restore consistency, and we propose a mechanism for semi-automatically selecting between alternative repair plans. This mechanism, which is based on a notion of cost, takes into account cascades (where fixing the violation of a constraint breaks another constraint), and synergies between constraints (where fixing the violation of a constraint also fixes another violated constraint). Finally, we report on an evaluation of the approach, covering both effectiveness and efficiency.
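A minimal sketch of the cost-based selection among alternative repair plans described above might look like this; the cost model (a flat penalty per cascaded violation and a flat credit per synergy) and the plan names are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class RepairPlan:
    name: str
    base_cost: float
    triggers: list = field(default_factory=list)   # constraints it would break (cascades)
    fixes: list = field(default_factory=list)      # other violations it also repairs (synergies)

def plan_cost(plan: RepairPlan, cascade_cost: float = 2.0,
              synergy_credit: float = 1.0) -> float:
    """Base cost, plus a penalty per cascaded violation,
    minus a credit per extra violation fixed for free."""
    return (plan.base_cost
            + cascade_cost * len(plan.triggers)
            - synergy_credit * len(plan.fixes))

def select_plan(plans):
    return min(plans, key=plan_cost)

plans = [
    RepairPlan("rename attribute", base_cost=1.0, triggers=["C3"]),
    RepairPlan("delete association", base_cost=2.0, fixes=["C5"]),
]
print(select_plan(plans).name)   # "delete association": 2 - 1 < 1 + 2
```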
APA, Harvard, Vancouver, ISO, and other styles
33

Lister, Kendall. "Toward semantic interoperability for software systems." Connect to thesis, 2008. http://repository.unimelb.edu.au/10187/3594.

Full text
Abstract:
“In an ill-structured domain you cannot, by definition, have a pre-compiled schema in your mind for every circumstance and context you may find ... you must be able to flexibly select and arrange knowledge sources to most efficaciously pursue the needs of a given situation.” [57]
In order to interact and collaborate effectively, agents, whether human or software, must be able to communicate through common understandings and compatible conceptualisations. Ontological differences that occur either from pre-existing assumptions or as side-effects of the process of specification are a fundamental obstacle that must be overcome before communication can occur. Similarly, the integration of information from heterogeneous sources is an unsolved problem. Efforts have been made to assist integration, through both methods and mechanisms, but automated integration remains an unachieved goal. Communication and information integration are problems of meaning and interaction, or semantic interoperability. This thesis contributes to the study of semantic interoperability by identifying, developing and evaluating three approaches to the integration of information. These approaches have in common that they are lightweight in nature, pragmatic in philosophy and general in application.
The first work presented is an effort to integrate a massive, formal ontology and knowledge-base with semi-structured, informal heterogeneous information sources via a heuristic-driven, adaptable information agent. The goal of the work was to demonstrate a process by which task-specific knowledge can be identified and incorporated into the massive knowledge-base in such a way that it can be generally re-used. The practical outcome of this effort was a framework that illustrates a feasible approach to providing the massive knowledge-base with an ontologically-sound mechanism for automatically generating task-specific information agents to dynamically retrieve information from semi-structured information sources without requiring machine-readable meta-data.
The second work presented is based on reviving a previously published and neglected algorithm for inferring semantic correspondences between fields of tables from heterogeneous information sources. An adapted form of the algorithm is presented and evaluated on relatively simple and consistent data collected from web services in order to verify the original results, and then on poorly-structured and messy data collected from web sites in order to explore the limits of the algorithm. The results are presented via standard measures and are accompanied by detailed discussions on the nature of the data encountered and an analysis of the strengths and weaknesses of the algorithm and the ways in which it complements other approaches that have been proposed.
Acknowledging the cost and difficulty of integrating semantically incompatible software systems and information sources, the third work presented is a proposal and a working prototype for a web site to facilitate the resolving of semantic incompatibilities between software systems prior to deployment, based on the commonly-accepted software engineering principle that the cost of correcting faults increases exponentially as projects progress from phase to phase, with post-deployment corrections being significantly more costly than those performed earlier in a project’s life. The barriers to collaboration in software development are identified and steps taken to overcome them. The system presented draws on the recent collaborative successes of social and collaborative on-line projects such as SourceForge, Del.icio.us, digg and Wikipedia and a variety of techniques for ontology reconciliation to provide an environment in which data definitions can be shared, browsed and compared, with recommendations automatically presented to encourage developers to adopt data definitions compatible with previously developed systems.
In addition to the experimental works presented, this thesis contributes reflections on the origins of semantic incompatibility with a particular focus on interaction between software systems, and between software systems and their users, as well as detailed analysis of the existing body of research into methods and techniques for overcoming these problems.
APA, Harvard, Vancouver, ISO, and other styles
34

Gungor, Murat Kahraman. "Structural models for large software systems." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2006. http://proquest.umi.com/login?COPT=REJTPTU0NWQmSU5UPTAmVkVSPTI=&clientId=3739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Kuipers, Tobias. "Techniques for understanding legacy software systems." [S.l. : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2002. http://dare.uva.nl/document/65858.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Takei, Chiharu, Hiroaki Takada, Masaki Yamamoto, and Shinya Honda. "Integrated software platform for automotive systems." IEEE, 2009. http://hdl.handle.net/2237/13982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Jackson, David Mark. "Logical verification of reactive software systems." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305989.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Withall, Mark S. "The evolution of complete software systems." Thesis, Loughborough University, 2003. https://dspace.lboro.ac.uk/2134/3594.

Full text
Abstract:
This thesis tackles a series of problems related to the evolution of complete software systems both in terms of the underlying Genetic Programming system and the application of that system. A new representation is presented that addresses some of the issues with other Genetic Program representations while keeping their advantages. This combines the easy reproduction of the linear representation with the inheritable characteristics of the tree representation by using fixed-length blocks of genes representing single program statements. This means that each block of genes will always map to the same statement in the parent and child unless it is mutated, irrespective of changes to the surrounding blocks. This method is compared to the variable length gene blocks used by other representations with a clear improvement in the similarity between parent and child. Traditionally, fitness functions have either been created as a selection of sample inputs with known outputs or as hand-crafted evaluation functions. A new method of creating fitness evaluation functions is introduced that takes the formal specification of the desired function as its basis. This approach ensures that the fitness function is complete and concise. The fitness functions created from formal specifications are compared to simple input/output pairs and the results show that the functions created from formal specifications perform significantly better. A set of list evaluation and manipulation functions was evolved as an application of the new Genetic Program components. These functions have the common feature that they all need to be 100% correct to be useful. Traditional Genetic Programming problems have mainly been optimization or approximation problems. The list results are good but do highlight the problem of scalability in that more complex functions lead to a dramatic increase in the required evolution time. Finally, the evolution of graphical user interfaces is addressed. The representation for the user interfaces is based on the new representation for programs. In this case each gene block represents a component of the user interface. The fitness of the interface is determined by comparing it to a series of constraints, which specify the layout, style and functionality requirements. A selection of web-based and desktop-based user interfaces were evolved. With these new approaches to Genetic Programming, the evolution of complete software systems is now a realistic goal.
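The fixed-length gene-block idea can be illustrated with a small sketch in which each block decodes to one statement and variation operators respect block boundaries, so unchanged blocks decode identically in parent and child. The gene layout, opcode set and block length are assumptions, not the thesis's actual encoding.

```python
import random

random.seed(1)
BLOCK_LEN = 3
OPS = ["add", "sub", "mul"]

def random_block():
    # (opcode index, destination variable, source variable)
    return [random.randrange(len(OPS)), random.randrange(4), random.randrange(4)]

def decode(block):
    op, dst, src = block
    return f"v{dst} = {OPS[op]}(v{dst}, v{src})"

def crossover(parent_a, parent_b):
    """One-point crossover allowed only at a block boundary."""
    point = random.randrange(1, min(len(parent_a), len(parent_b)))
    return parent_a[:point] + parent_b[point:]

def mutate(individual, rate=0.1):
    return [random_block() if random.random() < rate else block
            for block in individual]

parent_a = [random_block() for _ in range(4)]
parent_b = [random_block() for _ in range(4)]
child = mutate(crossover(parent_a, parent_b))
print("\n".join(decode(b) for b in child))
```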
APA, Harvard, Vancouver, ISO, and other styles
39

Shrestha, Shilu. "Software Modeling in Cyber-Physical Systems." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-111435.

Full text
Abstract:
A Cyber-Physical System (CPS) has a tight integration of computation, networking and physical process. It is a heterogeneous system that combines multiple domains, consisting of both hardware and software systems. Cyber subsystems in the CPS implement the control strategy that affects the physical process. Therefore, software systems in the CPS are more complex. Visualization of a complex system provides a method of understanding complex systems by accumulating, grouping, and displaying components of systems in such a manner that they may be understood more efficiently just by viewing the model rather than understanding the code. Graphical representation of complex systems provides an intuitive and comprehensive way to understand the system. OpenModelica is the open source development environment based on the Modelica modeling and simulation language and consists of several interconnected subsystems. OMEdit is one of the subsystems integrated into OpenModelica. It is a graphical user interface for graphical modeling. It consists of tools that allow the user to create their own shapes and icons for the model. This thesis presents a methodology that provides an easy way of understanding the structure and execution of programs written in an imperative language like C through a graphical Modelica model.
APA, Harvard, Vancouver, ISO, and other styles
40

Karatasios, Labros G. "Software engineering with database management systems." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

SILVA, EDUARDO TELES DA. "WX2X2: A SOFTWARE FOR NONLINEAR SYSTEMS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=10089@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
We present software to invert functions from the plane to the plane, F(x) = b, for a generic smooth function F, as well as the theory needed to implement it. In principle, all points in the preimage of b are computed. The numerical inversion is based on the characterization of the critical set C = {x ∈ R² : det DF(x) = 0} and its image, and on appropriate techniques of numerical continuation in situations of controlled interaction with C. A graphical user interface allows for the study of local and global properties of the function, both of geometric and analytic nature.
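A minimal sketch of the numerical core described above (not the WX2X2 implementation) is a Newton iteration for F(x) = b that monitors det DF(x) to detect proximity to the critical set C; the example map F and the tolerances are assumptions.

```python
import numpy as np

def F(x):
    # Example smooth map from the plane to the plane (an assumption).
    return np.array([x[0] ** 2 - x[1], x[0] + x[1] ** 2])

def DF(x):
    # Jacobian of F.
    return np.array([[2 * x[0], -1.0],
                     [1.0, 2 * x[1]]])

def newton_invert(b, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = b by Newton's method, guarding against the critical set."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        J = DF(x)
        if abs(np.linalg.det(J)) < 1e-12:      # (nearly) on C = {det DF = 0}
            raise RuntimeError("Jacobian nearly singular: too close to the critical set")
        x = x - np.linalg.solve(J, F(x) - b)
        if np.linalg.norm(F(x) - b) < tol:
            return x
    raise RuntimeError("no convergence")

b = np.array([1.0, 3.0])
x_star = newton_invert(b, x0=[1.5, 1.0])
print(x_star, F(x_star))   # F(x_star) should be close to b
```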
APA, Harvard, Vancouver, ISO, and other styles
42

Ait-Ghezala, Ahmed 1976. "Software systems for a DNA sequencer." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/8931.

Full text
Abstract:
Thesis (M.Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (leaf 49).
The initiative to complete the sequencing of the human genome is bringing the need for high-throughput sequencing capabilities to the forefront. We at the BioMEMS engineering group at the Whitehead Institute are designing and building a new sequencing machine that uses a 384 glass "chip" to dramatically increase sequencing rates. This thesis describes the design and implementation of two of the machine's software components. The first is a prototype application for the control of a robot used to automate sample loading. The second is a software filter that allows us to generate quality scores from data processed by Trout using Phred. I present the algorithm used to perform the filtering and show that the results are comparable to the processing of data with the Plan-Phred processing package.
by Ahmed Ait-Ghezala.
M.Eng. and S.B.
APA, Harvard, Vancouver, ISO, and other styles
43

Ajmani, Sameer 1976. "Automatic software upgrades for distributed systems." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28717.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 156-164).
Upgrading the software of long-lived, highly-available distributed systems is difficult. It is not possible to upgrade all the nodes in a system at once, since some nodes may be unavailable and halting the system for an upgrade is unacceptable. Instead, upgrades may happen gradually, and there may be long periods of time when different nodes are running different software versions and need to communicate using incompatible protocols. We present a methodology and infrastructure that address these challenges and make it possible to upgrade distributed systems automatically while limiting service disruption. Our methodology defines how to enable nodes to interoperate across versions, how to preserve the state of a system across upgrades, and how to schedule an upgrade so as to limit service disruption. The approach is modular: defining an upgrade requires understanding only the new software and the version it replaces. The upgrade infrastructure is a generic platform for distributing and installing software while enabling nodes to interoperate across versions. The infrastructure requires no access to the system source code and is transparent: node software is unaware that different versions even exist. We have implemented a prototype of the infrastructure called Upstart that intercepts socket communication using a dynamically-linked C++ library. Experiments show that Upstart has low overhead and works well for both local-area and Internet systems.
by Sameer Ajmani.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
44

Sinha, Amit 1976. "Energy efficient operating systems and software." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86773.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 153-159).
Energy efficient system design is becoming increasingly important with the proliferation of portable, battery-operated appliances such as laptops, Personal Digital Assistants (PDAs) and cellular phones. Numerous dedicated hardware approaches for energy minimization have been proposed while software energy efficiency has been relatively unexplored. Since it is the software that drives the hardware, decisions taken during software design can have a significant impact on system energy consumption. This thesis explores avenues for improving system energy efficiency from the application level to the operating system level. The embedded operating system can have a significant impact on system energy by performing dynamic power management both in the active and passive states of the device. Software controlled active power management techniques using dynamic voltage and frequency scaling have been explored. Efficient workload prediction strategies have been developed that enable just-in-time computation. An algorithm for efficient real-time operating system task scheduling has also been developed that minimizes energy consumption. Portable systems spend a lot of time in sleep mode. Idle power management strategies have been developed that consider the effect of leakage and duty-cycle on system lifetime. A hierarchical shutdown approach for systems characterized by multiple sleep states has been proposed. Although the proposed techniques are quite general, their applicability and utility have been demonstrated using the MIT μAMPS wireless sensor node as an example system wherever possible.
To quantify software energy consumption, an estimation framework has been developed based on experiments on the StrongARM and Hitachi processors. The software energy profiling tool is available on-line. Finally, in energy constrained systems, we would like to have the ability to trade off quality of service for extended battery life. A scalable approach to application development has been demonstrated that allows energy-quality trade-offs.
by Amit Sinha.
Ph.D.
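The just-in-time computation idea behind dynamic voltage and frequency scaling described in the abstract above can be sketched as: predict the next workload, then run at the lowest frequency that still meets the deadline. The frequency table (StrongARM-like steps) and the exponentially weighted predictor below are assumptions, not the thesis's exact models.

```python
FREQS_MHZ = [59, 74, 89, 103, 118, 133, 148, 162, 177, 192, 206]  # assumed frequency steps

def predict_cycles(history, alpha=0.5):
    """Exponentially weighted moving average of past workload (in CPU cycles)."""
    estimate = history[0]
    for observed in history[1:]:
        estimate = alpha * observed + (1 - alpha) * estimate
    return estimate

def pick_frequency(predicted_cycles, deadline_s):
    """Lowest frequency whose completion time for the predicted work meets the deadline."""
    for f in FREQS_MHZ:
        if predicted_cycles / (f * 1e6) <= deadline_s:
            return f
    return FREQS_MHZ[-1]   # saturate at the maximum frequency

history = [9e5, 1.1e6, 1.0e6, 1.2e6]        # cycles used by recent work items
f = pick_frequency(predict_cycles(history), deadline_s=0.01)
print(f"run next item at {f} MHz")
```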
APA, Harvard, Vancouver, ISO, and other styles
45

Minich, Matthias Ernst. "Industrialising software development in systems integration." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/2772.

Full text
Abstract:
Compared to other disciplines, software engineering as of today is still dependent on the craftsmanship of highly-skilled workers. However, with constantly increasing complexity and efforts, existing software engineering approaches appear more and more inefficient. A paradigm shift towards industrial production methods seems inevitable. Recent advances in academia and practice have led to the availability of industrial key principles in software development as well. Specialization is represented in software product lines, standardization and systematic reuse are available with component-based development, and automation has become accessible through model-driven engineering. While each of the above is well researched in theory, only few cases of successful implementation in the industry are known. This becomes even more evident in specialized areas of software engineering such as systems integration. Today's IT systems need to quickly adapt to new business requirements due to mergers, acquisitions and cooperations between enterprises. This certainly leads to integration efforts, i.e. joining different subsystems into a cohesive whole in order to provide new functionality. In such an environment, the application of industrial methods for software development seems even more important. Unfortunately, software development in this field is a highly complex and heterogeneous undertaking, as IT environments differ from customer to customer. In such settings, existing industrialization concepts would never break even due to one-time projects and thus insufficient economies of scale and scope. This thesis, therefore, describes a novel approach for a more efficient implementation of the prior key principles while considering the characteristics of software development for systems integration. After identifying the characteristics of the field and their effects on currently-known industrialization concepts, an organizational model for industrialized systems integration has been developed. It takes software product lines and adapts them in a way feasible for a systems integrator active in several business domains. The result is a three-tiered model consolidating recurring activities and reducing the efforts for individual product lines. For the implementation of component-based development, the present thesis assesses current component approaches and applies an integration metamodel to the most suitable one. This ensures a common understanding of systems integration across different product lines and thus facilitates component reuse, even across product line boundaries. The approach is furthermore aligned with the organizational model to depict in which way component-based development may be applied in industrialized systems integration. Automating software development in systems integration with model-driven engineering was found to be insufficient in its current state. The reason for this lies in insufficient tool chains and a lack of modelling standards. As an alternative, an XML-based configuration of products within a software product line has been developed. It models a product line and its products with the help of a domain-specific language and utilizes stylesheet transformations to generate compilable artefacts. The approach has been tested for its feasibility within an exemplary implementation following a real-world scenario.
As not all aspects of industrialized systems integration could be simulated in a laboratory environment, the concept was furthermore validated during several expert interviews with industry representatives. Here, it was also possible to assess cultural and economic aspects. The thesis concludes with a detailed summary of the contributions to the field and suggests further areas of research in the context of industrialized systems integration.
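A minimal sketch of the XML-plus-stylesheet generation idea described above, using the third-party lxml library; the element names, the product-line vocabulary and the generated artefact format are assumptions, not the thesis's actual domain-specific language.

```python
from lxml import etree

# An illustrative product configuration drawn from a hypothetical product-line model.
product_model = etree.XML("""
<product name="PaymentsIntegration">
  <component id="adapter" technology="jms"/>
  <component id="mapper"  technology="xslt"/>
</product>
""")

# A stylesheet that turns the configuration into a (toy) textual artefact.
stylesheet = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/product">
    <xsl:text>// generated deployment descriptor&#10;</xsl:text>
    <xsl:for-each select="component">
      <xsl:value-of select="@id"/> -> <xsl:value-of select="@technology"/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(stylesheet)
print(str(transform(product_model)))
```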
APA, Harvard, Vancouver, ISO, and other styles
46

Araújo, Cristiano Werner. "Bug prediction in procedural software systems." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/170023.

Full text
Abstract:
Information regarding bug fixes has been explored to build bug predictors, which provide support for the verification of software systems by identifying fault-prone elements, such as files. A wide range of static and change metrics have been used as features to build such predictors. Many bug predictors have been proposed, and their main target is object-oriented systems. Although object-orientation is currently the choice for most software applications, the procedural paradigm is still being used in many, sometimes crucial, applications, such as operating systems and embedded systems. Consequently, they also deserve attention. This dissertation extends work on bug prediction by evaluating and tailoring bug predictors to procedural software systems. We provide three key contributions: (i) a comparison of bug prediction approaches in the context of procedural software systems, (ii) a proposal for the use of software quality features as prediction features in the studied context, and (iii) an evaluation of the proposed features in association with the best approach found in (i). Our work thus provides foundations for improving bug prediction performance in the context of procedural software systems.
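A minimal sketch of file-level bug prediction in the spirit of the abstract above: static and change metrics as features, a standard classifier from scikit-learn, and cross-validated F1. The synthetic data and the particular feature choice are assumptions, not the dissertation's dataset or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_files = 200
X = np.column_stack([
    rng.integers(10, 2000, n_files),   # lines of code (static metric)
    rng.integers(1, 60, n_files),      # cyclomatic complexity (static metric)
    rng.integers(0, 40, n_files),      # number of past revisions (change metric)
    rng.integers(1, 10, n_files),      # distinct developers (change metric)
])
# Toy ground truth: larger, more heavily churned files are marked fault-prone.
y = ((X[:, 0] > 1000) & (X[:, 2] > 20)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="f1")
print("mean F1 across folds:", scores.mean().round(2))
```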
APA, Harvard, Vancouver, ISO, and other styles
47

Shirinbab, Sogand. "Performance Aspects in Virtualized Software Systems." Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00599.

Full text
Abstract:
Virtualization has significantly improved hardware utilization by allowing IT service providers to create and run several independent virtual machine instances on the same physical hardware. One of the features of virtualization is live migration of the virtual machines while they are active, which requires transfer of memory and storage from the source to the destination during the migration process. This problem is gaining importance since one would like to provide dynamic load balancing in cloud systems where a large number of virtual machines share a number of physical servers. In order to reduce the need for copying files from one physical server to another during a live migration of a virtual machine, one would like all physical servers to share the same storage. Providing physically shared storage to a relatively large number of physical servers can easily become a performance bottleneck and a single point of failure. This has been a difficult challenge for storage solution providers, and the state-of-the-art solution is to build a so-called distributed storage system that provides a virtual shared disk to the outside world; internally a distributed storage system consists of a number of interconnected storage servers, thus avoiding the bottleneck and single point of failure problems. In this study, we have done a performance measurement on different distributed storage solutions and compared their performance during read/write/delete processes as well as their recovery time in case of a storage server going down. In addition, we have studied the performance behaviors of various hypervisors and compared them with a base system in terms of application performance, resource consumption and latency. We have also measured the performance implications of changing the number of virtual CPUs, as well as the performance of different hypervisors during live migration in terms of downtime and total migration time. Real-time applications are also increasingly deployed in virtualized environments due to scalability and flexibility benefits. However, cloud computing research has not focused on solutions that provide real-time assurance for these applications in a way that also optimizes resource consumption in data centers. Here one of the critical issues is scheduling virtual machines that contain real-time applications in an efficient way without resulting in deadline misses for the applications inside the virtual machines. In this study, we have proposed an approach for scheduling real-time tasks with hard deadlines that are running inside virtual machines. In addition we have proposed an overhead model which considers the effects of overhead due to switching from one virtual machine to another.
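A minimal sketch of the scheduling concern described above: a utilization-based schedulability check for hard real-time tasks in which a per-period overhead term stands in for virtual machine switching costs. The overhead model and the task set are simplifying assumptions, not the thesis's proposed model.

```python
def utilization_with_overhead(tasks, vm_switch_overhead):
    """tasks: list of (wcet, period); the switch overhead is charged once per task period."""
    return sum((wcet + vm_switch_overhead) / period for wcet, period in tasks)

def schedulable_edf(tasks, vm_switch_overhead=0.0):
    """EDF on one processor: schedulable if total utilization does not exceed 1."""
    return utilization_with_overhead(tasks, vm_switch_overhead) <= 1.0

tasks = [(2.0, 10.0), (3.0, 15.0), (5.0, 30.0)]        # (WCET, period) in ms
print(schedulable_edf(tasks))                          # 0.2 + 0.2 + 0.167 = 0.567 -> True
print(schedulable_edf(tasks, vm_switch_overhead=4.0))  # 0.6 + 0.467 + 0.3 = 1.367 -> False
```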
APA, Harvard, Vancouver, ISO, and other styles
48

Nicholas, Charles Kenneth. "Assuring accessibility of complex software systems /." The Ohio State University, 1988. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487587604132583.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Bader, J. L. "Knowledge-based systems and Software Engineering." Thesis, Aston University, 1988. http://publications.aston.ac.uk/15143/.

Full text
Abstract:
The work described was carried out as part of a collaborative Alvey software engineering project (project number SE057). The project collaborators were the Inter-Disciplinary Higher Degrees Scheme of the University of Aston in Birmingham, BIS Applied Systems Ltd. (BIS) and the British Steel Corporation. The aim of the project was to investigate the potential application of knowledge-based systems (KBSs) to the design of commercial data processing (DP) systems. The work was primarily concerned with BIS's Structured Systems Design (SSD) methodology for DP systems development and how users of this methodology could be supported using KBS tools. The problems encountered by users of SSD are discussed and potential forms of computer-based support for inexpert designers are identified. The architecture for a support environment for SSD is proposed based on the integration of KBS and non-KBS tools for individual design tasks within SSD - the Intellipse system. The Intellipse system has two modes of operation - Advisor and Designer. The design, implementation and user-evaluation of Advisor are discussed. The results of a Designer feasibility study, the aim of which was to analyse major design tasks in SSD to assess their suitability for KBS support, are reported. The potential role of KBS tools in the domain of database design is discussed. The project involved extensive knowledge engineering sessions with expert DP systems designers. Some practical lessons in relation to KBS development are derived from this experience. The nature of the expertise possessed by expert designers is discussed. The need for operational KBSs to be built to the same standards as other commercial and industrial software is identified. A comparison between current KBS and conventional DP systems development is made. On the basis of this analysis, a structured development method for KBSs is proposed - the POLITE model. Some initial results of applying this method to KBS development are discussed. Several areas for further research and development are identified.
APA, Harvard, Vancouver, ISO, and other styles
50

Manaf, Afwarman 1962. "Constraint-based software for broadband networks planning : a software framework for planning with the holistic approach." Monash University, Dept. of Electrical and Computer Systems Engineering, 2000. http://arrow.monash.edu.au/hdl/1959.1/7754.

Full text
APA, Harvard, Vancouver, ISO, and other styles