Dissertations / Theses on the topic 'Custom Hardware and Software Development'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Custom Hardware and Software Development.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Stoye, W. R. "The implementation of functional languages using custom hardware." Thesis, University of Cambridge, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.355864.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Couprie, Dale. "Automated support for a custom personal software development process." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ34952.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Johansson, Hanna. "Interdisciplinary Requirement Engineering for Hardware and Software Development : from a Hardware Development Perspective." Thesis, Linköpings universitet, Industriell miljöteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139097.

Full text
Abstract:
Complexity in products is increasing, and still there is a lack of a shared design language in interdisciplinary development projects. The research questions of the thesis concern differences and similarities in requirement handling, and integration, current and future. Future integration is given more focus with a pair of research questions highlighting obstacles and enablers for increased integration. Interviews were performed at four different companies with complex development environments whose products originated from different fields: hardware, software, and service. The main conclusions of the thesis are: time-frames in different development processes are very different and hard to unite; internal standards exist for overall processes, documentation, and modification handling; traceability is poorly covered in theory whilst being a big issue in companies; and companies understand that balancing and compromising of requirements is critical for a successful final product. The view on future increased interdisciplinary development is that there are more obstacles to overcome than enablers supporting it. Dependency is seen as an obstacle in this regard and certain companies strive to decrease it. The thesis has resulted in general conclusions, and further studies are suggested into more specific areas such as requirement handling tools, requirement types, and traceability.
APA, Harvard, Vancouver, ISO, and other styles
4

Sheikh, Bilal Tahir. "Interdisciplinary Requirement Engineering for Hardware and Software Development - A Software Development Perspective." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-147886.

Full text
Abstract:
The software and hardware industries are growing day by day, which makes their development environments more complex. This situation has a huge impact on companies with interdisciplinary development environments. To handle it, a common platform is required which can act as a bridge between hardware and software development and ease their tasks in an organized way. The research questions of the thesis aim to get information about differences and similarities in requirements handling, and their integration in current and future perspectives. The future prospect of integration is considered a focus area. Interviews were conducted to get feedback from four different companies with complex development environments.
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Jian. "An FPGA Based Software/Hardware Codesign for Real Time Video Processing : A Video Interface Software and Contrast Enhancement Hardware Codesign Implementation using Xilinx Virtex II Pro FPGA." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6173.

Full text
Abstract:

The Xilinx Virtex II Pro FPGA with an integrated PowerPC core offers an opportunity to implement a software and hardware codesign. The software application executes on the PowerPC processor while hardware cores implemented in the FPGA co-process with the PowerPC to achieve acceleration. Another benefit of co-processing with the hardware acceleration core is a reduction in processor load. This thesis demonstrates such an FPGA-based software and hardware codesign by implementing a real-time video processing project on the Xilinx ML310 development platform, which features a Xilinx Virtex II Pro FPGA. The software part of this project performs video and memory interface tasks, which include image capture from a camera, storage of images in on-board memory, and display of images on a screen. The hardware coprocessing core performs a contrast enhancement function on the input image. To ease software development and make the project flexible for future extension, an embedded operating system, MontaVista Linux, is installed on the ML310 platform. The video interface software is therefore developed using Linux programming methods, for example the Video4Linux API. The last, but not least, implementation topic is the software and hardware interface, which is the Linux device driver for the hardware core. This thesis report presents all of the above topics: operating system installation, video interface software development, contrast enhancement hardware implementation, and Linux device driver programming for the hardware core. After this, measurement results are presented to show the hardware acceleration and processor load reduction, compared to the results of a software implementation of the same contrast enhancement function. This is followed by a discussion chapter, including performance analysis, the current design's limitations, and proposals for improvements. The report ends with an outlook from this master thesis.

APA, Harvard, Vancouver, ISO, and other styles
6

Perrett, M. R. "Wireless multi-carrier communication system design and implementation using a custom hardware and software FPGA platform." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1370580/.

Full text
Abstract:
Field Programmable Gate Array (FPGA) devices and high-level hardware development languages represent a new and exciting addition to traditional research tools, where simulation models can be evaluated by the direct implementation of complex algorithms and processes. Signal processing functions that are based on well known and standardised mathematical operations, such as Fast Fourier Transforms (FFTs), are well suited for FPGA implementation. At UCL, research is on-going on the design, modelling and simulation of Frequency Division Multiplexing (FDM) techniques such as Spectrally Efficient Frequency Division Multiplexing (SEFDM) which, for a given data rate, require less bandwidth relative to equivalent Orthogonal Frequency Division Multiplexing (OFDM). SEFDM is based around standard mathematical functions and is an ideal candidate for FPGA implementation. The aim of the research and engineering work reported in this thesis is to design and implement a system that generates SEFDM signals for the purposes of testing and verification in real communication environments. The aim is to use FPGA hardware and Digital to Analogue Converters (DACs) to generate such signals and to allow reconfigurability using standard interfaces and user-friendly software. The thesis details the conceptualisation, design and build of an FPGA-based wireless signal generation platform. The characterisation applied to the system, using the FPGA to drive stimulus signals, is reported, and the thesis includes details of the FPGA encapsulation of the minimum protocol elements required for communication (of control signals) over Ethernet. Detailed testing of the hardware is reported, together with a newly designed in-the-loop testing methodology. Verified test results are also reported with full details of time and frequency results as well as a full FPGA design assessment. Altogether, the thesis describes the engineering design, construction and testing of a new FPGA hardware and software system for use in communication test scenarios, controlled over Ethernet.
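For orientation only (a standard textbook form, not quoted from the thesis): an SEFDM signal carrying complex symbols \( s_n \) is commonly written as

\[ x(t) = \frac{1}{\sqrt{T}} \sum_{n=0}^{N-1} s_n \, e^{\,j 2\pi n \alpha t/T}, \qquad 0 \le t < T, \quad \alpha < 1, \]

so the subcarrier spacing is compressed to \( \Delta f = \alpha/T \) instead of the orthogonal \( 1/T \), and the occupied bandwidth shrinks by roughly \( (1-\alpha)\times 100\,\% \) relative to OFDM (the \( \alpha = 1 \) case).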
APA, Harvard, Vancouver, ISO, and other styles
7

Wee, Sewook. "Atlas: software development environment for hardware transactional memory." 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Webster, David D. "Hardware, software, firmware allocation of functions in systems development." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/49907.

Full text
Abstract:
The top-down development methodology is, for the most part, a well defined subject. There is, however, one area of top-down development that lacks structure and definition. The undefined topic is the hardware, software, and firmware allocation of functions. This research addresses this deficiency in top-down system development. The key objective is the restructuring of the hardware, software, and firmware allocation process from a subjective, qualitative decision process into a structured, quantitative one. Factors that affect the hardware, software, and firmware allocation process are identified. Qualitative data on the influence of the factors on the allocation process are systematized into quantitative information. This information is used to develop a model that provides a recommendation for implementing a function in hardware, software, or firmware. The model applies three analytical methods: 1) the analytic hierarchy process, 2) the general linear model, and 3) the second order regression technique. These three methods are applied to the quantified information of the hardware, software, firmware allocation process. A computer-based software tool is developed by this research to aid in the evaluation of the hardware, software, and firmware allocation process. The software support tool assists in data collection. Future application of the support tool will enable the capture and documentation of expert knowledge on the hardware, software, and firmware allocation process. The improved knowledge base can be used to improve the model, which in turn will improve the system development process and the resulting system.
Ph. D.
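As a minimal, generic sketch of the first of the three analytical methods named above: the analytic hierarchy process derives priority weights from a pairwise comparison matrix via its principal eigenvector. The sketch below is illustrative only, not Webster's actual model, and the comparison values are hypothetical.

```python
import numpy as np

# Hypothetical pairwise comparisons on Saaty's 1-9 scale:
# entry [i][j] states how strongly option i is preferred over option j.
A = np.array([
    [1.0, 3.0, 5.0],   # hardware vs (hardware, software, firmware)
    [1/3, 1.0, 2.0],   # software
    [1/5, 1/2, 1.0],   # firmware
])

# The principal eigenvector of the reciprocal matrix gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()

for name, w in zip(["hardware", "software", "firmware"], weights):
    print(f"{name}: {w:.3f}")
```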
APA, Harvard, Vancouver, ISO, and other styles
9

Weaver, Robin N. "Hurricane data collection hardware and software improvements, maintenance and development /." [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0000945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wells, George James. "Hardware emulation and real-time simulation strategies for the concurrent development of microsatellite hardware and software." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ62899.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Zhan, Ryan A. "Development of Novel Hardware and Software for Wind Turbine Condition Monitoring." DigitalCommons@CalPoly, 2021. https://digitalcommons.calpoly.edu/theses/2268.

Full text
Abstract:
With the increased use of wind turbines as sources of energy, maintenance of these devices becomes more and more important. Utility-scale wind turbines can be time consuming and expensive to repair, so an intelligent method of monitoring these devices is important. Commercial solutions for condition monitoring exist but are expensive and can be difficult to implement. In this project a novel condition monitoring system is developed. The priority of this system, dubbed the LifeLine, is to provide reliable condition monitoring through an easy-to-install and low-cost system. The system utilizes a microcontroller to collect acceleration data to detect imbalances on turbine blades. Two graphical user interfaces are created: one improves control of a small wind turbine while the other interfaces with the LifeLine. A custom PCB is designed for the LifeLine, and additional rotor speed, current, and voltage sensors are incorporated into the LifeLine system. Future improvements to this system are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
12

Johnsson, Sven. "Hardware and software development of a uClinux Voice over IP telephone platform." Thesis, Linköping University, Department of Science and Technology, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9455.

Full text
Abstract:

Voice over IP (VoIP) technology has recently gained popularity among consumers. Many popular VoIP services exist only as software for PCs. The need to take such services out of the PC and into a stand-alone device has been identified, and this thesis work deals with the development of such a device. The thesis work is done for Häger Scandinavia AB, a Swedish telephone manufacturer. It covers the design of a complete prototype of a table-top VoIP telephone running an embedded Linux operating system. Design areas include product development, hardware design and software design. The result is a working prototype with hardware and corresponding Linux device drivers. The prototype can host a Linux application adapted to it. The conclusions are that the first hardware version has worked well and that using an open-source operating system is very useful. Further work consists of implementing a complete telephony software application in the system, evaluating system requirements and adapting the prototype for a commercial design.

APA, Harvard, Vancouver, ISO, and other styles
13

Black, Derek J. "Development and feasibility of economical hardware and software in control theory application." Thesis, Kansas State University, 2017. http://hdl.handle.net/2097/38170.

Full text
Abstract:
Master of Science
Department of Mechanical and Nuclear Engineering
Dale E. Schinstock
Control theory is the study of feedback systems, and a methodology investigated by many engineering students throughout most universities. Because of control theory's broad and interdisciplinary nature, it necessitates further study by application through experimental learning and laboratory practice. Typically, the hardware used to connect the theoretical aspects of controls to the practical can be expensive, big, and time consuming for the students and instructors working with the equipment. Alternatively, using cheaper sensors and hardware, such as encoders and motor drivers, can obfuscate the collected data in a way that creates a disconnect between the developed theoretical models and actual system results. This disconnect can undermine the idea that systems can and will follow a modeled behavior. This thesis attempts to assess the feasibility of a piece of laboratory apparatus named the NERMLAB. Multiple experiments will be conducted on the NERMLAB system and compared against time-tested hardware to demonstrate the practicality of the NERMLAB system in control theory application.
APA, Harvard, Vancouver, ISO, and other styles
14

Koch, Christine. "Managerial coordination between hardware and software development during complex electronic system design." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq22135.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Wei, Hsin-Yu. "Magnetic induction tomography for medical and industrial imaging : hardware and software development." Thesis, University of Bath, 2012. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558901.

Full text
Abstract:
The main topics of this dissertation are the hardware and software developments in magnetic induction tomography imaging techniques. In the hardware sections, all the tomography systems developed by the author are presented and discussed in detail. The developed systems can be divided into two categories, according to the property of the target imaging materials: high conductivity materials and low conductivity materials. Each system has its own suitable application, and each is thus tested under different circumstances. In terms of the software development, the forward and inverse problems have been studied, including eddy current problem modelling, sensitivity map formulae derivation and iterative/non-iterative inverse solver equations. The Biot-Savart law was implemented in the 'two-potential' method that was used in the eddy current model in order to improve the system's flexibility. Many different magnetic induction tomography schemes are proposed for the first time in this field of research, their aim being to improve the spatial and temporal resolution of the final reconstructed images. These novel schemes usually involve modifications of the system hardware and of the forward/inverse calculations. For example, the rotational scheme can improve the ill-posedness and edge detectability of the system; the volumetric scheme can provide extra spatial resolution in the axial direction; and the temporal scheme can improve the temporal resolution by using the correlation between consecutive datasets. Volumetric imaging requires a large amount of extra computational resources. To overcome the issue of memory constraints when solving large-scale inverse problems, a matrix-free method was proposed, also for the first time in magnetic induction tomography. All the proposed algorithms are verified by experimental data obtained from suitable tomography systems developed by the author. Although magnetic induction tomography is a new imaging technique, it is believed that the technique is well developed for real-life applications. Several potential applications for magnetic induction tomography are suggested, and an initial proof-of-concept study for a challenging low conductivity two-phase flow imaging process is provided. In this thesis, a range of contributions have been made in the field of magnetic induction tomography, which will help magnetic induction tomography research to be carried further.
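For reference (standard physics, not reproduced from the dissertation), the Biot-Savart law mentioned above gives the magnetic flux density produced by a current distribution as

\[ \mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi} \int \frac{I \, d\boldsymbol{\ell}' \times (\mathbf{r}-\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|^{3}}, \]

which a forward model can evaluate to relate the excitation coil currents to the field seen at the sensing coils.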
APA, Harvard, Vancouver, ISO, and other styles
16

Cassirer, Albin, and Erik Hane. "Model-Pipe-Hardware: Method for Test Driven Agile Development in Embedded Software." Thesis, KTH, Maskinkonstruktion (Inst.), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-182670.

Full text
Abstract:
In this thesis, we present the development and evaluation of a new test driven design method for embedded systems software development. The problem of development speed is one of the major obstacles for transferring Test Driven Development (TDD) methodologies into the domain of embedded software development. More specifically, the TDD cycle is disrupted by time delays due to code uploads and transfer of data between the development "host" system and the "target" embedded platform. Furthermore, the use of "mock objects" (that abstract away hardware dependencies and enable host system testing techniques) is problematic since it creates overhead in terms of development time. The proposed model, Model-Pipe-Hardware (MPH), addresses this problem by introducing a strict set of design rules that enable testing on the "host" without the need of the "mock objects". MPH is based on a layer principle, a "trigger-event-loop" and a supporting "target" architecture. The layer principle provides isolation between hardware dependent/independent code. The trigger-event-loop is simply a proxy between the layers. Finally, the developed testing fixture enables testing of hardware dependent functions and is independent of the target architecture. The MPH model is presented and qualitatively evaluated through an interview study and an industry seminar at the consulting company Sigma Technology in Stockholm. Furthermore, we implement the tools required for MPH and apply the model in a small-scale industry development project. We construct a system capable of monitoring and visualisation of status in software development projects. The combined results (from interviews and implementation) suggest that the MPH method has great potential to decrease development time overheads for TDD in embedded software development. We also identify and present obstacles to the adoption of MPH. In particular, MPH could be problematic to implement in software development involving real-time dependencies, legacy code and a high degree of system complexity. We present mitigations for each one of these issues and suggest directions for further investigation of the obstacles as part of future work.
In this thesis, the development and evaluation of a new method for software development in embedded systems is presented. Slow development speed is a major obstacle to applying Test Driven Development (TDD) in embedded systems. More specifically, bottlenecks arise in the TDD cycle because of code uploads and data transfers between the development environment (host) and the embedded platform (target). Furthermore, the use of mock objects (which abstract away hardware dependencies to enable testing in the host environment) is costly, since implementing and designing the mock objects lengthens development time. The proposed model, Model-Pipe-Hardware (MPH), addresses this problem by introducing strict design rules that enable testing in the host environment without the use of mocks. MPH is built on a layer principle, a so-called trigger-event-loop, and an accompanying hardware architecture. The layer principle isolates hardware-dependent and hardware-independent code, while the trigger-event-loop acts as a proxy between the layers. MPH is presented and evaluated through an interview study and an industry seminar at the consulting company Sigma Technology in Stockholm. In addition, the necessary infrastructure is implemented and the MPH method is applied to a small industrial development project. The combined results from the interviews, the seminar and the implementation suggest that MPH has great potential to increase development speed for TDD in embedded systems. We also identify possible obstacles to applying MPH in embedded systems development. More specifically, MPH can be problematic for embedded systems that involve real-time requirements, legacy code, and a high degree of system complexity. We suggest possible solutions to these problems and how they should be investigated further as part of future work.
APA, Harvard, Vancouver, ISO, and other styles
17

Goedde, Todd William. "Decoder Board Hardware/Software Development in Wireless Interactive Video Data Service System." Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/35871.

Full text
Abstract:
The Interactive Video Data Service (IVDS) system allows consumers to browse the Internet, request information on products or services, make purchases, indicate preferences, and perform other interactive applications. To provide this service, the IVDS system has three subsystems: Consumer Control (CC), Cell Repeater (CR), and Host subsystem. In the CC subsystem, an IVDS transceiver box is placed near a television set. Once the consumer sends a command to the transceiver box using a standard television/VCR/Cable remote control, the transceiver box receives information embedded in the television audio, and then transmits the information to the CR subsystem as a radio frequency (RF) spread spectrum message. The CR subsystem decodes the spread spectrum message and forwards it to the Host subsystem for processing. Located in the CR subsystem, a custom designed circuit board, called the decoder board, uses surface mounted components to decode and packetize the spread spectrum message for transfer to the CR main processor. This paper provides a functional description of the hardware components on the decoder board, and describes the hardware/software developed for interfacing the decoder board to the radio receiver and to the CR main processor. Hardware modifications were needed to correct timing problems between components. Software was developed to initialize the components for downconverting, despreading, and demodulating spread spectrum messages, and to packetize them for transfer to the CR main processor. This paper also discusses the tests used to verify both the performance of the decoder board software and the operation of the hardware components.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
18

You, Yi. "Development of Software and Hardware Tools to Improve Direct Mass-Spectrometric Analyses." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1493053646961652.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Koch, Christine. "Managerial coordination between hardware and software development during complex electronic system design." Dissertation in Management Studies, Carleton University, Ottawa, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
20

Taylor, Shelley Louise. "Quantitative bioluminescence tomography : hardware and software development for a multi-modal imaging system." Thesis, University of Birmingham, 2018. http://etheses.bham.ac.uk//id/eprint/8180/.

Full text
Abstract:
Bioluminescence imaging (BLI) is widely used in pre-clinical research to monitor the location and migration of different cell types, and the growth of cancerous tumours and response to treatments within murine models. However, the quantitative accuracy of the technique is limited. The position of the animal is known to affect the measured bioluminescence, with a change in position causing a change in measurement. Work presented here addresses this problem, validating a free space model in a murine model to produce surface bioluminescence measurements which are independent of the position of the animal. The position of the source within the animal and the underlying tissue attenuation also affect the quantitative accuracy of bioluminescence measurements. An extension to bioluminescence imaging, bioluminescence tomography (BLT), aims to overcome these problems by recovering the three-dimensional bioluminescent source distribution within the animal. However, there are limitations to the quantitative accuracy of BLT. Current reconstruction algorithms ignore the bandwidth of the band-pass filters used for multi-spectral data collection for BLT. This work develops a model which accounts for filter bandwidth in the BLT reconstruction, improving the quantitative accuracy of the technique. An additional limitation to the quantitative accuracy of BLT is that accurate knowledge of the optical properties of the animal is required but is difficult to acquire. Work to improve the quantitative accuracy by obtaining subject-specific optical properties via a spectral derivative reconstruction method for diffuse optical tomography (DOT) is presented. The initial results are promising for the application of the method in vivo.
APA, Harvard, Vancouver, ISO, and other styles
21

Tarantino, Davide. "Hardware and software development of LGV system for logistics in the ceramic sector." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find full text
Abstract:
This thesis is the result of a curricular internship carried out at System Ceramics, where I contributed to the hardware and PLC software development of new Automated Guided Vehicles (AGV), in particular the design of a new automated forklift truck. After a brief introduction dealing with the concept of what an AGV system is, the thesis outlines the hardware setup of the vehicle prototypes and the software used to perform all the needed vehicle functionalities. In particular, it describes the general structure of the PLC program implemented to manage the forks compartment, which consists of three main blocks: the Command Manager, which identifies the motion command the forks need to perform; the Forks Actuation Manager, which manages the forks actuators during operations; and the Motion Control, which regulates the forks motion profile performed during loading and unloading missions. Two main criteria have been followed throughout the writing of the code: modularity and reusability.
APA, Harvard, Vancouver, ISO, and other styles
22

Hauff, Martin Anthony. "Compiler Directed Codesign for FPGA-based Embedded Systems." RMIT University, Electrical and Computer Engineering, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20081202.141333.

Full text
Abstract:
As embedded systems designers increasingly turn to programmable logic technologies in place of off-the-shelf microprocessors, there is a growing interest in the development of optimised custom processing cores that can be designed on a per-application basis. FPGAs blur the traditional distinction between hardware and software and offer the promise of application specific hardware acceleration. But realizing this in a general sense requires a significant departure from traditional embedded systems development flows. Whereas off-the-shelf processors have a fixed architecture, the same cannot be said of purpose-built FPGA-based processors. With this freedom comes the challenge of empirically determining the optimal boundary point between hardware and software. The fluidity of the hardware/software partition also poses an interesting challenge for compiler developers. This thesis presents a tool and methodology that addresses these codesign challenges in a new way. Described as 'compiler-directed codesign', it makes use of a suitably modified compiler to help direct the development of a custom processor core on a per-application basis. By exposing the compiler's internal representation of a compiled target program, visibility into those instructions, and hardware resources, that are most sought after by the compiler can be gained. This information is then used to inform further processor development and to determine the optimal partition between hardware and software. At each design iteration, the machine model is updated to reflect the available hardware resources, the compiler is rebuilt, and the target application is compiled once again. By including the compiler 'in-the-loop' of custom processor design, developers can accurately quantify the impact on performance caused by the addition or removal of specific hardware resources and iteratively converge on an optimal solution. Compiler Directed Codesign has advantages over existing codesign methodologies because it offers both a concrete point from which to begin the partitioning process as well as providing quantifiable and rapid feedback of the merits of different partitioning choices. When applied to an Adaptive PCM Encoder/Decoder case study, the Compiler Directed Codesign technique yielded a custom processor core that was between 36% and 73% smaller, consumed between 11% to 19% less memory, and performed up to 10X faster than comparable general-purpose FPGA-based processor cores. The conclusion of this work is that a suitably modified compiler can serve a valuable role in directing hardware/software partitioning on a per-application basis.
APA, Harvard, Vancouver, ISO, and other styles
23

OVERLY, TIMOTHY G. S. "DEVELOPMENT AND INTEGRATION OF HARDWARE AND SOFTWARE FOR ACTIVE-SENSORS IN STRUCTURAL HEALTH MONITORING." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1178813386.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Obeidat, Nawar H. "The Design and Development Process for Hardware/Software Embedded Systems: Example Systems and Tutorials." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1416233177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Grobbelaar, Leon A. "A study on creating a custom South Sotho spellchecking and correcting software desktop application." Thesis, [Bloemfontein] : Central University of Technology, Free State, 2007. http://hdl.handle.net/11462/43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Sjöberg, Alexander. "Real-time implementation of PMSM software model on external hardware." Thesis, KTH, Elkraftteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214394.

Full text
Abstract:
When developing three-phase motor drives, the best way to validate the desired functionality is to connect the inverter to an actual electrical motor. However, when developing functions which are not directly involved in controlling the motor, it can be more efficient to use a real-time software model of the motor. In this master thesis, the development and implementation of a software model of a permanent magnet synchronous motor (PMSM) is presented. The model was based on the general dynamic equations for a PMSM in a rotating reference frame (dq-frame). The model was simulated and converted to C code using model-based software development in Mathworks Simulink. To give the model more realistic behaviour, a finite element analysis (FEA) was done of an actual PMSM using the software tool FEMM. This analysis resulted in data describing the relation between flux linkage and current which, when added to the software model, limits the produced torque due to magnetic saturation. Both the FEMM model and the final software model were compared to a corresponding actual motor for validation and performance testing. All this resulted in a fully functional software model which was executable on the inverter. In the comparison of the FEMM model to the real motor, a deviation in produced torque was discovered. This led to the conclusion that the model needs to be improved to behave more like the real motor. However, for this application the model was considered good enough to be used in future software development projects.
When control systems for three-phase motors are developed, the most common and probably best way to validate functionality is to run the drive unit connected to a real electric motor. However, when developing functions that are not directly involved in driving the motor, it can be more efficient to use a software model instead. In this master's thesis, a software model of a permanent magnet synchronous motor (PMSM) is presented. The model was based on the general equations for a PMSM and was simulated and code-generated in Mathworks Simulink. To make the model more realistic, it was complemented with data describing the relation between flux linkage and current, in order to also take magnetic saturation into account. That information was obtained through flux simulations of a specific motor type in the tool FEMM. The same motor type was also compared with the final software model with respect to produced torque, which resulted in somewhat larger deviations than expected. The conclusion is therefore that the model needs to be improved to better agree with reality, but that it works well enough for the intended application.
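For orientation (a standard formulation, not reproduced from the thesis): the dq-frame PMSM equations referred to above are commonly written as

\[ v_d = R_s i_d + L_d \frac{di_d}{dt} - \omega_e L_q i_q, \qquad v_q = R_s i_q + L_q \frac{di_q}{dt} + \omega_e \left( L_d i_d + \psi_m \right), \]

\[ T_e = \tfrac{3}{2}\, p \left( \psi_m i_q + (L_d - L_q)\, i_d i_q \right), \]

with stator resistance \( R_s \), dq inductances \( L_d, L_q \), electrical angular speed \( \omega_e \), permanent-magnet flux linkage \( \psi_m \) and pole-pair number \( p \). Magnetic saturation makes the flux linkage (and hence the effective inductances) current-dependent, which is why the FEMM flux-linkage data is added to the model.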
APA, Harvard, Vancouver, ISO, and other styles
27

Costa, Filippo <1978&gt. "Hardware and software development of a multichannel readout board named CARLOSrx for the ALICE experiment." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1728/.

Full text
Abstract:
ALICE, an experiment at CERN using the LHC, specializes in analyzing lead-ion collisions. ALICE will study the properties of quark-gluon plasma, a state of matter where quarks and gluons, under conditions of very high temperatures and densities, are no longer confined inside hadrons. Such a state of matter probably existed just after the Big Bang, before particles such as protons and neutrons were formed. The SDD detector, one of the ALICE subdetectors, is part of the ITS, which is composed of 6 cylindrical layers with the innermost one attached to the beam pipe. The ITS tracks and identifies particles near the interaction point, and it also aligns the tracks of the particles detected by more external detectors. The two ITS middle layers contain the whole set of 260 SDD detectors. A multichannel readout board, called CARLOSrx, receives data coming from 12 SDD detectors at the same time. In total, 24 CARLOSrx boards are needed to read data coming from all the SDD modules (detector plus front end electronics). CARLOSrx packs data coming from the front end electronics through optical link connections, stores them in a large data FIFO and then sends them to the DAQ system. Each CARLOSrx is composed of two boards. One is called CARLOSrx data, which reads data coming from the SDD detectors and configures the FEE; the other one is called CARLOSrx clock, which sends the clock signal to all the FEE. This thesis contains a description of the hardware design and firmware features of both the CARLOSrx data and CARLOSrx clock boards, which deal with the whole SDD readout chain. A description of the software tools necessary to test and configure the front end electronics is presented at the end of the thesis.
APA, Harvard, Vancouver, ISO, and other styles
28

Falkenberg, Andreas. "Development and implementation of wireless telecommunication systems : a collection of relevant hardware and software patents." Thesis, University of South Wales, 2007. https://pure.southwales.ac.uk/en/studentthesis/development-and-implementation-of-wireless-telecommunication-systems-a-collection-of-relevant-hardware-and-software-patents(d04ec5e4-49bf-49a4-8615-42fcc1356e39).html.

Full text
Abstract:
Modern telecommunication systems and standards are mainly dependent on the availability of digital signal processing capabilities in appropriate hardware components. Two main categories can be distinguished in the development of digital signal processing units. On the one hand, a number of general purpose digital signal processors are available on the market, which can be programmed through programming languages like C or C++ or, for higher performance, directly in assembly code. The advantage of such devices is their high flexibility and short time to market, since no further hardware development at the integrated circuit level is required. On the other hand, hardware components are specifically developed for signal processing tasks, mainly application specific integrated circuits (ASICs). They are usually only programmable to a certain degree, always considering the area of application, i.e. wireless telecommunication systems. Although they do not offer the flexibility of general purpose digital signal processors, they offer the big advantages of requiring less hardware (measured as chip area or die size), lower power consumption and higher speeds. Usually hybrids are found on the market, which combine a freely programmable Digital Signal Processor (DSP) with very specific hardware modules to support the specific application needs. This thesis describes the development of a wireless telecommunication system, covering the relevant development methodologies, aspects of the hardware and software split, and actual implementations of components in hardware as well as in software. This is done specifically for the example of a wideband code division multiple access (WCDMA) wireless mobile system. The current state of the art is described in detail, according to the relevant literature in the area of WCDMA systems. Programmable hardware is presented, which is covered through a portfolio of patents. The purpose and the application of each patent are described in detail, as well as the area of application. Finally, a classification of each patent is given, which aims to provide an objective measure of the value of a patent. The presented patents show a significant contribution to knowledge, enabling the development of low power mobile wireless telecommunication systems.
APA, Harvard, Vancouver, ISO, and other styles
29

Castellar, Anderson. "Proposta de metodologia para utilização em hardware reconfigurável para aplicações aeroespaciais." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/18/18152/tde-12022009-092709/.

Full text
Abstract:
The CBERS program is a partnership between the Brazilian and Chinese governments to develop remote sensing satellites. The proposed methodology will be applied to the Multispectral Camera (MUXCAM) of the CBERS-3 and 4 satellites, the first of its kind to be fully produced in Brazil. Because of the high reliability required, mainly due to the high cost involved, aerospace applications that use reconfigurable hardware must follow a development methodology, from requirements definition to the verification and validation process. Using the VHDL language together with the synthesis tool, a process referred to here as the classical methodology, produces a non-optimised final circuit, eliminating redundancies and altering the proposed architecture. This work proposes a methodology that aims to guarantee the use of a single architecture from the beginning of the development cycle to its completion. This methodology makes the development process more reliable and deterministic.
The CBERS program is a partnership between Brazil and China to produce satellites for remote sensing, producing images of the Earth for studies in several areas, mainly those related to the sustainable exploitation of natural resources. The methodology proposed in this work will be applied to the CBERS-3 and 4 satellites' Multispectral Camera (MUXCAM), the first of its kind fully produced in Brazil. Because of the high reliability involved in aerospace applications, a methodology is necessary from software specification through to the verification and validation process to guarantee that reliability. The use of the synthesis tool and VHDL alone produces a poor circuit, eliminating redundancy and making architectural changes. This work proposes a methodology to keep the architecture the same throughout the development cycle, making the development process more trustworthy for aerospace applications.
APA, Harvard, Vancouver, ISO, and other styles
30

Lin, Hsiang-Ling Jamie. "Evaluating hardware/software partitioning and an embedded Linux port of the Virtex-II Pro development system." Washington State University, 2006. http://www.dissertations.wsu.edu/Thesis/Spring2006/h%5Flin%5F050106.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Ryd, Jonatan, and Jeffrey Persson. "Development of a pipeline to allow continuous development of software onto hardware : Implementation on a Raspberry Pi to simulate a physical pedal using the Hardware In the Loop method." Thesis, KTH, Hälsoinformatik och logistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296952.

Full text
Abstract:
Saab wants to examine the Hardware In the Loop method as a concept, and what an infrastructure for Hardware In the Loop would look like. Hardware In the Loop is based upon continuously testing hardware, which is simulated. The software Saab wants to use for the Hardware In the Loop method is Jenkins, which is a Continuous Integration and Continuous Delivery tool. To simulate the hardware, they want to examine the use of an Application Programming Interface between a Raspberry Pi and the test automation framework Robot Framework. The reason Saab wants this examined is that they believe this method can improve the rate of testing, the quality of the tests, and thereby the quality of their products. The theory behind Hardware In the Loop, Continuous Integration, and Continuous Delivery will be explained in this thesis. The Hardware In the Loop method was implemented upon the Continuous Integration and Continuous Delivery tool Jenkins. An Application Programming Interface between the General Purpose Input/Output pins on a Raspberry Pi and Robot Framework was developed. With these implementations done, the Hardware In the Loop method was successfully integrated, where a Raspberry Pi was used to simulate the hardware.
Saab wants to examine the Hardware In the Loop method as a concept, and what an infrastructure for Hardware In the Loop would look like. Hardware In the Loop is based on continuously testing hardware that is simulated. The software Saab wants to use for the Hardware In the Loop method is Jenkins, a Continuous Integration and Continuous Delivery tool. To simulate the hardware, Saab wants to examine the use of an Application Programming Interface between a Raspberry Pi and the test automation framework Robot Framework. The reason Saab wants this examined is that they believe it can improve the frequency and quality of testing, which would lead to an improvement of their products. The theory behind Hardware In the Loop, Continuous Integration and Continuous Delivery is explained in this report. The Hardware In the Loop method was implemented with the Continuous Integration and Continuous Delivery tool Jenkins. An Application Programming Interface between the General Purpose Input/Output pins on a Raspberry Pi and Robot Framework was developed. With these implementations in place, the Hardware In the Loop method was finally integrated, where Raspberry Pis were used to simulate the hardware.
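As a rough illustration of the kind of bridge described above, a Robot Framework keyword library can be an ordinary Python class whose methods drive the Raspberry Pi GPIO pins. The sketch below is an assumption-laden example, not code from the thesis: the pin number, keyword names and the pedal-to-pin mapping are all hypothetical.

```python
import RPi.GPIO as GPIO


class PedalSimulator:
    """Robot Framework keyword library that toggles a GPIO pin to
    simulate a physical pedal being pressed or released."""

    ROBOT_LIBRARY_SCOPE = "SUITE"

    def __init__(self, pin=17):          # hypothetical BCM pin number
        self._pin = pin
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self._pin, GPIO.OUT, initial=GPIO.LOW)

    def press_pedal(self):
        """Drive the pin high to emulate a pressed pedal."""
        GPIO.output(self._pin, GPIO.HIGH)

    def release_pedal(self):
        """Drive the pin low to emulate a released pedal."""
        GPIO.output(self._pin, GPIO.LOW)

    def close_pedal_simulator(self):
        """Release the pin when the suite is finished."""
        GPIO.cleanup(self._pin)
```

A test suite would then import the file with `Library    PedalSimulator.py` and call the keywords `Press Pedal` and `Release Pedal`, and a Jenkins job could run the suite on every commit to get the continuous testing the abstract describes.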
APA, Harvard, Vancouver, ISO, and other styles
32

Rafeeq, Akhil Ahmed. "A Development Platform to Evaluate UAV Runtime Verification Through Hardware-in-the-loop Simulation." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/99041.

Full text
Abstract:
The popularity and demand for safe autonomous vehicles are on the rise. Advances in semiconductor technology have led to the integration of a wide range of sensors with high-performance computers, all onboard the autonomous vehicles. The complexity of the software controlling the vehicles has also seen steady growth in recent years. Verifying the control software using traditional verification techniques is difficult and thus increases their safety concerns. Runtime verification is an efficient technique to ensure the autonomous vehicle's actions are limited to a set of acceptable behaviors that are deemed safe. The acceptable behaviors are formally described in linear temporal logic (LTL) specifications. The sensor data is actively monitored to verify its adherence to the LTL specifications using monitors. Corrective action is taken if a violation of a specification is found. An unmanned aerial vehicle (UAV) development platform is proposed for the validation of monitors on configurable hardware. A high-fidelity simulator is used to emulate the UAV and the virtual environment, thereby eliminating the need for a real UAV. The platform interfaces the emulated UAV with monitors implemented on configurable hardware and autopilot software running on a flight controller. The proposed platform allows the implementation of monitors in an isolated and scalable manner. Scenarios violating the LTL specifications can be generated in the simulator to validate the functioning of the monitors.
Master of Science
Safety is one of the most crucial factors considered when designing an autonomous vehicle. Modern vehicles that use a machine learning-based control algorithm can have unpredictable behavior in real-world scenarios that were not anticipated while training the algorithm. Verifying the underlying software code with all possible scenarios is a difficult task. Runtime verification is an efficient solution where a relatively simple set of monitors validate the decisions made by the sophisticated control software against a set of predefined rules. If the monitors detect an erroneous behavior, they initiate a predetermined corrective action. Unmanned aerial vehicles (UAVs), like drones, are a class of autonomous vehicles that use complex software to control their flight. This thesis proposes a platform that allows the development and validation of monitors for UAVs using configurable hardware. The UAV is emulated on a high-fidelity simulator, thereby eliminating the time-consuming process of flying and validating monitors on a real UAV. The platform supports the implementation of multiple monitors that can execute in parallel. Scenarios to violate rules and cause the monitors to trigger corrective actions can easily be generated on the simulator.
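To make the monitoring idea concrete, the sketch below shows a purely illustrative software monitor for a safety property of the kind the abstract describes (globally, an armed UAV keeps a minimum altitude). The thesis implements its monitors on configurable hardware against LTL specifications; the property, field names and threshold here are hypothetical.

```python
# Invariant checked on every telemetry sample: G(armed -> altitude >= MIN_ALT_M)
MIN_ALT_M = 2.0   # hypothetical safety threshold


def monitor_step(sample, on_violation):
    """Return True if the sample satisfies the invariant, otherwise
    call the corrective-action handler and return False."""
    if sample["armed"] and sample["altitude_m"] < MIN_ALT_M:
        on_violation(sample)
        return False
    return True


def corrective_action(sample):
    # A real system would command a predetermined safe behaviour,
    # e.g. a climb or an immediate landing.
    print("violation detected:", sample)


# Samples would normally arrive as a stream from the simulator or autopilot.
for sample in ({"armed": True, "altitude_m": 5.0},
               {"armed": True, "altitude_m": 1.2}):
    monitor_step(sample, corrective_action)
```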
APA, Harvard, Vancouver, ISO, and other styles
33

Naqvi, Karim J. "Hardware and software development of an automated semi-quantitative test for coliforms using gas evolution as an indicator." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq39149.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ozturk, Can. "Software Tool Development For The Automated Configuration Of Flexray Networks For In-vehicle Communication." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615568/index.pdf.

Full text
Abstract:
The increasing use of electronic components in today's automobiles demands more powerful in-vehicle network communication protocols. The FlexRay protocol, which is expected to be the de-facto standard in the near future, is a deterministic, fault tolerant and fast protocol designed for in-vehicle communication. For proper operation of a FlexRay network, the communication schedule needs to be computed and the nodes need to be configured before startup. Current software tools that are geared towards FlexRay only deal with the configuration process: the schedule needs to be computed manually by a network designer, and it is necessary to input the designed schedule and the configurable parameters by hand. This thesis improves upon previous scheduling software to automatically compute the network schedule, and then generate a universally acceptable FIBEX file that can be imported into available software tools to produce the necessary FlexRay node configuration files.
APA, Harvard, Vancouver, ISO, and other styles
35

Peters, Eduardo. "Coprocessador para aceleração de aplicações desenvolvidas utilizando paradigma orientado a notificações." Universidade Tecnológica Federal do Paraná, 2012. http://repositorio.utfpr.edu.br/jspui/handle/1/325.

Full text
Abstract:
This work presents a new hardware coprocessor to accelerate applications developed using the Notification-Oriented Paradigm (NOP), whose essence is a new form of causal influence based on punctual collaboration between granular, notifying entities. A NOP application offers the advantages of event-based programming and declarative programming, enabling high-level development, aiding code reuse and reducing the unnecessary processing found in applications developed with current paradigms. Since a NOP application is composed of a chain of small computational entities that communicate only when necessary, it is a good candidate for direct implementation in hardware. To investigate this assumption, a coprocessor capable of executing existing NOP applications was created. The coprocessor was developed in the VHDL language and tested on FPGAs, showing a 96% decrease in the number of clock cycles used by a program compared to a purely software implementation of the same application, considering a given materialisation in a NOP framework.
This work presents a new hardware coprocessor to accelerate applications developed using the Notification-Oriented Paradigm (NOP). A NOP application has the advantages of both event-based programming and declarative programming, enabling higher level software development, improving code reuse, and reducing the number of unnecessary computations. Because a NOP application is composed of a network of small computational entities communicating only when needed, it is a good candidate for a direct hardware implementation. In order to investigate this assumption, a coprocessor that is able to run existing NOP applications was created. The coprocessor was developed in VHDL and tested in FPGAs, providing a decrease of 96% in the number of clock cycles compared to a purely software implementation.
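As a generic software illustration of the notification principle described above (not the VHDL coprocessor itself), the sketch below shows attributes that notify only the premises depending on them, so that rules are re-evaluated only when a relevant value actually changes. All class and attribute names are illustrative.

```python
class Attribute:
    """Holds a value and notifies subscribed premises only when it changes."""
    def __init__(self, value):
        self._value = value
        self._premises = []

    def subscribe(self, premise):
        self._premises.append(premise)

    def set(self, value):
        if value != self._value:              # notify only on change
            self._value = value
            for premise in self._premises:
                premise.notify()

    @property
    def value(self):
        return self._value


class Premise:
    """Logical condition on one attribute; notifies its rule on state change."""
    def __init__(self, attribute, predicate, rule):
        self._attribute, self._predicate, self._rule = attribute, predicate, rule
        attribute.subscribe(self)
        self.state = predicate(attribute.value)

    def notify(self):
        new_state = self._predicate(self._attribute.value)
        if new_state != self.state:
            self.state = new_state
            self._rule.notify()


class Rule:
    """Fires its action when all of its premises are true."""
    def __init__(self, action):
        self._premises, self._action = [], action

    def add_premise(self, premise):
        self._premises.append(premise)

    def notify(self):
        if all(p.state for p in self._premises):
            self._action()


temperature = Attribute(20)
fan_rule = Rule(lambda: print("fan on"))
fan_rule.add_premise(Premise(temperature, lambda t: t > 30, fan_rule))
temperature.set(35)   # the change propagates and fires the rule exactly once
```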
APA, Harvard, Vancouver, ISO, and other styles
36

Mullane, Sarah. "Development of a user-centred design methodology to accommodate changing hardware and software user requirements in the sports domain." Thesis, Loughborough University, 2012. https://dspace.lboro.ac.uk/2134/10191.

Full text
Abstract:
The research presented in this thesis focuses on the development of wireless, real time performance monitoring technology within the resistance training domain using a user-centred design methodology. The functionality of current performance monitoring technology and differences in monitoring ability is investigated through comparative force platform, video and accelerometer testing and analysis. Determining the complexity of resistance training exercises and whether performance variable profiles such as acceleration, velocity and power can be used to characterise lifts is also investigated. A structured user-centred design process suitable for the sporting domain is proposed and followed throughout the research to consider the collection, analysis and communication of performance data. Identifying the user requirements and developing both hardware and software to meet the requirements also forms a major part of the research. The results indicate that as the exercise complexity increases, the requirement for sophisticated technology increases. A simple tri-axial accelerometer can be used to monitor simple linear exercises at the recreational level. Gyroscope technology is required to monitor complex exercises in which rotation of the bar occurs. Force platform technology is required at the elite level to monitor the distribution of force and resultant balance throughout a lift (bilateral difference). An integrated system consisting of an Inertial Measurement Unit (both accelerometer and gyroscope technology) and a double plate force platform is required to accurately monitor performance in the resistance training domain at the elite level.
APA, Harvard, Vancouver, ISO, and other styles
37

Cemin, Paulo Roberto. "Plataforma de medição de consumo para comparação entre software e hardware em projetos energeticamente eficientes." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1310.

Full text
Abstract:
The popularisation of mobile devices has driven research and development of low-power solutions. The evolution of these applications demands tools that allow different implementation alternatives to be evaluated, giving developers valuable information for creating energy-efficient solutions. This work developed a new power measurement platform that allows the energy efficiency of different algorithms implemented in software and in hardware to be compared. The platform is able to measure the energy consumption of a specific process running on a general-purpose processor with a standard operating system, and to compare the result with equivalent algorithms implemented on an FPGA. This allows the developer to split the application's processing between software and hardware so as to obtain the most energy-efficient solution. Compared with the state of the art, the measurement platform has three novel characteristics: support for measuring the consumption of both software and hardware; measurement of specific code sections executed by the processor; and support for dynamic clock changes. This work also shows how the developed platform has been used to analyse the energy consumption of network intrusion detection algorithms for probing attacks.
The large number of mobile devices increased the interest in low-power designs. Tools that allow the evaluation of alternative implementations give the designer actionable information to create energy-efficient designs. This paper presents a new power measurement platform able to compare the energy consumption of different algorithms implemented in software and in hardware. The proposed platform is able to measure the energy consumption of a specific process running in a general-purpose CPU with a standard operating system, and to compare the results with equivalent algorithms running in an FPGA. This allows the designer to choose the most energy-efficient software vs. hardware partitioning for a given application. Compared with the current state-of-the-art, the presented platform has four distinguishing features: (i) support for both software and hardware power measurements, (ii) measurement of individual code sections in the CPU, (iii) support for dynamic clock frequencies, and (iv) improvement of measurement precision. We also demonstrate how the developed platform has been used to analyze the energy consumption of network intrusion detection algorithms aimed at detecting probing attacks.
APA, Harvard, Vancouver, ISO, and other styles
38

Gafurzade, Elchin. "Development of an automation tool for data configuration of signaling systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
This thesis work, which was carried out at ALSTOM S. p. A., focuses on the communication between trackside equipment and the train, which is performed via various protocols so that the train has advance information about the state of the railway. The objective is to develop, after a careful analysis of the communication system, software for configuring the parameters of the database files. Chapter 1, after a brief review of the European regulatory environment for railways, explains the European Railway Traffic Management System (ERTMS), the European Train Control System (ETCS), and the radio-based telecommunications standard used (GSM-R). Chapter 2 outlines the software and hardware architecture of the Radio Block Center (RBC), in accordance with the specifications issued by the European Railway Agency (ERA), and describes the peripheral interfaces used. Finally, chapter 3, which is the core of the thesis work, illustrates an algorithm and software solution for data configuration implemented in Python, and examines the structure of parameters, conversion rules, and data types.
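As a hedged illustration of the kind of parameter conversion and type checking described in chapter 3, and not the actual ALSTOM tool, the sketch below defines a hypothetical parameter schema and converts raw text values into typed, range-checked fields.

```python
from dataclasses import dataclass

# Hypothetical parameter definition: name, expected type, and allowed range.
@dataclass
class Parameter:
    name: str
    dtype: type
    minimum: float
    maximum: float

    def convert(self, raw: str):
        """Convert a raw text value to the target type and range-check it."""
        value = self.dtype(raw)
        if not (self.minimum <= value <= self.maximum):
            raise ValueError(f"{self.name}={value} outside [{self.minimum}, {self.maximum}]")
        return value

# Example: a couple of invented trackside parameters.
schema = [Parameter("balise_group_id", int, 0, 16383),
          Parameter("max_speed_kmh", int, 0, 600)]
raw_row = {"balise_group_id": "1042", "max_speed_kmh": "250"}
configured = {p.name: p.convert(raw_row[p.name]) for p in schema}
print(configured)
```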
APA, Harvard, Vancouver, ISO, and other styles
39

Zeltner, Wolff Johannes. "Development of software for MALTE, a system for automated testing of line current supervision and interference monitoring devices." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-243639.

Full text
Abstract:
The aim of the project is to develop software to automatically test line current supervision and interference monitoring devices for Bombardier trains. The software, called MALTE, is to replace the manual testing done by an engineer, thereby freeing up the tester for other tasks and increasing the rigour of testing. The test software, written in LabVIEW, was developed in tandem with a hardware rack with interfaces to the train hardware, enabling communication between the two in order to set test conditions and simulate the environment encountered by the hardware on the train. When completed, MALTE was found to be an order of magnitude faster than a test engineer performing the tests, meaning a large saving in time and cost for the engineering team.
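The actual MALTE software is written in LabVIEW; purely as an illustration of the structure of such an automated test sequence (apply a stimulus through the rig, read the device under test, compare with the expected result), a small Python sketch with invented names and thresholds follows.

```python
# Illustrative only: all classes, names, and values below are invented.
def run_test_sequence(rig, device, cases):
    results = []
    for name, stimulus, expected in cases:
        rig.apply(stimulus)                # e.g. inject a simulated line current
        measured = device.read_status()    # response of the supervision device
        results.append((name, measured == expected))
    return results

class FakeRig:
    def apply(self, stimulus): self.level = stimulus

class FakeDevice:
    def __init__(self, rig): self.rig = rig
    def read_status(self): return "TRIP" if self.rig.level > 10.0 else "OK"

rig = FakeRig(); dut = FakeDevice(rig)
cases = [("nominal current", 5.0, "OK"), ("over-current", 15.0, "TRIP")]
for name, passed in run_test_sequence(rig, dut, cases):
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```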
APA, Harvard, Vancouver, ISO, and other styles
40

Ispir, Mustafa. "Test Driven Development Of Embedded Systems." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605630/index.pdf.

Full text
Abstract:
In this thesis, the Test Driven Development (TDD) method is studied for use in developing embedded software, and the required framework is written for the Rhapsody development environment. The thesis addresses the integration of TDD into a classical development cycle, without necessitating a transition to agile software development methodologies, together with the unit test framework required to apply TDD to an object-oriented embedded software development project under a specific development environment and specific project conditions. A software tool for unit testing is developed specifically for this purpose, both to support the proposed approach and to illustrate its application. The results show that RhapUnit supplies the testing functionality required for developing embedded software in Rhapsody with TDD. The development of RhapUnit is itself a successful example of the application of TDD.
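As an illustration of the test-first style the thesis applies, the sketch below shows a unit test written to drive a small piece of embedded-style logic. It uses Python's unittest rather than RhapUnit, and the debounce counter is an invented example, not code from the thesis.

```python
import unittest

class DebounceCounter:
    """Reports 'pressed' only after N consecutive raw samples are high."""
    def __init__(self, threshold=3):
        self.threshold, self.count = threshold, 0
    def sample(self, raw_high: bool) -> bool:
        self.count = self.count + 1 if raw_high else 0
        return self.count >= self.threshold

class TestDebounceCounter(unittest.TestCase):
    def test_ignores_glitches(self):
        d = DebounceCounter(threshold=3)
        self.assertFalse(d.sample(True))
        self.assertFalse(d.sample(False))   # glitch resets the count
        self.assertFalse(d.sample(True))
        self.assertFalse(d.sample(True))
        self.assertTrue(d.sample(True))     # third consecutive high sample

if __name__ == "__main__":
    unittest.main()
```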
APA, Harvard, Vancouver, ISO, and other styles
41

BHADRI, PRASHANT R. "DEVELOPMENT OF AN INTEGRATED SOFTWARE/HARDWARE PLATFORM FOR THE DETECTION OF CEREBRAL ANEURYSM BY QUANTIFYING BILIRUBIN IN CEREBRAL SPINAL FLUID." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1126815429.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Genster, C. [Verfasser], Livia [Akademischer Betreuer] Ludhová, and Achim [Akademischer Betreuer] Stahl. "Software and hardware development for the next-generation liquid scintillator detectors JUNO and OSIRIS / C. Genster ; Livia Ludhová, Achim Stahl." Aachen : Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/1221372300/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Koimtzis, Theodoros. "Development of Innovative Hardware and Software Concepts for Ion Mobility Spectrometry; Polymeric Instrumentation and Three-Dimensional Visualisation of Compensated Ion Mobility Data." Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.516343.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Oselame, Gleidson Brandão. "Desenvolvimento de software e hardware para diagnóstico e acompanhamento de lesões dermatológicas suspeitas para câncer de pele." Universidade Tecnológica Federal do Paraná, 2014. http://repositorio.utfpr.edu.br/jspui/handle/1/973.

Full text
Abstract:
Cancer is responsible for about 7 million deaths per year worldwide. An estimated 25% of all cancers are skin cancers, which in Brazil are the most frequent type in every geographic region. Among them is melanoma, responsible for 4% of skin cancers, whose incidence has doubled worldwide in the last ten years. Among the diagnostic methods employed is the ABCD rule, which considers the asymmetry (A), border (B), color (C) and diameter (D) of spots or nevi. Digital image processing has shown good potential to aid the early diagnosis of melanoma. The objective of this study was therefore to develop software, on the MATLAB® platform, together with hardware to standardize image acquisition, in order to diagnose and monitor skin lesions suspected of malignancy (melanoma). The ABCD rule guided the development of the computational analysis methods, and MATLAB was used as the programming environment for the digital image processing software. The images were acquired from two freely accessible image databases, comprising melanoma images (n=15) and nevus (non-cancer) images (n=15). RGB images were converted to grayscale, an 8x8 median filter was applied, followed by a 3x3 neighborhood approximation technique; the images were then binarized, with black and white inverted, for subsequent extraction of the lesion contour features. A hardware prototype was developed for standardized image acquisition; it was not employed in this study (which used images with a confirmed diagnosis from image databases) but was validated for assessing lesion diameter (D). Descriptive statistics were used, with the groups compared using the non-parametric Mann-Whitney U test for two independent samples, and the ROC curve was used to assess the sensitivity (SE) and specificity (SP) of each variable. The classifier was a radial basis function artificial neural network, which achieved a diagnostic accuracy of 100% for melanoma images and 90.9% for non-cancer images, giving an overall diagnostic accuracy of 95.5%. Regarding SE and SP, the proposed method obtained an area under the ROC curve of 0.967, suggesting excellent diagnostic predictive ability at low cost, since the software can run on most operating systems in use today.
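As a hedged illustration of the pre-processing pipeline the abstract describes (grayscale conversion, median filtering, binarization, contour-based features), the sketch below uses NumPy and SciPy rather than MATLAB. The mean-value threshold, the simple asymmetry and border measures, and the synthetic test image are assumptions for illustration only, not the thesis' actual parameters.

```python
import numpy as np
from scipy import ndimage

def segment_lesion(rgb):
    grey = rgb.mean(axis=2)                          # RGB -> grayscale
    smoothed = ndimage.median_filter(grey, size=8)   # 8x8 median filter
    mask = smoothed < smoothed.mean()                # assume lesion darker than skin
    return ndimage.binary_fill_holes(mask)

def abcd_features(mask):
    area = mask.sum()
    # Asymmetry: fraction of pixels that do not overlap the left-right mirror image.
    asymmetry = np.logical_xor(mask, mask[:, ::-1]).sum() / max(area, 1)
    # Rough border irregularity: edge pixels relative to the equivalent circle perimeter.
    edges = mask ^ ndimage.binary_erosion(mask)
    border = edges.sum() / max(2 * np.sqrt(np.pi * area), 1)
    return {"area_px": int(area), "asymmetry": float(asymmetry), "border": float(border)}

# Synthetic example: a dark elliptical "lesion" on a lighter background.
yy, xx = np.mgrid[0:200, 0:200]
img = np.full((200, 200, 3), 200.0)
img[((yy - 100) / 40) ** 2 + ((xx - 90) / 60) ** 2 < 1] = 60.0
print(abcd_features(segment_lesion(img)))
```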
APA, Harvard, Vancouver, ISO, and other styles
45

Kagerin, Anders, and Michael Karlsson. "Development of a low power hand-held device in a low budget manner." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7114.

Full text
Abstract:

The market for portable digital audio players (DAPs) has literally exploded in the last couple of years, and other markets have grown as well: PDAs, GPS receivers, mobile phones, and so on. This has resulted in more advanced ICs and SoCs becoming publicly available, eliminating the need for in-house ASICs and thus enabling smaller actors to enter these markets. This thesis explores the possibilities of developing a low-power, hand-held device on a very limited budget and a strict time scale. The thesis report also covers all the steps taken in the development procedure.
APA, Harvard, Vancouver, ISO, and other styles
46

Altintas, Nesip Ilker. "Feature-based Software Asset Modeling With Domain Specific Kits." Phd thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608682/index.pdf.

Full text
Abstract:
This study proposes an industrialization model, Software Factory Automation, for establishing software product lines. The major contributions of this thesis are the conceptualization of Domain Specific Kits (DSKs) and a domain design model for software product lines based on DSKs. The concept of the DSK has been inspired by the way other industries have successfully realized factory automation for decades. DSKs, as fundamental building blocks, are elaborated in depth with their characteristic properties and with several examples. The constructed domain design model has two major activities: first, building the product line reference architecture using the DSK abstraction; and second, constructing the reusable asset model, again based on the DSK concept. Both activities depend on the outputs of a feature-oriented analysis of the product line domain. The outcome of these coupled modeling activities is the reference architecture and asset model of the product line. The approach has been validated by constructing software product lines for two product families, and the reusability of DSKs and software assets is also discussed with examples. Finally, the constructed model is evaluated in terms of quality improvements and compared with other software product line engineering approaches.
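As a loose illustration of the idea that reusable assets are organised around DSKs identified through feature analysis, the toy sketch below models features that map to kits. The classes and example kits are invented and do not reproduce the thesis' metamodel.

```python
from dataclasses import dataclass, field

@dataclass
class DomainSpecificKit:
    name: str
    language_constructs: list[str]        # the kit's domain-specific building blocks

@dataclass
class Feature:
    name: str
    realised_by: DomainSpecificKit        # which kit implements this feature

@dataclass
class ProductLineAssetModel:
    features: list[Feature] = field(default_factory=list)

    def kits(self):
        """Reference-architecture view: the distinct DSKs the product line relies on."""
        return {f.realised_by.name for f in self.features}

# Invented example kits and features.
banking_rules = DomainSpecificKit("BusinessRuleKit", ["rule", "decision-table"])
ui_flows = DomainSpecificKit("ScreenFlowKit", ["screen", "transition"])
model = ProductLineAssetModel([Feature("loan-approval", banking_rules),
                               Feature("account-opening", ui_flows)])
print(model.kits())
```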
APA, Harvard, Vancouver, ISO, and other styles
47

França, André Luiz Pereira de. "Estudo, desenvolvimento e implementação de algoritmos de aprendizagem de máquina, em software e hardware, para detecção de intrusão de rede: uma análise de eficiência energética." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1166.

Full text
Abstract:
CAPES; CNPq
The increasing network speeds, number of attacks, and need for energy efficiency are pushing software-based network security to its limits. A common kind of threat is probing attacks, in which an attacker tries to find vulnerabilities by sending a series of probe packets to a target machine. This work presents the study, development, and implementation of a network packets feature extraction algorithm in hardware and three machine learning classifiers (Decision Tree, Naive Bayes, and k-nearest neighbors), in software and hardware, for the detection of probing attacks. The work also presents detailed results of classification accuracy, throughput, and energy consumption for each implementation.
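As a hedged illustration of the software side of such a comparison, the sketch below trains the three classifiers named in the abstract (Decision Tree, Naive Bayes, k-nearest neighbors) on synthetic flow features using scikit-learn. The features and data are invented stand-ins, not the thesis' dataset, and the hardware implementations are not represented.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Invented features: e.g. packets/s, distinct ports probed, mean payload size.
normal = rng.normal([100.0, 3.0, 400.0], [30.0, 2.0, 150.0], size=(500, 3))
probing = rng.normal([400.0, 40.0, 60.0], [80.0, 10.0, 30.0], size=(500, 3))
X = np.vstack([normal, probing])
y = np.array([0] * 500 + [1] * 500)   # 0 = benign traffic, 1 = probing attack
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for clf in (DecisionTreeClassifier(), GaussianNB(), KNeighborsClassifier(n_neighbors=5)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))
```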
APA, Harvard, Vancouver, ISO, and other styles
48

Armstrong, Janell. "State of Secure Application Development for 802.15.4." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/1776.

Full text
Abstract:
A wireless sensor network consists of small, limited-resource embedded systems exchanging environment data and activating controls. These networks can be deployed in hostile environments to monitor wildlife habitats, implemented in factories to locate mobile equipment, and installed in home environments to optimize the use of utilities. Each of these scenarios requires network security to protect the network data. The IEEE 802.15.4 standard is designed for WSN communication, yet the standard states that it is not responsible for defining the initialization, distribution, updating, or management of network public keys. Individuals seeking to research security topics will find that there are many 802.15.4-compliant development hardware kits available to purchase. However, these kits are not easily compared to each other without first-hand experience. Further, not all available kits are suitable for research in WSN security. This thesis evaluates a broad spectrum of 802.15.4 development kits for security studies. Three promising kits are examined in detail: Crossbow MICAz, Freescale MC1321x, and the Sun SPOT. These kits are evaluated based on their hardware, software, development environment, additional libraries, additional tools, and cost. Recommendations are made to security researchers advising which kits to use depending on their design needs and priorities. Suggestions are made to each company on how to further improve their kits for security research.
APA, Harvard, Vancouver, ISO, and other styles
49

Hong, Chuan. "Towards the development of a reliable reconfigurable real-time operating system on FPGAs." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8948.

Full text
Abstract:
In the last two decades, Field Programmable Gate Arrays (FPGAs) have rapidly developed from simple “glue-logic” into a powerful platform capable of implementing a System on Chip (SoC). Modern FPGAs achieve not only high performance compared with General Purpose Processors (GPPs), thanks to hardware parallelism and dedication, but also better programming flexibility than Application Specific Integrated Circuits (ASICs). Moreover, the hardware programming flexibility of FPGAs is further harnessed for both performance and manipulability, which makes Dynamic Partial Reconfiguration (DPR) possible. DPR allows a part or parts of a circuit to be reconfigured at run-time, without interrupting the rest of the chip’s operation. As a result, hardware resources can be exploited more efficiently, since chip resources can be reused by swapping hardware tasks in or out of the chip in a time-multiplexed fashion. In addition, DPR improves fault tolerance against transient errors and permanent damage; for example, Single Event Upsets (SEUs) can be mitigated by reconfiguring the FPGA to avoid error accumulation. Furthermore, power and heat can be reduced by removing finished or idle tasks from the chip. For all these reasons, DPR has significantly promoted Reconfigurable Computing (RC) and has become a very hot topic. However, since hardware integration is increasing at an exponential rate, and applications are becoming more complex with the growth of user demands, high-level application design and low-level hardware implementation are increasingly separated and layered. As a consequence, users can obtain little advantage from DPR without the support of system-level middleware. To bridge the gap between the high-level application and the low-level hardware implementation, this thesis presents important contributions towards a Reliable, Reconfigurable and Real-Time Operating System (R3TOS), which facilitates the user exploitation of DPR from the application level by managing the complex hardware in the background. In R3TOS, hardware tasks behave just like software tasks, which can be created, scheduled, and mapped to different computing resources on the fly. The novel contributions of this work are: 1) a novel implementation of an efficient task scheduler and allocator; 2) the implementation of a novel real-time scheduling algorithm (FAEDF) and two efficacious allocation algorithms (EAC and EVC), which schedule tasks in real time and circumvent emerging faults while maintaining more compact empty areas; 3) the design and implementation of a fault-tolerant microprocessor harnessing existing FPGA resources, such as Error Correction Code (ECC) and configuration primitives; 4) a novel symmetric multiprocessing (SMP)-based architecture that supports a shared-memory programming interface; and 5) two demonstrations of the integrated system, including a) the K-Nearest Neighbour classifier, a non-parametric classification algorithm widely used in various fields of data mining, and b) pairwise sequence alignment, namely the Smith-Waterman algorithm, used for identifying similarities between two biological sequences. R3TOS gives considerably higher flexibility to support scalable multi-user, multi-tasking applications, whereby resources can be dynamically managed with respect to user requirements and hardware availability. Benefiting from this, not only can the hardware resources be used more efficiently, but the system performance can also be significantly increased.
Results show that the scheduling and allocating efficiencies have been improved up to 2x, and the overall system performance is further improved by ~2.5x. Future work includes the development of Network on Chip (NoC), which is expected to further increase the communication throughput; as well as the standardization and automation of our system design, which will be carried out in line with the enablement of other high-level synthesis tools, to allow application developers to benefit from the system in a more efficient manner.
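The abstract names FAEDF, a real-time scheduling algorithm, among the contributions. As a hedged illustration of the earliest-deadline-first principle that such schedulers build on, and not a reproduction of FAEDF itself or of its fault- and area-aware allocation, a small Python sketch follows; all task names and timings are invented.

```python
import heapq

def edf_schedule(tasks):
    """tasks: list of (name, release_time, execution_time, deadline)."""
    time, ready, order = 0, [], []
    pending = sorted(tasks, key=lambda t: t[1])       # order by release time
    while pending or ready:
        while pending and pending[0][1] <= time:
            name, _, exe, dl = pending.pop(0)
            heapq.heappush(ready, (dl, name, exe))    # earliest deadline first
        if not ready:
            time = pending[0][1]                      # idle until the next release
            continue
        dl, name, exe = heapq.heappop(ready)
        time += exe                                   # run the chosen task to completion
        order.append((name, time, "ok" if time <= dl else "missed"))
    return order

# Invented task set: (name, release, execution time, deadline).
tasks = [("filter", 0, 2, 5), ("fft", 0, 3, 10), ("crc", 4, 1, 6)]
print(edf_schedule(tasks))
```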
APA, Harvard, Vancouver, ISO, and other styles
50

Blanc, Mickael Francois Henri. "Open source innovation in physical products : advantages and disadvantages, a corporate perspective." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/46952/1/Mickael_Blanc_Thesis.pdf.

Full text
Abstract:
A better understanding of Open Source Innovation in Physical Products (OSIP) might allow project managers to mitigate risks associated with this innovation model and process, while developing the right strategies to maximise OSIP outputs. In the software industry, firms have been highly successful using Open Source Innovation (OSI) strategies. However, OSI in the physical world has not been studied, leading to the research question: what advantages and disadvantages do organisations incur from using OSI in physical products? An exploratory research methodology supported by thirteen semi-structured interviews helped us build a seven-theme framework to categorise the advantage and disadvantage elements linked with the use of OSIP. In addition, the factors impacting advantage and disadvantage elements for firms using OSIP were identified as: degree of openness in OSIP projects; time of release of OSIP into the public domain; use of Open Source Innovation in Software (OSIS) in conjunction with OSIP; project management elements (project oversight, scope and modularity); firms' Corporate Social Responsibility (CSR) values; and the value of the OSIP project to the community. This thesis makes a contribution to the body of innovation theory by identifying advantage and disadvantage elements of OSIP. Then, from a contingency perspective, it identifies factors which enhance or decrease advantages, or mitigate or increase disadvantages, of OSIP. Finally, the research clarifies the understanding of OSI by clearly setting OSIP apart from OSIS. The main practical contribution of this work is to provide managers with a framework to better understand OSIP, as well as a model which identifies contingency factors that increase advantages and decrease disadvantages. Overall, the research allows managers to make informed decisions about when they can use OSIP and how they can develop strategies to make OSIP a viable proposition. In addition, this work demonstrates that the advantages identified in OSIS cannot all be transferred to OSIP; thus OSIP decisions should not be based upon OSIS knowledge.
APA, Harvard, Vancouver, ISO, and other styles