Dissertations / Theses on the topic 'Hardware identification'

Consult the top 27 dissertations / theses for your research on the topic 'Hardware identification.'

1

Krantz, Elias. "Experiment Design for System Identification on Satellite Hardware Demonstrator." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-71351.

Full text
Abstract:
The subject of this thesis is online parameter estimation for agile satellites. Accurate knowledge of parameters such as the moment of inertia and the centre of mass plays a crucial role in satellite attitude control and pointing performance. Typically, identification of such parameters is performed on the ground using post-processing algorithms. This thesis investigates the potential of performing the identification procedures in real time on board operating satellites, using only measurements available from typical satellite attitude sensors. The thesis covers the areas of system identification and modelling of spacecraft attitude dynamics. An algorithm based on the Unscented Kalman Filter is developed for online estimation of the spacecraft moment-of-inertia parameters. The proposed method is successfully validated, both in simulation environments and in practice using Airbus’ satellite hardware demonstrator INTREPID, a three-axis air-bearing table equipped with CMG actuators and typical attitude sensors.
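For illustration, the core of such an identification problem is Euler's rotational equation, tau = J*omega_dot + omega x (J*omega), which is linear in the unknown inertia entries. The sketch below is not from the thesis (the thesis uses an online UKF); it shows a minimal batch least-squares estimate of a diagonal inertia matrix from logged angular rates and applied torques, and the signal names and sampling step are assumptions.

```python
import numpy as np

def estimate_diagonal_inertia(omega, torque, dt):
    """Batch least-squares estimate of a diagonal inertia matrix.

    omega  : (N, 3) measured body angular rates [rad/s]
    torque : (N, 3) applied control torques [N*m]
    dt     : sampling interval [s]

    Euler's equation with J = diag(Jx, Jy, Jz):
        tau_x = Jx*dwx - (Jy - Jz)*wy*wz
        tau_y = Jy*dwy - (Jz - Jx)*wz*wx
        tau_z = Jz*dwz - (Jx - Jy)*wx*wy
    Each sample therefore contributes three rows that are linear in (Jx, Jy, Jz).
    """
    domega = np.gradient(omega, dt, axis=0)      # numerical angular acceleration
    rows, rhs = [], []
    for (wx, wy, wz), (dwx, dwy, dwz), tau in zip(omega, domega, torque):
        rows.append([dwx, -wy * wz, wy * wz])    # tau_x = Jx*dwx - Jy*wy*wz + Jz*wy*wz
        rows.append([wz * wx, dwy, -wz * wx])    # tau_y = Jx*wz*wx + Jy*dwy - Jz*wz*wx
        rows.append([-wx * wy, wx * wy, dwz])    # tau_z = -Jx*wx*wy + Jy*wx*wy + Jz*dwz
        rhs.extend(tau)
    J_diag, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return J_diag                                # [Jx, Jy, Jz]

# Hypothetical usage with logged telemetry (names are placeholders):
# J_hat = estimate_diagonal_inertia(omega_log, torque_log, dt=0.1)
```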
2

Linvåg, Elisabeth. "Co-design implementation of FPGA hardware acceleration of DNA motif identification." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8874.

Full text
Abstract:

Pattern matching in bioinformatics is a rapidly growing discipline with a great need for searching through large amounts of data. At NTNU, a prototype specified in VHDL has been developed for an FPGA solution that identifies short motifs or patterns in genetic data using a Position-Weight Matrix (PWM). But programming FPGAs in VHDL is a complicated and time-consuming process that requires intimate knowledge of how hardware works, and the prototype is not yet complete in terms of required functionality. Consequently, a desirable alternative is to make use of co-design languages to facilitate the use of hardware by a software developer, as well as to integrate the development environments for software and hardware. This thesis deals with the specification and implementation of a co-design based alternative to the existing VHDL based solution, as well as an evaluation of productivity versus final performance of the newly developed solution compared to the VHDL based one. The chosen co-design language is Impulse-C, created by Impulse Accelerated Technologies Inc., a co-design language designed for data-flow oriented applications but with the flexibility to support other programming models as well. The programming model simplifies the expression of highly parallel algorithms through the use of well-defined data communication, message passing and synchronization mechanisms. The affiliated development environment, CoDeveloper, contains tools that allow the FPGA system to be developed and debugged using Impulse-C. The software-to-hardware compiler and optimizer translates C-language processes to (RTL) VHDL code, while optimizing the generated logic and identifying opportunities for parallelism. Ease of use of the CoDeveloper environment is evaluated in this thesis, based on the author's experiences with the tools. In total, four variations of the Impulse-C solution have been implemented: a basic solution and a multicore solution, each in a floating-point and a fixed-point version. The implemented solutions are analyzed through various experiments described in this thesis, carried out in simulation using CoDeveloper. Attempts were made to get the solutions to run on the target platform, the Cray XD1 supercomputer Musculus, but these were unsuccessful; a wrong choice of properties and constraints in Xilinx ISE is believed to have caused the FPGA programming file to be generated incorrectly, and there was no time to confirm and correct this. Some information about device utilization and performance could still be extracted from the Xilinx ISE 'Static timing' and 'Place and route' reports.
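As background, scoring a sequence window against a position-weight matrix reduces to summing, per position, the matrix entry for the observed base; a sliding scan then reports windows above a threshold. The sketch below is a minimal software reference in Python, not the thesis's VHDL or Impulse-C code, and the example matrix and threshold are made up.

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def pwm_scan(sequence, pwm, threshold):
    """Slide a position-weight matrix over a DNA sequence.

    pwm       : (motif_length, 4) array of per-position scores (e.g. log-odds)
    sequence  : DNA string over A/C/G/T
    threshold : minimum window score to report

    Returns a list of (start_index, score) for windows scoring at or above threshold.
    """
    m = pwm.shape[0]
    hits = []
    for start in range(len(sequence) - m + 1):
        window = sequence[start:start + m]
        score = float(sum(pwm[i, BASES[base]] for i, base in enumerate(window)))
        if score >= threshold:
            hits.append((start, score))
    return hits

# Made-up 3-position motif preferring "ACG", scanned over a short sequence:
example_pwm = np.array([[ 2.0, -1.0, -1.0, -1.0],
                        [-1.0,  2.0, -1.0, -1.0],
                        [-1.0, -1.0,  2.0, -1.0]])
print(pwm_scan("TTACGGA", example_pwm, threshold=5.0))   # reports the ACG window
```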

3

Rask, Ulf, and Pontus Mannestig. "Improvement of hardware basic testing : Identification and development of a scripted automation tool that will support hardware basic testing." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3392.

Full text
Abstract:
In the ever-increasing development pace, circuits and hardware are no exception. Hardware designs grow and circuits get more complex at the same time as market pressure lowers the expected time-to-market. In this rush, verification methods often lag behind. Hardware manufacturers must be aware of the importance of total verification if they want to avoid quality flaws and broken deadlines, which in the long run lead to delayed time-to-market, bad publicity and a shrinking market share. This paper discusses how a basic testing team may use an automated test environment in order to establish intellectual control over the testing and verification in a large hardware project. Company-specific factors that influence the design of an automated test environment are analyzed and a suitable environment is suggested. A prototype of the environment is constructed so that the project results may be evaluated in the real world. The thesis supports the academic view that large chips are hard to verify and that script-automation tools are one way to make verification of larger chips possible. Hardware verification should be done without complicated and untested software, so that the debugging process only has the hardware to deal with. The thesis also indicates that an automated test tool increases the test rate, provides better test coverage and makes regression testing feasible.
4

Harder, Timothy A. "Identification of computer hardware and software used by the printing and publishing industry." Menomonie, WI : University of Wisconsin--Stout, 2005. http://www.uwstout.edu/lib/thesis/2005/2005hardert.pdf.

Full text
5

Black, Derek J. "Development and feasibility of economical hardware and software in control theory application." Thesis, Kansas State University, 2017. http://hdl.handle.net/2097/38170.

Full text
Abstract:
Master of Science
Department of Mechanical and Nuclear Engineering
Dale E. Schinstock
Control theory is the study of feedback systems and a subject investigated by many engineering students at most universities. Because of control theory's broad and interdisciplinary nature, it calls for further study through experimental learning and laboratory practice. Typically, the hardware used to connect the theoretical aspects of controls to the practical is expensive, large, and time consuming for the students and instructors working with the equipment. Alternatively, using cheaper sensors and hardware, such as encoders and motor drivers, can obfuscate the collected data in a way that creates a disconnect between the developed theoretical models and the actual system results. This disconnect can undermine the idea that systems can and will follow a modeled behavior. This thesis assesses the feasibility of a piece of laboratory apparatus named the NERMLAB. Multiple experiments are conducted on the NERMLAB system and compared against time-tested hardware to demonstrate the practicality of the NERMLAB system in control theory applications.
6

Farias, Marcos Santana. "Hardware reconfigurável para identificação de radionuclídeos utilizando o método de agrupamento subtrativo." Universidade do Estado do Rio de Janeiro, 2012. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=7451.

Full text
Abstract:
Radioactive sources contain radionuclides. A radionuclide is an atom with an unstable nucleus, i.e. a nucleus characterized by excess energy that is available to be emitted. In this process, the radionuclide undergoes radioactive decay and emits gamma rays and subatomic particles, constituting ionizing radiation. Radioactivity is thus the spontaneous emission of energy from unstable atoms. Correct radionuclide identification can be crucial to planning protective measures, especially in emergency situations, by defining the type of radiation source and its radiological hazard. This dissertation presents the application of the subtractive clustering method, in a hardware implementation, to an identification system for radioactive elements that allows rapid and efficient identification. In software implementations, clustering algorithms are usually demanding in terms of processing time, so a custom implementation on reconfigurable hardware is a viable choice for embedded systems that require real-time execution as well as low power consumption. The proposed architecture for the subtractive clustering hardware is scalable, allowing the inclusion of more subtractive clustering units operating in parallel; this provides greater flexibility to accelerate the processing according to the time and area constraints. The results show that the expected cluster centres can be identified efficiently, and the identification of these points can classify the radioactive elements present in a sample. Using the designed hardware it was possible to identify more than one cluster centre, which allows the recognition of more than one radionuclide in radioactive sources. These results reveal that the proposed hardware can be used to develop a portable system for radionuclide identification.
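For reference, the subtractive clustering method (Chiu, 1994) assigns each sample a potential based on its distance to all other samples, repeatedly picks the highest-potential point as a cluster centre, and then suppresses the potential near that centre. The sketch below is a plain Python/NumPy reference with a simplified single-threshold stopping rule, not the dissertation's hardware architecture; the radius and stopping parameters are typical textbook defaults rather than values from the thesis.

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=None, stop_ratio=0.3):
    """Minimal subtractive clustering (after Chiu, 1994).

    X          : (N, d) data, ideally normalized per dimension
    ra         : neighbourhood radius defining a cluster centre
    rb         : suppression radius (defaults to 1.5 * ra)
    stop_ratio : stop when the best remaining potential falls below
                 stop_ratio * (potential of the first centre)
                 (Chiu's original method uses separate accept/reject thresholds)
    Returns the selected cluster centres as a (k, d) array.
    """
    rb = 1.5 * ra if rb is None else rb
    alpha, beta = 4.0 / ra**2, 4.0 / rb**2
    sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    potential = np.exp(-alpha * sq_dist).sum(axis=1)

    centres, first_peak = [], None
    while True:
        k = int(np.argmax(potential))
        peak = potential[k]
        if first_peak is None:
            first_peak = peak
        elif peak < stop_ratio * first_peak:
            break
        centres.append(X[k])
        # Suppress potential around the newly selected centre.
        potential = potential - peak * np.exp(-beta * sq_dist[k])
    return np.array(centres)

# Toy example: two well-separated blobs should yield two centres.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.2, 0.03, (50, 2)), rng.normal(0.8, 0.03, (50, 2))])
print(subtractive_clustering(data, ra=0.3))
```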
7

Maki, Phyllis A. "Identification of entry-level clerical/secretarial skills and competencies and utilization of hardware and software applications in Clark County businesses." PDXScholar, 1990. https://pdxscholar.library.pdx.edu/open_access_etds/3496.

Full text
Abstract:
Business educators need to provide relevant career education and train students adequately for entry-level work and success in a dynamic and changing society. It is imperative, then, that we identify those skills, knowledge, and attitudes necessary for success not only in today's office but also in the office of the future. To determine the required competencies and skills, a survey of businesses in the Clark County area was completed. The questionnaire was designed to assess current computer usage and technical and nontechnical skill requirements.
8

Aykin, Murat Deniz. "Efficient Calibration Of A Multi-camera Measurement System Using A Target With Known Dynamics." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609798/index.pdf.

Full text
Abstract:
Multi-camera measurement systems are widely used to extract information about the 3D configuration or "state" of one or more real-world objects. Camera calibration is the process of pre-determining all the remaining optical and geometric parameters of the measurement system which are either static or slowly varying. For a single camera, this consists of the internal parameters of the camera device optics and construction, while for a multiple-camera system it also includes the geometric positioning of the individual cameras, namely the "external" parameters. Calibration is a necessary step before any actual state measurements can be made with the system. In this thesis, such a multi-camera state measurement system, and in particular the problem of procedurally effective and high-performance calibration of such a system, is considered. The thesis presents a novel calibration algorithm which uses the known dynamics of a ballistically thrown target object and employs the Extended Kalman Filter (EKF) to calibrate the multi-camera system. The state-space representation of the target state is augmented with the unknown calibration parameters, which are assumed to be static or slowly varying with respect to the state; this results in a "super-state" vector. The EKF algorithm is used to recursively estimate this super-state, hence yielding estimates of the static camera parameters. It is demonstrated both by simulation studies and by actual experiments that when the ballistic path of the target is processed by the improved versions of the EKF algorithm, the camera calibration parameter estimates asymptotically converge to their actual values. Since the image frames of the target trajectory can be acquired first and then processed off-line, subsequent improvements of the EKF algorithm include repeated and bidirectional versions in which the same calibration images are reused. The Repeated EKF (R-EKF) provides convergence with a limited number of image frames when the initial target state is accurately provided, while its bidirectional version (RB-EKF) improves calibration accuracy by also estimating the initial target state. The primary contribution of the approach is that it provides a fast calibration procedure with no need for standard or custom-made calibration target plates covering the majority of the camera field of view. Human assistance is also minimized, since all frame data are processed automatically and assistance is limited to making the target throws. The speed of convergence and accuracy of the results promise a field-applicable calibration procedure.
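The augmented-state ("super-state") idea can be illustrated compactly: append the unknown calibration parameters to the target's ballistic state and let an EKF estimate both from image measurements. The toy sketch below is not the thesis's multi-camera formulation; it assumes a single pinhole camera with known target depth and estimates only one calibration parameter (the focal length), and all numeric values are invented for illustration.

```python
import numpy as np

# Toy "super-state": ballistic target [X, Y, VX, VY] augmented with an
# unknown, slowly varying focal length f, observed by one pinhole camera
# at a KNOWN constant depth Z.
dt, g, Z = 0.02, 9.81, 10.0          # time step [s], gravity, known depth [m]
f_true = 800.0                        # "true" focal length [px] to be recovered

def predict_state(s):
    X, Y, VX, VY, f = s
    return np.array([X + VX * dt, Y + VY * dt, VX, VY - g * dt, f])

F = np.array([[1, 0, dt, 0, 0],       # Jacobian of predict_state (it is linear)
              [0, 1, 0, dt, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1]], dtype=float)

def measure(s):                        # pinhole projection of the target
    X, Y, _, _, f = s
    return np.array([f * X / Z, f * Y / Z])

def measure_jacobian(s):
    X, Y, _, _, f = s
    return np.array([[f / Z, 0, 0, 0, X / Z],
                     [0, f / Z, 0, 0, Y / Z]])

Q = np.diag([1e-6, 1e-6, 1e-4, 1e-4, 1e-4])   # process noise (f as a random walk)
R = np.diag([2.0, 2.0])                        # pixel measurement noise

rng = np.random.default_rng(1)
s_true = np.array([-2.0, 1.0, 2.0, 6.0, f_true])     # one thrown target
s_est = np.array([-2.0, 1.0, 2.0, 6.0, 500.0])       # poor initial focal-length guess
P = np.diag([0.1, 0.1, 0.1, 0.1, 1e5])

for _ in range(150):                   # about 3 s of frames from one throw
    s_true = predict_state(s_true)
    z = measure(s_true) + rng.normal(0.0, np.sqrt(R.diagonal()))
    # EKF predict
    s_est = predict_state(s_est)
    P = F @ P @ F.T + Q
    # EKF update
    H = measure_jacobian(s_est)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    s_est = s_est + K @ (z - measure(s_est))
    P = (np.eye(5) - K @ H) @ P

print(f"estimated focal length: {s_est[-1]:.1f} px (true {f_true} px)")
```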
9

Senses, Engin Utku. "Blur Estimation And Superresolution From Multiple Registered Images." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609929/index.pdf.

Full text
Abstract:
Resolution is the most important criterion for the clarity of detail in an image, so high-resolution images are required in numerous areas. However, obtaining high-resolution images has an evident technological cost, and this cost changes with the quality of the optical systems used. Image processing methods are used to obtain high-resolution images at low cost; this kind of image improvement is called superresolution (SR) image reconstruction. This thesis focuses on two main topics: identification methods for blur parameters, one of the degradation operators, and stochastic SR image reconstruction methods. The performances of different stochastic SR image reconstruction methods and blur identification methods are shown and compared. The identified blur parameters are then used in the superresolution algorithms and the results are presented.
10

Seyed, Saboonchi Nima. "Hardware Security Module Performance Optimization by Using a "Key Pool" : Generating keys when the load is low and saving in the external storage to use when the load is high." Thesis, KTH, Radio Systems Laboratory (RS Lab), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-158122.

Full text
Abstract:
This thesis project examines the performance limitations of Hardware Security Module (HSM) devices with respect to fulfilling the needs of security services in a rapidly growing security market in a cost-effective way, in particular the needs arising from the introduction of a new electronic ID system in Sweden (the Federation of Swedish eID) and how signatures are created and managed. The SafeNet Luna SA 1700 is one of the high-performance HSMs available on the current market. In this thesis the Luna SA 1700 capabilities are stated, and a comprehensive analysis of its performance shows a gap between what HSMs are currently able to do and what they need to do to meet the expected demands. A case study focused on the new security services needed for Sweden's e-identification federation is presented. Based upon the expected performance demands, this thesis project proposes an optimized HSM solution to close the identified gap between what is required and what current HSMs can provide. A series of tests was conducted to measure an existing HSM's performance, and an analysis of these measurements was used to devise an optimized solution for the selected HSM or similar HSMs. One of the main requirements of the new signing service is the capability to perform fifty digital signatures within an acceptable response time, which is 300 ms during normal hours and 3000 ms during peak hours. The proposed solution enables the HSM to meet the expected demand of 50 signing requests per second during the assumed two hours of peak load, at a cost that is 1/9 of the cost of simply scaling up the number of HSMs. The target audience of this thesis project is security service providers who use HSMs and need high-volume key generation and storage. HSM vendors might also consider this solution and add similar functionality to their devices in order to meet the expected demands and ensure a better position in this rapidly growing market.
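The "key pool" idea from the title can be sketched simply: a background producer fills a pool with pre-generated keys while the request load is low, and the signing path consumes from the pool when the load is high, falling back to on-demand generation only if the pool runs dry. The Python sketch below illustrates that scheduling idea only; the key-generation call is a stand-in (it does not talk to a real HSM), and the thresholds are invented.

```python
import collections
import secrets
import threading

class KeyPool:
    """Pre-generate keys during idle periods; serve them instantly under load."""

    def __init__(self, high_water=5000):
        self.pool = collections.deque()
        self.high_water = high_water      # stop refilling once the pool is this large
        self.lock = threading.Lock()

    def _generate_key(self):
        # Stand-in for the slow HSM key-generation call.
        return secrets.token_bytes(32)

    def refill(self, current_load, load_threshold=10, batch=100):
        """Called periodically by a background worker: top up the pool,
        but only while the request load is below load_threshold."""
        if current_load >= load_threshold:
            return 0
        added = 0
        while added < batch:
            with self.lock:
                if len(self.pool) >= self.high_water:
                    break
            key = self._generate_key()    # do the slow work outside the lock
            with self.lock:
                self.pool.append(key)
            added += 1
        return added

    def get_key(self):
        """Serve a pre-generated key if available, else generate on demand."""
        with self.lock:
            if self.pool:
                return self.pool.popleft()
        return self._generate_key()

# Hypothetical usage: a scheduler calls pool.refill(load) off-peak,
# and each signing request calls pool.get_key().
pool = KeyPool()
pool.refill(current_load=2)
key = pool.get_key()
```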
11

Signorini, Matteo. "Towards an internet of trust: issues and solutions for identification and authentication in the internet of things." Doctoral thesis, Universitat Pompeu Fabra, 2015. http://hdl.handle.net/10803/350029.

Full text
Abstract:
The Internet of Things is advancing slowly due to the lack of trust in devices that can interact autonomously. Standard solutions and new technologies have strengthened its security, but ubiquitous and powerful attackers remain an open problem that requires novel approaches. To address these issues, this thesis investigates the concepts of identity and authenticity. As regards identity, a new context-aware and self-enforced approach based on blockchain technology is proposed. With this solution, the standard paradigm focused on fixed identifiers is replaced with an attribute-based identification approach that delineates democratically approved names. With respect to authentication, new approaches are analyzed from both the online and offline perspectives to enable smart things to validate exchanged messages. Further, a new software approach for online scenarios is introduced which provides hardware-intrinsic properties without relying on any physical element. Finally, PUF technology is leveraged to design novel offline disposable authentication protocols.
12

Шевчук, Ю. В. "Програмно-апаратний контур для знаходження моделі об’єкта керування саеп та визначення оптимальних характеристик регулятора." Thesis, ВНТУ, 2016. http://conferences.vntu.edu.ua/index.php/all-feeem/all-feeem-2016/paper/view/274.

Full text
Abstract:
An approach is proposed for finding a linearized model of an electric drive system, using a PWM converter fed DC motor (ШІП-ДПС) as an example, with a specified degree of adequacy, from samples of input/output data in the System Identification Toolbox. The position loop of the automatic control system with an RS-385SH-2270 motor was tuned using Simulink.
13

Uzer, Ferit. "Camera Motion Blur And Its Effect On Feature Detectors." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612475/index.pdf.

Full text
Abstract:
Perception, and hence the use of visual sensors, is indispensable in mobile and autonomous robotics. Visual sensors such as cameras rigidly mounted on a robot frame are the most common usage scenario. In this case, the motion of the camera due to the motion of the moving platform, as well as the resulting shocks or vibrations, causes a number of distortions in video frame sequences. The two most important ones are the frame-to-frame changes of the line of sight (LOS) and the presence of motion blur in individual frames. The latter of these two, motion blur, plays a particularly dominant role in determining the performance of many vision algorithms used in mobile robotics. It is caused by the relative motion between the vision sensor and the scene during the exposure time of the frame. Motion blur is clearly an undesirable phenomenon in computer vision, not only because it degrades the quality of images but also because it causes other feature extraction procedures to degrade or fail. Although there are many studies on feature-based tracking, navigation and object recognition algorithms in the computer vision and robotics literature, there is no comprehensive work on the effects of motion blur on different image features and their extraction. In this thesis, a survey of existing models of motion blur and approaches to motion deblurring is presented. We review recent literature on motion blur and deblurring, and we focus our attention on motion blur induced degradation of a number of popular feature detectors. We investigate and characterize this degradation using video sequences captured by the vision system of a mobile legged robot platform. The Harris corner detector, the Canny edge detector and the Scale Invariant Feature Transform (SIFT) are chosen as the popular feature detectors most commonly used in mobile robotics applications. The performance degradation of these feature detectors due to motion blur is characterized in order to analyze the effect of legged locomotion on feature performance for perception. These analysis results are a first step towards the stabilization and restoration of video sequences captured by our experimental legged robotic platform and towards the development of a motion blur robust vision system.
14

Bolatli, Yurtseven. "Utility Analysis And Computer Simulation Of Rfid Technologies In The Supply Chain Applications Of Production Systems." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611261/index.pdf.

Full text
Abstract:
In this thesis, the feasibility of deploying RFID technologies in the case of "low-volume high-value" products is considered by focusing on the production processes of a real company. First, the processes of the company are examined and the associated problems are determined. Accordingly, a simulation of the current situation is constructed using the discrete event simulation technique in order to obtain an accurate model. In addition to modeling the current situation, this simulation model provides a flexible platform to analyze different scenarios and their effects on the company's production. Next, various scenarios including RFID technology deployment are examined, and their results are compared with respect to a profit analysis which takes into consideration the changes in production, work-in-process (WIP) inventory, stockouts, transportation and initial investment. Finally, the analysis of the results and conclusions are given in order to provide guidance for companies with "low-volume high-value" product portfolios.
15

Hentati, Raïda. "Implémentation d'algorithmes de reconnaissance biométrique par l'iris sur des architectures dédiées." Phd thesis, Institut National des Télécommunications, 2013. http://tel.archives-ouvertes.fr/tel-00917955.

Full text
Abstract:
In this thesis, we adapted three versions of an iris-based biometric recognition algorithm chain, called OSIRIS V2, V3 and V4, which correspond to different implementations of J. Daugman's approach, for the needs of a software/hardware implementation. Experimental results on the ICE2005 database show that OSIRIS_V4 is the most reliable system, while OSIRIS_V2 is the fastest. We proposed a quality measure for the segmented image in order to optimize, in terms of a cost/performance trade-off, a reference system based on OSIRIS V2 and V4. We then addressed the implementation of these algorithms on reconfigurable platforms. Experimental results show that the hardware/software implementation is faster than the purely software one. We also propose a new method for the hardware/software partitioning of the application, using linear programming to find the optimal partition of the different tasks while taking into account three constraints: occupied area, execution time and energy consumption.
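The hardware/software partitioning step described above can be posed as a small combinatorial optimization: assign each task to hardware or software so that, for example, energy is minimized while area and execution-time budgets are respected. The sketch below is a generic brute-force illustration of such a formulation, not the thesis's linear-programming method; all task numbers and budgets are invented.

```python
from itertools import product

# Per task: (name, sw_time, hw_time, hw_area, sw_energy, hw_energy) -- invented values.
tasks = [
    ("segmentation",   12.0, 3.0, 40, 9.0, 4.0),
    ("normalization",   6.0, 2.0, 25, 5.0, 2.5),
    ("feature_coding",  9.0, 2.5, 35, 7.0, 3.0),
    ("matching",        4.0, 1.5, 20, 3.0, 1.5),
]
AREA_BUDGET = 70      # available FPGA area (arbitrary units)
TIME_BUDGET = 20.0    # allowed execution time (arbitrary units)

best = None
for assignment in product((0, 1), repeat=len(tasks)):   # 1 = task mapped to hardware
    time = sum(t[2] if hw else t[1] for t, hw in zip(tasks, assignment))
    area = sum(t[3] if hw else 0 for t, hw in zip(tasks, assignment))
    energy = sum(t[5] if hw else t[4] for t, hw in zip(tasks, assignment))
    if area <= AREA_BUDGET and time <= TIME_BUDGET:
        if best is None or energy < best[0]:
            best = (energy, assignment)

energy, assignment = best
mapping = {t[0]: ("HW" if hw else "SW") for t, hw in zip(tasks, assignment)}
print(mapping, "energy:", energy)
```

For a few tasks exhaustive search is enough; with many tasks the same formulation is handed to an integer linear programming solver, which is the route the thesis takes.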
16

Шатний, Сергій В'ячеславович. "Інформаційна технологія обробки та аналізу кардіосигналів з використанням нейронної мережі." Diss., Національний університет "Львівська політехніка", 2021. https://ena.lpnu.ua/handle/ntb/56259.

Full text
Abstract:
The dissertation is devoted to the development and improvement of models, methods and means of an information technology for electrocardiogram processing, increasing the speed and accuracy of cardiac signal processing, reducing the size of the system intended for such processing, lowering its power consumption, and implementing the system on analog and digital element bases. The relevance of the dissertation topic is substantiated, the purpose and main research tasks are formulated, the scientific novelty of the work and the practical significance of the obtained results are determined, and the connection of the work with scientific topics is shown. Information on the approbation of the results, the personal contribution of the author and his publications is given. It was found that the efficiency of processing and analysis depends on the quality of signal pre-processing and on the nature of the signal itself. An analysis of approaches to the construction of biomedical signal processing systems showed the need to increase their efficiency. The results of the analysis of existing cardiac signal processing systems indicate that most of them have insufficient classification accuracy (not higher than 75%), low speed and high equipment cost caused by the monopoly of the manufacturing companies. A method is presented for analysing the electrocardiogram by determining the amplitude and duration of each of the P, Q, R, S and T segments. The pre-processing of cardiac signals has been improved through the use of neural networks for identification and filtering, and the classification of cardiac signals has been improved by means of a partially parallel fuzzy neural network. Software and hardware implementations of the information technology for cardiac signal processing are developed, including structural-functional schemes for input signal processing based on microcontrollers and programmable logic integrated circuits. Modelling and optimization of the data exchange between the structural elements of the system are carried out. The processing and analysis system is built with open, free and conditionally free software, in particular the GCC toolchain and the NI LabVIEW visual programming and simulation environment. FPGA-based devices were selected as the hardware platform: the NI RIO platform was used for the software and hardware simulation, while a platform based on Microchip microcontrollers and Altera FPGAs was selected to create, design and implement the prototype. Specialized software products for ECG pre-processing and analysis were developed, together with server tools for a remote web system implementing the logical "doctor-patient" model. Comparative analyses were performed against existing software and hardware platforms for cardiac signal processing, in particular the Holter device; the data show reduced energy consumption, increased accuracy of cardiac signal analysis, and a more compact system. Overall, the proposed tools allow a full range of medical research and enable the developed system to be deployed in medical and scientific institutions.
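As a point of reference for the kind of ECG feature extraction described above (amplitudes and durations of the P-QRS-T segments), the sketch below shows only the very first step, R-peak detection on a sampled ECG, using plain NumPy/SciPy rather than the dissertation's neural-network hardware; the sampling rate, filter band and thresholds are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs=360.0):
    """Locate R peaks in a single-lead ECG (a conventional baseline step,
    not the dissertation's neural-network approach).

    ecg : 1-D array of ECG samples
    fs  : sampling rate in Hz (assumed)
    Returns sample indices of detected R peaks.
    """
    # Band-pass roughly around the QRS energy band (5-15 Hz).
    b, a = butter(2, [5.0 / (fs / 2), 15.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # Emphasize the QRS complex: derivative, squaring, moving-window integration.
    energy = np.convolve(np.diff(filtered) ** 2, np.ones(int(0.15 * fs)), mode="same")
    # Peaks must be at least 200 ms apart (physiological refractory period).
    peaks, _ = find_peaks(energy, distance=int(0.2 * fs),
                          height=0.3 * np.max(energy))
    return peaks

# Hypothetical usage on a recorded lead sampled at 360 Hz:
# r_locations = detect_r_peaks(ecg_samples, fs=360.0)
```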
17

"Hardware Trojans: design and identification." 2014. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1291713.

Full text
Abstract:
Zhang, Jie.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2014.
Includes bibliographical references (leaves 179-192).
Abstracts also in Chinese.
Title from PDF title page (viewed on 7 November 2016).
18

LIAO, CHEN-TUNG, and 廖振東. "Design of Garlic Identification System using CNN Hardware Acceleration." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/62r7s4.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Science and Technology
Department of Electronic Engineering
Academic year 107
In the traditional way of processing garlic cloves, the cloves are placed on a conveyor belt and several workers visually inspect them and pick out the bad ones. The process requires many workers yet achieves low throughput and low identification accuracy. Therefore, this thesis presents an identification system that screens garlic cloves automatically. The system identifies bad garlic cloves through image recognition and sprays them off the belt through a control valve. A black box over the conveyor belt isolates ambient light, with an LED (Light Emitting Diode) lamp installed as the lighting source; a camera mounted above the box transmits images of the garlic cloves to a server. The system uses a CNN (Convolutional Neural Network) algorithm to learn and construct a model with a high identification rate, and uses an FPGA (Field Programmable Gate Array) to accelerate the identification of garlic clove quality. Bad cloves are detected by the image recognition model and the recognition results are sent over a wired network to the client; when bad cloves are identified, the client's air-valve control system sprays them off the conveyor belt. The screening system using the CNN algorithm with FPGA acceleration improves the identification accuracy of garlic cloves and increases the production efficiency of the processing line, reducing the processing cost of the agricultural products and making their market price more stable. The system achieves accuracy rates of 98% for identifying garlic cloves and 91% for sorting them separately.
19

Hsieh, Shao-Chien, and 謝紹乾. "Hardware-based Fast Connection Identification Architecturefor TCP/IP Offload Engine." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/89114580573946878984.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate Institute of Computer Science and Information Engineering
Academic year 93
In recent years, the rapid evolution of the Internet and technological advances in VLSI have pushed Ethernet bandwidth from 10 Mbps up to gigabit speeds. In a gigabit network environment, if TCP/IP, the dominant protocol suite on the Internet, is still processed by the conventional software method, packet processing performance falls far short of the network bandwidth, and protocol processing becomes the major bottleneck of the network system. As a result, a new technology, the TCP/IP Offload Engine (TOE), has been proposed. In a typical TOE architecture, besides the amount of protocol processing that is offloaded, the maximum number of supported connections largely determines whether the architecture is good or bad, and that maximum depends on the connection identification capability. This thesis discusses what connection identification means in a TOE and proposes a faster connection identification architecture for TOE. Finally, the performance of the proposed architecture is evaluated through simulation.
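Connection identification in this context means mapping each incoming segment's 4-tuple (source IP, source port, destination IP, destination port) to its connection state as quickly as possible; in hardware this is typically a hash or CAM lookup. The sketch below is only a software illustration of the lookup idea, not the architecture proposed in the thesis, and the field names and bucket count are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ConnectionTable:
    """Identify TCP connections by hashing the 4-tuple of each segment."""
    buckets: int = 4096
    table: list = field(default_factory=list)

    def __post_init__(self):
        self.table = [[] for _ in range(self.buckets)]   # chained hash buckets

    def _index(self, src_ip, src_port, dst_ip, dst_port):
        return hash((src_ip, src_port, dst_ip, dst_port)) % self.buckets

    def insert(self, src_ip, src_port, dst_ip, dst_port, state):
        self.table[self._index(src_ip, src_port, dst_ip, dst_port)].append(
            ((src_ip, src_port, dst_ip, dst_port), state))

    def lookup(self, src_ip, src_port, dst_ip, dst_port):
        key = (src_ip, src_port, dst_ip, dst_port)
        for stored_key, state in self.table[self._index(*key)]:
            if stored_key == key:        # resolve hash collisions by comparison
                return state
        return None                      # unknown connection -> slow path

# Hypothetical usage:
table = ConnectionTable()
table.insert("192.0.2.1", 49152, "198.51.100.7", 80, state={"seq": 1000})
print(table.lookup("192.0.2.1", 49152, "198.51.100.7", 80))
```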
20

Chu, Shu-Han, and 朱書漢. "Hardware/Software Co-Design of a Fuzzy Moving Direction Identification System." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/37138530706189465877.

Full text
Abstract:
Master's thesis
National Taiwan Normal University
Department of Applied Electronics Technology
Academic year 100
In this thesis, a hardware/software co-design approach is proposed to develop a fuzzy moving-direction identification system using the Altera DE2-70 board and fuzzy logic theory. Under the System on a Programmable Chip (SOPC) architecture, we take advantage of the hardware/software co-design framework: hardware circuits implemented on a Field Programmable Gate Array (FPGA) accelerate the processing of the historical trajectories of the target image, while direction counts are calculated by the Nios II CPU. A hardware circuit is also designed to identify the moving direction of the target object. Experimental results show that the proposed method is able to identify the movement direction of the target, providing an interactive man-machine interface to control the operation of a machine. The contents of this thesis can be divided into (1) the proposed algorithm and its software implementation, and (2) the hardware/software co-design of the proposed algorithm on the Altera DE2-70 development board to accelerate its execution.
21

Chen, Cheng Lin, and 陳鉦霖. "Software and Hardware Co-design of Computer Aided Impedimetric Epidural Identification Techniques." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/ag7f3h.

Full text
22

Chen, Shih-Ching, and 陳世卿. "The Relationships Among Dynamic Capabilities, Organization Identification and Competitive Advantages in Hardware Industry." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/g8bwuz.

Full text
Abstract:
Master's thesis
National Taichung University of Science and Technology
Master's Program, Department of Business Administration
Academic year 107
Firms in Taiwan's hardware manufacturing industry operate in a rapidly changing international environment, and compared with the domestic electronics manufacturing industry most of them lack resources and talent. The industry therefore has to cultivate unique capabilities and build a shared organizational identity in order to hold its place in the international arena. This research, addressing the hardware industry and based on the theory of David J. Teece, Gary Pisano and Amy Shuen (1997), verifies that firms gain competitive advantages for coping with an uncertain environment through dynamic capabilities. It also examines the relationships among dynamic capabilities, organization identification and competitive advantage in the hardware industry, and tests whether organization identification mediates the relationship between dynamic capabilities and competitive advantage, i.e. whether dynamic capabilities affect competitive advantage through organization identification. The sampling population consists of 425 firms that exhibited at the 2018 Taiwan Hardware Show. 308 questionnaires were distributed to relevant exhibitors and 203 were collected; 12 were excluded for being outside the industry, leaving 191 valid questionnaires, a response rate of 44.94%. Statistical methods including descriptive analysis, reliability analysis, factor analysis, correlation analysis, regression analysis, logistic regression, cluster analysis and ANOVA were used to verify the relationships among the factors and dimensions. The results indicate that: 1) the three dimensions of dynamic capabilities (processes, positions and paths) have a positive and significant impact on competitive advantage, which is composed of quality, delivery and cost; 2) the position dimension of dynamic capabilities is more significantly related to the cost advantage than to the other competitive advantages; 3) dynamic capabilities have a positive and significant impact on organization identification; 4) organization identification has a positive and significant impact on competitive advantage; and 5) organization identification positively but partially mediates the relationship between dynamic capabilities and competitive advantage. Based on these results, the research suggests that companies in Taiwan's hardware industry should reconfigure their dynamic capabilities and emphasize organization identification so as to build layers of competitive advantage in quality, cost and delivery.
23

Reineking, Tracy. "Methods and hardware for high speed real time processing for circuit board optical identification." 1994. http://catalog.hathitrust.org/api/volumes/oclc/32832444.html.

Full text
Abstract:
Thesis (M.S.)--University of Wisconsin--Madison, 1994.
Typescript. Description based on print version record. Includes bibliographical references (leaves 136-138).
24

Wu, Lien-tang, and 吳連堂. "Stability Derivatives Identification and Hardware-in-the-Loop Real-time Simulation of Unmanned Air Vehicle." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/79509639712233471729.

Full text
Abstract:
Master's thesis
Cheng Shiu University
Graduate Institute of Mechatronic Engineering
Academic year 94
The thesis explores three subjects: the optimal estimation of the dimensional stability derivatives in linear models of unmanned air vehicle (UAV) longitudinal and lateral dynamics, the control law design of the UAV autopilot, and the setup of a hardware-in-the-loop simulation (HILS) system for real-time control experiments. For the stability derivative identification, the raw data from UAV test flights were first carefully preprocessed to obtain clean input-output data for 6 flight sections of longitudinal motion and 8 flight sections of lateral motion. Second, the dimensional stability derivatives were optimally estimated by applying the nonlinear least squares method; the linear least squares method was also used to provide the initial guess for the nonlinear least squares method. As a result, 5 longitudinal models and 7 lateral models with a root mean square error (RMSE) of less than 5 were identified. Third, the impulse responses of the identified models were checked, and models that did not show proper cause-effect relations between input and output were eliminated. Finally, a union optimization procedure was proposed to obtain the final linear state-space model of the UAV longitudinal and lateral motion; the resulting model not only met the RMSE criterion for the different flight sections but also had a reasonable input-output relationship. In the UAV autopilot design, four feedback loops were considered: the altitude (pitch angle) loop, the velocity loop, the attitude (roll angle) loop, and the heading (yaw angle) loop. Optimal PID (proportional-integral-derivative) control laws were designed to satisfy the time-domain specifications using the Nonlinear Control Design Blockset in the computer-aided control system design software MATLAB/Simulink. For the real-time control experiments, a HILS system consisting of one PC, four data acquisition cards, and one PXI (PCI eXtensions for Instrumentation) embedded real-time controller was set up successfully. The graphical programming language LabVIEW, which provides convenient and tight integration with software and hardware, was applied for measurement and control in the HILS system. The simulations of the controller and the plant ran independently, and the communication between controller and plant took place at a fixed sampling frequency. The HILS results revealed that the inevitable time delay resulting from program computation seriously degrades the performance and stability of a real-time control system, especially when the sampling frequency is rather low; a non-real-time computer simulation cannot reveal this critical time-delay problem. The importance of HILS is thereby clearly confirmed.
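For reference, the PID control laws mentioned above have a standard discrete-time form; the sketch below is a generic textbook implementation in Python, not code from the thesis, and the gains, limits and sampling time are placeholders.

```python
class DiscretePID:
    """Textbook discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt, u_min=None, u_max=None):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        if self.u_max is not None and u > self.u_max:
            u = self.u_max            # simple output saturation
        if self.u_min is not None and u < self.u_min:
            u = self.u_min
        return u

# Placeholder gains for, e.g., a pitch-angle loop running at 50 Hz:
pitch_pid = DiscretePID(kp=2.0, ki=0.5, kd=0.1, dt=0.02, u_min=-1.0, u_max=1.0)
# command = pitch_pid.update(setpoint=desired_pitch, measurement=measured_pitch)
```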
25

Chi-ChungLiao and 廖啟仲. "A Electrical and Optogenetic Stimulation System for Epileptic Depression and Epilepsy Identification Algorithm Hardware Design." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/ph883p.

Full text
26

THANH, NGUYEN PHAN, and 阮潘青. "Digital Hardware Implementation of Radial Basis Function Neural Network and its Applications to System Identification and Control." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/88075108372817301237.

Full text
Abstract:
Doctoral dissertation
Southern Taiwan University of Science and Technology
Department of Electrical Engineering
Academic year 104
A digital hardware implementation of a radial basis function neural network (RBF NN) is studied in this dissertation. Firstly, the architecture of the RBF NN, which consists of an input layer, a hidden layer of nonlinear processing neurons with Gaussian functions and an output layer, is presented. A supervised learning mechanism based on the stochastic gradient descent method is applied to update the parameters of the RBF NN according to a specific cost function. Secondly, a very high speed integrated circuit hardware description language (VHDL) is adopted to describe the behaviour of the overall RBF NN and its related learning algorithm; the detailed analysis of the VHDL implementing the Gaussian function, the training mechanism and the whole neural network is illustrated. The data type uses a 32-bit Q24 format with two's complement operation, and a finite state machine (FSM) is applied to reduce the hardware resource usage. Furthermore, a mixed neural-network/microprocessor implementation is proposed to enhance the flexibility of the network design and provide a reconfigurable RBF NN architecture. Thirdly, based on an electronic design automation simulator link, a co-simulation framework built from Simulink and ModelSim is applied to verify the proposed VHDL code for performing the RBF NN function; in this co-simulation architecture the input stimuli and output responses run in Simulink while the RBF NN function executes in ModelSim. Finally, applications of the RBF NN to the identification and control of linear/nonlinear systems and of a permanent magnet synchronous motor (PMSM) drive system are taken as application cases to validate the effectiveness and correctness of the proposed digital hardware implementation of the RBF NN.
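As a software reference for the network described above (Gaussian hidden layer, linear output, parameters updated by stochastic gradient descent on a squared-error cost), the sketch below is a minimal NumPy version; it is not the VHDL design from the dissertation, and the network size, learning rate and toy target are arbitrary.

```python
import numpy as np

class RBFNetwork:
    """RBF NN with Gaussian hidden units, trained by stochastic gradient descent."""

    def __init__(self, n_inputs, n_hidden, lr=0.05, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.centers = rng.uniform(-1.0, 1.0, (n_hidden, n_inputs))
        self.sigmas = np.full(n_hidden, 0.5)
        self.weights = rng.normal(0.0, 0.1, n_hidden)
        self.bias = 0.0
        self.lr = lr

    def _hidden(self, x):
        diff = x - self.centers                       # (n_hidden, n_inputs)
        d2 = (diff ** 2).sum(axis=1)
        phi = np.exp(-d2 / (2.0 * self.sigmas ** 2))  # Gaussian activations
        return phi, diff, d2

    def predict(self, x):
        phi, _, _ = self._hidden(x)
        return phi @ self.weights + self.bias

    def train_step(self, x, target):
        """One SGD update on the squared-error cost 0.5 * (y - target)**2."""
        phi, diff, d2 = self._hidden(x)
        err = phi @ self.weights + self.bias - target
        # Gradients of the cost with respect to each parameter group.
        grad_w = err * phi
        grad_b = err
        grad_c = (err * self.weights * phi / self.sigmas ** 2)[:, None] * diff
        grad_s = err * self.weights * phi * d2 / self.sigmas ** 3
        self.weights -= self.lr * grad_w
        self.bias -= self.lr * grad_b
        self.centers -= self.lr * grad_c
        self.sigmas = np.maximum(self.sigmas - self.lr * grad_s, 0.05)  # keep widths positive
        return 0.5 * err ** 2

# Toy usage: fit y = sin(3x) on x in [-1, 1] from random samples.
rng = np.random.default_rng(0)
net = RBFNetwork(n_inputs=1, n_hidden=10, lr=0.05, rng=rng)
for _ in range(20000):
    x = rng.uniform(-1.0, 1.0, 1)
    net.train_step(x, np.sin(3.0 * x[0]))
print(net.predict(np.array([0.5])), np.sin(1.5))
```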
27

Lin, Yu-sheng, and 林祐聖. "Roller Bearing Defect Identification under Variable Rotating Speed Using Hilbert-Huang Transform and Amplitude Normalization via Hardware Implemented Order-Tracking Technique." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/43778807721053405346.

Full text
Abstract:
Master's thesis
National Central University
Department of Mechanical Engineering
Academic year 101
In this study, the Hilbert-Huang transform (HHT) is utilized for diagnosing roller bearing faults, such as outer-race defects, inner-race defects, roller defects and multiple faults, under variable rotation speed. The vibration signals are first acquired through the order-tracking technique, so that they are sampled at identical angle increments and are thus stationary with respect to the varying shaft speed. The envelope signals of the measurements are analyzed with the Hilbert-Huang transform, and the features of the faulted bearings are extracted by investigating the intrinsic mode functions (IMFs) as well as the marginal Hilbert spectra. The extracted features are then re-scaled through amplitude normalization, so that the vibration energy is not affected by the variable rotation speed. Finally, a support vector machine is employed to identify the individual bearing defects, and the same SVM structure is used to diagnose the occurrence of multiple faults in bearings.
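The envelope analysis mentioned above has a compact software form: the analytic signal from the Hilbert transform gives the vibration envelope, whose spectrum exposes the defect repetition frequencies, and dividing by a running RMS is one simple way to normalize amplitude. The sketch below is a generic NumPy/SciPy illustration of those two steps only, not the study's full HHT/EMD and SVM pipeline; the signal names, rates and modulation frequency are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(vibration, fs):
    """Amplitude-normalized envelope spectrum of a vibration signal.

    vibration : 1-D array of vibration samples (time- or angle-domain)
    fs        : sampling rate (samples per second, or per revolution for
                order-tracked data)
    Returns (frequencies, envelope spectrum magnitude).
    """
    # Normalize amplitude so the overall vibration level (which varies with
    # speed and load) does not dominate comparisons between records.
    normalized = vibration / (np.sqrt(np.mean(vibration ** 2)) + 1e-12)
    # Envelope via the analytic signal (Hilbert transform).
    envelope = np.abs(hilbert(normalized))
    envelope -= envelope.mean()                    # drop the DC component
    spectrum = np.abs(np.fft.rfft(envelope)) / len(envelope)
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return freqs, spectrum

# Synthetic check: a 3 kHz carrier modulated at 97 Hz (a stand-in for a
# bearing defect impact rate) should show a peak near 97 Hz in the envelope.
fs = 20_000
t = np.arange(0, 1.0, 1.0 / fs)
signal = (1.0 + 0.8 * np.cos(2 * np.pi * 97 * t)) * np.sin(2 * np.pi * 3000 * t)
freqs, spec = envelope_spectrum(signal, fs)
print(freqs[np.argmax(spec[1:]) + 1])              # ~97.0
```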