
Theses on the topic "Automates à registres"



Consult the top 24 theses for your research on the topic "Automates à registres".


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Exibard, Léo. "Automatic synthesis of systems with data". Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0312.

Full text
Abstract
We often interact with machines that react in real time to our actions (robots, websites, etc.). These are modelled as reactive systems, characterized by continuous interaction with their environment. The goal of reactive synthesis is to automatically generate a system from a specification of its behaviour, replacing the error-prone low-level development phase with high-level specification design. In the classical setting, the set of signals available to the machine is assumed to be finite. However, this assumption fails to model systems that process data drawn from a possibly infinite set (e.g. a client id, a sensor value, etc.). The goal of this thesis is to extend reactive synthesis to the case of data words. We study a model well suited to this more general setting and examine the feasibility of its associated synthesis problems. We also explore non-reactive systems, where the machine is not required to react immediately to its inputs.
2

Kuriakose, R. B. and F. Aghdasi. "Automatic student attendance register using RFID". Interim: Interdisciplinary Journal, Vol. 6, Issue 2. Central University of Technology, Free State, Bloemfontein, 2007. http://hdl.handle.net/11462/406.

Full text
Abstract
The purpose of this project is to investigate the application of Radio Frequency Identification (RFID) to an automatic student attendance register. The aim is that the students in any class can be recorded when they carry their student cards with them, without having to individually swipe the card or allocate special interaction time. The successful implementation of this proposal will facilitate such record keeping in a non-intrusive and efficient manner and will provide a platform for further research on the correlation between attendance and performance of the students. Related research opportunities are identified regarding the range of the parameters involved, ensuring that individual identifications do not clash and that interfacing challenges with the central record keeping are overcome.
3

Rueda, Cebollero Guillem. "Learning Cache Replacement Policies using Register Automata". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-212677.

Full text
Abstract
The processor is the basic unit of a computer that processes data stored in memory. Processing large amounts of data requires large memories; not all data is needed at the same time, and some data is needed faster than other data. For this reason, memory is structured in a hierarchy, from small and fast to large and slow. The cache memory is one of the fastest elements in the memory hierarchy and the closest to the processor. Processor design companies hide its characteristics, usually in confidential documentation that software developers cannot access. One of the most important characteristics kept secret in this documentation is the replacement policy. The most common replacement policies are known, but hardware designers may modify them for performance, cost, or design reasons. This obfuscation forces many developers to program defensively around, for example, the runtime: if a task must always finish within a certain time, the developer must assume the case requiring the most execution time (the "Worst Case Execution Time"), implying an underutilization of the processor. This project focuses on a new method to represent and infer the replacement policy: modelling replacement policies as automata and using a learning framework called LearnLib to infer them. This is not the first project trying to characterize cache memories; a previous project is the basis for finding a more general model to define replacement policies. The output of LearnLib is modelled as an automaton. To test the effectiveness of this framework, different replacement policies are simulated and verified. To interface with real cache memories, a program called hwquery was developed, which translates real cache requests for use in LearnLib.
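The modelling idea can be illustrated with a toy example: a replacement policy is a deterministic automaton whose state is the recency ordering of the cached lines and whose input symbols are memory accesses. The sketch below is an illustrative assumption, not the thesis code, which infers such automata with LearnLib rather than hand-coding them:

```python
class LRUSet:
    """Toy automaton view of an LRU replacement policy for one cache set.

    State: the cached lines ordered from most- to least-recently used.
    Input symbol: the line being accessed. Output: "hit" or "miss".
    (Illustrative sketch only; the thesis infers such automata with
    LearnLib instead of hand-coding them.)
    """

    def __init__(self, ways):
        self.ways = ways
        self.state = []  # most-recently-used line first

    def access(self, line):
        hit = line in self.state
        if hit:
            self.state.remove(line)
        elif len(self.state) == self.ways:
            self.state.pop()  # evict the least-recently-used line
        self.state.insert(0, line)
        return "hit" if hit else "miss"

lru = LRUSet(ways=2)
trace = [lru.access(x) for x in "ababca"]
# trace == ['miss', 'miss', 'hit', 'hit', 'miss', 'miss']
```

A learner such as LearnLib observes only such hit/miss traces through membership queries and reconstructs the automaton from them.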
4

Jouhet, Vianney. "Automated adaptation of Electronic Health Record for secondary use in oncology". Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0373/document.

Full text
Abstract
With the increasing adoption of Electronic Health Records (EHR), the amount of data produced at the patient bedside is rapidly increasing. Secondary use is thereby an important field to investigate in order to facilitate research and evaluation. In this work we discuss issues related to data representation and semantics within EHRs that need to be addressed in order to facilitate secondary use of structured data in oncology. We propose and evaluate ontology-based methods for integrating heterogeneous diagnosis terminologies in oncology. We then extend the resulting model to represent the tumoral disease and its links with diagnoses as recorded in the EHR. We then propose and implement a complete architecture combining a clinical data warehouse, a metadata registry, and semantic web technologies and standards. This architecture enables syntactic and semantic integration of a broad range of hospital information system observations. Our approach links data with external knowledge (an ontology) to provide a knowledge resource for an algorithm that identifies the tumoral disease from diagnoses recorded within EHRs. As it is based on ontology classes, the identification algorithm uses an integrated view of diagnoses, avoiding semantic heterogeneity. The proposed architecture, leading to an algorithm on top of an ontology, offers a flexible solution: adapting the ontology, for instance by modifying its granularity, provides a way to adapt the aggregation to specific needs.
5

Elrod, JoAnn Broeckel, Raina Merchant, Mohamud Daya, Scott Youngquist, David Salcido, Terence Valenzuela and Graham Nichol. "Public health surveillance of automated external defibrillators in the USA: protocol for the dynamic automated external defibrillator registry study". BMJ Publishing Group, 2017. http://hdl.handle.net/10150/623946.

Full text
Abstract
Introduction: Lay use of automated external defibrillators (AEDs) before the arrival of emergency medical services (EMS) providers on scene increases survival after out-of-hospital cardiac arrest (OHCA). However, AEDs that have been placed in public locations may not be ready for use when needed. We describe a protocol for AED surveillance that tracks these devices through time and space to improve public health and survival, as well as to facilitate research. Methods and analysis: Included AEDs are installed in public locations for use by laypersons to treat patients with OHCA before the arrival of EMS providers on scene. Included cases of OHCA are patients evaluated by organised EMS personnel and treated for OHCA. Enrolment of 10 000 AEDs annually will yield precision of 0.4% in the estimate of readiness for use. Enrolment of 2500 patients annually will yield precision of 1.9% in the estimate of survival to hospital discharge. Recruitment began on 21 Mar 2014 and is ongoing. AEDs are found by using multiple methods. Each AED is then tagged with a label bearing a unique two-dimensional (2D) matrix code; the 2D matrix code is recorded and the location and status of the AED tracked using a smartphone; these elements are automatically passed via the internet to a secure and confidential database in real time. Whenever the 2D matrix code is rescanned for any non-clinical or clinical use of an AED, the user is queried to answer a finite set of questions about the device status. The primary outcome of any clinical use of an AED is survival to hospital discharge. Results are summarised descriptively. Ethics and dissemination: These activities are conducted under a grant of authority for public health surveillance from the Food and Drug Administration. Results are provided periodically to participating sites and sponsors to improve public health and quality of care.
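The stated precision figures are consistent with the half-width of a 95% normal-approximation confidence interval for a proportion; the proportions assumed below (readiness near 95.6%, survival near 40%) are illustrative back-calculations, not values taken from the protocol:

```python
import math

def ci_half_width(p, n, z=1.96):
    """Half-width of the normal-approximation 95% confidence interval
    for a proportion p estimated from a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative proportions (assumptions, not from the protocol):
readiness = ci_half_width(0.956, 10_000)  # ~0.004, i.e. 0.4% precision
survival = ci_half_width(0.40, 2_500)     # ~0.019, i.e. 1.9% precision
```

With the sample sizes fixed, the quoted precision determines which underlying proportions the protocol authors plausibly assumed.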
6

Petersson, Håkan. "On information quality in primary health care registries". Linköping: Univ, 2003. http://www.bibl.liu.se/liupubl/disp/disp2003/tek805s.pdf.

Full text
7

Hauck, Shahram. "Automated CtP Calibration for Offset Printing : Dot gain compensation, register variation and trapping evaluation". Doctoral thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119366.

Full text
Abstract
Although offset printing has been and still is the most common printing technology for color print production, its output is subject to variations due to environmental and process parameters. It is therefore very important to frequently control the print production quality criteria in order to make the process predictable, reproducible and stable. One of the most important parts of modern industrial offset printing is the Computer to Plate (CtP) system, which exposes the printing plate. One of the most important quality criteria in printing is the dot gain level. Dot gain refers to an important phenomenon that causes printed elements to appear larger than the reference size sent to the CtP. It is crucial to keep dot gain within an acceptable range, defined for offset printing by ISO 12647-2. This is done by dot gain compensation methods in the Raster Image Processor (RIP). Dot gain compensation is, however, a complicated task in offset printing because of the huge number of parameters affecting dot gain. Another important quality criterion affecting print quality in offset is register variation, caused by misplacement of the printing sheet in the printing unit. Register variation causes tone value variations, gray balance variation and blurred image details. Trapping is another important print quality criterion that should be measured in an offset printing process. Trapping occurs when the inks in different printing units are printed wet-on-wet in a multi-color offset printing machine. Trapping affects the gray balance and makes the resulting colors of overlapped inks pale. In this dissertation three different dot gain compensation methods are discussed. The most accurate and efficient dot gain compensation method, which is non-iterative, has been tested, evaluated and applied in many offset printing workflows.
To further increase the accuracy of this method, an approach to effectively select the correction points of a RIP with a limited number of correction points has also been proposed. Correction points are the tone values that need to be set in the RIP to define a dot gain compensation curve. To fulfill the requirement of keeping register variation within the allowed range, it has to be measured and quantified. Two novel models that determine the register variation value are proposed in this dissertation: one based on spectrophotometry and the other on densitometry. The proposed methods have been evaluated by comparison with the industrial image-processing-based register variation model, which is expensive and not available in most printing companies. The results of all models were comparable, verifying that the proposed models are good alternatives to the image-processing-based model. The existing models determining trapping values are based on densitometric measurements and quantify the trapping effect as a percentage. In this dissertation, a novel trapping model is proposed that quantifies the trapping effect by a color difference metric, which is more useful and understandable for print machine operators. The comparison between the proposed trapping model and the existing models has shown very good correlation and verified that the proposed model has a bigger dynamic range. The proposed trapping model has also been extended to take into account the effect of ink penetration and gloss. The extended model has been tested on a high-gloss coated paper, and the results have shown that gloss and ink penetration can be neglected for this type of paper. Finally, an automated CtP calibration system for the offset printing workflow is introduced and described in this dissertation. This method is a good solution for generating the huge number of dot gain compensation curves needed for an accurate CtP calibration.
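The idea of a dot gain compensation curve in the RIP can be sketched as inverting the measured print curve at the correction points; the measured values below are made up for illustration and are not from the dissertation:

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation of y(x) on the sorted grid xs."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside measured range")

# Measured print curve at the RIP's correction points:
# nominal tone value sent to the CtP -> printed tone value (with dot gain).
nominal = [0, 25, 50, 75, 100]   # correction points (%)
printed = [0, 39, 68, 88, 100]   # made-up measurements (%)

def compensate(target):
    """Tone value to send to the CtP so the print measures `target`,
    obtained by inverting the measured curve (axes swapped)."""
    return interp(target, printed, nominal)

adjusted = compensate(50)  # send ~34.5% so that the print measures 50%
```

Feeding the adjusted value back through the measured curve should reproduce the target tone, which is exactly what a compensation curve in the RIP achieves.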
8

MANSOURI, NAZANIN. "AUTOMATED CORRECTNESS CONDITION GENERATION FOR FORMAL VERIFICATION OF SYNTHESIZED RTL DESIGNS". University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin982064542.

Full text
9

Tabani, Hamid. "Low-power architectures for automatic speech recognition". Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/462249.

Full text
Abstract
Automatic Speech Recognition (ASR) is one of the most important applications in the area of cognitive computing. Fast and accurate ASR is emerging as a key application for mobile and wearable devices. These devices, such as smartphones, have incorporated speech recognition as one of the main interfaces for user interaction. This trend towards voice-based user interfaces is likely to continue in the coming years, changing the way humans and machines interact. Effective speech recognition systems require real-time recognition, which is challenging for mobile devices due to the compute-intensive nature of the problem and the power constraints of such systems, and demands a huge effort from CPU architectures. GPU architectures offer parallelization capabilities which can be exploited to increase the performance of speech recognition systems. However, efficiently utilizing the GPU resources for speech recognition is also challenging, as the software implementations exhibit irregular and unpredictable memory accesses and poor temporal locality. The purpose of this thesis is to study the characteristics of ASR systems running on low-power mobile devices in order to propose different techniques to improve performance and energy consumption. We propose several software-level optimizations driven by power/performance analysis. Unlike previous proposals that trade accuracy for performance by reducing the number of Gaussians evaluated, we maintain accuracy and improve performance by effectively using the underlying CPU microarchitecture. We use a refactored implementation of the GMM evaluation code to ameliorate the impact of branches. Then, we exploit the vector unit available on most modern CPUs to boost GMM computation, introducing a novel memory layout for storing the means and variances of the Gaussians in order to maximize the effectiveness of vectorization.
In addition, we compute the Gaussians for multiple frames in parallel, significantly reducing memory bandwidth usage. Our experimental results show that the proposed optimizations provide 2.68x speedup over the baseline Pocketsphinx decoder on a high-end Intel Skylake CPU, while achieving 61% energy savings. On a modern ARM Cortex-A57 mobile processor our techniques improve performance by 1.85x, while providing 59% energy savings without any loss in the accuracy of the ASR system. Secondly, we propose a register renaming technique that exploits register reuse to reduce the pressure on the register file. Our technique leverages physical register sharing by introducing minor changes in the register map table and the issue queue. We evaluated our renaming technique on top of a modern out-of-order processor. The proposed scheme supports precise exceptions and we show that it results in 9.5% performance improvements for GMM evaluation. Our experimental results show that the proposed register renaming scheme provides 6% speedup on average for the SPEC2006 benchmarks. Alternatively, our renaming scheme achieves the same performance while reducing the number of physical registers by 10.5%. Finally, we propose a hardware accelerator for GMM evaluation that reduces the energy consumption by three orders of magnitude compared to solutions based on CPUs and GPUs. The proposed accelerator implements a lazy evaluation scheme where Gaussians are computed on demand, avoiding 50% of the computations. Furthermore, it employs a novel clustering scheme to reduce the size of the GMM parameters, which results in 8x memory bandwidth savings with a negligible impact on accuracy. Finally, it includes a novel memoization scheme that avoids 74.88% of floating-point operations. The end design provides a 164x speedup and 3532x energy reduction when compared with a highly-tuned implementation running on a modern mobile CPU. 
Compared to a state-of-the-art mobile GPU, the GMM accelerator achieves 5.89x speedup over a highly optimized CUDA implementation, while reducing energy by 241x.
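The memoization idea mentioned for GMM evaluation can be sketched as a lookup table keyed on quantized inputs; the 1-D Gaussian, the quantization step and the sample values below are assumptions for illustration, not the accelerator's actual scheme:

```python
import math

def make_memoized_gaussian(mean, var, quant=0.01):
    """Log-likelihood of a 1-D Gaussian, memoized on quantized inputs.

    Quantizing x before the table lookup lets repeated near-identical
    feature values reuse one cached result, skipping the floating-point
    work (a toy version of the memoization idea; parameters made up).
    """
    cache = {}

    def loglik(x):
        key = round(x / quant)
        if key not in cache:
            xq = key * quant
            cache[key] = -0.5 * (math.log(2 * math.pi * var)
                                 + (xq - mean) ** 2 / var)
        return cache[key]

    return loglik, cache

loglik, table = make_memoized_gaussian(mean=0.0, var=1.0)
for x in (0.500, 0.501, 0.504, 1.000):
    loglik(x)
# the first three inputs share one table entry, so the table holds 2 entries
```

The fraction of lookups served from the table corresponds to the floating-point operations the accelerator's memoization scheme avoids.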
10

Klitkou, Gabriel. "Automatisk trädkartering i urban miljö : En fjärranalysbaserad arbetssättsutveckling". Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-27301.

Full text
Abstract
Digital urban tree registers serve many purposes and facilitate the administration, care and management of urban trees within a city or municipality. Currently, mapping of urban tree stands is carried out manually with methods that are both laborious and time consuming. The aim of this study is to establish a way of working based on existing LiDAR data and orthophotos to automatically detect individual trees. Using the extensions LIDAR Analyst and Feature Analyst for ArcMap, a tree extraction was performed over the city district committee area of Östermalm in the city of Stockholm, Sweden. The results were compared to the city's urban tree register and validated by calculating precision and recall. This showed that Feature Analyst generated the result with the highest accuracy. The derived trees are represented by polygons, which despite their high accuracy makes the result unsuitable for detecting individual tree positions. Even though LIDAR Analyst produced a less precise tree mapping result, it detected individual tree positions satisfactorily, especially in areas with sparser, regular tree stands. The study concludes that the two tools complement each other and compensate for each other's shortcomings: Feature Analyst maps an acceptable tree coverage, while LIDAR Analyst more accurately identifies individual tree positions. Thus, a combination of the two results could be used for individual tree mapping.
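The precision and recall used for validation are the standard detection metrics; the counts below are made-up placeholders, not figures from the study:

```python
def precision_recall(tp, fp, fn):
    """precision: fraction of detected trees that are real;
    recall: fraction of real trees that were detected."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Made-up counts: 80 correct detections, 20 false detections, 40 missed trees.
p, r = precision_recall(tp=80, fp=20, fn=40)
# p == 0.8; r ≈ 0.667
```

A polygon-producing tool such as Feature Analyst can score high on these metrics while still leaving the position of each individual tree ambiguous, which is why the study combines both tools.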
11

Bezerra, Andrea Fernanda Fontes. "Geração de layout de interfaces gráficas baseado em ontologias para documentos do Registro Eletrônico em Saúde". Universidade Federal da Paraíba, 2014. http://tede.biblioteca.ufpb.br:8080/handle/tede/7828.

Full text
Abstract
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Health informatics is a domain that presents several challenges to be overcome. Electronic Health Records (EHR) are one of its most important subdomains, in charge of the storage, display, and manipulation of patient clinical information, among other tasks. EHR systems require domain flexibility, allowing modifications to the structure of documents without application recompilation or redeployment, for instance on a web server. Current approaches in the literature propose generic models to represent domain and presentation, without ontological definitions for user interface (UI) layout and style, which, when properly organized, improve user acceptance of the system. This work aims to develop a framework for generating layout and style for the graphical user interface of EHR documents, based on Web Ontology Language (OWL) ontologies and using restrictions. By centralizing and combining metadata from the biomedical and document domains, it was possible to apply layout and style to EHR documents using grids, including additional ontological definitions of presentation formats for the medical field, facilitating UI development and maintenance.
12

Косар, Ліля Петрівна and Lilya Kosar. "Автоматизоване робоче місце медичного працівника відділу реєстратури". Master's thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2020. http://elartu.tntu.edu.ua/handle/lib/33187.

Full text
Abstract
The project was carried out at the Department of Biotechnical Systems of Ternopil Ivan Puluj National Technical University.
В роботі проведено аналіз технічного завдання, аналітичний огляд відомих рішень та вибір напряму дослідження, розроблено математичну модель процесу роботи реєстратури, побудовано фізичну та логічну моделі. Представлено структуру і функціональне призначення програмного комплексу для автоматизації роботи реєстратури
The analysis of the technical task, the analytical review of the known decisions and the choice of the direction of research are carried out in the work, the mathematical model of process of work of the registry is developed, the physical and logical model is constructed. The structure and functional purpose of the software package for automation of the registry are presented
INTRODUCTION. CHAPTER 1, ANALYTICAL PART: 1.1 Analysis of the technical requirements; 1.2 Mathematical model of the registry's workflow; 1.3 Development of a structural-functional model for automating the polyclinic registry; 1.4 Conclusions to Chapter 1. CHAPTER 2, MAIN PART: 2.1 Development of the logical model of the polyclinic registry; 2.2 Development of the physical model of the registry's operation; 2.3 Conclusions to Chapter 2. CHAPTER 3, RESEARCH PART: 3.1 Structure and functional purpose of the software package for automating registry record-keeping; 3.2 Conclusions to Chapter 3. CHAPTER 4, OCCUPATIONAL HEALTH AND SAFETY IN EMERGENCIES: 4.1 Occupational health and safety; 4.2 Safety in emergencies; 4.3 Conclusions to Chapter 4. GENERAL CONCLUSIONS. REFERENCES. APPENDICES.
13

Martins, Renata Cristófani. "Codificação automática das causas de morte e seleção da causa básica de morte: a adaptação para o Brasil do software Iris". Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/6/6132/tde-21082012-151114/.

Full text
Abstract
Introduction - One way to increase the quality of cause-of-death statistics is to automate the coding process so that the rules are applied systematically. The Iris software is one system available for this purpose. Its main characteristics are that it follows the international mortality rules and is language independent. Objective - To produce a Portuguese dictionary for Iris and assess its completeness for coding causes of death. Methods - The dictionary was created from two sources: the electronic file of volume 1 of ICD-10 and the Thesaurus of the Classificação Internacional de Atenção Primária. Iris V4.0.34 was used; for manual coding, the codes written on death certificates by the Programa de Aprimoramento das Informações de Mortalidade no Município de São Paulo (PRO-AIM) of the Secretaria Municipal de Saúde of São Paulo served as the reference. Whenever Iris could not code the causes of death, adjustments were made to the dictionary or to the standardization table. Results - Iris is able to code the causes of death and select the underlying cause of death, both automatically; it is a recent piece of software under constant adjustment, it is language independent, and using it in a new country requires only a dictionary of causes of death. In the test evaluating the first version of the Portuguese dictionary, Iris showed satisfactory performance. It coded 50.6 per cent of death certificates directly and, after adjustments and additions to the dictionary and the standardization table, coded all lines in 94.44 per cent of certificates. Of the certificates not fully coded, 89.19 per cent contained a diagnosis from Chapter XX of ICD-10. Iris showed 63.1 per cent agreement on paired death certificates considering all causes of death at the full four-character ICD-10 code level. Conclusion - Making adjustments to the dictionary or the standardization table is part of the dictionary's development process, and this process is ongoing.
With new versions of Iris and improvements in the coding of external causes, progress will be made toward making it more compatible with the Brazilian reality. In addition, future versions of Iris with a more developed dictionary can meet the needs of automatic coding and improve the accuracy of cause-of-death data for public health studies.
14

Castilla, André Coutinho. "Instrumento de investigação clínico-epidemiológica em Cardiologia fundamentado no processamento de linguagem natural". Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/5/5131/tde-16022009-165641/.

Full text
Abstract
The Electronic Medical Record (EMR) is gradually replacing paper storage in clinical care settings. Most of the essential information contained in the EMR is stored as free narrative text, imposing several difficulties on automated data extraction and retrieval. Natural language processing (NLP) refers to computational linguistics tools whose main objective is text analysis using lexical, grammatical and semantic knowledge. This project describes the creation of a computational tool for clinical and epidemiologic queries on narrative medical texts. The proposed methodology uses the specialized natural language processor MEDLEE, developed for the English language. To use this processor on Portuguese medical texts, chest x-ray reports were machine translated into English. The machine translation (MT) was performed by the SYSTRAN software, a rule-based system customized with a specialized lexicon developed for this project. The result of serially coupling MT and NLP is tagged text, from which clinical findings were extracted by logical inference over an ontology. The experimental objective of this thesis was to investigate twenty-two clinical and radiological findings in 12,869 chest x-ray reports. Estimated sensitivity and specificity were 0.91 and 0.99, respectively, against a gold standard formed by the opinion of three radiologists. The results obtained indicate the viability of extracting clinical findings from chest x-ray reports by coupling MT and NLP. Consequently, future work could expand the number of investigated conditions and extend this methodology to other kinds of medical texts and to other languages.
15

Козирод, В. М. "Комплексна система захисту інформації клієнтської частини автоматизованої інформаційно-телекомунікаційній системи “Оберіг”". Thesis, Чернігів, 2021. http://ir.stu.cn.ua/123456789/24850.

Full text
Abstract
Козирод, В. М. Комплексна система захисту інформації клієнтської частини автоматизованої інформаційно-телекомунікаційній системи “Оберіг” : випускна кваліфікаційна робота : 125 "Кібербезпека" / В. М. Козирод ; керівник роботи В. І. Гур’єв ; НУ "Чернігівська політехніка", кафедра кібербезпеки та математичного моделювання . – Чернігів, 2021. – 70 с.
The purpose of this work is to create a comprehensive information security system for the client part of an automated information and telecommunication system in order to protect restricted-access information. The object of the study is the automated information and telecommunication system of the Unified State Register of Persons Liable for Military Service, which requires protection of restricted-access information from disclosure, leakage and unauthorized access. The subject of the research is the protection of restricted-access information (personal data of persons liable for military service and of conscripts) on the client part of the automated information and telecommunication system. The research method is the use of an interconnected set of organizational and engineering measures, tools and methods of information protection on the client part of the system: organizational measures and methods were used to create the information security concept, while engineering and technical measures were applied to protect restricted-access information from disclosure, leakage and unauthorized access.
Results and novelty: to create the comprehensive information security system on the client part of the automated information and telecommunication system, the following tasks were solved: 1) the security of restricted-access information during its processing on the client part was ensured; 2) cryptographic protection of information on the client part was organized; 3) the procedure for granting access to the means and key documents of the cryptographic information protection complex was defined; 4) anti-virus protection of official information on the client part was provided; 5) the placement, special equipment, guarding and security regime of the premises housing the client part were defined; 6) the delimitation of access to the system and its resources on the client part was planned. Field of application: the comprehensive information protection system on the client part of the automated information and telecommunication system can be used to protect personal and official data held in the Unified State Register of Persons Liable for Military Service.
16

Jaume, Bennasar Andrés. "Las nuevas tecnologías en la administración de justicia. La validez y eficacia del documento electrónico en sede procesal". Doctoral thesis, Universitat de les Illes Balears, 2009. http://hdl.handle.net/10803/9415.

Full text
Abstract
The thesis analyses, on the one hand, the integration and development of the new technologies in the Administration of Justice and, on the other, the parameters that constitute the validity and efficacy of the electronic document.
The first question centres on the configuration of the Information Systems of the Judicial Office and the Public Prosecutor's Office, as well as the computerisation of the Civil Registers, where art. 230 LOPJ is the key provision. Their programmes and applications, videoconferencing, judicial files and the telecommunication networks covered by recognised electronic signatures are studied; here, technological collaboration agreements gain great relevance. The digitalisation of hearings may be one of the most consequential questions, bearing in mind that the trial is the act that culminates the process. Not all the projects adopted within the scope of e-justice, however, have been developed completely or have reached all the judicial organs. The final objective is to achieve a more agile, higher-quality Justice, to which the recently approved Strategic Plan for the Modernisation of Justice 2009-2012 aspires.
With reference to the second perspective, there is no doubt that the legal system and the courts, in the field of substantive justice, grant full validity and efficacy to the electronic document. Our line of investigation is justified because more and more proceedings incorporate electronic media of all kinds, whether when the action is brought or later as a means of proof (art. 299.2 LEC). Among other topics, we examine the computerised document, the problems surrounding the fax, video-recording systems and the electronic contract.
17

Tröger, Ralph. "Supply Chain Event Management – Bedarf, Systemarchitektur und Nutzen aus Perspektive fokaler Unternehmen der Modeindustrie". Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-155014.

Full text
Abstract
Supply Chain Event Management (SCEM) is a subdiscipline of supply chain management and offers companies a starting point for optimising logistics performance and costs by reacting early to critical exception events in the value chain. Owing to conditions such as global logistics structures, a high variety of articles and volatile business relationships, the fashion industry is among the sectors particularly vulnerable to critical disruptive events. Against this background, after covering the essential foundations, this dissertation first examines whether there actually is a need for SCEM systems in the fashion industry. Building on this, after presenting existing SCEM architecture concepts, it sets out design options for a system architecture based on the design principles of service orientation; in this context, SCEM-relevant business services are also identified. The advantages of a service-oriented design are illustrated in detail using the EPCIS (EPC Information Services) specification. The work is rounded off by a consideration of the potential benefits of SCEM systems: after presenting approaches suitable for determining these benefits, the benefit is demonstrated using a practical example and, together with the results of a literature review, consolidated into a set of SCEM benefit effects. It is also examined which additional advantages a service-oriented architecture design offers companies. The conclusion summarises the key findings of the work and, in an outlook, discusses both the relevance of the results for mastering future challenges and the starting points for subsequent research.
18

Wang, Jiing-Yuh and 王景裕. "An Automatic Map Processing System for Land Register Map". Thesis, 1994. http://ndltd.ncl.edu.tw/handle/64370741694584901163.

Full text
19

Yu, Huan. "Automated Segmentation of Head and Neck Cancer Using Texture Analysis with Co-registered PET/CT images". Thesis, 2010. http://hdl.handle.net/1807/24920.

Full text
Abstract
Radiation therapy is often offered as the primary treatment for head and neck cancer (HNC). Accurate target delineation is essential for the success of radiation therapy. The current target definition technique - manual delineation using Computed Tomography (CT) - is subject to high observer variability. Functional imaging modalities such as 2-[18F]-fluoro-2-deoxy-D-glucose Positron Emission Tomography (FDG-PET) can greatly improve the visualization of the tumor. FDG-PET co-registered with CT has shown potential to improve the accuracy of target localization and reduce observer variability. Unfortunately, due to the limitations of PET, the degree of improvement obtained by qualitative and simple quantitative (e.g. thresholding) use of FDG-PET is not ideal. However, both PET and CT images contain a wealth of texture information that could be used to improve the accuracy of target definition. This thesis investigated the use of texture analysis techniques to automatically delineate radiation targets. Firstly, PET and CT texture features with high discrimination ability were identified and a texture analysis technique - a decision tree based K Nearest Neighbour (DTKNN) classifier - was developed. DTKNN could accurately classify head and neck tissue with an area under the curve (AUC) of the Receiver Operating Characteristic (ROC) of 0.95. Subsequently, an automated target delineation technique - the CO-registered Multi-modality Pattern Analysis Segmentation System (COMPASS) - was developed that can delineate tumor on a voxel-by-voxel basis. COMPASS was found to accurately delineate HNC with 84% sensitivity and 95% specificity on a per-voxel, per-patient basis. To accurately evaluate the utility of COMPASS in radiation targeting, a validation method was developed that combines biased observers' contours to generate a probabilistic reference, based on maximum likelihood analysis using a simulated annealing (SA) algorithm.
The results of this thesis show that texture features of both PET and CT images can enhance the discrimination between HNC and normal tissue, and that an automated delineation method using texture analysis of PET and CT images can accurately and consistently define radiation targets in the head and neck. This suggests that automated segmentation of radiation targets based on texture analysis techniques may significantly reduce observer variability and improve the accuracy of radiation targeting.
20

Karakaya, Fuat. "Automated exploration of the asic design space for minimum power-delay-area product at the register transfer level". 2004. http://etd.utk.edu/2004/KarakayaFuat.pdf.

Full text
Abstract
Thesis (Ph. D.)--University of Tennessee, Knoxville, 2004.
Title from title page screen (viewed May 13, 2004). Thesis advisor: Donald W. Bouldin. Document formatted into pages (x, 102 p. : ill. (some col.)). Vita. Includes bibliographical references (p. 99-101).
21

Markel, Daniel. "Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in Co-registered 18-FDG PET/CT Images". Thesis, 2011. http://hdl.handle.net/1807/31332.

Full text
Abstract
Variability between oncologists in defining the tumor during radiation therapy planning can be as high as 700% by volume. Robust, automated definition of tumor boundaries has the potential to significantly improve treatment accuracy and efficiency. However, the information provided by computed tomography (CT) is not sensitive enough to the differences between tumor and healthy tissue, and positron emission tomography (PET) is hampered by blurriness and low resolution. The textural characteristics of thoracic tissue were investigated and compared with those of tumors found within 21 patients' PET and CT images in order to enhance the differences, and the boundary, between cancerous and healthy tissue. A pattern recognition approach was used to learn the textural characteristics of each from these samples and to classify voxels as either normal or abnormal. The approach was compared to a number of alternative methods and found to have the highest overlap with an oncologist's tumor definition.
22

Yang, Shih-Lii and 楊世禮. "Handwritten Numeral Recognition Based on the Neural Network and Its Application in an Automatic Score Register System". Thesis, 1997. http://ndltd.ncl.edu.tw/handle/62315500368623754715.

Full text
Abstract
Master's degree
Tamkang University
Department of Electrical Engineering
Academic year 85 (1996-97)
Handwritten numeral recognition has high potential in many everyday applications, such as automatic score register systems, license-plate data verification, ZIP code recognition, etc. As a result, in recent years many researchers have proposed methods and systems for handwritten numeral recognition. In this thesis, the author proposes a handwritten digit recognition system based on a supervised HyperRectangular Composite Neural Network (HRCNN) and applies it to an automatic score register system. The system is composed of three parts - preprocessing, numeral extraction, and recognition - and is applied both to the handwritten scores and to the printed serial numbers on examination papers. In the first stage, the image of the paper is acquired as input and processed, for example by image binarization and segmentation. In the second stage, object labeling is used to extract the connected components in the image; the connected components are used to locate the serial number. In the third stage, nonlinear normalization is performed to obtain a normalized image for recognition; its purpose is to produce an image of fixed size and to adjust the stroke density adequately. Features based on localized arc patterns are extracted from the normalized image and used as input to the HRCNN, which produces the recognition result. Handwritten numerals of 70 persons were collected as the data set; each person wrote the numerals from 0 to 9 six times. Three of these repetitions were used as the training set and the rest as the testing set, with good results. A further 80 examination papers, 20 from each of four teachers, were used for testing.
The recognition rate on the serial numbers was 100%, since they are printed numerals. On the handwritten scores, a recognition rate of 93.75% was obtained.
23

Barros, Ana Rita Amaro. "Classificação automática de registos eletrónicos médicos por diagnóstico". Master's thesis, 2020. http://hdl.handle.net/10071/21974.

Full text
Abstract
The growing implementation of electronic medical record (EMR’s) systems in Hospitals, to support individual patient care, is causing an increase in the processing and storage of clinical data daily. These records contain an endless source of clinical information, however, the fact that there is no structure in the text produced by doctors and the fact that the information entered differ from patient to patient and from medical speciality to medical speciality, makes it difficult to use these data. Another difficulty that exists in the analysis of this type of data is to be able to create a system capable of extracting detailed information present in the EMR's, in order to help health professionals to reduce the error rate of diagnosis, predicting the type of disease of the patient. Currently, to overcome this challenge, hospitals carry out this process manually, however, this process is long and susceptible to errors. This dissertation intends to propose a solution to this problem, using techniques of Natural Language Processing and Machine Learning, in order to allow a system that allows the extraction of clinical knowledge and respective classification of EMR by type of disease/diagnosis, from an automatically. This system was developed in Portuguese language since all existing medical knowledge extraction systems are developed for English. This scenario aims to help in the evolution of the use of the information contained in the EMR’s and, consequently, aims to contribute to the growth of this type of systems within the Portuguese hospital involved in this dissertation.
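The classification task this abstract describes can be illustrated with a minimal sketch: a bag-of-words naive Bayes text classifier in pure Python. This is an illustrative assumption, not the dissertation's actual pipeline, and the toy Portuguese "records" and diagnosis labels below are invented for demonstration.

```python
from collections import Counter
import math

def tokenize(text):
    """Whitespace tokenizer, lowercased; real pipelines would do more."""
    return text.lower().split()

class NaiveBayesEMR:
    """Multinomial naive Bayes over bag-of-words features."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha           # Laplace smoothing constant
        self.word_counts = {}        # label -> Counter of word frequencies
        self.label_totals = Counter()  # label -> number of training records
        self.vocab = set()

    def fit(self, records, labels):
        for text, label in zip(records, labels):
            self.word_counts.setdefault(label, Counter())
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)
            self.label_totals[label] += 1

    def predict(self, text):
        n_docs = sum(self.label_totals.values())
        best_label, best_lp = None, -math.inf
        for label, counts in self.word_counts.items():
            # log prior + sum of smoothed log likelihoods
            lp = math.log(self.label_totals[label] / n_docs)
            total = sum(counts.values())
            for w in tokenize(text):
                lp += math.log((counts[w] + self.alpha)
                               / (total + self.alpha * len(self.vocab)))
            if lp > best_lp:
                best_label, best_lp = label, lp
        return best_label

# Invented toy "records": symptom keywords labelled by diagnosis group.
records = [
    "febre tosse dispneia",
    "tosse seca febre alta",
    "dor toracica palpitacoes",
    "palpitacoes arritmia dor",
]
labels = ["respiratoria", "respiratoria", "cardiaca", "cardiaca"]

clf = NaiveBayesEMR()
clf.fit(records, labels)
print(clf.predict("febre tosse"))  # -> respiratoria
```

In practice the dissertation's system would work on full free-text records and a much larger label set; the smoothing constant keeps unseen words from zeroing out a class.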
24

Rainho, Inês Margarida Louro. "Validação das Folhas de Cálculo Eletrónicas dos Produtos Acabados e de Estabilidade dos Laboratórios Vitória, S.A". Master's thesis, 2018. http://hdl.handle.net/10362/58225.

Full text
Abstract
Electronic spreadsheets are widely used in the pharmaceutical industry to process and record information. They are treated as computerised systems and, when stored electronically, as electronic records, so it is essential that they comply with all the Good Manufacturing Practices covering these two aspects and that they are validated to ensure both this compliance and the correctness of the spreadsheets themselves. Accordingly, one of the main objectives of this work is the validation of the finished-product and stability spreadsheets used in the Quality Control Laboratory of Laboratórios Vitória S.A., following the method of the International Society for Pharmaceutical Engineering published in Good Automated Manufacturing Practice 5. To this end, a risk analysis was carried out, identifying 57 hazards arising from the development and use of the spreadsheets, along with the corresponding mitigation measures. The risk analysis determined that the impact of using the spreadsheets is high, which, combined with their categorisation, guided the choice of validation approach. Before validation, all spreadsheets were checked for errors, and the measures that eliminate or mitigate the hazards were implemented. Criteria for the development of electronic spreadsheets were also introduced, allowing their standardisation and detailed verification. A comparison of the spreadsheets before and after the introduction of these criteria showed that some criteria had never been met and others only irregularly; implementing them improved the spreadsheets.
Finally, 15 electronic spreadsheets were validated, although they cannot yet be declared fully validated, since the tests covering the measures not yet implemented were not performed.
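The risk analysis this abstract mentions can be sketched with an FMEA-style risk priority number, a common choice in GAMP 5 assessments. The 1-5 scales and class thresholds below are illustrative assumptions, not values taken from the dissertation.

```python
# Illustrative FMEA-style risk scoring of the kind used in GAMP 5 risk
# assessments. The 1-5 scales and the class thresholds are assumptions
# for demonstration, not values from the dissertation.

def risk_priority(severity: int, probability: int, detectability: int) -> int:
    """RPN = S x P x D; each factor on a 1-5 scale (a higher
    detectability score means the failure is harder to detect)."""
    for factor in (severity, probability, detectability):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be on a 1-5 scale")
    return severity * probability * detectability

def risk_class(rpn: int, high: int = 60, medium: int = 20) -> str:
    """Map an RPN to a qualitative class (thresholds are illustrative)."""
    if rpn >= high:
        return "high"
    if rpn >= medium:
        return "medium"
    return "low"

# A hypothetical hazard such as "formula cell not locked" might score:
print(risk_class(risk_priority(4, 3, 4)))  # 48 -> medium
```

Scoring each of the 57 identified hazards this way would support the kind of impact determination and categorisation the abstract describes.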
