
Dissertations on the topic "Register Automata"



Consult the top 24 dissertations for research on the topic "Register Automata".

Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online annotation, where the relevant parameters are available in the metadata.

Browse dissertations from a wide range of subject areas and compile your bibliography correctly.

1

Rueda, Cebollero Guillem. „Learning Cache Replacement Policies using Register Automata“. Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-212677.

Abstract:
The processor is the basic unit of a computer, tasked with processing the data stored in memory. Processing large amounts of data requires large memories, but not all data is needed at the same time, and some data is needed faster than other data. For this reason, memory is structured as a hierarchy, from smaller and faster to bigger and slower. The cache is one of the fastest elements in this hierarchy and the closest to the processor. Processor design companies hide its characteristics, usually in confidential documentation that software developers cannot access. One of the most important characteristics kept secret in this documentation is the replacement policy. The best-known replacement policies are public, but hardware designers may modify them for performance, cost or design reasons. Because this part of the processor is obfuscated, many developers must work around problems with, for example, the runtime: if a task must always complete within a certain time, the developer assumes the case requiring the most time to execute (the "Worst Case Execution Time"), which implies an underutilisation of the processor. This project focuses on a new method to represent and infer the replacement policy: modelling replacement policies as automata and using the learning framework LearnLib to infer them. It is not the first project trying to characterise the cache memory; a previous project is the basis for finding a more general model to define replacement policies. The output of LearnLib is modelled as an automaton. To test the effectiveness of this framework, different replacement policies are simulated and verified. To provide an interface to real cache memories, a program called hwquery was developed, which feeds real cache requests to LearnLib.
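To make the automaton view of a replacement policy concrete, the following minimal sketch (purely illustrative, not the thesis' Java-based LearnLib/hwquery code) models LRU for a 2-way set as a machine whose observable output on each access is the evicted line; an active learning framework would infer an automaton from exactly this kind of query.

```python
# Illustrative sketch (not the thesis' LearnLib/hwquery code, which is Java):
# an LRU replacement policy for a 2-way cache set, viewed as a machine whose
# observable output on each access is the line that gets evicted (or None).
WAYS = 2

def lru_run(accesses):
    """Replay an access sequence and return the eviction observed at each step."""
    state = []                     # lines currently in the set, least recent first
    outputs = []
    for line in accesses:
        if line in state:          # hit: refresh recency, nothing evicted
            state.remove(line)
            state.append(line)
            outputs.append(None)
        else:                      # miss: evict the least recently used line
            victim = state.pop(0) if len(state) == WAYS else None
            state.append(line)
            outputs.append(victim)
    return outputs

# An active learner such as L* would pose exactly such output queries and build
# an automaton consistent with every observed eviction sequence.
print(lru_run(["A", "B", "A", "C"]))   # -> [None, None, None, 'B']
```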
2

Exibard, Léo. „Automatic synthesis of systems with data“. Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0312.

Abstract:
We often interact with machines that react in real time to our actions (robots, websites, etc.). They are modelled as reactive systems, which continuously interact with their environment. The goal of reactive synthesis is to automatically generate a system from the specification of its behaviour, so as to replace the error-prone low-level development phase with a high-level specification design. In the classical setting, the set of signals available to the machine is assumed to be finite. However, this assumption is not realistic for modelling systems which process data from a possibly infinite set (e.g. a client id, a sensor value, etc.). The goal of this thesis is to extend reactive synthesis to the case of data words. We study a model that is well suited to this more general setting and examine the feasibility of its synthesis problem(s). We also explore the case of non-reactive systems, where the machine does not have to react immediately to its inputs.
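As a toy illustration of the kind of machine involved when synthesis moves from finite alphabets to data words, the following sketch (hypothetical, not the model studied in the thesis) shows a one-register transducer that stores the first input datum and tests later data for equality against it.

```python
# Illustrative sketch: a tiny deterministic one-register transducer over data
# words. It stores the first input datum in its register and thereafter flags
# any datum equal to the stored value; the structure is hypothetical and only
# meant to show why equality tests against registers handle infinite alphabets.
def register_transducer(data_word):
    register = None                # one register over an infinite data domain
    output = []
    for d in data_word:
        if register is None:       # initial state: remember the first datum
            register = d
            output.append(d)
        elif d == register:        # equality test against the register
            output.append(("seen-again", d))
        else:
            output.append(d)
    return output

print(register_transducer([42, 7, 42, 13]))
# -> [42, 7, ('seen-again', 42), 13]
```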
3

Kuriakose, R. B., and F. Aghdasi. „Automatic student attendance register using RFID“. Interim : Interdisciplinary Journal, Vol 6, Issue 2: Central University of Technology Free State Bloemfontein, 2007. http://hdl.handle.net/11462/406.

Abstract:
Published Article
The purpose of this project is to investigate the application of Radio Frequency Identification (RFID) to an automatic student attendance register. The aim is that students in any class can be recorded when they carry their student cards with them, without having to swipe the card individually or allocate special interaction time. The successful implementation of this proposal will facilitate such record keeping in a non-intrusive and efficient manner and will provide the platform for further research on the correlation between attendance and student performance. Opportunities for related research are identified regarding the range of the parameters involved, ensuring that individual identifications do not clash, and overcoming the challenges of interfacing with central record keeping.
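A minimal sketch of the intended non-intrusive record keeping is given below; the card-to-student mapping, tag IDs and session name are invented for illustration and are not part of the published article.

```python
# Minimal sketch of non-intrusive attendance recording: a student is marked
# present once per session whenever their RFID card is read. Tag IDs, the
# student mapping and the session name are hypothetical.
from datetime import datetime

CARD_TO_STUDENT = {"04:A1:2B": "S1001", "04:9C:7F": "S1002"}

def mark_attendance(register, session, tag_id):
    """Record a student once per session when their RFID card is read."""
    student = CARD_TO_STUDENT.get(tag_id)
    if student is None:
        return                                  # unknown card: ignore or log
    register.setdefault(session, {})
    register[session].setdefault(student, datetime.now())  # keep first read only

attendance = {}
for tag in ["04:A1:2B", "04:9C:7F", "04:A1:2B"]:   # duplicate reads collapse
    mark_attendance(attendance, "EE101-2007-03-05", tag)
print(attendance)
```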
4

Hauck, Shahram. „Automated CtP Calibration for Offset Printing : Dot gain compensation, register variation and trapping evaluation“. Doctoral thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119366.

Abstract:
Although offset printing has been and still is the most common printing technology for color print production, its output is subject to variations due to environmental and process parameters. It is therefore very important to frequently control the print production quality criteria in order to make the process predictable, reproducible and stable. One of the most important parts of modern industrial offset printing is Computer to Plate (CtP), which exposes the printing plate. One of the most important quality criteria for printing is control of the dot gain level. Dot gain refers to an important phenomenon that causes the printed elements to appear larger than the reference size sent to the CtP. It is crucial to keep the dot gain level within an acceptable range, defined by ISO 12647-2 for offset printing. This is done by dot gain compensation methods in the Raster Image Processor (RIP). Dot gain compensation is, however, a complicated task in offset printing because of the huge number of parameters affecting dot gain. Another important quality criterion affecting print quality in offset is register variation, caused by misplacement of the printing sheet in the printing unit. Register variation causes tone value variations, gray balance variation and blurred image details. Trapping is another important print quality criterion that should be measured in an offset printing process. Trapping occurs when the inks in different printing units are printed wet-on-wet in a multi-color offset printing machine. Trapping affects the gray balance and makes the resulting colors of overlapped inks pale. In this dissertation three different dot gain compensation methods are discussed. The most accurate and efficient dot gain compensation method, which is non-iterative, has been tested, evaluated and applied in many offset printing workflows. To further increase the accuracy of this method, an approach to effectively select the correction points of a RIP with a limited number of correction points has also been proposed. Correction points are the tone values that need to be set in the RIP to define a dot gain compensation curve. To fulfill the requirement of keeping the register variation within the allowed range, it has to be measured and quantified. Two novel models that determine the register variation value are proposed in this dissertation, one based on spectrophotometry and the other on densitometry. The proposed methods have been evaluated by comparison with the industrial image-processing-based register variation model, which is expensive and not available in most printing companies. The results of all models were comparable, verifying that the proposed models are good alternatives to the image-processing-based model. The existing models for determining trapping values are based on densitometric measurements and quantify the trapping effect by a percentage value. In this dissertation, a novel trapping model is proposed that quantifies the trapping effect by a color difference metric, which is more useful and understandable for print machine operators. The comparison between the proposed trapping model and the existing models has shown very good correlations and verified that the proposed model has a larger dynamic range. The proposed trapping model has also been extended to take into account the effect of ink penetration and gloss. The extended model has been tested on a high-gloss coated paper, and the results have shown that gloss and ink penetration can be neglected for this type of paper. An automated CtP calibration system for the offset printing workflow has been introduced and described in this dissertation. This method is a good solution for generating the large number of dot gain compensation curves needed for an accurate CtP calibration.
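The idea behind a non-iterative dot gain compensation can be sketched as a simple inversion of a measured transfer curve at the RIP correction points; the tone values below are invented for illustration and are not the dissertation's measurements or its actual method.

```python
# Sketch of a non-iterative dot gain compensation, assuming a transfer curve
# measured at a few RIP correction points (all values are made up).
import numpy as np

nominal  = np.array([0, 10, 25, 50, 75, 90, 100], dtype=float)  # % tone sent
measured = np.array([0, 14, 33, 63, 85, 95, 100], dtype=float)  # % tone printed
target   = nominal   # aim: printed tone value equals the reference tone value

# Invert the measured curve: find the input tone that would print the target tone.
compensated = np.interp(target, measured, nominal)
print(np.round(compensated, 1))   # tone values to set at the RIP correction points
```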
5

Jouhet, Vianney. „Automated adaptation of Electronic Health Record for secondary use in oncology“. Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0373/document.

Abstract:
With the increasing adoption of Electronic Health Records (EHR), the amount of data produced at the patient bedside is rapidly increasing. Secondary use is thereby an important field to investigate in order to facilitate research and evaluation. In this work we discuss issues related to data representation and semantics within the EHR that need to be addressed in order to facilitate the secondary use of structured data in oncology. We propose and evaluate ontology-based methods for the integration of heterogeneous diagnosis terminologies in oncology. We then extend the obtained model to enable the representation of the tumoral disease and its links with diagnoses as recorded in the EHR. We then propose and implement a complete architecture combining a clinical data warehouse, a metadata registry and semantic web technologies and standards. This architecture enables the syntactic and semantic integration of a broad range of hospital information system observations. Our approach links data with external knowledge (an ontology) in order to provide a knowledge resource for an algorithm that identifies the tumoral disease based on the diagnoses recorded within EHRs. As it is based on the ontology classes, the identification algorithm uses an integrated view of diagnoses (avoiding semantic heterogeneity). The proposed architecture, with the algorithm built on top of an ontology, offers a flexible solution: adapting the ontology, for instance by modifying its granularity, provides a way to adapt the aggregation to specific identification needs.
6

MANSOURI, NAZANIN. „AUTOMATED CORRECTNESS CONDITION GENERATION FOR FORMAL VERIFICATION OF SYNTHESIZED RTL DESIGNS“. University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin982064542.

7

Tabani, Hamid. „Low-power architectures for automatic speech recognition“. Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/462249.

Abstract:
Automatic Speech Recognition (ASR) is one of the most important applications in the area of cognitive computing. Fast and accurate ASR is emerging as a key application for mobile and wearable devices. These devices, such as smartphones, have incorporated speech recognition as one of the main interfaces for user interaction. This trend towards voice-based user interfaces is likely to continue in the coming years and is changing the way humans and machines interact. Effective speech recognition systems require real-time recognition, which is challenging for mobile devices due to the compute-intensive nature of the problem and the power constraints of such systems, and demands a huge effort from CPU architectures. GPU architectures offer parallelization capabilities which can be exploited to increase the performance of speech recognition systems. However, efficiently utilizing GPU resources for speech recognition is also challenging, as the software implementations exhibit irregular and unpredictable memory accesses and poor temporal locality. The purpose of this thesis is to study the characteristics of ASR systems running on low-power mobile devices in order to propose different techniques to improve performance and energy consumption. We propose several software-level optimizations driven by power/performance analysis. Unlike previous proposals that trade accuracy for performance by reducing the number of Gaussians evaluated, we maintain accuracy and improve performance by effectively using the underlying CPU microarchitecture. We use a refactored implementation of the GMM evaluation code to ameliorate the impact of branches. Then, we exploit the vector unit available on most modern CPUs to boost GMM computation, introducing a novel memory layout for storing the means and variances of the Gaussians in order to maximize the effectiveness of vectorization. In addition, we compute the Gaussians for multiple frames in parallel, significantly reducing memory bandwidth usage. Our experimental results show that the proposed optimizations provide a 2.68x speedup over the baseline Pocketsphinx decoder on a high-end Intel Skylake CPU, while achieving 61% energy savings. On a modern ARM Cortex-A57 mobile processor our techniques improve performance by 1.85x, while providing 59% energy savings without any loss in the accuracy of the ASR system. Secondly, we propose a register renaming technique that exploits register reuse to reduce the pressure on the register file. Our technique leverages physical register sharing by introducing minor changes in the register map table and the issue queue. We evaluated our renaming technique on top of a modern out-of-order processor. The proposed scheme supports precise exceptions and results in a 9.5% performance improvement for GMM evaluation. Our experimental results show that the proposed register renaming scheme provides a 6% speedup on average for the SPEC2006 benchmarks. Alternatively, our renaming scheme achieves the same performance while reducing the number of physical registers by 10.5%. Finally, we propose a hardware accelerator for GMM evaluation that reduces the energy consumption by three orders of magnitude compared to solutions based on CPUs and GPUs. The proposed accelerator implements a lazy evaluation scheme where Gaussians are computed on demand, avoiding 50% of the computations. Furthermore, it employs a novel clustering scheme to reduce the size of the GMM parameters, which results in 8x memory bandwidth savings with a negligible impact on accuracy. Finally, it includes a novel memoization scheme that avoids 74.88% of the floating-point operations. The end design provides a 164x speedup and a 3532x energy reduction when compared with a highly tuned implementation running on a modern mobile CPU. Compared to a state-of-the-art mobile GPU, the GMM accelerator achieves a 5.89x speedup over a highly optimized CUDA implementation, while reducing energy by 241x.
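As a rough illustration of batched, diagonal-covariance GMM scoring with a contiguous parameter layout (the general idea behind vectorized evaluation across multiple frames, not the thesis' exact memory layout, SIMD code or accelerator), a NumPy sketch might look as follows.

```python
# Sketch of batched diagonal-covariance GMM scoring in NumPy: means and
# variances are stored contiguously per mixture so several frames can be
# evaluated at once (illustrative only, sizes and data are arbitrary).
import numpy as np

def gmm_log_likelihood(frames, weights, means, variances):
    """frames: (F, D); weights: (M,); means, variances: (M, D)."""
    # (F, M, D) broadcast of the squared terms for diagonal covariances
    diff2 = (frames[:, None, :] - means[None, :, :]) ** 2 / variances[None, :, :]
    log_gauss = -0.5 * (diff2.sum(-1)
                        + np.log(2 * np.pi * variances).sum(-1)[None, :])
    # log-sum-exp over mixtures, one score per frame
    a = log_gauss + np.log(weights)[None, :]
    m = a.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=1, keepdims=True))).ravel()

rng = np.random.default_rng(0)
F, M, D = 4, 8, 13                       # frames, mixtures, feature dimension
scores = gmm_log_likelihood(rng.normal(size=(F, D)),
                            np.full(M, 1.0 / M),
                            rng.normal(size=(M, D)),
                            rng.uniform(0.5, 2.0, size=(M, D)))
print(scores.shape)                      # (4,): one log-likelihood per frame
```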
8

Elrod, JoAnn Broeckel, Raina Merchant, Mohamud Daya, Scott Youngquist, David Salcido, Terence Valenzuela and Graham Nichol. „Public health surveillance of automated external defibrillators in the USA: protocol for the dynamic automated external defibrillator registry study“. BMJ PUBLISHING GROUP, 2017. http://hdl.handle.net/10150/623946.

Abstract:
Introduction: Lay use of automated external defibrillators (AEDs) before the arrival of emergency medical services (EMS) providers on scene increases survival after out-of-hospital cardiac arrest (OHCA). AEDs placed in public locations may not be ready for use when needed. We describe a protocol for AED surveillance that tracks these devices through time and space to improve public health and survival, as well as to facilitate research. Methods and analysis: Included AEDs are installed in public locations for use by laypersons to treat patients with OHCA before the arrival of EMS providers on scene. Included cases of OHCA are patients evaluated by organised EMS personnel and treated for OHCA. Enrolment of 10 000 AEDs annually will yield a precision of 0.4% in the estimate of readiness for use. Enrolment of 2500 patients annually will yield a precision of 1.9% in the estimate of survival to hospital discharge. Recruitment began on 21 Mar 2014 and is ongoing. AEDs are found using multiple methods. Each AED is then tagged with a label carrying a unique two-dimensional (2D) matrix code; the 2D matrix code is recorded and the location and status of the AED tracked using a smartphone; these elements are automatically passed via the internet to a secure and confidential database in real time. Whenever the 2D matrix code is rescanned for any non-clinical or clinical use of an AED, the user is queried to answer a finite set of questions about the device status. The primary outcome of any clinical use of an AED is survival to hospital discharge. Results are summarised descriptively. Ethics and dissemination: These activities are conducted under a grant of authority for public health surveillance from the Food and Drug Administration. Results are provided periodically to participating sites and sponsors to improve public health and quality of care.
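The quoted precision figures correspond to the usual 95% confidence half-width for an estimated proportion; the sketch below recomputes values of that order under assumed proportions (the assumed readiness and survival rates are purely illustrative and are not stated in the abstract).

```python
# 95% confidence half-width for a proportion estimate: 1.96 * sqrt(p(1-p)/n).
# The proportions below are assumptions chosen only to illustrate the scale
# of the precision figures quoted in the abstract.
from math import sqrt

def half_width_95(p, n):
    return 1.96 * sqrt(p * (1 - p) / n)

print(round(half_width_95(0.96, 10_000), 4))  # assumed readiness rate -> ~0.004 (0.4%)
print(round(half_width_95(0.40, 2_500), 4))   # assumed survival rate  -> ~0.019 (1.9%)
```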
9

Giancoli, Ana Paula Müller. „Proposta de sistema para registro eletrônico de ponto com gerenciamento remoto“. Universidade de Taubaté, 2011. http://www.bdtd.unitau.br/tedesimplificado/tde_busca/arquivo.php?codArquivo=243.

Abstract:
This study proposes a system architecture for electronic clocking in and out, based on free software and capable of fulfilling the main requirements extracted from Regulation 1510 of the Ministry of Labor. The architecture uses the Linux operating system, the Python programming language, the Plone web framework and the Zope application server in order to provide, among other benefits, security, access to the application source code and independence from suppliers. Validation is obtained through practical tests on a prototype that adopts the elements of the system. The satisfactory results obtained in these tests indicate that the proposed architecture is suitable for the application in question.
10

Sánchez, Belenguer Carlos. „Surface Registration Techniques Applied to Archaeological Fragment Reconstruction“. Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/56152.

Abstract:
Reconstruction of broken archaeological artifacts from fragments is a very time-consuming task that requires a big effort if performed manually. In fact, due to budgetary limitations, this is not even attempted in countless sites around the world, leaving vast quantities of material unstudied and stored indefinitely. This thesis addresses the application of surface registration techniques to the automatic re-assembly of broken archaeological artifacts from fragments. To do so efficiently, the reconstruction problem has been divided into two groups: 3-degrees-of-freedom and 6-degrees-of-freedom problems. This distinction is motivated by two major reasons: the archaeological interest of the application and the computational complexity of the solution. The first kind of problem (3 degrees of freedom) deals with 2D objects or with flat 3D objects, such as ripped-up documents or frescoes, respectively. In both cases, the mural paintings and engravings on the fragments' surface are of huge importance in the field of cultural heritage recovery. In this sense, archaeologically speaking, the value of the reconstruction is not the final model itself but the information stored on the upper surface. In terms of computational complexity, the reduced solution space allows using exhaustive techniques to ensure the quality of the results while keeping execution times low. A fast hierarchical technique is introduced to address this kind of problem. Starting from an exhaustive search strategy, the technique progressively incorporates new features that lead to a hierarchical search strategy. Convergence and correctness of the resulting technique are ensured using an optimistic cost function. Internal search calculations are optimized so that the only operations performed are additions, subtractions and comparisons over aligned data. All heavy geometric operations are carried out by the GPU in a pre-processing stage that happens only once per fragment. The second kind of problem (6 degrees of freedom) deals with more general situations where no special constraints are considered. Typical examples are broken sculptures, friezes and columns. In this case, computational complexity increases considerably with the extra 3 degrees of freedom, making exhaustive approaches prohibitive. To address these problems, an efficient sparse technique is introduced that uses a pre-processing stage to reduce the size of the problem: singular key-points in the original point cloud are selected based on a multi-scale feature extraction process driven by the saliency of each point. By computing a modified version of the PFH descriptor, the local neighborhood of each key-point is described in a compact histogram. Using exclusively the selected key-points and their associated descriptors, a very fast one-to-one search algorithm is executed for each possible pair of fragments. This process uses a three-level hierarchical search strategy driven by the local similarity between key-points and applies a set of geometric consistency tests to intermediate results. Finally, a graph-based global registration algorithm uses all the individual matches to provide the final reconstruction of the artifact by creating clusters of matching fragments, appending new potential matches and joining individual clusters into bigger structures.
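A toy version of the descriptor-driven, one-to-one matching step might look as follows; the compact histograms stand in for the modified PFH descriptor, and the sizes and threshold are invented for illustration rather than taken from the thesis.

```python
# Toy sketch of descriptor-driven key-point matching between two fragments:
# each key-point carries a compact histogram (standing in for the modified
# PFH descriptor); mutually best matches under a similarity threshold become
# candidates for the subsequent geometric consistency tests.
import numpy as np

def mutual_best_matches(desc_a, desc_b, max_dist=0.5):
    """desc_a: (Na, B), desc_b: (Nb, B) L1-normalised histograms."""
    d = np.abs(desc_a[:, None, :] - desc_b[None, :, :]).sum(-1)  # L1 distances
    best_ab = d.argmin(axis=1)           # best b for each a
    best_ba = d.argmin(axis=0)           # best a for each b
    return [(i, j) for i, j in enumerate(best_ab)
            if best_ba[j] == i and d[i, j] < max_dist]

rng = np.random.default_rng(1)
a = rng.random((20, 16)); a /= a.sum(1, keepdims=True)   # fragment A descriptors
b = rng.random((25, 16)); b /= b.sum(1, keepdims=True)   # fragment B descriptors
print(mutual_best_matches(a, b))   # candidate pairs for the geometric tests
```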
11

Bezerra, Andrea Fernanda Fontes. „Geração de layout de interfaces gráficas baseado em ontologias para documentos do Registro Eletrônico em Saúde“. Universidade Federal da Paraíba, 2014. http://tede.biblioteca.ufpb.br:8080/handle/tede/7828.

Abstract:
Health informatics is a domain that presents several challenges to be overcome. Electronic Health Records (EHR) are one of its most important subdomains, responsible, among other things, for the storage, display and manipulation of patient clinical information. EHR systems require domain flexibility, which allows modifications to the structure of documents without recompiling or redeploying the application, for instance on a web server. Current approaches in the literature propose generic models to represent domain and presentation, without ontological definitions of user interface (UI) layout and style, which, when properly organized, improve the acceptance of the system by users. This work aims to develop a framework for layout and style generation for the graphical user interface of EHR documents, based on Web Ontology Language (OWL) ontologies and the use of restrictions. By centralizing and combining metadata from the biomedical and document domains, it was possible to apply layout and style to EHR documents using grids, with additional ontological definition of presentation formats for the medical field, facilitating UI development and maintenance.
12

Косар, Ліля Петрівна, and Lilya Kosar. „Автоматизоване робоче місце медичного працівника відділу реєстратури“. Master's thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2020. http://elartu.tntu.edu.ua/handle/lib/33187.

Abstract:
The project was carried out at the Department of Biotechnical Systems of Ternopil Ivan Puluj National Technical University. The work analyses the technical requirements, reviews known solutions and selects a research direction; a mathematical model of the registry's workflow is developed, and physical and logical models are constructed. The structure and functional purpose of a software package for automating the registry are presented.
13

Maia, José Everardo Bessa. „Uma Nova metaheurística evolucionária para a formação de mapas topologicamente ordenados e extensões“. Universidade Federal do Ceará, 2011. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=6933.

Abstract:
Topologically ordered maps are data representation techniques based on dimensionality reduction, with the special property of preserving the neighbourhood relations between the prototypes in the data space and their respective positions in the output space. Based on this property, topologically ordered maps are applied mainly to clustering, vector quantization, dimensionality reduction and data visualization. This thesis proposes a new classification of the existing algorithms for the formation of topologically ordered maps, based on the mechanism of correlation between the input and output spaces, and describes a new algorithm based on evolutionary computation, called EvSOM, for forming topologically ordered maps. The main properties of the new algorithm are its flexibility, allowing the user to weight the relative importance of the vector quantization and topology preservation properties of the final map, and its good outlier rejection when compared to the Kohonen SOM algorithm. The work provides an empirical evaluation of these properties. EvSOM is a hybrid, neural-evolutionary, biologically inspired algorithm that uses concepts from competitive neural networks, evolutionary computation, optimization and iterative approximation. To validate its applicability, EvSOM is extended and specialized to solve two relevant basic problems in image processing and computer vision, namely medical image registration and visual tracking of objects in video. The algorithm exhibits satisfactory performance in both applications.
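For reference, one training step of the Kohonen SOM (the baseline against which EvSOM is compared) can be sketched as below; the EvSOM itself evolves entire maps with an evolutionary metaheuristic and is not reproduced here, and the grid size, learning rate and data are arbitrary.

```python
# Sketch of one Kohonen SOM training step: find the best matching unit (BMU),
# then pull every prototype towards the sample, weighted by its distance to
# the BMU on the map grid. Parameters and data are illustrative only.
import numpy as np

def som_step(weights, grid, x, lr=0.1, sigma=1.0):
    """weights: (K, D) prototypes; grid: (K, 2) map coordinates; x: (D,)."""
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))          # best matching unit
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
    return weights + lr * h[:, None] * (x - weights)           # neighbourhood update

side = 4
grid = np.array([[i, j] for i in range(side) for j in range(side)], dtype=float)
rng = np.random.default_rng(2)
w = rng.random((side * side, 3))
for sample in rng.random((100, 3)):        # toy data in the unit cube
    w = som_step(w, grid, sample)
```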
14

Козирод, В. М. „Комплексна система захисту інформації клієнтської частини автоматизованої інформаційно-телекомунікаційній системи “Оберіг”“. Thesis, Чернігів, 2021. http://ir.stu.cn.ua/123456789/24850.

Abstract:
The purpose of this work is to create a comprehensive information security system on the client part of an automated information and telecommunication system in order to protect information with limited access. The object of the study is the automated information and telecommunication system of the Unified State Register of Conscripts, which requires protection of information with limited access from disclosure, leakage and unauthorized access. The subject of the research is the protection of information with limited access (personal data of persons liable for military service and of conscripts) on the client part of the automated information and telecommunication system. The research methods are the use of an interconnected set of organizational and engineering measures, tools and methods of information protection on the client part of the automated information and telecommunication system. Organizational measures and methods of information protection were used to create the information security concept. Engineering and technical measures of information protection are used to protect information with limited access from disclosure, leakage and unauthorized access. Results and novelty: to create the comprehensive information security system on the client part of the automated information and telecommunication system, the following tasks were solved: 1) security of information with limited access during its processing on the client part of the automated information and telecommunication system is ensured; 2) cryptographic protection of information on the client part of the automated information and telecommunication system is organized; 3) the procedure for admission to work with the means and key documents of the cryptographic information protection complex is defined; 4) anti-virus protection of official information on the client part of the automated information and telecommunication system is provided; 5) the placement, special equipment, protection and organization of the security regime in the premises of the client part of the automated information and telecommunication system are defined; 6) the delimitation of access to the automated information and telecommunication system and its resources on the client part is planned. Field of application: the comprehensive information security system on the client part of the automated information and telecommunication system can be used to protect personal and official data stored in the Unified State Register of Conscripts.
15

Jaume, Bennasar Andrés. „Las nuevas tecnologías en la administración de justicia. La validez y eficacia del documento electrónico en sede procesal“. Doctoral thesis, Universitat de les Illes Balears, 2009. http://hdl.handle.net/10803/9415.

Abstract:
The thesis analyses, on the one hand, the integration and development of the new technologies in the Administration of Justice and, on the other, the parameters which constitute the validity and efficacy of the electronic document. The first question centres on the configuration of the information systems of the Judicial Office and the Public Prosecutor's Office, as well as the computerisation of the Civil Registers, where art. 230 LOPJ is the key provision. Their programmes and applications, videoconferencing, the judicial files and the telecommunication networks covered by recognised electronic signatures are studied, and the agreements on technological collaboration gain great relevance here. The digitalisation of hearings is perhaps one of the questions with the greatest consequence, bearing in mind that the trial is the act which culminates the process, although not all the projects adopted within the field of e-justice have been developed completely, nor have they reached all the judicial organs. The final objective is to achieve a more agile, high-quality justice system, to which the recently approved Strategic Plan for the Modernisation of Justice 2009-2012 aspires. With reference to the second perspective, there is no doubt that the legal system and the courts, within the field of substantive justice, grant full validity and efficacy to the electronic document. Our line of investigation is justified because more and more proceedings incorporate electronic media of all kinds, whether when the action is brought or later as a means of proof (art. 299.2 LEC). Among other topics, we examine the computerised document, the problems surrounding the fax, video-recording systems and the electronic contract.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
16

Tröger, Ralph. „Supply Chain Event Management – Bedarf, Systemarchitektur und Nutzen aus Perspektive fokaler Unternehmen der Modeindustrie“. Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-155014.

Der volle Inhalt der Quelle
Annotation:
Supply Chain Event Management (SCEM) denotes a sub-discipline of supply chain management and offers companies a starting point for optimising logistics performance and costs by reacting early to critical exception events in the value chain. Owing to framework conditions such as global logistics structures, a high variety of articles and volatile business relationships, the fashion industry is among the sectors particularly vulnerable to critical disruption events. Against this background, after outlining the essential foundations, this dissertation first examines to what extent there actually is a need for SCEM systems in the fashion industry. Building on this, and after presenting existing SCEM architecture concepts, it sets out design options for a system architecture based on the design principles of service orientation. In this context, SCEM-relevant business services are also identified. The advantages of a service-oriented design are illustrated in detail using the EPCIS (EPC Information Services) specification. The work is rounded off by a consideration of the potential benefits of SCEM systems. After presenting approaches suitable for determining these benefits, the benefit is demonstrated by means of a practical example and, together with the results of a literature review, consolidated into an overview of SCEM benefit effects. This also sheds light on the additional advantages that a service-oriented architecture offers companies. The concluding section summarises the key findings and, in an outlook, discusses both the relevance of the results for mastering future challenges and the starting points they offer for subsequent research.
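The event-monitoring core of an SCEM system can be illustrated with a small sketch. The example below is a hypothetical illustration, not the architecture proposed in the dissertation and not the EPCIS schema: it assumes simple shipment records with an expected and an actual timestamp, and flags deliveries whose delay exceeds a tolerance, the kind of critical exception an SCEM service would escalate.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical event record; field names are illustrative, not the EPCIS schema.
@dataclass
class ShipmentEvent:
    order_id: str
    expected_arrival: datetime
    actual_arrival: Optional[datetime]  # None = not yet arrived

def critical_exceptions(events, now, max_delay=timedelta(hours=24)):
    """Return (order_id, delay) pairs whose delay exceeds the tolerance and should raise an alert."""
    alerts = []
    for ev in events:
        reference = ev.actual_arrival or now
        delay = reference - ev.expected_arrival
        if delay > max_delay:
            alerts.append((ev.order_id, delay))
    return alerts

if __name__ == "__main__":
    now = datetime(2014, 3, 1, 12, 0)
    events = [
        ShipmentEvent("PO-1001", datetime(2014, 2, 27, 8, 0), None),                       # still missing
        ShipmentEvent("PO-1002", datetime(2014, 3, 1, 8, 0), datetime(2014, 3, 1, 9, 0)),  # slightly late
    ]
    for order_id, delay in critical_exceptions(events, now):
        print(f"ALERT {order_id}: delayed by {delay}")

In a service-oriented setting, such a check would typically run as its own business service fed by captured supply chain events rather than as an inline script.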
APA, Harvard, Vancouver, ISO und andere Zitierweisen
17

Wang, Jiing-Yuh, und 王景裕. „An Automatic Map Processing System for Land Register Map“. Thesis, 1994. http://ndltd.ncl.edu.tw/handle/64370741694584901163.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
18

Yang, Shih-Lii, und 楊世禮. „Handwritten Numeral Recognition Based on the Neural Network and Its Application in an Automatic Score Register System“. Thesis, 1997. http://ndltd.ncl.edu.tw/handle/62315500368623754715.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
Tamkang University
Department of Electrical Engineering
85
Handwritten numeral recognition has high potential in many everyday applications, such as automatic score registration, license-plate data verification, and ZIP code recognition. As a result, many researchers have proposed methods and systems for handwritten numeral recognition in recent years. In this thesis, the author proposes a handwritten digit recognition system based on a supervised Hyper-Rectangular Composite Neural Network (HRCNN) and applies it to an automatic score register system. The system is composed of three parts: preprocessing, numeral extraction, and recognition, and it is applied both to the handwritten scores and to the printed serial numbers on examination papers. In the first stage, the image of the paper is acquired as input and processed, including image binarization and segmentation. In the second stage, object labeling is used to extract the connected components in the image; the connected components are used to locate the serial number. In the third stage, nonlinear normalization is performed to obtain a normalized image for recognition. The purpose of nonlinear normalization is to obtain an image of fixed size and to adjust the density of the strokes appropriately. Features based on localized arc patterns are extracted from the normalized image and fed into the HRCNN to obtain the recognition result. Handwritten numerals from 70 people were collected as the data set; each person wrote the numerals 0 to 9 six times. Three of these repetitions are used as the training set and the others as the testing set, and a good result was obtained on this data set. Another 80 examination papers, collected from four teachers with 20 papers each, were used for testing. The recognition rate on the serial numbers is 100%, since these are printed digits; on the handwritten scores, a recognition rate of 93.75% was obtained.
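The three-stage pipeline described above (binarization and segmentation, connected-component extraction, normalization and classification) can be sketched as follows. This is a minimal illustration under assumptions: it uses OpenCV primitives and a generic k-NN classifier as a stand-in for the HRCNN, and the noise threshold and image size are illustrative choices, not the thesis implementation.

import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def extract_digits(page_gray):
    """Stages 1-2: binarize the scanned page and return per-digit crops via connected components."""
    _, binary = cv2.threshold(page_gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    crops = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area > 50:  # assumed noise threshold
            crops.append(binary[y:y + h, x:x + w])
    return crops

def normalize(crop, size=28):
    """Stage 3a: resize each digit to a fixed size (simple stand-in for nonlinear normalization)."""
    return cv2.resize(crop, (size, size), interpolation=cv2.INTER_AREA).flatten() / 255.0

def train_classifier(train_images, train_labels):
    """Stage 3b: classification; a k-NN model stands in for the HRCNN of the thesis."""
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit([normalize(im) for im in train_images], train_labels)
    return clf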
APA, Harvard, Vancouver, ISO und andere Zitierweisen
19

Karakaya, Fuat. „Automated exploration of the asic design space for minimum power-delay-area product at the register transfer level“. 2004. http://etd.utk.edu/2004/KarakayaFuat.pdf.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph. D.)--University of Tennessee, Knoxville, 2004.
Title from title page screen (viewed May 13, 2004). Thesis advisor: Donald W. Bouldin. Document formatted into pages (x, 102 p. : ill. (some col.)). Vita. Includes bibliographical references (p. 99-101).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
20

Yu, Huan. „Automated Segmentation of Head and Neck Cancer Using Texture Analysis with Co-registered PET/CT images“. Thesis, 2010. http://hdl.handle.net/1807/24920.

Der volle Inhalt der Quelle
Annotation:
Radiation therapy is often offered as the primary treatment for head and neck cancer (HNC). Accurate target delineation is essential for the success of radiation therapy. The current target definition technique, manual delineation using Computed Tomography (CT), is subject to high observer variability. Functional imaging modalities such as 2-[18F]-fluoro-2-deoxy-D-glucose Positron Emission Tomography (FDG-PET) can greatly improve the visualization of the tumor, and FDG-PET co-registered with CT has shown potential to improve the accuracy of target localization and reduce observer variability. Unfortunately, due to the limitations of PET, the degree of improvement obtained by qualitative and simple quantitative (e.g. thresholding) use of FDG-PET is not ideal. However, both PET and CT images contain a wealth of texture information that could be used to improve the accuracy of target definition. This thesis investigated the use of texture analysis techniques to automatically delineate radiation targets. First, PET and CT texture features with high discrimination ability were identified and a texture analysis technique, a decision-tree-based K Nearest Neighbour (DTKNN) classifier, was developed. DTKNN could accurately classify head and neck tissue with an area under the curve (AUC) of the Receiver Operating Characteristic (ROC) of 0.95. Subsequently, an automated target delineation technique, the CO-registered Multi-modality Pattern Analysis Segmentation System (COMPASS), was developed to delineate tumor on a voxel-by-voxel basis. COMPASS was found to delineate HNC accurately, with 84% sensitivity and 95% specificity on a per-voxel, per-patient basis. To evaluate the utility of COMPASS in radiation targeting accurately, a validation method was developed that combines biased observers' contours to generate a probabilistic reference; it is based on maximum likelihood analysis using a simulated annealing (SA) algorithm. The results of this thesis show that texture features of both PET and CT images can enhance the discrimination between HNC and normal tissue, and that an automated delineation method using texture analysis of PET and CT images can accurately and consistently define radiation targets in the head and neck. This suggests that automated segmentation of radiation targets based on texture analysis may significantly reduce observer variability and improve the accuracy of radiation targeting.
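The voxel-wise classification step can be illustrated with a small sketch. The example below is an assumption-laden stand-in for the DTKNN/COMPASS approach: it applies plain k-NN from scikit-learn to per-voxel feature vectors (imagined as stacked PET and CT texture measures) and reports ROC AUC on synthetic data; feature counts and data shapes are hypothetical.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

def classify_voxels(train_features, train_labels, test_features):
    """Score each voxel as tumor (1) or normal (0) from its texture feature vector.

    train_features / test_features: arrays of shape (n_voxels, n_features), where each
    row stacks texture measures computed around the voxel in co-registered PET and CT.
    """
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(train_features, train_labels)
    return knn.predict_proba(test_features)[:, 1]  # tumor probability per voxel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in data: 6 texture features per voxel.
    X_train = rng.normal(size=(1000, 6)); y_train = rng.integers(0, 2, 1000)
    X_test = rng.normal(size=(200, 6)); y_test = rng.integers(0, 2, 200)
    scores = classify_voxels(X_train, y_train, X_test)
    print("ROC AUC:", roc_auc_score(y_test, scores))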
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Markel, Daniel. „Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in Co-registered 18-FDG PET/CT Images“. Thesis, 2011. http://hdl.handle.net/1807/31332.

Der volle Inhalt der Quelle
Annotation:
Variability between oncologists in defining the tumor during radiation therapy planning can be as high as 700% by volume. Robust, automated definition of tumor boundaries has the potential to significantly improve treatment accuracy and efficiency. However, the information provided by computed tomography (CT) is not sensitive enough to the differences between tumor and healthy tissue, and positron emission tomography (PET) is hampered by blurriness and low resolution. The textural characteristics of thoracic tissue were investigated and compared with those of tumors found within 21 patient PET and CT images in order to enhance the differences, and the boundary, between cancerous and healthy tissue. A pattern recognition approach was used to learn the textural characteristics of each from these samples and to classify voxels as either normal or abnormal. The approach was compared with a number of alternative methods and found to have the highest overlap with an oncologist's tumor definition.
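A concrete example of a 3D texture feature is a grey-level co-occurrence (GLCM) statistic computed over a voxel's neighbourhood. The sketch below is a simplified illustration, not the thesis's actual feature set: it builds a co-occurrence matrix along the z axis inside a small 3D patch and returns its contrast measure; the quantization level and the single-offset choice are assumptions.

import numpy as np

def glcm_contrast_3d(patch, levels=16):
    """Contrast of a grey-level co-occurrence matrix along the z axis of a 3D patch.

    patch: 3D numpy array of intensities (e.g. a PET or CT neighbourhood around a voxel).
    levels: number of grey levels after quantization (assumed value).
    """
    lo, hi = float(patch.min()), float(patch.max())
    if hi == lo:
        return 0.0
    # Quantize intensities to 'levels' grey levels.
    q = np.clip(((patch - lo) / (hi - lo) * levels).astype(int), 0, levels - 1)

    # Pair each voxel with its neighbour one slice further along z.
    a, b = q[:-1].ravel(), q[1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)
    glcm /= glcm.sum()

    # Contrast is large when neighbouring grey levels differ strongly.
    i, j = np.indices((levels, levels))
    return float(np.sum(glcm * (i - j) ** 2))

# Usage: feature value for a random 7x7x7 neighbourhood (synthetic data).
patch = np.random.default_rng(1).normal(size=(7, 7, 7))
print(glcm_contrast_3d(patch))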
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Barros, Ana Rita Amaro. „Classificação automática de registos eletrónicos médicos por diagnóstico“. Master's thesis, 2020. http://hdl.handle.net/10071/21974.

Der volle Inhalt der Quelle
Annotation:
The growing implementation of electronic medical record (EMR) systems in hospitals to support individual patient care is increasing the amount of clinical data processed and stored every day. These records contain an endless source of clinical information; however, the lack of structure in the text produced by doctors, and the fact that the information entered differs from patient to patient and from medical speciality to medical speciality, make it difficult to exploit these data. A further difficulty in analysing this type of data is building a system capable of extracting the detailed information present in the EMRs, in order to help health professionals reduce the diagnostic error rate by predicting the patient's type of disease. Currently, hospitals carry out this process manually, but it is slow and susceptible to errors. This dissertation proposes a solution to this problem, using Natural Language Processing and Machine Learning techniques to build a system that extracts clinical knowledge and automatically classifies each EMR by type of disease/diagnosis. The system was developed for the Portuguese language, since the existing medical knowledge extraction systems are developed for English. This work aims to advance the exploitation of the information contained in EMRs and, consequently, to contribute to the growth of this type of system within the Portuguese hospital involved in this dissertation.
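The classification step described above can be sketched with a standard text-classification pipeline. This is a generic illustration under assumed names and toy data, not the dissertation's actual model: TF-IDF features over the free-text record and a linear classifier predicting a diagnosis label.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples in Portuguese; real EMRs and labels would come from the hospital system.
records = [
    "doente com tosse persistente e febre ha tres dias",
    "dor toracica com irradiacao para o braco esquerdo",
    "cefaleia intensa e fotofobia desde ontem",
]
diagnoses = ["infecao respiratoria", "sindrome coronario", "enxaqueca"]

# TF-IDF over word n-grams followed by a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(records, diagnoses)

print(model.predict(["febre alta e tosse com expetoracao"]))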
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Faria, Mário André Oliveira. „Controlo de qualidade e inspeção visual por imagem: definição de uma aplicação para registo de alterações e avaliação de desempenho“. Master's thesis, 2015. http://hdl.handle.net/1822/40160.

Der volle Inhalt der Quelle
Annotation:
Master's dissertation in Quality Engineering and Management
The Surface Mounting Technology (SMT) electronic soldering process is widely used in the production of electronic goods nowadays. To improve product quality, the company in which this project was developed uses an automated optical inspection (AOI) system to identify possible defects. This inspection system is complex and may even introduce failures that result in erroneous classification of good and defective products, so it is complemented by an expert who analyses the same products. The main objective of the work is to define proposals to improve the operation of the optical inspection process, focusing on the quality data obtained in that process. Quality analysis and improvement tools were applied: the Ishikawa diagram related possible causes to the problem under treatment, Pareto analysis selected the most important causes of the problem, and a flowchart made the representation of the process easier. Brainstorming was used in meetings to publicise the project and gather the perceptions of people involved in the process, with different academic backgrounds and heterogeneous perspectives on it. In a first phase, the characteristics of the inspection system and of its supporting resources were observed. Subsequently, the quality indicators, maintenance data and breakdown data were analysed using various tools. Finally, all assumptions were brought together, the advantages and limitations of the different courses of action were weighed, and improvement actions were proposed. Noteworthy among them is the creation of an application (AOI Performance Meter) to materialise the measurement of AOI performance, among other functionalities. A methodology for applying predictive maintenance actions was also defined, in order to improve the predictability and reliability of the equipment. The improvement proposals take into account the respective implementation and maintenance costs, and the benefits are expected to outweigh the costs. It is concluded that the defined objectives were achieved.
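One way to quantify AOI performance of the kind the application above is meant to measure is to compare the AOI verdict with the expert's re-inspection. The sketch below is a hypothetical illustration, not the AOI Performance Meter itself; record format, labels and metric names are assumptions.

from collections import Counter

def aoi_performance(records):
    """Compare AOI verdicts with the expert's re-inspection for each inspected board.

    records: iterable of (aoi_verdict, expert_verdict) pairs, each 'pass' or 'fail';
    the expert verdict is taken as the ground truth. Field names are illustrative.
    """
    counts = Counter(records)
    false_calls = counts[("fail", "pass")]   # AOI rejected a good board
    escapes = counts[("pass", "fail")]       # AOI accepted a defective board
    total = sum(counts.values())
    return {
        "false_call_rate": false_calls / total,
        "escape_rate": escapes / total,
        "agreement": (counts[("pass", "pass")] + counts[("fail", "fail")]) / total,
    }

# Usage with toy data: 90 agreed passes, 6 false calls, 1 escape, 3 agreed fails.
sample = ([("pass", "pass")] * 90 + [("fail", "pass")] * 6
          + [("pass", "fail")] * 1 + [("fail", "fail")] * 3)
print(aoi_performance(sample))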
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Rainho, Inês Margarida Louro. „Validação das Folhas de Cálculo Eletrónicas dos Produtos Acabados e de Estabilidade dos Laboratórios Vitória, S.A“. Master's thesis, 2018. http://hdl.handle.net/10362/58225.

Der volle Inhalt der Quelle
Annotation:
Electronic spreadsheets are widely used in the pharmaceutical industry for processing and recording information. They are treated on a par with computerised systems and are considered electronic records when stored electronically, so it is essential that they comply with all the Good Manufacturing Practices relating to these two aspects and that they are validated in order to ensure that these practices are met and that the spreadsheets are correct. Accordingly, one of the main objectives of this work is the validation of the finished-product and stability spreadsheets used in the Quality Control Laboratory of Laboratórios Vitória S.A., following the method of the International Society for Pharmaceutical Engineering published in the Good Automated Manufacturing Practice 5. To this end, a risk analysis was carried out to identify 57 hazards arising from the development and use of the spreadsheets, together with the corresponding mitigation measures. Following the risk analysis, the impact of using the spreadsheets was determined to be high, which, combined with their categorisation, allowed the validation approach to be selected. Prior to validation, all spreadsheets were checked for errors and the measures that eliminate or mitigate the hazards were implemented. Criteria for the development of spreadsheets were also implemented, enabling their standardisation and detailed verification. A comparison of the spreadsheets before and after the implementation of these criteria showed that some criteria were never met and others were met irregularly; their implementation resulted in an improvement of the spreadsheets. Finally, 15 spreadsheets were validated, although it cannot be stated that they are fully validated, because the tests relating to the measures not yet implemented were not performed.
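The risk-analysis step can be illustrated with a small scoring sketch. It follows the common GAMP-style pattern of combining severity, probability and detectability into a risk class; the scales, thresholds and example hazards below are assumptions for illustration, not the classification actually used in the dissertation.

def risk_class(severity, probability, detectability):
    """Combine 1-3 scores into a risk class; a higher product means higher risk.

    severity, probability: 1 (low) to 3 (high).
    detectability: 1 (easily detected) to 3 (hard to detect).
    Scales and thresholds are illustrative assumptions.
    """
    priority = severity * probability * detectability  # 1 .. 27
    if priority >= 12:
        return "high"
    if priority >= 6:
        return "medium"
    return "low"

# Example hazards for a quality-control spreadsheet (hypothetical entries).
hazards = [
    ("formula cell not locked against accidental editing", 3, 2, 2),
    ("wrong unit entered in an input cell", 2, 2, 1),
    ("file overwritten without version control", 3, 1, 3),
]
for description, s, p, d in hazards:
    print(f"{risk_class(s, p, d):>6}  {description}")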
APA, Harvard, Vancouver, ISO und andere Zitierweisen