Academic literature on the topic 'Automates à registres'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Automates à registres.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Automates à registres"

1

Максимовский, Александр Юрьевич, Григорий Александрович Остапенко, and Олег Николаевич Чопоров. "ABOUT PARAMETERS OF AUTOMATED MODELS FOR MONITORING INFORMATION SECURITY OF NETWORK OBJECTS, II." ИНФОРМАЦИЯ И БЕЗОПАСНОСТЬ, no. 3(-) (December 1, 2020): 327–36. http://dx.doi.org/10.36622/vstu.2020.23.3.001.

Full text
Abstract:
The article studies the properties of complex systems that can be represented as networks of automata with special properties. These properties are exploited to monitor the dynamics of the systems' state behaviour in order to ensure reliable control over their functioning. The set of control criteria may include checks that triples of input sequences, state sequences and output sequences of the monitored objects satisfy a set of relations built from information about the properties of the considered automaton models of network objects and, in particular, the peculiarities of their functioning. Further ways are proposed for developing methods and means of identifying features of the external behaviour of automaton models of monitored objects, methods of constructing and using experiments with automata, and relations of a special kind for automaton models of components of complex systems and the associated combinatorial objects defined on the state multigraphs of the corresponding automata. General approaches are indicated for applying register-type automaton models, namely shift registers and their generalizations possessing the necessary properties, to monitoring the information security of network objects. New results on the capabilities of the previously considered automaton models are obtained, and new approaches to choosing the characteristics of their application are proposed. The main attention is paid to the study of groups of automaton models of generalized non-binary shift registers and their generalizations possessing the necessary properties. 
Based on these results, new classes of automaton models of information security monitoring parameters for network infrastructure objects are constructed; these not only check algebraic and combinatorial relations between the input and output sequences of the specified objects, but also make it possible to identify potential security threats to the monitoring tools themselves.
APA, Harvard, Vancouver, ISO, and other styles
2

KAMINSKI, MICHAEL, and DANIEL ZEITLIN. "FINITE-MEMORY AUTOMATA WITH NON-DETERMINISTIC REASSIGNMENT." International Journal of Foundations of Computer Science 21, no. 05 (October 2010): 741–60. http://dx.doi.org/10.1142/s0129054110007532.

Full text
Abstract:
In this paper we extend finite-memory automata with non-deterministic reassignment, which allows an automaton to "guess" the future content of its registers, and we introduce the corresponding notion of a regular expression over an infinite alphabet.
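For intuition, a register automaton equips a finite automaton with registers that store data values from an infinite alphabet for later equality tests. The following is a deliberately simplified sketch of that idea (without the paper's non-deterministic reassignment):

```python
class OneRegisterAutomaton:
    """Toy one-register automaton over an infinite alphabet: store the
    first data value in the register and accept if it reappears later.
    A simplified illustration, not the exact model of the paper."""

    def accepts(self, word):
        if not word:
            return False
        register = word[0]                      # initial register assignment
        return any(s == register for s in word[1:])

a = OneRegisterAutomaton()
assert a.accepts([7, 3, 7])       # the stored value 7 reappears
assert not a.accepts([7, 3, 4])   # it never does
```

The same acceptance condition cannot be expressed by any classical finite automaton, since the alphabet of data values is unbounded.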
APA, Harvard, Vancouver, ISO, and other styles
3

Pancini, Stefania, Gabriel J. Pent, Robin R. White, Guillermo Goncherenko, Nicholas W. Wege Dias, Hannah Haines, and Vitor R. G. Mercadante. "269 Validation of an Automatic Scale Equipped with Solar Panels for Grazing Beef Cattle." Journal of Animal Science 99, Supplement_3 (October 8, 2021): 143. http://dx.doi.org/10.1093/jas/skab235.262.

Full text
Abstract:
Body weight (BW) is used to detect health and nutritional disorders in cattle, as well as to calculate the profitability of the production system from weight-gain curves. In grazing systems, measuring BW frequently implies moving animals, which is labor-intensive, stressful, and reduces grazing time and feed intake, all of which negatively impact animal performance. An automated scale in the pasture can reduce labor and animal handling while ensuring an accurate BW estimation. Our objective was to evaluate the functionality and accuracy of an automatic wireless scale system equipped with solar panels (SmartScale, C-LOCKTM) compared to a conventional scale located at the cattle working facility. Eight multiparous beef cows were weighed at 14-day intervals for a period of 57 days with a conventional scale, while BW was measured daily with an automated scale located at the pasture in front of the water trough. This wireless system registers BW every time an animal approaches the water trough and automatically transmits it to a server via the cellular network. Agreement between the weighing systems was evaluated through linear regression (R Core Team, 2019); the adjusted R2 value of 0.99 indicated an excellent linear relationship between values obtained by the conventional scale and values obtained by the automated scale. In addition, the automated scale registered the time of day, time spent on the scale, and number of daily visits. The probability of finding an animal at the scale varied between 15% and 20% during daylight, decreasing to under 9% during the night, with 2.56±1.50 visits per day on average, each lasting on average 2.94±1.84 minutes. In conclusion, the automated scale measures BW with great precision and has potential as a complementary instrument to evaluate animal behavior in grazing systems.
APA, Harvard, Vancouver, ISO, and other styles
4

O'Sullivan, Jack, and Jon Tilbury. "Towards Automated Digital Preservation through Preservation Action Registries." Archiving Conference 2020, no. 1 (April 7, 2020): 6–11. http://dx.doi.org/10.2352/issn.2168-3204.2020.1.0.6.

Full text
Abstract:
Since the 1960s, digital preservation has transformed from a secondary activity at a select few cultural heritage organizations to a vital international effort with its own best practices, standards, and community. This keynote presentation and paper presents an overview of the changing scope of digital preservation, issues, and strategies for digital preservation in the cultural heritage community.
APA, Harvard, Vancouver, ISO, and other styles
5

Negreanu, D., L. d'Amours, J. Neves Briard, F. De Champlain, and V. Homier. "ASSESSMENT OF CANADIAN PUBLIC AUTOMATED EXTERNAL DEFIBRILLATOR REGISTRIES." Canadian Journal of Cardiology 36, no. 10 (October 2020): S83. http://dx.doi.org/10.1016/j.cjca.2020.07.165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

FIGUEIRA, DIEGO, PIOTR HOFMAN, and SŁAWOMIR LASOTA. "Relating timed and register automata." Mathematical Structures in Computer Science 26, no. 6 (December 5, 2014): 993–1021. http://dx.doi.org/10.1017/s0960129514000322.

Full text
Abstract:
Timed and register automata are well-known models of computation over timed and data words, respectively. The former has clocks that allow testing the lapse of time between two events, whilst the latter includes registers that can store data values for later comparison. Although these two models appear to behave differently, several decision problems have the same (un)decidability and complexity results for both. As a prominent example, emptiness is decidable for alternating automata with one clock or register, in both cases with non-primitive recursive complexity. This is not by chance. This work confirms that there is indeed a tight relationship between the two models. We show that a run of a timed automaton can be simulated by a register automaton over an ordered data domain, and conversely that a run of a register automaton can be simulated by a timed automaton. These exponential-time reductions hold in both the finite- and infinite-word settings. Our results allow decidability results to be transferred back and forth between these two kinds of models, as well as complexity results modulo an exponential-time reduction. We justify the usefulness of these reductions by obtaining new results on register automata.
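The direction from timed to register automata rests on a simple observation: a clock constraint can be checked by storing a timestamp in a register and comparing it to later data values from an ordered domain. A toy Python illustration of that observation (not the paper's actual construction):

```python
# Sketch: the clock constraint "some B occurs within `bound` time units
# of the first A" is checked by storing A's timestamp in a register and
# comparing later timestamps against it. Labels and bound are invented.
def within_bound(timed_word, bound=5):
    """timed_word: list of (label, timestamp) pairs in increasing time."""
    register = None
    for label, t in timed_word:
        if label == 'A' and register is None:
            register = t                    # store timestamp in the register
        elif label == 'B' and register is not None:
            if t - register <= bound:       # order comparison on data values
                return True
    return False

assert within_bound([('A', 1.0), ('C', 3.2), ('B', 5.5)])
assert not within_bound([('A', 1.0), ('B', 9.9)])
```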
APA, Harvard, Vancouver, ISO, and other styles
7

Pianon, R., A. D'Amico, and D. Schiavone. "The Endoscopy and Endourology registers." Urologia Journal 61, no. 1 (February 1994): 45–47. http://dx.doi.org/10.1177/039156039406100109.

Full text
Abstract:
Computerised archives of endoscopic and endourologic procedures can be built on a common database. Computerised archives are significantly advantageous compared to traditional filing methods if they succeed in achieving the following aims: improvement of data registration, automatic description of the procedures, and fast data searches for clinical and scientific purposes.
APA, Harvard, Vancouver, ISO, and other styles
8

Kruszyński, Michał, and Ewa Szkic-Czech. "ROAD FEE CHARGING SYSTEMS IN THE MANAGEMENT OF TRANSPORT LOGISTICS." Logistics and Transport 42, no. 2 (2019): 101–8. http://dx.doi.org/10.26411/83-1734-2015-2-42-14-19.

Full text
Abstract:
The article discusses the logistic utility of registration data from automated measuring systems in the practice of operational management of national roads, expressways and motorways, as well as in shaping the quality of logistic services. The key problem is analysed using the example of the automatic toll collection system Viatoll, which operates on the indicated roads. The publication draws attention to the key informational character of the data registered, stored and offered by specialized, automated record systems. It points to the multifaceted, simultaneous usefulness of these data for organizing and managing road logistics processes, regardless of the chief, specialized function assigned to a given automated recording system involved in recording specific road incidents.
APA, Harvard, Vancouver, ISO, and other styles
9

Hjorth-Hansen, Anna Katarina, Malgorzata Izabela Magelssen, Garrett Newton Andersen, Torbjørn Graven, Jens Olaf Kleinau, Bodil Landstad, Lasse Løvstakken, Kyrre Skjetne, Ole Christian Mjølstad, and Havard Dalen. "Real-time automatic quantification of left ventricular function by hand-held ultrasound devices in patients with suspected heart failure: a feasibility study of a diagnostic test with data from general practitioners, nurses and cardiologists." BMJ Open 12, no. 10 (October 2022): e063793. http://dx.doi.org/10.1136/bmjopen-2022-063793.

Full text
Abstract:
Objectives: To evaluate the feasibility and reliability of hand-held ultrasound (HUD) examinations with real-time automatic decision-making software for ejection fraction (autoEF) and mitral annular plane systolic excursion (autoMAPSE) by novices (general practitioners), intermediate users (registered cardiac nurses) and expert users (cardiologists), respectively, compared to reference echocardiography by cardiologists in an outpatient cohort with suspected heart failure (HF).
Design: Feasibility study of a diagnostic test.
Setting and participants: 166 patients with suspected HF underwent HUD examinations with autoEF and autoMAPSE measurements by five novices, three intermediate-skilled users and five experts. HUD results were compared with a reference echocardiography by experts. A blinded cardiologist scored all HUD recordings with automatic measurements as (1) discard, (2) accept, but adjust the measurement or (3) accept the measurement as it is.
Primary outcome measure: The feasibility of automatic decision-making software for quantification of left ventricular function.
Results: The users were able to run autoEF and autoMAPSE in most patients. The feasibility of obtaining accepted images (score ≥2) with automatic measurements ranged from 50% to 91%. Feasibility was lowest for novices and highest for experts for both autoEF and autoMAPSE (p≤0.001). Large coefficients of variation and wide coefficients of repeatability indicate moderate agreement. The corresponding intraclass correlations (ICC) were moderate to good (ICC 0.51–0.85) for intra-rater and poor (ICC 0.35–0.51) for inter-rater analyses. The findings of modest to poor agreement and reliability were not explained by the experience of the users alone.
Conclusion: Novices, intermediate and expert users were able to record four-chamber views for automatic assessment of autoEF and autoMAPSE using HUD devices. The modest feasibility, agreement and reliability suggest this should not be implemented into clinical practice without further refinement and clinical evaluation.
Trial registration number: NCT03547076.
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Guanglei, Pengyu Wang, Yan Li, Tianqi Su, Xiuling Liu, and Hongrui Wang. "A Motion Artifact Reduction Method in Cerebrovascular DSA Sequence Images." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 08 (April 8, 2018): 1854022. http://dx.doi.org/10.1142/s0218001418540228.

Full text
Abstract:
Digital Subtraction Angiography (DSA) can be used for diagnosing pathologies of the vascular system including systemic vascular disease, coronary heart disease, arrhythmia, valvular disease and congenital heart disease. Previous studies have provided image enhancement algorithms for DSA images. However, these are not suitable for automated processing of huge amounts of data, and few algorithms address the corruption of image contrast after artifact removal. In this paper, we propose a fully automatic method for artifact removal in cerebrovascular DSA sequence images based on rigid registration and a guided filter. The guided filtering method is applied to fuse the original DSA image and the registered DSA image; the result preserves clear vessel boundaries from the original DSA image and removes the artifacts through the registration procedure. Experimental evaluation with 40 DSA sequence images shows that the proposed method increases the contrast index by 24.1%, improving the quality of DSA images compared with other image enhancement methods, and can be implemented as a fully automatic procedure.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Automates à registres"

1

Exibard, Léo. "Automatic synthesis of systems with data." Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0312.

Full text
Abstract:
We often interact with machines that react in real time to our actions (robots, websites etc.). They are modelled as reactive systems, which continuously interact with their environment. The goal of reactive synthesis is to automatically generate a system from the specification of its behaviour, so as to replace the error-prone low-level development phase with high-level specification design. In the classical setting, the set of signals available to the machine is assumed to be finite. However, this assumption is not realistic for modelling systems which process data from a possibly infinite set (e.g. a client id, a sensor value, etc.). The goal of this thesis is to extend reactive synthesis to the case of data words. We study a model that is well-suited to this more general setting, and examine the feasibility of its synthesis problem(s). We also explore the case of non-reactive systems, where the machine does not have to react immediately to its inputs.
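A toy register transducer can give a feel for what a reactive system over data words looks like; this hypothetical sketch is far simpler than the models studied in the thesis:

```python
# Toy reactive system over data words: store the first client id it
# receives and thereafter answer every request with that stored id,
# reacting immediately to each input. Purely illustrative.
def run(inputs):
    register = None
    outputs = []
    for d in inputs:
        if register is None:
            register = d          # store the first data value
        outputs.append(register)  # immediate reaction with the stored value
    return outputs

assert run([42, 7, 99]) == [42, 42, 42]
```

Because the client ids range over an infinite set, the machine needs a register; no fixed finite alphabet of signals suffices.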
APA, Harvard, Vancouver, ISO, and other styles
2

Kuriakose, R. B., and F. Aghdasi. "Automatic student attendance register using RFID." Interim : Interdisciplinary Journal, Vol 6, Issue 2: Central University of Technology Free State Bloemfontein, 2007. http://hdl.handle.net/11462/406.

Full text
Abstract:
Published Article
The purpose of this project is to investigate the application of Radio Frequency Identification (RFID) to automatic student attendance registration. The aim is that the students in any class can be recorded when they carry their student cards with them, without having to individually swipe the card or allocate special interaction time. Successful implementation of this proposal will facilitate such record keeping in a non-intrusive and efficient manner and will provide the platform for further research on the correlation between students' attendance and performance. Related research opportunities are identified regarding the range of the parameters involved, ensuring that individual identifications do not clash, and overcoming the challenges of interfacing with central record keeping.
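The intended record keeping can be sketched in a few lines; the ids, timestamps, and deduplication rule here are hypothetical illustrations, not the project's implementation:

```python
# Hypothetical sketch of non-intrusive attendance: repeated RFID reads
# of the same student card during one lecture collapse into a single
# attendance record. Ids and times are invented.
reads = [
    ("S1042", "09:01"),
    ("S1042", "09:01"),   # duplicate read while passing the reader
    ("S2310", "09:03"),
]

register = {}
for student_id, first_seen in reads:
    register.setdefault(student_id, first_seen)  # keep first detection only

print(sorted(register))  # ['S1042', 'S2310']
```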
APA, Harvard, Vancouver, ISO, and other styles
3

Rueda, Cebollero Guillem. "Learning Cache Replacement Policies using Register Automata." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-212677.

Full text
Abstract:
Processors are the basic unit of the computer that processes data stored in memory. Large memories are required to process large amounts of data, but not all data is required at the same time, and some data is required faster than other data. For this reason, memory is structured in a hierarchy, from smaller and faster to bigger and slower. The cache memory is one of the fastest elements and closest to the processor in the memory hierarchy. Processor design companies hide its characteristics, usually in confidential documentation that cannot be accessed by software developers. One of the most important characteristics kept secret in this documentation is the replacement policy. The most common replacement policies are known, but hardware designers can apply modifications for performance, cost or design reasons. This obfuscation of part of the processor forces many developers to be conservative about, for example, the runtime: if a task must always be executed within a certain time, the developer will always assume the case requiring the most time to execute (the "Worst Case Execution Time"), implying an underutilisation of the processor. This project focuses on a new method to represent and infer the replacement policy: modelling replacement policies as automata and using a learning framework called LearnLib to infer them. This is not the first project trying to characterize cache memories; a previous project is the basis for finding a more general model to define replacement policies. The results of LearnLib are modelled as an automaton. To test the effectiveness of this framework, different replacement policies are simulated and verified. To interface with real cache memories, a program called hwquery was developed; it issues real cache requests for use in LearnLib.
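The observable behaviour such a learner has to infer can be illustrated with a toy LRU cache simulator; hwquery and LearnLib's actual query protocol are the thesis's, and this stand-in only shows the hit/miss semantics a learner would observe:

```python
from collections import OrderedDict

class LRUCache:
    """Toy membership oracle: replay an access sequence and report
    hits/misses, the observations from which a learner could infer the
    replacement policy. A hypothetical stand-in for real hardware."""

    def __init__(self, ways=4):
        self.ways = ways
        self.lines = OrderedDict()

    def access(self, addr):
        hit = addr in self.lines
        if hit:
            self.lines.move_to_end(addr)        # refresh recency on a hit
        else:
            if len(self.lines) >= self.ways:
                self.lines.popitem(last=False)  # evict least recently used
            self.lines[addr] = True
        return hit

cache = LRUCache(ways=2)
trace = [cache.access(a) for a in ['x', 'y', 'x', 'z', 'y']]
print(trace)  # [False, False, True, False, False]: 'y' was evicted by 'z'
```

Different policies (FIFO, PLRU, random) produce different hit/miss traces on the same access sequence, which is exactly what distinguishes the learned automata.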
APA, Harvard, Vancouver, ISO, and other styles
4

Jouhet, Vianney. "Automated adaptation of Electronic Heath Record for secondary use in oncology." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0373/document.

Full text
Abstract:
With the increasing adoption of Electronic Health Records (EHR), the amount of data produced at the patient bedside is rapidly increasing. Secondary use is thereby an important field to investigate in order to facilitate research and evaluation. In this work we discuss issues related to data representation and semantics within EHRs that need to be addressed in order to facilitate secondary use of structured data in oncology. We propose and evaluate ontology-based methods for integrating heterogeneous diagnosis terminologies in oncology. We then extend the obtained model to enable representation of the tumoral disease and its links with diagnoses as recorded in EHRs. We then propose and implement a complete architecture combining a clinical data warehouse, a metadata registry, and semantic web technologies and standards. This architecture enables syntactic and semantic integration of a broad range of hospital information system observations. Our approach links data with external knowledge (an ontology) to provide a knowledge resource for an algorithm that identifies the tumoral disease based on the diagnoses recorded within EHRs. As it is based on ontology classes, the identification algorithm uses an integrated view of diagnoses, avoiding semantic heterogeneity. The proposed architecture, leading to an algorithm on top of an ontology, offers a flexible solution: adapting the ontology, for instance by modifying its granularity, provides a way of adapting the aggregation to specific needs.
APA, Harvard, Vancouver, ISO, and other styles
5

Elrod, JoAnn Broeckel, Raina Merchant, Mohamud Daya, Scott Youngquist, David Salcido, Terence Valenzuela, and Graham Nichol. "Public health surveillance of automated external defibrillators in the USA: protocol for the dynamic automated external defibrillator registry study." BMJ PUBLISHING GROUP, 2017. http://hdl.handle.net/10150/623946.

Full text
Abstract:
Introduction: Lay use of automated external defibrillators (AEDs) before the arrival of emergency medical services (EMS) providers on scene increases survival after out-of-hospital cardiac arrest (OHCA). However, AEDs placed in public locations may not be ready for use when needed. We describe a protocol for AED surveillance that tracks these devices through time and space to improve public health and survival, as well as to facilitate research. Methods and analysis: Included AEDs are installed in public locations for use by laypersons to treat patients with OHCA before the arrival of EMS providers on scene. Included cases of OHCA are patients evaluated by organised EMS personnel and treated for OHCA. Enrolment of 10 000 AEDs annually will yield precision of 0.4% in the estimate of readiness for use. Enrolment of 2500 patients annually will yield precision of 1.9% in the estimate of survival to hospital discharge. Recruitment began on 21 Mar 2014 and is ongoing. AEDs are found using multiple methods. Each AED is then tagged with a label bearing a unique two-dimensional (2D) matrix code; the 2D matrix code is recorded, and the location and status of the AED are tracked using a smartphone; these elements are automatically passed via the internet to a secure and confidential database in real time. Whenever the 2D matrix code is rescanned for any non-clinical or clinical use of an AED, the user is queried to answer a finite set of questions about the device status. The primary outcome of any clinical use of an AED is survival to hospital discharge. Results are summarised descriptively. Ethics and dissemination: These activities are conducted under a grant of authority for public health surveillance from the Food and Drug Administration. Results are provided periodically to participating sites and sponsors to improve public health and quality of care.
APA, Harvard, Vancouver, ISO, and other styles
6

Petersson, Håkan. "On information quality in primary health care registries /." Linköping : Univ, 2003. http://www.bibl.liu.se/liupubl/disp/disp2003/tek805s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hauck, Shahram. "Automated CtP Calibration for Offset Printing : Dot gain compensation, register variation and trapping evaluation." Doctoral thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119366.

Full text
Abstract:
Although offset printing has been and still is the most common printing technology for color print production, its output is subject to variations due to environmental and process parameters. It is therefore very important to control the print production quality criteria frequently in order to make the process predictable, reproducible and stable. One of the most important parts of a modern industrial offset workflow is the Computer to Plate (CtP) device, which exposes the printing plate. One of the most important quality criteria for printing is the dot gain level. Dot gain refers to an important phenomenon that causes the printed elements to appear larger than their reference size sent to the CtP. It is crucial to keep the dot gain level within an acceptable range, defined by ISO 12647-2 for offset printing. This is done by dot gain compensation methods in the Raster Image Processor (RIP). Dot gain compensation is, however, a complicated task in offset printing because of the huge number of parameters affecting dot gain. Another important quality criterion affecting print quality in offset is register variation, caused by misplacement of the printing sheet in the printing unit. Register variation causes tone value variations, gray balance variation and blurred image details. Trapping is another important print quality criterion that should be measured in an offset printing process. Trapping occurs when the inks in different printing units are printed wet-on-wet in a multi-color offset printing machine. Trapping affects the gray balance and makes the resulting colors of overlapped inks pale. In this dissertation three different dot gain compensation methods are discussed. The most accurate and efficient dot gain compensation method, which is non-iterative, has been tested, evaluated and applied in many offset printing workflows.
To further increase the accuracy of this method, an approach has also been proposed to effectively select the correction points of a RIP with a limited number of correction points. Correction points are the tone values that need to be set in the RIP to define a dot gain compensation curve. To keep the register variation within the allowed range, it has to be measured and quantified. Two novel models that determine the register variation value are proposed in this dissertation: one based on spectrophotometry and the other on densitometry. The proposed methods have been evaluated by comparison to the industrial image-processing-based register variation model, which is expensive and not available in most printing companies. The results of all models were comparable, verifying that the proposed models are good alternatives to the image-processing-based model. The existing models for determining trapping values are based on densitometric measurements and quantify the trapping effect by a percentage value. In this dissertation, a novel trapping model is proposed that quantifies the trapping effect by a color difference metric (ΔE), which is more useful and understandable for print machine operators. The comparison between the proposed trapping model and the existing models has shown very good correlation and verified that the proposed model has a bigger dynamic range. The proposed trapping model has also been extended to take into account the effects of ink penetration and gloss. The extended model has been tested using a high-gloss coated paper, and the results have shown that gloss and ink penetration can be neglected for this type of paper. An automated CtP calibration system for an offset printing workflow is also introduced and described in this dissertation. This method is a good solution for generating the huge number of dot gain compensation curves needed for accurate CtP calibration.
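The compensation idea can be sketched numerically: measure the printed tone for a few reference tones, then invert that transfer curve so the RIP requests the tone that actually prints at the target value. The measured values below are invented, and the thesis's non-iterative method is considerably more elaborate:

```python
# Sketch of dot gain compensation by curve inversion. `reference` is the
# tone sent to the plate (%), `measured` the tone actually printed (%);
# both series are invented for illustration.
reference = [0, 10, 25, 50, 75, 90, 100]
measured  = [0, 16, 36, 64, 85, 95, 100]

def compensate(target):
    """Return the tone to send to the RIP so that `target` is printed."""
    for (r0, m0), (r1, m1) in zip(zip(reference, measured),
                                  zip(reference[1:], measured[1:])):
        if m0 <= target <= m1:
            # linearly invert the measured curve on this segment
            return r0 + (target - m0) * (r1 - r0) / (m1 - m0)
    return target

print(round(compensate(50), 1))  # 37.5 -- request 37.5% to print 50%
```

In practice the correction points chosen in the RIP define exactly such a curve, which is why selecting them well matters for accuracy.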
APA, Harvard, Vancouver, ISO, and other styles
8

MANSOURI, NAZANIN. "AUTOMATED CORRECTNESS CONDITION GENERATION FOR FORMAL VERIFICATION OF SYNTHESIZED RTL DESIGNS." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin982064542.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tabani, Hamid. "Low-power architectures for automatic speech recognition." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/462249.

Full text
Abstract:
Automatic Speech Recognition (ASR) is one of the most important applications in the area of cognitive computing. Fast and accurate ASR is emerging as a key application for mobile and wearable devices. These devices, such as smartphones, have incorporated speech recognition as one of the main interfaces for user interaction. This trend towards voice-based user interfaces is likely to continue in the coming years, changing the way humans and machines interact. Effective speech recognition systems require real-time recognition, which is challenging for mobile devices due to the compute-intensive nature of the problem and the power constraints of such systems, and demands a great deal from CPU architectures. GPU architectures offer parallelization capabilities which can be exploited to increase the performance of speech recognition systems. However, efficiently utilizing the GPU resources for speech recognition is also challenging, as the software implementations exhibit irregular and unpredictable memory accesses and poor temporal locality. The purpose of this thesis is to study the characteristics of ASR systems running on low-power mobile devices in order to propose different techniques to improve performance and energy consumption. We propose several software-level optimizations driven by power/performance analysis. Unlike previous proposals that trade accuracy for performance by reducing the number of Gaussians evaluated, we maintain accuracy and improve performance by effectively using the underlying CPU microarchitecture. We use a refactored implementation of the GMM evaluation code to ameliorate the impact of branches. Then, we exploit the vector unit available on most modern CPUs to boost GMM computation, introducing a novel memory layout for storing the means and variances of the Gaussians in order to maximize the effectiveness of vectorization. 
In addition, we compute the Gaussians for multiple frames in parallel, significantly reducing memory bandwidth usage. Our experimental results show that the proposed optimizations provide 2.68x speedup over the baseline Pocketsphinx decoder on a high-end Intel Skylake CPU, while achieving 61% energy savings. On a modern ARM Cortex-A57 mobile processor our techniques improve performance by 1.85x, while providing 59% energy savings without any loss in the accuracy of the ASR system. Secondly, we propose a register renaming technique that exploits register reuse to reduce the pressure on the register file. Our technique leverages physical register sharing by introducing minor changes in the register map table and the issue queue. We evaluated our renaming technique on top of a modern out-of-order processor. The proposed scheme supports precise exceptions and we show that it results in 9.5% performance improvements for GMM evaluation. Our experimental results show that the proposed register renaming scheme provides 6% speedup on average for the SPEC2006 benchmarks. Alternatively, our renaming scheme achieves the same performance while reducing the number of physical registers by 10.5%. Finally, we propose a hardware accelerator for GMM evaluation that reduces the energy consumption by three orders of magnitude compared to solutions based on CPUs and GPUs. The proposed accelerator implements a lazy evaluation scheme where Gaussians are computed on demand, avoiding 50% of the computations. Furthermore, it employs a novel clustering scheme to reduce the size of the GMM parameters, which results in 8x memory bandwidth savings with a negligible impact on accuracy. Finally, it includes a novel memoization scheme that avoids 74.88% of floating-point operations. The end design provides a 164x speedup and 3532x energy reduction when compared with a highly-tuned implementation running on a modern mobile CPU. 
Compared to a state-of-the-art mobile GPU, the GMM accelerator achieves 5.89x speedup over a highly optimized CUDA implementation, while reducing energy by 241x.
Automatic Speech Recognition (ASR) is one of the most important applications in the area of cognitive computing. Fast and accurate ASR is becoming a key application for mobile and wearable devices. These devices, such as smartphones, have incorporated speech recognition as one of their main user interfaces. This trend towards voice-based user interfaces is likely to continue in the coming years, changing the way humans and machines interact. Effective speech recognition systems require real-time recognition, which is challenging for mobile devices due to the compute-intensive nature of the problem and the power constraints of such systems, and demands a great effort from CPU architectures. GPU architectures offer parallelization capabilities that can be exploited to increase the performance of speech recognition systems. However, efficiently using GPU resources for speech recognition is also challenging, since the software implementations exhibit irregular and unpredictable memory accesses and poor temporal locality. The purpose of this thesis is to study the characteristics of ASR systems running on low-power mobile devices in order to propose different techniques for improving performance and energy consumption. We propose several software-level optimizations driven by power and performance analysis. Unlike previous proposals that trade accuracy for performance by reducing the number of Gaussians evaluated, we maintain accuracy and improve performance by effectively using the underlying CPU microarchitecture. We use a refactored implementation of the GMM evaluation code to reduce the impact of branch instructions. 
We exploit the vector unit available in most modern CPUs to speed up GMM computation. In addition, we compute the Gaussians for multiple frames in parallel, which significantly reduces memory bandwidth usage. Our experimental results show that the proposed optimizations provide a 2.68x speedup over the Pocketsphinx decoder on a high-end Intel Skylake CPU, while achieving 61% energy savings. Secondly, we propose a register renaming technique that exploits the reuse of physical registers to reduce pressure on the register file. Our technique leverages physical register sharing by introducing changes in the register map table and the issue queue. We evaluated our renaming technique on a modern processor. The proposed scheme supports precise exceptions and yields performance improvements of 9.5% for GMM evaluation. Our experimental results show that the proposed register renaming scheme provides a 6% average speedup for SPEC2006. Finally, we propose an accelerator for GMM evaluation that reduces energy consumption by three orders of magnitude compared to CPU- and GPU-based solutions. The proposed accelerator implements a lazy evaluation scheme in which the GMMs are computed on demand, avoiding 50% of the computations. It also includes a memoization scheme that avoids 74.88% of the floating-point operations. The final design provides a 164x speedup and a 3532x energy reduction compared to a highly optimized implementation running on a modern mobile CPU. Compared to a state-of-the-art mobile GPU, the GMM accelerator achieves a 5.89x speedup over an optimized CUDA implementation, while reducing energy by 241x.
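The frame-batched GMM evaluation described in this abstract can be illustrated with NumPy broadcasting, which scores a batch of frames against all Gaussians at once instead of looping per frame. This is an illustrative sketch of the idea only (diagonal-covariance Gaussians, invented dimensions and data), not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D, G, F = 4, 3, 5          # feature dim, Gaussians in the mixture, frames per batch

means = rng.normal(size=(G, D))
variances = rng.uniform(0.5, 2.0, size=(G, D))
weights = np.full(G, 1.0 / G)
frames = rng.normal(size=(F, D))

def gmm_log_likelihoods(frames, means, variances, weights):
    """Score all frames against all Gaussians at once (the frame-batching
    idea): broadcasting replaces the per-frame inner loop."""
    diff = frames[:, None, :] - means[None, :, :]                 # (F, G, D)
    log_gauss = -0.5 * np.sum(diff * diff / variances
                              + np.log(2 * np.pi * variances), axis=2)  # (F, G)
    # Numerically stable weighted log-sum-exp over the Gaussians.
    m = log_gauss.max(axis=1, keepdims=True)
    return m.squeeze(1) + np.log(np.sum(np.exp(log_gauss - m) * weights, axis=1))

scores = gmm_log_likelihoods(frames, means, variances, weights)
print(scores.shape)  # (5,)
```

Batching the frames lets the means and variances be loaded from memory once per batch rather than once per frame, which is the source of the bandwidth savings the abstract mentions.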
APA, Harvard, Vancouver, ISO, and other styles
10

Klitkou, Gabriel. "Automatisk trädkartering i urban miljö : En fjärranalysbaserad arbetssättsutveckling." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-27301.

Full text
Abstract:
Digital urban tree registers serve many purposes and facilitate the administration, care and management of urban trees within a city or municipality. Currently, mapping of urban tree stands is carried out manually with methods which are both laborious and time-consuming. The aim of this study is to establish a way of operation based on the use of existing LiDAR data and orthophotos to automatically detect individual trees. By using the extensions LIDAR Analyst and Feature Analyst for ArcMap, a tree extraction was performed. This was carried out over the extent of the city district committee area of Östermalm in the city of Stockholm, Sweden. The results were compared to the city’s urban tree register and validated by calculating Precision and Recall. This showed that Feature Analyst generated the result with the highest accuracy. The derived trees were represented by polygons, which, despite their high accuracy, makes the result unsuitable for detecting individual tree positions. Even though the use of LIDAR Analyst resulted in a less precise tree mapping result, individual tree positions were detected satisfactorily, especially in areas with sparser, regular tree stands. The study concludes that the two tools complement each other and compensate for each other's shortcomings: Feature Analyst maps an acceptable tree coverage, while LIDAR Analyst more accurately identifies individual tree positions. Thus, a combination of the two results could be used for individual tree mapping.
Digital urban tree registers serve many purposes and make it easier for cities and municipalities to administer, maintain and manage their park and street trees. Today, mapping of urban tree stands is often done manually with methods that are both labor-intensive and time-consuming. This study aims to develop a workflow for automatically mapping individual trees with the help of existing LiDAR data and orthophotos. Using the ArcMap extensions LIDAR Analyst and Feature Analyst, a tree mapping was performed over the Östermalm city district committee area in the city of Stockholm. After checking against the city's tree database and validating the result by calculating Precision and Recall, it was found that Feature Analyst produced the best tree mapping result. These trees are represented by polygons, which means that the result, despite its good coverage, is not suitable for identifying individual tree positions. Although LIDAR Analyst produced a less precise mapping result, it yielded good position estimates for individual trees, mainly in areas with even, sparse tree stands. The conclusion is that the two tools compensate for each other's shortcomings: Feature Analyst provides acceptable tree coverage, while LIDAR Analyst better identifies individual tree positions. A combination of the two results could therefore be used for tree mapping purposes.
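The Precision and Recall validation used in the study reduces to simple ratios over matched and unmatched detections when the extracted trees are compared against the register. A small sketch with invented counts:

```python
def precision_recall(n_true_positive, n_false_positive, n_false_negative):
    """Precision = TP / (TP + FP): how many detections are real trees.
    Recall    = TP / (TP + FN): how many register trees were found.
    Counts would come from matching detected trees against the city's
    tree register; the numbers below are made up for illustration."""
    precision = n_true_positive / (n_true_positive + n_false_positive)
    recall = n_true_positive / (n_true_positive + n_false_negative)
    return precision, recall

# e.g. 820 detections match the register, 180 detections are spurious,
# and 140 register trees were missed entirely
p, r = precision_recall(820, 180, 140)
print(f"precision={p:.3f} recall={r:.3f}")  # precision=0.820 recall=0.854
```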
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Automates à registres"

1

United States. Congress. Office of Technology Assessment., ed. Automated record checks of firearm purchasers: Issues and options. Washington, D.C: Congress of the United States, Office of Technology Assessment, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ruiz, Antonio Lloris, Luis Parrilla Roure, Encarnación Castillo Morales, and Antonio García Ríos. Algebraic Circuits. Springer, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Roure, Luis Parrilla, Antonio Lloris Lloris Ruiz, Encarnación Castillo Morales, and Antonio García Ríos. Algebraic Circuits. Springer, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ruiz, Antonio Lloris, Luis Parrilla Roure, Encarnación Castillo Morales, and Antonio García Ríos. Algebraic Circuits. Springer Berlin / Heidelberg, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ślusarski, Marek. Metody i modele oceny jakości danych przestrzennych. Publishing House of the University of Agriculture in Krakow, 2017. http://dx.doi.org/10.15576/978-83-66602-30-4.

Full text
Abstract:
The quality of data collected in official spatial databases is crucial in making strategic decisions as well as in the implementation of planning and design works. Awareness of the level of the quality of these data is also important for individual users of official spatial data. The author presents methods and models of description and evaluation of the quality of spatial data collected in public registers. Data describing the space in the highest degree of detail, which are collected in three databases: land and buildings registry (EGiB), geodetic registry of the land infrastructure network (GESUT) and in database of topographic objects (BDOT500) were analyzed. The results of the research concerned selected aspects of activities in terms of the spatial data quality. These activities include: the assessment of the accuracy of data collected in official spatial databases; determination of the uncertainty of the area of registry parcels, analysis of the risk of damage to the underground infrastructure network due to the quality of spatial data, construction of the quality model of data collected in official databases and visualization of the phenomenon of uncertainty in spatial data. The evaluation of the accuracy of data collected in official, large-scale spatial databases was based on a representative sample of data. The test sample was a set of deviations of coordinates with three variables dX, dY and Dl – deviations from the X and Y coordinates and the length of the point offset vector of the test sample in relation to its position recognized as a faultless. The compatibility of empirical data accuracy distributions with models (theoretical distributions of random variables) was investigated and also the accuracy of the spatial data has been assessed by means of the methods resistant to the outliers. 
In the process of determining the accuracy of spatial data collected in public registers, the author's own solution was used: the resistant method of relative frequency. Weight functions were proposed which modify, to varying degrees, the sizes of the vectors Dl, i.e. the lengths of the offset vectors of the test-sample points relative to their positions recognized as faultless. Regarding the uncertainty of the estimation of registry parcel areas, the impact of the errors of the geodetic network points (reference points and points of the higher-class networks) was determined, as was the effect of the correlation between the coordinates of the same point on the accuracy of the determined plot area. The scope of the correction of plot areas in the EGiB database was determined, calculated on the basis of re-measurements performed using techniques equivalent in terms of accuracy. The analysis of the risk of damage to the underground infrastructure network due to low-quality spatial data is another research topic presented in the paper. Three main factors influencing the value of this risk have been identified: the incompleteness of spatial data sets and the insufficient accuracy of the determination of the horizontal and vertical positions of underground infrastructure. A method for estimating the project risk, both quantitative and qualitative, has been developed, and the author's own risk estimation technique based on fuzzy logic was proposed. Maps (2D and 3D) of the risk of damage to the underground infrastructure network were developed in the form of large-scale thematic maps, presenting the design risk in qualitative and quantitative form. The data quality model is a set of rules used to describe the quality of these data sets. The proposed model defines a standardized approach to assessing and reporting the quality of the EGiB, GESUT and BDOT500 spatial databases. 
Quantitative and qualitative rules (automatic, office and field) for the control of data sets were defined. The minimum sample size and the admissible number of nonconformities in random samples were determined. The data quality elements were described using the following descriptors: range, measure, result, and type and unit of value. Data quality studies were performed according to the users' needs. The values of impact weights were determined by the analytic hierarchy process (AHP) method. The harmonization of the conceptual models of the EGiB, GESUT and BDOT500 databases with the BDOT10k database was also analysed. It was found that the downloading and supplying of information from the analysed registers in the BDOT10k creation and update processes is limited. Cartographic visualization techniques are an effective approach to providing users of spatial data sets with information concerning data uncertainty. Based on the author's own experience and research on examining the quality of official spatial databases, a set of methods for visualizing the uncertainty of the EGiB, GESUT and BDOT500 databases was defined. This set includes visualization techniques designed to present three types of uncertainty: location, attribute values and time. Uncertainty of position was defined (for surface, line and point objects) using several (three to five) visual variables. Uncertainty of attribute values and time uncertainty, describing for example the completeness or timeliness of sets, are presented by means of three graphical variables. The research problems presented in the paper are of cognitive and applied importance. They indicate the possibility of effective evaluation of the quality of spatial data collected in public registers and may be an important element of an expert system.
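The test-sample variables dX, dY and Dl described above, and the contrast between a resistant (outlier-tolerant) accuracy estimate and an ordinary mean, can be illustrated briefly. The deviation values are invented, and the plain median below is only a stand-in for the author's more elaborate resistant method of relative frequency:

```python
import math
import statistics

# Deviations dX, dY of check points against their reference ("faultless")
# positions, in metres (illustrative values, not the book's data).
dX = [0.02, -0.05, 0.11, 0.04, -0.30, 0.01]
dY = [-0.03, 0.06, -0.08, 0.02, 0.25, -0.02]

# Dl: length of each point's offset vector, as defined in the text.
Dl = [math.hypot(x, y) for x, y in zip(dX, dY)]

# The median absolute offset is barely moved by the one gross error
# (the 5th point), unlike the mean; that is what "resistant" buys you.
print(f"mean offset   = {statistics.mean(Dl):.3f} m")
print(f"median offset = {statistics.median(Dl):.3f} m")
```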
APA, Harvard, Vancouver, ISO, and other styles
6

Unger, Brigitte, Lucia Rossel, and Joras Ferwerda, eds. Combating Fiscal Fraud and Empowering Regulators. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198854722.001.0001.

Full text
Abstract:
This book showcases a multidisciplinary set of work on the impact of regulatory innovation on the scale and nature of tax evasion, tax avoidance, and money laundering. We consider the international tax environment an ecosystem undergoing a period of rapid change as shocks such as the financial crisis, new business forms, scandals and novel regulatory instruments impact upon it. This ecosystem evolves as jurisdictions, taxpayers, and experts react. Our analysis focuses mainly on Europe and five new regulations: Automatic Exchange of Information, which requires that accounts held by foreigners are reported to authorities in the account holder’s country of residence; the OECD’s Base Erosion and Profit Shifting initiative and Country by Country Reporting, which attempt to reduce the opportunity spaces in which corporations can limit tax payments and utilize low or no tax jurisdictions; the Legal Entity Identifier which provides a 20-digit identification code for all individual, corporate or government entities conducting financial transactions; and the Fourth and Fifth Anti-Money Laundering Directives, that criminalize tax crimes and prescribe that the Ultimate Beneficial Owner of a company is registered. Working from accounting, economic, political science, and legal perspectives, the analysis in this book provides an assessment of the reforms and policy recommendations that will reinforce the international tax system. The collection also flags the dangers posed by emerging tax loopholes provided by new business models and in the form of freeports and golden passports. Our central message is that inequality can and has to be reduced substantially, and we can achieve this through an improved international tax system.
APA, Harvard, Vancouver, ISO, and other styles
7

Bucy, Erik P., and Patrick Stewart. The Personalization of Campaigns: Nonverbal Cues in Presidential Debates. Oxford University Press, 2018. http://dx.doi.org/10.1093/acrefore/9780190228637.013.52.

Full text
Abstract:
Nonverbal cues are important elements of persuasive communication whose influence in political debates is receiving renewed attention. Recent advances in political debate research have been driven by biologically grounded explanations of behavior that draw on evolutionary theory and view televised debates as contests for social dominance. The application of biobehavioral coding to televised presidential debates opens new vistas for investigating this time-honored campaign tradition by introducing a systematic and readily replicated analytical framework for documenting the unspoken signals that are a continuous feature of competitive candidate encounters. As research utilizing biobehavioral measures of presidential debates and other political communication progresses, studies are becoming increasingly characterized by the use of multiple methodologies and the merging of disparate data into combined systems of coding that support predictive modeling. Key elements of nonverbal persuasion include candidate appearance, communication style and behavior, as well as gender dynamics that regulate candidate interactions. Together, the use of facial expressions, voice tone, and bodily gestures form uniquely identifiable display repertoires that candidates perform within televised debate settings. Also at play are social and political norms that govern candidate encounters. From an evaluative standpoint, the visual equivalent of a verbal gaffe is the commission of a nonverbal expectancy violation, which draws viewer attention and interferes with information intake. 
Through second screens, viewers are able to register their reactions to candidate behavior in real time, and merging biobehavioral and social media approaches to debate effects is showing how such activity can be used as an outcome measure to assess the efficacy of candidate nonverbal communication during televised presidential debates. Methodological approaches employed to investigate nonverbal cues in presidential debates have expanded well beyond the time-honored technique of content analysis to include lab experiments, focus groups, continuous response measurement, eye tracking, vocalic analysis, biobehavioral coding, and use of the Facial Action Coding System to document the muscle movements that comprise leader expressions. Given the tradeoffs and myriad considerations involved in analyzing nonverbal cues, critical issues in measurement and methodology must be addressed when conducting research in this evolving area. With automated coding of nonverbal behavior just around the corner, future research should be designed to take advantage of the growing number of methodological advances in this rapidly evolving area of political communication research.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Automates à registres"

1

Gomes da Silva, Paula, Anne-Laure Beck, Jara Martinez Sanchez, Raúl Medina Santanmaria, Martin Jones, and Amine Taji. "Advances on coastal erosion assessment from satellite earth observations: exploring the use of Sentinel products along with very high resolution sensors." In Proceedings e report, 412–21. Florence: Firenze University Press, 2020. http://dx.doi.org/10.36253/978-88-5518-147-1.41.

Full text
Abstract:
This work proposes the use of automatically co-registered satellite images to obtain long, high-frequency and highly accurate shoreline time series. High-resolution images are used to co-register Landsat and Sentinel-2 images. 90% of the co-registered images presented vertical and horizontal shifts lower than 3 m. Satellite-derived shorelines presented errors lower than the missions' precision. A discussion is presented on the applicability of those shorelines through an application to the Tordera Delta (Spain).
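Co-registration of this kind rests on estimating the shift between an image and its reference. A minimal sketch using integer-pixel phase correlation on synthetic data; operational pipelines, including the one described here, refine this to sub-pixel accuracy and use matched high-resolution imagery rather than a rolled copy:

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
shifted = np.roll(ref, shift=(2, -3), axis=(0, 1))  # simulate a misregistered scene

def estimate_shift(a, b):
    """Integer-pixel shift between two images via phase correlation:
    normalize the cross-power spectrum, inverse-transform, and locate
    the correlation peak."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(cross / np.abs(cross)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

print(estimate_shift(shifted, ref))  # (2, -3)
```

Checking that the recovered shift is below a tolerance (e.g. 3 m in ground units) is exactly the kind of quality screening the abstract reports for 90% of the images.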
APA, Harvard, Vancouver, ISO, and other styles
2

Koho, Mikko, Petri Leskinen, and Eero Hyvönen. "Integrating Historical Person Registers as Linked Open Data in the WarSampo Knowledge Graph." In Semantic Systems. In the Era of Knowledge Graphs, 118–26. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59833-4_8.

Full text
Abstract:
Semantic data integration from heterogeneous, distributed data silos enables Digital Humanities research and application development employing a larger, mutually enriched and interlinked knowledge graph. However, data integration is challenging, involving aligning the data models and reconciling the concepts and named entities, such as persons and places. This paper presents a record linkage process to reconcile person references in different military historical person registers with structured metadata. The information about persons is aggregated into a single knowledge graph. The process was applied to reconcile three person registers of the popular semantic portal “WarSampo – Finnish World War 2 on the Semantic Web”. The registers contain detailed information about some 100 000 people and are individually maintained by domain experts. Thus, the integration process needs to be automatic and adaptable to changes in the registers. An evaluation of the record linkage results is promising and provides some insight into military person register reconciliation in general.
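A record-linkage process of this kind can be sketched in miniature: block candidate pairs on a structured field, then score name similarity and accept pairs above a threshold. The record fields, names and threshold below are invented for illustration; WarSampo's actual reconciliation uses much richer military-historical metadata:

```python
from difflib import SequenceMatcher

# Toy person records from two hypothetical registers.
register_a = [
    {"id": "a1", "name": "Virtanen, Matti", "born": 1910},
    {"id": "a2", "name": "Korhonen, Juho", "born": 1905},
]
register_b = [
    {"id": "b7", "name": "Wirtanen, Matti", "born": 1910},   # spelling variant
    {"id": "b9", "name": "Nieminen, Aino", "born": 1921},
]

def link(reg_a, reg_b, threshold=0.85):
    """Block on birth year, then accept pairs whose normalized name
    similarity exceeds the threshold; returns matched id pairs."""
    links = []
    for a in reg_a:
        for b in reg_b:
            if a["born"] != b["born"]:
                continue  # blocking: only compare candidates in the same year
            sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
            if sim >= threshold:
                links.append((a["id"], b["id"]))
    return links

print(link(register_a, register_b))  # [('a1', 'b7')]
```

Blocking keeps the comparison count manageable over ~100 000 people, and the threshold trades precision against recall, which is what the paper's evaluation measures.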
APA, Harvard, Vancouver, ISO, and other styles
3

Exibard, Léo, Emmanuel Filiot, and Pierre-Alain Reynier. "On Computability of Data Word Functions Defined by Transducers." In Lecture Notes in Computer Science, 217–36. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45231-5_12.

Full text
Abstract:
In this paper, we investigate the problem of synthesizing computable functions of infinite words over an infinite alphabet (data ω-words). The notion of computability is defined through Turing machines with infinite inputs which can produce the corresponding infinite outputs in the limit. We use non-deterministic transducers equipped with registers, an extension of register automata with outputs, to specify functions. Such transducers may not define functions but more generally relations of data ω-words, and we show that it is PSpace-complete to test whether a given transducer defines a function. Then, given a function defined by some register transducer, we show that it is decidable (again, PSpace-complete) whether such a function is computable. As for the known finite alphabet case, we show that computability and continuity coincide for functions defined by register transducers, and show how to decide continuity. We also define a subclass for which those problems are PTime.
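The register automata these transducers extend can be illustrated with a minimal sketch: a machine over an infinite alphabet with one register that stores a data value and finitely many control states, where transitions may test equality against the register. This toy acceptor (accepting finite words in which the first data value reappears) is an illustration of the model only, not a construction from the paper:

```python
def runs_with_one_register(word):
    """A one-register automaton over an infinite alphabet: store the
    first data value seen, then accept iff it reappears later.
    No finite automaton can do this, since the alphabet is unbounded."""
    register = None
    state = "init"
    for letter in word:
        if state == "init":
            register = letter            # store the fresh value
            state = "seen_first"
        elif state == "seen_first" and letter == register:
            state = "accept"             # equality test against the register
    return state == "accept"

print(runs_with_one_register(["x", "y", "z", "x"]))  # True
print(runs_with_one_register([4, 7, 9]))             # False
```

A register transducer additionally emits output letters (possibly register contents) on each transition, which is what makes it specify relations of data words rather than just languages.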
APA, Harvard, Vancouver, ISO, and other styles
4

Tzevelekos, Nikos, and Radu Grigore. "History-Register Automata." In Lecture Notes in Computer Science, 17–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37075-5_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

D’Antoni, Loris, Tiago Ferreira, Matteo Sammartino, and Alexandra Silva. "Symbolic Register Automata." In Computer Aided Verification, 3–21. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25540-4_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gao, Ziyuan, Sanjay Jain, Zeyong Li, Ammar Fathin Sabili, and Frank Stephan. "Alternating Automatic Register Machines." In Lecture Notes in Computer Science, 195–211. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17715-6_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kjos-Hanssen, Bjørn. "Shift Registers Fool Finite Automata." In Logic, Language, Information, and Computation, 170–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2017. http://dx.doi.org/10.1007/978-3-662-55386-2_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Khalimov, Ayrat, Benedikt Maderbacher, and Roderick Bloem. "Bounded Synthesis of Register Transducers." In Automated Technology for Verification and Analysis, 494–510. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01090-4_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Howar, Falk, Bernhard Steffen, Bengt Jonsson, and Sofia Cassel. "Inferring Canonical Register Automata." In Lecture Notes in Computer Science, 251–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27940-9_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cassel, Sofia, Falk Howar, Bengt Jonsson, Maik Merten, and Bernhard Steffen. "A Succinct Canonical Register Automaton Model." In Automated Technology for Verification and Analysis, 366–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24372-1_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Automates à registres"

1

Borges, Fernando Elias Melo, Danton Diego Ferreira, and Antônio Carlos de Sousa Couto Júnior. "Classificação e Interpretação de dados do Cadastro Ambiental Rural utilizando técnicas de Aprendizagem de Máquina." In Congresso Brasileiro de Inteligência Computacional. SBIC, 2021. http://dx.doi.org/10.21528/cbic2021-108.

Full text
Abstract:
The Rural Environmental Registry (CAR) is a mandatory public electronic registry for all rural properties in the Brazilian territory; it integrates environmental information about the properties, and supports both their monitoring and the fight against deforestation. However, a large number of registrations are carried out erroneously, generating inconsistent data that must be canceled or corrected. Automatic verification of these records is important to improve their processing. This paper proposes an automatic classification method to approve or cancel CAR registers, together with an interpretation of the classifications performed. Four machine-learning-based classifiers were tested and the results evaluated. The best-performing model was used to interpret the classification using the Local Interpretable Model-agnostic Explanations (LIME) algorithm. The results showed the potential of the method for future real-world applications.
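LIME's core idea (fit a locally weighted linear surrogate to a black-box model around one instance, and read feature importances off its coefficients) can be sketched without the library itself. The model, features and kernel width below are hypothetical stand-ins, not the CAR classifiers or the LIME implementation used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    """Stand-in for a trained classifier (hypothetical): predicted
    probability driven mostly by feature 0, slightly by feature 1,
    and not at all by feature 2."""
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] + 0.5 * X[:, 1])))

def lime_style_explanation(x0, n_samples=500, scale=0.3):
    """LIME in miniature: sample perturbations around x0, query the
    model, and fit a proximity-weighted linear surrogate whose
    coefficients rank the features locally."""
    X = x0 + rng.normal(scale=scale, size=(n_samples, x0.size))
    y = black_box(X)
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2))  # proximity kernel
    A = np.hstack([np.ones((n_samples, 1)), X])                    # intercept column
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)     # weighted least squares
    return coef[1:]  # per-feature local weights (intercept dropped)

importances = lime_style_explanation(np.array([0.2, -0.1, 0.4]))
print(np.argmax(np.abs(importances)))  # feature 0 dominates locally
```

For tabular CAR-like data, the real LIME library additionally discretizes features and samples from the training distribution, but the surrogate-fitting step is the same idea.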
APA, Harvard, Vancouver, ISO, and other styles
2

Tzevelekos, Nikos. "Fresh-register automata." In the 38th annual ACM SIGPLAN-SIGACT symposium. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/1926385.1926420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kim, Kyungnam, Yuri Owechko, Arturo Flores, and Dmitriy Korchev. "Multisensor ISR in geo-registered contextual visual dataspace (CVD)." In Automatic Target Recognition XXI. SPIE, 2011. http://dx.doi.org/10.1117/12.887618.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Agarwal, Nainesh, and Nikitas Dimopoulos. "Towards Automated Power Gating of Registers using CoDeL." In 2007 IEEE International Symposium on Circuits and Systems. IEEE, 2007. http://dx.doi.org/10.1109/iscas.2007.378831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fu, Jie, and H. G. Tanner. "Optimal planning on register automata." In 2012 American Control Conference - ACC 2012. IEEE, 2012. http://dx.doi.org/10.1109/acc.2012.6315508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6. Murawski, Andrzej S., Steven J. Ramsay, and Nikos Tzevelekos. "Bisimilarity in Fresh-Register Automata." In 2015 30th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS). IEEE, 2015. http://dx.doi.org/10.1109/lics.2015.24.

7. Chen, Yu-Fang, Ondrej Lengal, Tony Tan, and Zhilin Wu. "Register automata with linear arithmetic." In 2017 32nd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS). IEEE, 2017. http://dx.doi.org/10.1109/lics.2017.8005111.

8. Segoufin, Luc, and Victor Vianu. "Projection Views of Register Automata." In SIGMOD/PODS '20: International Conference on Management of Data. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3375395.3387651.

9. Touili, Tayssir. "Register Automata for Malware Specification." In ARES 2022: The 17th International Conference on Availability, Reliability and Security. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3538969.3544442.

10. Salman, K. "Feedback Shift Registers as Cellular Automata Boundary Conditions." In First International Conference on Computational Science and Engineering. Academy & Industry Research Collaboration Center (AIRCC), 2013. http://dx.doi.org/10.5121/csit.2013.3302.


Reports on the topic "Automates à registres"

1. Perez, Jorge E., and Vijay K. Madisetti. Integrated Automatic Target Detection from Pixel-Registered Visual-Thermal-Range Images. Fort Belvoir, VA: Defense Technical Information Center, January 1998. http://dx.doi.org/10.21236/ada358524.

2. Neeley, Aimee, Stace E. Beaulieu, Chris Proctor, Ivona Cetinić, Joe Futrelle, Inia Soto Ramos, Heidi M. Sosik, et al. Standards and practices for reporting plankton and other particle observations from images. Woods Hole Oceanographic Institution, July 2021. http://dx.doi.org/10.1575/1912/27377.

Abstract:
This technical manual guides the user through the process of creating a data table for submitting taxonomic and morphological information for plankton and other particles from images to a repository. Guidance is provided on producing the documentation that should accompany a submission of plankton and other particle data to a repository, on describing data collection and processing techniques, and on creating a data file. Field names include scientificName, representing the lowest level of taxonomic classification (e.g., genus if not certain of species, family if not certain of genus), and scientificNameID, the unique identifier from a reference database such as the World Register of Marine Species or AlgaeBase. The data table described here includes the field names associatedMedia, scientificName/scientificNameID for both automated and manual identification, biovolume, area_cross_section, length_representation, and width_representation. Additional steps instruct the user on how to format data for submission to the Ocean Biodiversity Information System (OBIS). Examples of documentation and data files are provided for the user to follow. The documentation requirements and data table format are approved by both NASA's SeaWiFS Bio-optical Archive and Storage System (SeaBASS) and the National Science Foundation's Biological and Chemical Oceanography Data Management Office (BCO-DMO).
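The column layout described in the abstract can be illustrated with a short sketch. Only the column names come from the manual's description; every value in the example row (image path, taxon, LSID) is invented for illustration:

```python
import csv
import io

# Column names taken from the abstract's description of the data table;
# the example row values below are invented for illustration only.
FIELDS = [
    "associatedMedia", "scientificName", "scientificNameID",
    "biovolume", "area_cross_section",
    "length_representation", "width_representation",
]

row = {
    "associatedMedia": "images/sample_0001.png",   # hypothetical image path
    "scientificName": "Thalassiosira",             # genus, when species is uncertain
    # LSID shown in WoRMS format; the numeric identifier here is invented.
    "scientificNameID": "urn:lsid:marinespecies.org:taxname:123456",
    "biovolume": 1520.4,
    "area_cross_section": 310.7,
    "length_representation": 42.0,
    "width_representation": 36.5,
}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow(row)
print(buf.getvalue().splitlines()[0])  # header line listing all fields
```

A real submission would carry one such row per detected particle, with scientificNameID resolved against the World Register of Marine Species or AlgaeBase rather than invented.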
3. McCarthy, Sean T., Aneesa Motala, Emily Lawson, and Paul G. Shekelle. Prevention in Adults of Transmission of Infection With Multidrug-Resistant Organisms: Rapid Review. Agency for Healthcare Research and Quality (AHRQ), April 2024. http://dx.doi.org/10.23970/ahrqepc_mhs4mdro.

Abstract:
Objectives. This rapid review summarizes the literature on patient safety practices intended to prevent and control the transmission of multidrug-resistant organisms (MDROs).

Methods. We followed the rapid review processes of the Agency for Healthcare Research and Quality Evidence-based Practice Center Program. We searched PubMed for eligible systematic reviews and primary studies published from 2011 to May 2023, supplemented by targeted gray literature searches. We included literature that addressed patient safety practices intended to prevent or control transmission of MDROs, implemented in hospitals and nursing homes, and that reported clinical outcomes of infection or colonization with MDROs as well as unintended consequences such as mental health effects and noninfectious adverse healthcare-associated outcomes. The protocol for the review is registered in PROSPERO (CRD42023444973).

Findings. Our search retrieved 714 citations, of which 42 articles were eligible for review. The systematic reviews, which drew primarily on observational studies, covered a wide variety of infection prevention and control (IPC) practices, including universal gloving, contact isolation precautions, adverse effects of patient isolation, patient and/or staff cohorting, room decontamination, patient decolonization, IPC practices specific to nursing homes, features of organizational culture that facilitate implementation of IPC practices, and the role of dedicated IPC staff. While the systematic reviews were of good or fair quality, the strength of evidence for their conclusions was always low or very low, owing to the reliance on observational studies. Decolonization strategies showed some benefit in certain populations, such as nursing home patients and patients being discharged from acute care hospitalization. Universal gloving showed a small benefit in the intensive care unit. Contact isolation targeting patients colonized or infected with MDROs showed mixed effects in the literature and may be associated with mental health and noninfectious (e.g., falls and pressure ulcers) adverse effects when compared with standard precautions, though this finding is based on before/after studies in which such precautions were ceased. There was no significant evidence of benefit for patient cohorting (except possibly in outbreak settings), automated room decontamination, cleaning feedback protocols, or IPC practices in long-term care settings. Infection rates may improve when IPC practices are implemented in the context of certain logistical and staffing characteristics, including a supportive organizational culture, though again the strength of evidence was low. Dedicated infection prevention staff likely improve compliance with other patient safety practices, though there is little evidence of their downstream impact on infection rates.

Conclusions. The selected infection prevention and control interventions had mixed evidence for reducing healthcare-associated infection and colonization by multidrug-resistant organisms. Where these practices did show benefit, the evidence often applied only to certain subpopulations (such as intensive care unit patients), and the overall strength of evidence was low.