Dissertations on the topic "004.891.3:004.3"
Browse the top 44 dissertations for research on the topic "004.891.3:004.3".
Архирей, Марина Володимирівна, та Аліна Олександрівна Христенко. "АВТОМАТИЗОВАНЕ РОБОЧЕ МІСЦЕ ЛІКАРЯ". Thesis, ПОЛІТ. Сучасні проблеми науки. Інформаційно-діагностичні системи: тези доповідей ХV міжнародної науково-практичної конференції молодих учених і студентів, 2015. https://er.nau.edu.ua/handle/NAU/47902.
Seng, Shay Ping. "Adaptive flexible instruction processors." Thesis, Imperial College London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.406853.
Li, Bo. "Detecting TCP-based applications using packet size distributions." Thesis, Loughborough University, 2008. https://dspace.lboro.ac.uk/2134/15329.
Huang, Wei. "Scheduling of batch processing plants : constraint model and computer scheduling system." Thesis, Loughborough University, 2003. https://dspace.lboro.ac.uk/2134/7592.
Низовець, Олександр Вікторович. "Метод створення інтерактивно-аналітичної системи для контролю стану здоров’я". Thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2014. http://elartu.tntu.edu.ua/handle/123456789/3239.
Master's thesis in specialty 8.05090204 – biotechnical and medical apparatus and systems, Ternopil Ivan Puluj National Technical University, Ternopil, 2014
Sansrimahachai, Watsawee. "Tracing fine-grained provenance in stream processing systems using a reverse mapping method." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/337675/.
Повний текст джерелаGrosser, Tobias. "A decoupled approach to high-level loop optimization : tile shapes, polyhedral building blocks and low-level compilers." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066270/document.
Despite decades of research on high-level loop optimizations and their successful integration in production C/C++/FORTRAN compilers, most compiler-internal loop transformation systems only partially address the challenges posed by the increased complexity and diversity of today's hardware. Especially when exploiting domain-specific knowledge to obtain optimal code for complex targets such as accelerators or many-core processors, many existing loop optimization frameworks have difficulties exploiting this hardware. As a result, new domain-specific optimization schemes are developed independently without taking advantage of existing loop optimization technology, which results both in missed optimization opportunities and in low portability of these optimization schemes to different compilers. One area where we see the need for better optimizations is iterative stencil computations, an important computational problem that is regularly optimized by specialized, domain-specific compilers, but where generating efficient code is difficult.
In this work we present new domain-specific optimization strategies that enable the generation of high-performance GPU code for stencil computations. Unlike most existing domain-specific compilers, we decouple the high-level optimization strategy from the low-level optimization and specialization necessary to yield optimal performance. As the high-level optimization scheme we present a new formulation of split tiling, a tiling technique that ensures reuse along the time dimension as well as balanced coarse-grained parallelism without the need for redundant computations. Using split tiling we show how to integrate a domain-specific optimization into a general-purpose C-to-CUDA translator, an approach that allows us to reuse existing non-domain-specific optimizations. We then evolve split tiling into a hybrid hexagonal/parallelogram tiling scheme that allows us to generate code that addresses GPU-specific concerns even better. To conclude our work on tiling schemes we investigate the relation between diamond and hexagonal tiling. Starting with a detailed analysis of diamond tiling, including the requirements it poses on tile sizes and wavefront coefficients, we provide a unified formulation of hexagonal and diamond tiling which enables us to perform hexagonal tiling for two-dimensional problems (one time, one space) in the context of a general-purpose optimizer such as Pluto. Finally, we use this formulation to evaluate hexagonal and diamond tiling in terms of compute-to-communication and compute-to-synchronization ratios.
In the second part of this work, we discuss our contributions to important infrastructure components, our building blocks, that enable us to decouple our high-level optimizations from both the necessary code generation optimizations and the compiler infrastructure we apply the optimizations to. We start by presenting a new polyhedral extractor that obtains a polyhedral representation from a piece of C code, widening the supported C code to exploit the full generality of Presburger arithmetic and taking special care to model language semantics even in the presence of defined integer wrapping. As a next step, we present a new polyhedral AST generation approach, which extends AST generation beyond classical control flow generation by allowing the generation of user-provided mappings. Through a fine-grained option mechanism, we give the user fine-grained control over AST generator decisions and add extensive support for specialization, e.g., with a new generalized form of polyhedral unrolling. To facilitate the implementation of polyhedral transformations, we present a new schedule representation, schedule trees, which makes the inherent tree structure of schedules explicit to simplify the work with complex polyhedral schedules. The last part of this work looks at our contributions to low-level compilers.
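As a side note to the abstract above: the split and hexagonal tiling schemes it describes are elaborate variants of classic loop tiling. The sketch below shows only that classic rectangular tiling, with illustrative names that are not taken from the thesis, but it demonstrates the basic property all such schemes build on: tiling is a pure reordering of the iteration space aimed at locality and coarse-grained parallelism.

```python
# Plain rectangular loop tiling, far simpler than split/hexagonal tiling,
# shown only to illustrate the underlying transformation. Names are ours.

def tiled_indices(n, tile):
    """Yield the iterations of `for i in range(n): for j in range(n)`,
    restructured tile by tile."""
    for ti in range(0, n, tile):
        for tj in range(0, n, tile):
            for i in range(ti, min(ti + tile, n)):
                for j in range(tj, min(tj + tile, n)):
                    yield (i, j)

# Tiling reorders iterations but executes each exactly once.
assert sorted(tiled_indices(5, 2)) == [(i, j) for i in range(5) for j in range(5)]
```

The correctness condition checked by the final assertion (same iteration set, each executed once) is exactly what the polyhedral machinery described in the abstract verifies formally for far more complex schedules.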
Федорчук, Арсен Олександрович, та Arsen Fedorchuk. "Комп’ютеризована система організації доступу в приміщення на основі RFID-технології". Bachelor's thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2021. http://elartu.tntu.edu.ua/handle/lib/35562.
The bachelor's thesis offers an engineering solution for building a computerized system that controls access to a room on the basis of RFID technology. The system prototype is built on an Arduino Uno with the SparkFun RFID Reader module, which acts as the RFID tag scanner. The hardware also includes three indicator diodes, an LCD display, an ESP8266 Wi-Fi module, and a servo that emulates the door lock. The system software controlling the access system is written in C in the Arduino IDE environment. Application software that increases the value of the solution by recording and accumulating statistics on the time employees spend at the workplace is implemented with Windows Forms technology in C# in Visual Studio. The database is implemented in SQL in the MS SQL Server Express 2019 database management system.
Table of contents:
List of main symbols, notation, and abbreviations
Introduction
1 Analysis of the technical specification and of the operation of computerized systems with RFID devices
1.1 Analysis of the technical specification for a computerized RFID-based room-access system
1.2 Analysis of the structure and design features of RFID-based computerized systems
2 Architecture design and justification of the hardware choices for the computerized room-access system
2.1 Design of the architecture of the computerized room-access system
2.2 Justification of the microcontroller that manages RFID reading and data transfer
2.3 Justification of the choice and technical characteristics of the RFID components
2.4 LCD display for visualizing the state of the RFID reader
2.5 Servo drive
2.6 Module for connecting the Arduino Uno to a Wi-Fi router
2.7 Design of the circuit and the system software of the computerized RFID-based room-access system
3 Implementation of the application software for working with the room-access system
3.1 Elicitation and detailing of the functional requirements for the application software
3.2 Design of the application architecture
3.3 Design and implementation of the room-access system database
3.4 Development of the algorithm for reading the unique identifier from an RFID tag and writing it to the database
3.5 User interface of the room-access application
Conclusions
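The access-control loop described in this entry's abstract (scan a tag UID, compare it against an allow-list, drive the lock, log the event for workplace-time statistics) can be sketched as follows. The thesis's firmware is written in C for the Arduino; this is only a hypothetical Python model of the same logic, and every name and UID below is illustrative, not taken from the thesis.

```python
# Hedged sketch of the UID allow-list logic, not the thesis's actual firmware.
from datetime import datetime

AUTHORIZED_UIDS = {"04 A3 1F 2B", "04 7C 88 19"}  # hypothetical tag UIDs

def handle_scan(uid, log):
    """Return True (unlock) if the tag is authorized; log every scan event."""
    granted = uid in AUTHORIZED_UIDS
    log.append((datetime.now().isoformat(timespec="seconds"), uid, granted))
    return granted

events = []
assert handle_scan("04 A3 1F 2B", events) is True   # known tag: unlock
assert handle_scan("DE AD BE EF", events) is False  # unknown tag: stay locked
```

In the thesis's design, the accumulated log plays the role of the MS SQL Server table from which the C# application computes time-at-workplace statistics.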
Грам’як, Микола Юрійович, та Mykola Hramiak. "Комп’ютеризована система зчитування штрих-кодів". Bachelor's thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2021. http://elartu.tntu.edu.ua/handle/lib/35564.
The result of the bachelor's qualification work is a computerized barcode reading system that can write data to a database hosted on the Internet. The thesis analyzes the requirements for the system, the principles of formation and the types of barcodes, and existing technologies for building computerized systems with IoT components. In addition, the architecture of the barcode reading system was designed, the technical characteristics of the hardware were justified and researched, and a component schematic of the system was drawn up. The final stage of the project was the setup and development of the system software and of the high-level software that delivers the read and decoded barcodes to the database. The following hardware was used during the project: an Arduino MKR 1000 WiFi prototyping board, a GM-65 barcode scanner, a Nextion NX3224K024 touch display, and a UART programmer.
Table of contents:
List of main symbols, notation, and abbreviations
Introduction
1 Analysis of the technical specification and of the operation of barcode reading systems
1.1 Analysis of the design requirements for a computerized barcode reading system
1.2 Features of automatic object identification systems
1.3 Application areas of barcodes
1.4 Classification of barcodes
2 Design of the computerized barcode reading system
2.1 Analysis of the operating principles of barcode scanners
2.2 Architecture of the computerized barcode reading system
2.3 Arduino MKR 1000 WiFi prototyping board
2.4 GM-65 barcode reading module
2.5 NX3224K024 touch display
2.6 Design of the component schematic of the computerized barcode reading system
3 Software setup of the computerized barcode reading system
3.1 Configuring the GM-65 scanner
3.2 Configuring the touch display software
3.3 Connecting the Arduino MKR 1000 WiFi board to the Internet
3.4 Writing barcodes to the database
Conclusions
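A concrete instance of the "principles of formation" of barcodes that this entry's abstract mentions is the EAN-13 check digit: it is computed from the first 12 digits with alternating weights of 1 and 3, and a scanner uses it to reject misreads. A minimal sketch of the standard algorithm (not code from the thesis):

```python
# EAN-13 check digit: weight the first 12 digits 1,3,1,3,... and round the
# sum up to the next multiple of 10. Standard algorithm, shown for context.

def ean13_check_digit(digits12):
    s = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(digits12))
    return (10 - s % 10) % 10

assert ean13_check_digit("400638133393") == 1  # full barcode: 4006381333931
```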
Aigrain, Jonathan. "Multimodal detection of stress : evaluation of the impact of several assessment strategies." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066681/document.
It is now widely accepted that stress plays an important role in modern societies. It impacts the body and the mind at several levels, and the association between stress and disease has been observed in several studies. However, there is no consensual definition of stress yet, and therefore no consensual way of assessing it either. Thus, although the quality of assessment is a key factor in building robust stress detection solutions, researchers have to choose among a wide variety of assessment strategies, and this heterogeneity undermines the validity of comparisons between solutions. In this thesis, we evaluate the impact of several assessment strategies for stress detection. We first review how different fields of research define and assess stress. Then, we describe how we collected stress data along with multiple assessments, and we study the association between these assessments. We present the behavioural and physiological features that we extracted for our experiments. Finally, we present the results we obtained regarding the impact of assessment strategies on 1) data normalization, 2) feature classification performance, and 3) the design of machine learning algorithms. Overall, we argue that one has to take a global and comprehensive approach to designing stress detection solutions.
Уйван, Іванна Русланівна, та Ivanna Uyvan. "Комп’ютерна система збору та аналізу даних з метеостанцій". Bachelor's thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2021. http://elartu.tntu.edu.ua/handle/lib/35574.
The bachelor's thesis is dedicated to the development of a computer system for collecting and analyzing data from weather stations. Building the system involved designing the hardware that reads the weather indicators and a set of software components that control the acquisition of sensor data. The computer system also records the readings in a database and retrieves the latest indicators from the nearest weather stations. The hardware components of the weather station fall into two types, passive and active. The passive components are the BME280 module, which measures air temperature, humidity, and atmospheric pressure, and the DS18B20 module, which measures soil temperature. The active components, which act as pseudo-buttons based on reed switches, are the anemometer, which measures wind speed, direction, and gusts, and the precipitation sensor, which measures the height of the rainwater column per 1 m².
Table of contents:
List of main symbols, notation, and abbreviations
Introduction
1 Requirements definition and analysis of existing weather station designs
1.1 Analysis of the technical specification for a computer system for collecting and analyzing weather station data
1.2 Analysis of the structure and organization of weather stations
2 Design of the hardware of the computer system for collecting and analyzing weather station data
2.1 Design of the structure of the data collection and analysis system
2.2 Justification of the microcontroller choice
2.3 Justification of the choice of the passive components
2.3.1 Air temperature, humidity, and atmospheric pressure sensors
2.3.2 Soil temperature sensor
2.4 Justification of the choice of the active components
2.4.1 Anemometer
2.4.2 Precipitation sensor
3 Software of the computer system for collecting and analyzing weather station data
3.1 Choice of the programming environment and language for controlling the weather station components
3.2 Software for the passive components
3.3 Software for the active components
3.4 Database for storing weather readings
3.5 Software for collecting data from open sources
Conclusions
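The precipitation sensor described in this entry's abstract is read as a pseudo-button: each reed-switch pulse corresponds to one drained tipping bucket, and the pulses are converted into a rainwater column height. The conversion below is the standard tipping-bucket arithmetic; the bucket volume and funnel area are illustrative values, not taken from the thesis.

```python
# 1 mm of rain over 1 m^2 equals 1 litre, so height_mm = volume_ml / area_cm2 * 10.

def rainfall_mm(tips, bucket_ml=2.5, funnel_area_cm2=100.0):
    """Convert reed-switch pulses (bucket tips) to millimetres of rainfall."""
    return tips * bucket_ml / funnel_area_cm2 * 10.0

assert rainfall_mm(4) == 1.0  # 4 tips * 2.5 ml = 10 ml over 100 cm^2 -> 1 mm
```

The anemometer's pulse counting works the same way, with pulses converted to wind speed instead of water height.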
Янковська, Неля Володимирівна, та Nelia Yankovska. "Комп’ютеризована система відеоспостереження з функцією розпізнавання обличчя". Bachelor's thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2021. http://elartu.tntu.edu.ua/handle/lib/35575.
The bachelor's qualification work presents a prototype of a computerized video surveillance system with face recognition, based on the ESP32-CAM module, an OV2640 camera with a resolution of 2 MP, and an electromechanical door lock. The work analyzes existing approaches to implementing classical video surveillance systems and proposes its own system architecture, which takes the peculiarities of this process into account and controls the electromagnetic lock on an "own" versus "not own" basis. The technical characteristics of the hardware are justified and analyzed, in particular the ESP32-CAM module as the basic component and the OV2640 camera as the video capture device, and the work shows the principle and algorithm of flashing the program code to the ESP32-CAM using an FTDI programmer and the Arduino IDE. System software for controlling the video surveillance process and the electromagnetic lock has been developed, and a face recognition model based on a convolutional neural network has been built and implemented.
Table of contents:
List of main symbols, notation, and abbreviations
Introduction
1 Analysis of the technical specification for the computerized video surveillance system
1.1 Analysis of the hardware and software requirements for the video surveillance system
1.2 Analysis of the application areas of video surveillance systems and of the ways to implement them
2 Architecture and hardware of the computerized video surveillance system
2.1 Analysis of typical architectures and design of the structure of a video surveillance prototype with face recognition
2.2 Justification of the choice of the ESP32-CAM module and analysis of its technical characteristics
2.3 Characteristics of the OV2640 camera
2.4 Design of the circuit of the video surveillance system with face recognition
3 Development of the system software and construction of the intelligent face recognition model
3.1 Algorithm design and implementation of the system software of the computerized video surveillance system
3.2 Installation of the web server and of the libraries for working with the ESP32-CAM
3.3 Justification of the tools and construction of the model for face detection and recognition
3.4 Testing of the computerized video surveillance system with face recognition
4 Life safety and fundamentals of occupational health
4.1 Requirements for production premises where video display terminals are operated
4.2 Effects of ionizing radiation on the human body and prevention of its negative impact
Conclusions
References
Appendix A. Technical specification
Monna, Florence. "Ordonnancement pour les nouvelles plateformes de calcul avec GPUs." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066390/document.
More and more computers use hybrid architectures combining multi-core processors (CPUs) and hardware accelerators such as GPUs (Graphics Processing Units). These hybrid parallel platforms require new scheduling strategies, and this work is devoted to characterizing this new type of scheduling problem. The most studied objective in this work is the minimization of the makespan, which is crucial for reaching the potential of the new platforms in High Performance Computing. The core problem studied is scheduling n independent sequential tasks on m CPUs and k GPUs with minimum makespan, where each task of the application can be processed either on a CPU or on a GPU. This problem is NP-hard, so we propose approximation algorithms with performance ratios ranging from 2 to (2q+1)/(2q)+1/(2qk), q>0, together with their polynomial time complexities. The proposed method is the first general-purpose algorithm for scheduling on hybrid machines that carries a theoretical performance guarantee and can be used in practice. Several variants of the core problem are studied: the special case where every task is accelerated when assigned to a GPU, with a 3/2-approximation algorithm; the case where preemption is allowed on the CPUs; and the same problem with malleable tasks, with a 3/2-ratio algorithm. Finally, we studied the problem with dependent tasks, providing a 6-approximation algorithm. Experiments based on realistic benchmarks have been conducted; some of the algorithms have been integrated into the scheduler of the xKaapi runtime system for linear algebra kernels and compared to the state-of-the-art algorithm HEFT.
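To make the core problem of the abstract above concrete, here is a plain greedy list-scheduling baseline, not the thesis's approximation algorithm: n independent tasks, each with a CPU time and a GPU time, are placed on m CPUs and k GPUs, each task going to the processor that would finish it earliest given the current loads.

```python
# Hedged sketch: a naive greedy baseline for the (m CPUs, k GPUs) makespan
# problem, for illustration only; the thesis's algorithms carry provable
# approximation ratios, which this heuristic does not.

def greedy_schedule(tasks, m, k):
    """tasks: list of (cpu_time, gpu_time) pairs. Returns the makespan."""
    cpu_load = [0.0] * m
    gpu_load = [0.0] * k
    for cpu_t, gpu_t in tasks:
        i = min(range(m), key=lambda j: cpu_load[j])  # least-loaded CPU
        g = min(range(k), key=lambda j: gpu_load[j])  # least-loaded GPU
        if cpu_load[i] + cpu_t <= gpu_load[g] + gpu_t:
            cpu_load[i] += cpu_t
        else:
            gpu_load[g] += gpu_t
    return max(cpu_load + gpu_load)

# Two GPU-friendly tasks go to the GPU, the CPU-friendly one to the CPU.
assert greedy_schedule([(4, 1), (4, 1), (2, 3)], 1, 1) == 2.0
```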
Allam, Mootaz Bellah Mohamed Mahmoud. "Convertisseur analogique-numérique ΣΔ à base VCO". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066299.
Today's wireless communication systems require high-performance analog-to-digital converters (ADCs), with increasing demands on bandwidth and resolution. There is a growing need for low-power, multi-functional RF receivers, since the market expects complex receiving capabilities from low-power, battery-operated devices. For this reason the current trend is to shrink the analogue part of the receivers while increasing the tasks performed by the digital part, which imposes stringent requirements on the ADC: wideband operation, high resolution, and low power consumption. In this dissertation, we studied and realized several types of VCO-based ADCs. We show the design, implementation, and measurements of two types of VCO-based ADCs in a 65 nm CMOS process; the first uses the voltage-to-frequency conversion technique, while the second uses the principle of voltage-to-phase conversion. The voltage-to-frequency converter is a 4-bit ADC with a programmable sampling frequency from 220 MHz up to 1500 MHz; the measured signal-to-noise-and-distortion ratio (SNDR) is 40.5 dB in a 30 MHz band with a power consumption of 0.5 mW. The voltage-to-phase converter is a 4-bit ADC with a programmable sampling frequency from 300 MHz up to 1440 MHz; the measured SNDR is 48 dB in a 30 MHz band with a consumption of 1 mW. We then present a systematic design method for high-order SigmaDelta ADCs with VCO-based quantizers. To validate the design method, a SigmaDelta ADC with a 4-bit voltage-to-frequency quantizer was designed in 65 nm; the measured SNDR is 62 dB in a 28 MHz band with a power consumption of 30 mW. Finally, we propose the use of VCO-based quantizers in quadrature SigmaDelta modulators and present a systematic design method for them. A 4th-order quadrature SigmaDelta modulator with 4-bit voltage-to-frequency quantizers was designed in 65 nm; the measurements of this circuit are currently in progress. In post-layout simulations, the quadrature modulator achieves 75 dB in a 40 MHz band with a power consumption of 60 mW.
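A rough behavioural model may help with the voltage-to-frequency principle named in the abstract above: the input voltage sets a VCO frequency, a counter counts VCO edges during each sample period, and the fractional phase left over at a sample boundary carries into the next period, which is what gives the quantization error its first-order shaping. The sketch below is only that idealized model; the gain and rate constants are illustrative, not the chip's.

```python
# Hedged behavioural sketch of a VCO-based voltage-to-frequency quantizer.

def vco_quantize(samples, f0=8.0, kvco=8.0):
    """Return per-period edge counts for normalized input samples in [0, 1]."""
    phase = 0.0
    counts = []
    for v in samples:
        phase += f0 + kvco * v  # VCO cycles accumulated over one sample period
        edges = int(phase)      # whole edges seen by the counter
        counts.append(edges)
        phase -= edges          # residual phase carries over: noise shaping
    return counts

assert vco_quantize([0.0, 0.0]) == [8, 8]  # zero input: rest frequency f0
```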
Sakho, Mouhamadou Tafsir. "⌈-Pomset pour la modélisation et la vérification de systèmes parallèles." Thesis, Orléans, 2014. http://www.theses.fr/2014ORLE2068/document.
Multisets of partially ordered events (pomsets) can describe distributed behavior. Although very intuitive and compact, these models are difficult to verify. The main technique used in this thesis is to reduce decision problems for MSO over pomsets to problems for MSO over words. The problems considered are satisfiability and verification: the verification problem, for a formula and a given pomset, consists in deciding whether the pomset satisfies the formula, and the satisfiability problem consists in deciding whether a pomset satisfying the formula exists. The satisfiability problem for MSO over pomsets is undecidable, but semi-decision procedures can provide solutions in many cases despite the fact that they may not terminate. We propose a new model, called ⌈-Pomset, that makes the exploration of the pomset space possible; consequently, if a formula is satisfiable, our approach eventually detects a solution. Moreover, using ⌈-Pomsets as models of concurrent systems, the model checking of partial-order formulas on concurrent systems becomes decidable. Some experiments have been made using MONA. We also compare the expressive power of classical models of concurrency, such as Mazurkiewicz traces, with our ⌈-Pomsets.
Belabed, Dallal. "Design and Evaluation of Cloud Network Optimization Algorithms." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066149/document.
This dissertation aims at a deep understanding of the impact of the new Cloud paradigms on Traffic Engineering, on Energy Efficiency, and on fairness in the throughput offered to endpoints, as well as of the new opportunities created by virtualized network functions. In the first part of the dissertation we investigate the impact of these novel features on Data Center Network (DCN) optimization, providing a comprehensive formal mathematical formulation of virtual machine placement and a metaheuristic for its resolution. In particular, we show how virtual bridging and multipath forwarding affect the common DCN optimization goals of Traffic Engineering and Energy Efficiency, and we assess their utility across four different DCN topologies. In the second part of the dissertation we move to a better understanding of the interplay between novel flattened and modular DCN architectures and congestion control protocols. One of the major concerns in congestion control is fairness in the offered throughput, and the impact of the additional path diversity, brought by the novel DCN architectures and protocols, on the throughput of individual endpoints and aggregation points is unclear. Finally, in the third part we present preliminary work on the new Network Function Virtualization (NFV) paradigm: a linear programming formulation of the virtual network function chain routing problem in a carrier network. The goal of the formulation is to find the best routes in a carrier network where customer demands have to pass through a number of NFV nodes, taking into consideration the unique constraints set by NFV.
Carle, Thomas. "Compilation efficace de spécifications de contrôle embarqué avec prise en compte de propriétés fonctionnelles et non-fonctionnelles complexes." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066392/document.
There is a long-standing separation between the fields of compiler construction and real-time scheduling. While both fields have the same objective, the construction of correct implementations, the separation was historically justified by significant differences in the models and methods that were used. Nevertheless, with the ongoing growth in the complexity of applications and of the hardware of the execution platforms, the objects and problems studied in these two fields now largely overlap. In this thesis, we focus on automatic code generation for embedded control systems with complex constraints, including hard real-time requirements, and we advocate a reconciled research effort between the compilation and real-time systems communities. By adapting a technique usually used in compilers, software pipelining, to the system-level problem of multiprocessor scheduling of hard real-time applications, we shed light on the difficulties of this unified research effort, but also show how it can lead to real advances: adapting optimization techniques to new objectives, in a different context, allows us to develop, more easily than before, systems of better quality. In this adaptation process, we propose synchronous formalisms and languages as a common formal ground. These can naturally be seen as extensions of classical models coming from both real-time scheduling (dependent task graphs) and compilation (static single assignment and data dependency graphs), and they also provide powerful techniques for manipulating complex control structures. We implemented our results in the LoPhT compiler.
Marchand, Ugo. "Caractérisation du rythme à partir de l'analyse du signal audio." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066453/document.
This thesis is within the scope of Music Information Retrieval, a research field whose goal is to extract meaningful information from music. Applications are numerous: music recommendation systems, transcription of music to a score, or automatic generation of music. Our objective in this manuscript is to propose new rhythm descriptions inspired by perceptual and neurological studies. Representing the rhythm of a musical signal is a complex problem: detecting attack positions and note durations is not sufficient, because we have to model the temporal interaction between the different instruments that collaborate to create the rhythm. We aim for representations that are invariant to some parameters (such as position over time, or small tempo or instrumentation variations) but sensitive to others (such as the rhythm pattern or the swing factor). We study the three key aspects of rhythm description: tempo, deviations, and rhythm pattern.
Charbel, Nathalie. "Semantic Representation of a Heterogeneous Document Corpus for an Innovative Information Retrieval Model : Application to the Construction Industry." Thesis, Pau, 2018. http://www.theses.fr/2018PAUU3025/document.
The recent advances in Information and Communication Technology (ICT) have fostered the development of several industries. Adopting semantic technologies has proven beneficial for representing data and for empowering reasoning capabilities over it, especially within an Information Retrieval (IR) application. Industrial applications remain few, however, because of unresolved issues such as the shift from heterogeneous interdependent documents to semantic data models and the representation of search results together with relevant contextual information. In this thesis, we address two main challenges. The first is the representation of the collective knowledge embedded in a heterogeneous document corpus, covering both the domain-specific content of the documents and structural aspects such as their metadata and their dependencies (e.g., references). The second is providing users with innovative search results over the heterogeneous document corpus that help them interpret the information relevant to their inquiries and track cross-document dependencies. To cope with these challenges, we first propose a semantic representation of a heterogeneous document corpus that generates a semantic graph covering both the structural and the domain-specific dimensions of the corpus. Then, we introduce a novel data structure for query answers, extracted from this graph, which embeds core information together with structure-based and domain-specific context. To produce such query answers, we propose an innovative query processing pipeline comprising query interpretation, search, ranking, and presentation modules, with a focus on the search and ranking modules. Our proposal is generic and applicable in different domains; in this thesis, it has been evaluated in the Architecture, Engineering and Construction (AEC) industry using real-world construction projects.
Dien, Matthieu. "Processus concurrents et combinatoire des structures croissantes : analyse quantitative et algorithmes de génération aléatoire." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066210/document.
A concurrent program is a composition of several logical blocks, the processes. Each process has its own behavior, independent of the others: it runs its actions sequentially. An important goal is to ensure that such complex concurrent systems are faultless, a problem studied in the field of concurrency theory. When several processes run in parallel, the order in which the actions of the whole program execute is no longer fixed. This is the well-known "combinatorial explosion" phenomenon: the number of possible runs of the global program is huge. Existing analysis techniques and methods (model checking, static analysis, automated testing, etc.) are irremediably limited by this explosion. This thesis is part of a long-term project on the quantitative study of this phenomenon and on the development of statistical analysis methods based on uniform random generation. Our specific goal is to study a fundamental principle of concurrency theory, synchronization, the mechanism that allows communication between processes. In this thesis we propose a combinatorial model of increasingly labeled structures to deal with the runs of synchronized concurrent programs. Using the tools of analytic combinatorics we obtain closed formulas and asymptotic equivalents for the average number of runs in several subclasses of concurrent programs. We also present algorithms for the uniform random generation of increasingly labeled structures and of their increasing labelings.
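The "combinatorial explosion" the abstract above quantifies is easy to illustrate in the simplest case: for fully independent processes with no synchronization, the number of runs is a plain multinomial coefficient, (a1+...+ap)! / (a1! ... ap!) for processes with a1, ..., ap actions. The thesis counts runs of *synchronized* programs, which this simple formula does not cover.

```python
# Counting interleavings of independent processes: the unsynchronized
# special case of the run-counting problem studied in the thesis.
from math import factorial

def interleavings(lengths):
    """Number of runs of independent processes with the given action counts."""
    total = factorial(sum(lengths))
    for a in lengths:
        total //= factorial(a)
    return total

assert interleavings([2, 2]) == 6          # C(4, 2)
assert interleavings([10, 10]) == 184756   # already ~1.8e5 runs for 2 processes
```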
Cincilla, Pierpaolo. "Gargamel : accroître les performances des DBMS en parallélisant les transactions en écriture." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066592/document.
Повний текст джерела
Databases often scale poorly in distributed configurations, due to the cost of concurrency control and to resource contention. The alternative of centralizing writes works well only for read-intensive workloads, whereas weakening transactional properties is problematic for application developers. Our solution spreads non-conflicting update transactions to different replicas, but still provides strong transactional guarantees. In effect, Gargamel partitions the database dynamically according to the update workload. Each database replica runs sequentially, at full bandwidth; mutual synchronisation between replicas remains minimal. Our prototype shows that Gargamel improves both response time and load by an order of magnitude when contention is high (a highly loaded system with bounded resources), and that otherwise the slow-down is negligible.
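The core idea of routing conflicting updates to the same replica can be sketched with a toy scheduler. This is only an illustration of the principle, not Gargamel's actual classifier: the names are hypothetical and conflicts are reduced to overlapping write sets.

```python
def schedule(transactions, n_replicas):
    """Greedy sketch: send a transaction to a replica already holding a
    conflicting one (same written key); otherwise to the least-loaded replica.
    `transactions` is a list of (tx_id, write_set) pairs."""
    queues = [[] for _ in range(n_replicas)]
    writes = [set() for _ in range(n_replicas)]  # keys written per replica
    for tx, ws in transactions:
        ws = set(ws)
        target = next((i for i in range(n_replicas) if writes[i] & ws), None)
        if target is None:  # no conflict: pick the least-loaded replica
            target = min(range(n_replicas), key=lambda i: len(queues[i]))
        queues[target].append(tx)
        writes[target] |= ws
    return queues

# t1 and t3 both write key "a", so they serialize on the same replica:
print(schedule([("t1", {"a"}), ("t2", {"b"}), ("t3", {"a", "c"})], 2))
```

Non-conflicting transactions thus run in parallel on different replicas, while conflicting ones keep their sequential order on a single replica.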
Huraux, Thomas. "Simulation multi-agent d'un système complexe : combiner des domaines d'expertise par une approche multi-niveau. Le cas de la consommation électrique résidentielle." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066674/document.
Повний текст джерела
The purpose of this work is to tackle a key problem in the study of complex systems with multi-agent simulation: how to assemble several domains of expertise in a multi-level approach. While existing approaches usually adopt the viewpoint of a single main expert, we propose a multi-level model to integrate the multiple domains of expertise embodied in agents located at different abstraction levels. We show that it is possible both to stay close to the concepts manipulated by the experts (for the sake of the validation process in each expert's domain) and to combine the levels of those concepts. That way, each expert can easily understand the dynamics of the components related to their domain. We present SIMLAB, our meta-model based on a unified representation of the concepts using agents. Each agent can influence the others along different axes and levels. This work is materialised in a study of human activity in relation to electrical consumption, a typical example of a complex system requiring many domains of expertise such as psychology, energetics, sociology and heat science. In this context, we present the implementation of our approach in SMACH, a simulation platform of human behaviours. We then describe several experiments to illustrate the characteristics of our approach. Finally, we show how SIMLAB can reproduce and extend in silico a field study of energy demand management.
Mpouli, Njanga Seh Suzanne Patience. "Automatic annotation of similes in literary texts." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066298/document.
Повний текст джерела
This thesis tackles the problem of the automatic recognition of similes in literary texts written in English or in French, and proposes a framework to describe them from a stylistic perspective. For the purpose of this study, a simile has been defined as a syntactic structure that draws a parallel between at least two entities, lacks compositionality and is able to create an image in the receiver's mind. Three main points differentiate the proposed approach from existing ones: it is strongly influenced by cognitive and linguistic theories of similes and comparisons, it takes into consideration a wide range of markers, and it can adapt to diverse syntactic scenarios. Concretely, it relies on three interconnected modules:
- a syntactic module, which extracts potential simile candidates and identifies their components using grammatical roles and a set of handcrafted rules;
- a semantic module, which separates creative similes from both idiomatic similes and literal comparisons, based on the salience of the ground and on semantic similarity computed from data automatically retrieved from machine-readable dictionaries;
- an annotation module, which uses the XML format and provides, among others, information on the type of comparison (idiomatic, perceptual, …) and on the semantic categories used.
Finally, the two annotation tasks we designed show that the automatic detection of figuration in similes must take into consideration a series of features, among which salience, categorisation and sentence syntax.
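The marker-based extraction performed by the syntactic module can be caricatured with a regular expression. This toy pattern is only a sketch: the actual module relies on grammatical roles and handcrafted rules, and covers a far wider range of English and French markers.

```python
import re

# Toy marker list; the thesis handles many more markers and syntactic scenarios.
MARKERS = r"like|as\s+\w+\s+as"
PATTERN = re.compile(
    rf"(?P<tenor>\w+)\s+(?P<event>\w+)\s+(?P<marker>{MARKERS})\s+(?P<vehicle>\w+)",
    re.IGNORECASE)

def simile_candidates(text):
    """Return rough (tenor, event, marker, vehicle) tuples around a comparison marker."""
    return [m.group("tenor", "event", "marker", "vehicle")
            for m in PATTERN.finditer(text)]

print(simile_candidates("Her eyes shone like diamonds."))
# [('eyes', 'shone', 'like', 'diamonds')]
```

Such surface candidates still mix idiomatic similes and literal comparisons, which is exactly what the semantic module is there to disentangle.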
Chan-Hon-Tong, Adrien. "Segmentation supervisée d'actions à partir de primitives haut niveau dans des flux vidéos." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066226/document.
Повний текст джерела
This thesis focuses on the supervised segmentation of video streams within the application context of daily action recognition. A segmentation algorithm is obtained from the Implicit Shape Model by optimising the votes of this polling method. We prove that this optimisation can be linked to the sliding-windows-plus-SVM framework and, more precisely, is equivalent to standard training with an added temporal constraint, or to encoding the data through a dense pyramidal decomposition. This algorithm is evaluated on a public segmentation database where it outperforms other Implicit Shape Model-like methods and the standard linear SVM. The algorithm is then integrated into an action segmentation system. Specific features are extracted from skeletons obtained from the video by standard software. These features are then clustered and given to the polling method. This system, combining our features and our algorithm, obtains the best published performance on a human daily action segmentation dataset.
Klikpo, Enagnon Cédric. "Méthode de conception de systèmes temps réels embarqués multi-coeurs en milieu automobile." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS033/document.
Повний текст джерела
The increasing complexity of embedded applications in modern cars has increased the need for computing power. To meet this need, the European automotive standard AUTOSAR has introduced the use of multi-core platforms. However, using multi-core platforms for critical automotive applications raises several issues. In particular, it is necessary to respect the functional specification and to guarantee data exchanges between cores deterministically. In this thesis, we consider multi-periodic systems specified and validated with Simulink, and we developed a framework to deploy Simulink applications on AUTOSAR multi-core platforms. This framework guarantees functional and temporal determinism and exploits parallelism. Our contribution is threefold. First, we identify the communication mechanisms in Simulink and prove that the dataflow in a multi-periodic Simulink system is modeled by an SDFG (synchronous dataflow graph). The SDFG formalism is an excellent analysis tool for exploiting parallelism: it is very popular in the literature and widely studied for the deployment of dataflow applications on multi- and many-core platforms. Second, we develop methods to realize the dataflow expressed by the SDFG in a preemptive real-time schedule. These methods use theoretical results on SDFGs to guarantee deterministic precedence constraints without blocking synchronization mechanisms, so both functional and temporal determinism are guaranteed. Finally, we characterize the impact of dataflow requirements on tasks and propose a partitioning technique that minimizes this impact. We show that this technique promotes the construction of a partitioning and a feasible schedule when used to initiate multi-objective search and optimization algorithms, reducing the number of design iterations and shortening the design time.
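A basic SDFG analysis underlying such deployments is the repetition vector: the smallest integer firing counts that balance token production and consumption on every edge. A minimal sketch (illustrative names; it assumes a connected graph and is not the thesis's tooling):

```python
from math import gcd
from functools import reduce
from fractions import Fraction

def repetition_vector(actors, edges):
    """Smallest integer firing counts balancing an SDF graph.
    `edges` is a list of (src, prod_rate, dst, cons_rate) tuples."""
    rate = {actors[0]: Fraction(1)}
    changed = True
    while changed:
        changed = False
        for src, p, dst, c in edges:
            if src in rate and dst not in rate:
                rate[dst] = rate[src] * p / c   # balance: r[src]*p == r[dst]*c
                changed = True
            elif dst in rate and src not in rate:
                rate[src] = rate[dst] * c / p
                changed = True
    lcm_den = reduce(lambda a, b: a * b // gcd(a, b),
                     (r.denominator for r in rate.values()))
    return {a: int(r * lcm_den) for a, r in rate.items()}

# Producer A emits 2 tokens per firing, consumer B consumes 3:
print(repetition_vector(["A", "B"], [("A", 2, "B", 3)]))  # {'A': 3, 'B': 2}
```

One iteration of this graph fires A three times and B twice, which is the kind of structural fact exploited when mapping dataflow onto cores.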
Bramas, Quentin. "Réseaux de capteurs sans fil efficaces en énergie." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066309/document.
Повний текст джерела
A wireless sensor network (WSN) is an ad-hoc network connecting small devices equipped with sensors. Such networks are self-organized and independent of any infrastructure. WSNs can be deployed in areas inaccessible to humans, or for applications with a long lifetime requirement. Indeed, devices in a wireless sensor network are usually battery-powered, tolerate failures, and may use their own communication protocols, allowing them to optimize energy consumption. The main application of WSNs is to sense the environment at different locations and aggregate all the data at a specific node that logs it and can send alerts if necessary. This data aggregation task is performed regularly, making it the most energy-consuming. As reducing the energy consumed by sensors is the leading challenge for sustainable applications, we tackle in this thesis the problem of aggregating the data of the network efficiently. We then study lifetime evaluation techniques and apply them to benchmark existing energy-centric protocols.
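The periodic aggregation task described above is typically organized as a convergecast: a tree rooted at the sink, with each node combining its reading with its children's partial aggregates on the way up. A minimal sketch (illustrative only; it ignores radio scheduling and energy accounting, which are the hard part):

```python
from collections import deque

def aggregate(adjacency, sink, readings, combine=sum):
    """Convergecast sketch: build a BFS tree rooted at the sink, then fold
    each node's reading with its children's partial aggregates upward."""
    parent = {sink: None}
    order = [sink]
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in parent:
                parent[v] = u
                order.append(v)
                queue.append(v)
    partial = {u: readings[u] for u in order}
    for u in reversed(order):          # leaves first
        if parent[u] is not None:
            partial[parent[u]] = combine([partial[parent[u]], partial[u]])
    return partial[sink]

net = {"s": ["a", "b"], "a": ["s", "c"], "b": ["s"], "c": ["a"]}
print(aggregate(net, "s", {"s": 1, "a": 2, "b": 3, "c": 4}))  # 10
```

With an associative `combine` (sum, max, …), each node transmits a single value per round instead of forwarding every raw reading, which is what makes in-network aggregation energy-efficient.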
Nasr, Imen. "Algorithmes et Bornes minimales pour la Synchronisation Temporelle à Haute Performance : Application à l’internet des objets corporels." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLY007/document.
Повний текст джерела
Time synchronization is the first function performed by the demodulator. It ensures that the samples transmitted to the demodulation processes make it possible to achieve the lowest bit error rate. In this thesis we study innovative algorithms for high-performance time synchronization. First, we propose algorithms exploiting the soft information from the decoder, in addition to the received signal, to improve the blind estimation of the time delay. Next, we develop an original algorithm based on low-complexity smoothing synchronization techniques. This step consisted in proposing a technique operating in an off-line context, making it possible to estimate a random delay that varies over time across several iterations via Forward-Backward loops. The performance of such estimators exceeds that of traditional algorithms. In order to evaluate the relevance of all the proposed estimators, for deterministic and random delays, we evaluated and compared their performance against Cramér-Rao bounds that we derived within these frameworks. Finally, we evaluated the proposed algorithms on WBAN signals.
Margeta, Ján. "Apprentissage automatique pour simplifier l’utilisation de banques d’images cardiaques." Thesis, Paris, ENMP, 2015. http://www.theses.fr/2015ENMP0055/document.
Повний текст джерела
The recent growth of data in cardiac databases has been phenomenal. Clever use of these databases could help find supporting evidence for better diagnosis and treatment planning. In addition to the challenges inherent in the large quantity of data, the databases are difficult to use in their current state. Data coming from multiple sources are often unstructured, the image content is variable and the metadata are not standardised. The objective of this thesis is therefore to simplify the use of large databases for cardiology specialists with automated image processing, analysis and interpretation tools. The proposed tools are largely based on supervised machine learning techniques, i.e. algorithms which can learn from large quantities of cardiac images with ground-truth annotations and which automatically find the best representations. First, the inconsistent metadata are cleaned, and interpretation and visualisation of images are improved by automatically recognising commonly used cardiac magnetic resonance imaging views from image content. The method is based on decision forests and convolutional neural networks trained on a large image dataset. Second, the thesis explores ways to use machine learning to extract relevant clinical measures (e.g. volumes and masses) from 3D and 3D+t cardiac images. New spatio-temporal image features are designed, and classification forests are trained to learn how to automatically segment the main cardiac structures (left ventricle and left atrium) from voxel-wise label maps. Third, a web interface is designed to collect pairwise image comparisons and to learn how to describe hearts with semantic attributes (e.g. dilation, kineticity). In the last part of the thesis, a forest-based machine learning technique is used to map cardiac images in order to establish distances and neighborhoods between images; one application is retrieval of the most similar images.
Sanchez, Manuel. "Autonomic process management for Integration in Industry 4.0." Thesis, Pau, 2020. http://www.theses.fr/2020PAUU3006.
Повний текст джерела
Because of the digital revolution, also known as Industry 3.0, the boundaries between the physical and digital worlds are shrinking, giving life to more interconnected and smart factories. These factories allow employees, machines, processes, and products to interact so as to better organize all the productive means, empowering the entire company to achieve higher levels of efficiency and productivity. These technologies are profoundly transforming our society: allowing everything to be customized in detail, reducing the cost of goods and services, and transforming workers' and jobs' conditions for safety and security, among others. In that sense, Industry 3.0 acted as a catalyst that promoted new production mechanisms, which originated a new industrial revolution known as Industry 4.0. The concept of Industry 4.0 designates the new generation of connected, robotic, and intelligent factories. Fundamentally, the vision of Industry 4.0 is to give smart capabilities to production and physical operations to create a more holistic and better-connected ecosystem. One crucial aspect of Industry 4.0 is the integrability and interoperability of the actors involved in manufacturing processes. It means that people, things, processes, and data have to be able not only to make decisions for themselves and to carry out their work more autonomously (independence), but also to support the self-management of the whole factory (which requires integrability and interoperability). This implies that the actors of the production processes should be able to negotiate autonomously in order to reach agreements that achieve both individual and collective production goals.
In that sense, Industry 4.0 represents not only a new way to produce goods and services but also a crucial challenge of integrating the actors involved in the manufacturing processes, who need connection, communication, coordination, cooperation, and collaboration (denoted 5C) capabilities that allow them to comply with the vision of Industry 4.0. Principally, this thesis aims at empowering process management for Industry 4.0 by proposing a stack of five levels, denoted 5C. The 5C stack levels represent a way to deal with integration and interoperability challenges so that they can be solved incrementally at each level. From this perspective, we must start by solving connection and communication issues as a first step toward more elaborate organization processes like coordination, cooperation, and collaboration. Mainly, the 5C denote the elements needed to allow autonomous integration and interoperability of actors in Industry 4.0. In this thesis, we present a first contribution that addresses the integration challenges of Industry 4.0 at the level of connection and communication. Second, we solve some integration challenges of Industry 4.0 at the level of coordination, cooperation, and collaboration. Finally, we implement an autonomous cycle of data analytics tasks for self-supervision, using several Everything-mining techniques over data sources from a real manufacturing process. This defines a self-value-driven supervisory system, according to the classification made by Xu et al. (2017), that can process and verify the functionalities and applicability of our framework in manufacturing processes. Moreover, the self-supervising system developed in this thesis is compared to other research works.
Saeed, Mohamed Ahmed. "Approche algébrique sur l'équivalence de codes." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR034/document.
Повний текст джерела
The code equivalence problem plays an important role in coding theory and code-based cryptography, due to its significance in the classification of codes and in the construction and cryptanalysis of code-based cryptosystems. It is also related to the long-standing problem of graph isomorphism, a well-known problem in complexity theory. We introduce a new method for solving the code equivalence problem, developing algebraic approaches for both its permutation and diagonal versions. We build an algebraic system by establishing relations between generator matrices and parity-check matrices of the equivalent codes, ending up with a system of multivariate linear and quadratic equations which can be solved using algebraic tools such as Groebner bases and related techniques. Groebner basis techniques can solve the code equivalence problem, but the computation becomes complex as the length of the code increases. We introduce several improvements, such as block linearization and the Frobenius action; using these techniques, we identify many cases where the permutation equivalence problem can be solved efficiently. Our method for diagonal equivalence solves the problem efficiently in small fields, namely F3 and F4. An increase in the field size results in an increase in the number of variables in our algebraic system, which makes it difficult to solve. We also introduce a new reduction from permutation code equivalence, when the hull is trivial, to graph isomorphism; this shows that this subclass of permutation equivalence is not harder than graph isomorphism. Using this reduction we obtain an algebraic system for graph isomorphism with interesting properties in terms of the rank of the linear part and the number of variables. We solve the graph isomorphism problem efficiently for random graphs with a large number of vertices and also for some regular graphs such as the Petersen, cubical and Wagner graphs.
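For intuition, permutation equivalence of tiny binary codes can be decided by exhaustive search over column permutations; the point of the algebraic modelling described above is precisely to avoid this factorial search. A brute-force sketch over F2 (only feasible for very small lengths):

```python
from itertools import permutations, product

def codewords(G):
    """All codewords of the binary code spanned by generator matrix G (rows over F2)."""
    k = len(G)
    words = set()
    for coeffs in product([0, 1], repeat=k):
        w = tuple(sum(c * g for c, g in zip(coeffs, col)) % 2 for col in zip(*G))
        words.add(w)
    return words

def permutation_equivalent(G1, G2):
    """Brute force: is there a column permutation mapping code(G1) onto code(G2)?"""
    C1, C2 = codewords(G1), codewords(G2)
    n = len(G1[0])
    return any({tuple(w[p] for p in perm) for w in C1} == C2
               for perm in permutations(range(n)))

# The second matrix is the first with columns 1 and 2 swapped:
print(permutation_equivalent([[1, 1, 0], [0, 0, 1]], [[1, 0, 1], [0, 1, 0]]))  # True
```

The search space is n! permutations (and additionally (q-1)^n scalings in the diagonal version), which is why algebraic systems solved by Groebner-basis techniques are attractive.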
Cuvillier, Philippe. "On temporal coherency of probabilistic models for audio-to-score alignment." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066532/document.
Повний текст джерела
This thesis deals with the automatic alignment of audio recordings with their corresponding music scores. We study algorithmic solutions for this problem in the framework of probabilistic models which represent the hidden evolution on the music score as a stochastic process. We begin by investigating the theoretical foundations of the design of such models, through an axiomatic approach based on an application peculiarity: music scores provide a nominal duration for each event, which is a hint about the actual, unknown duration. Modeling this specific temporal structure through stochastic processes is thus our central question. We define temporal coherency as compliance with such prior information and refine this abstract notion by stating two criteria of coherency. Focusing on hidden semi-Markov models, we demonstrate that coherency is guaranteed by specific mathematical conditions on the probabilistic design, and that fulfilling these prescriptions significantly improves the precision of alignment algorithms. Such conditions are derived by combining two fields of mathematics, Lévy processes and total positivity of order 2; the second part of this work is therefore a theoretical investigation which extends existing results in the related literature.
Dollinger, Jean-François. "A framework for efficient execution on GPU and CPU+GPU systems." Thesis, Strasbourg, 2015. http://www.theses.fr/2015STRAD019/document.
Повний текст джерела
Technological limitations faced by semiconductor manufacturers in the early 2000s restricted the increase in performance of sequential computation units. Nowadays, the trend is to increase the number of processor cores per socket and to progressively use GPU cards for highly parallel computations. The complexity of recent architectures makes it difficult to statically predict the performance of a program. We describe a reliable and accurate execution time prediction method for parallel loop nests on GPUs, based on three stages: static code generation, offline profiling, and online prediction. In addition, we present two techniques to fully exploit the computing resources available on a system. The first technique consists in jointly using the CPU and the GPU to execute a code. To achieve higher performance, it is mandatory to consider load balance, in particular by predicting execution time: the runtime uses the profiling results, and the scheduler computes the execution times and adjusts the load distributed to the processors. The second technique puts CPU and GPU in competition: instances of the considered code are simultaneously executed on CPU and GPU, and the winner of the competition notifies the other instance of its completion, terminating the latter.
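The load-balancing step reduces to a simple proportion once per-device execution times are predicted: give each device a share of the iterations inversely proportional to its predicted time, so both are expected to finish together. A hedged sketch (the actual runtime works on profiled, statically generated code versions):

```python
def split_iterations(n_iters, predicted_time_cpu, predicted_time_gpu):
    """Split a parallel loop so CPU and GPU are predicted to finish together.
    Times are predicted seconds for the whole loop on each device alone."""
    speed_cpu = 1.0 / predicted_time_cpu
    speed_gpu = 1.0 / predicted_time_gpu
    cpu_share = round(n_iters * speed_cpu / (speed_cpu + speed_gpu))
    return cpu_share, n_iters - cpu_share

# GPU predicted 4x faster: it gets 80% of the iterations.
print(split_iterations(1000, predicted_time_cpu=8.0, predicted_time_gpu=2.0))  # (200, 800)
```

The quality of this split depends entirely on the accuracy of the execution time predictions, which motivates the three-stage prediction method described in the abstract.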
Нечипорук, Олена Петрівна, та Olena Nechyporuk. "Інформаційна технологія діагностування багаторівневих технічних систем". Thesis, Національний авіаційний університет, 2021. https://er.nau.edu.ua/handle/NAU/48947.
Повний текст джерелаAit, Wakrime Abderrahim. "Une approche par composants pour l'analyse visuelle interactive de résultats issus de simulations numériques." Thesis, Orléans, 2015. http://www.theses.fr/2015ORLE2060/document.
Повний текст джерела
Component-based approaches are increasingly studied and used for the effective development of applications in software engineering. They offer, on the one hand, a safe architecture to developers and, on the other, a separation of the various functional parts, particularly in interactive scientific visualization applications. Modeling such applications enables the description of each component's behavior and of the global system's actions. Moreover, the interactions between components are expressed through communication schemes that are sometimes very complex, with, for example, the possibility of losing messages to enhance performance. This thesis describes the ComSA model (Component-based approach for Scientific Applications), which relies on a component-based approach dedicated to interactive and dynamic scientific visualization applications, and its formalization in strict Colored FIFO Nets (sCFN). The main contributions of this thesis are, first, the definition of a set of tools to model component behaviors and the various application communication policies; second, the provision of properties guaranteeing that the application starts properly, by analyzing and detecting deadlocks, which ensures liveness throughout the application's execution; and finally, the dynamic reconfiguration of visual analytics applications, adding or removing a component on the fly without stopping the whole application. This reconfiguration minimizes the number of unavailable services.
Шабан, Максим Радуйович, та Maksym Shaban. "Моделі підтримки прийняття рішень для проведення експертиз систем технічного захисту інформації". Thesis, Національний авіаційний університет, 2021. https://er.nau.edu.ua/handle/NAU/48823.
Повний текст джерела
The dissertation is devoted to solving a relevant scientific and applied problem: creating decision-support models for the examination of technical information protection systems, which will reduce the time spent and the errors made when drawing up documents for the state examination of a complex information protection system. The paper analyzes existing decision support systems. Existing models have been analyzed, which has made it possible to automate the identification of the functional protection profile. A model of parameters for identifying functional protection profiles in computer systems has been developed; through a set-theoretic representation of certain sets of information security criteria, their elements and the corresponding levels, it formally produces the set of values needed to identify functional protection profiles in computer systems. The work solves the relevant scientific and applied problem of automating the examination of complex information protection systems and detecting discrepancies in the formation of functional protection profiles. The dissertation analyzes existing models, methods and means of decision support. It was found that the existing models, methods and tools do not meet the requirements for decision support systems that could be used in conducting state examinations of complex information protection systems. A decomposition model for representing semantic constants and variables was developed, which allowed basic templates of source documents to be formed, together with a model of parameters that formally produces the set of values needed to identify the functional protection profile in computer systems.
A method for identifying the functional protection profile was developed, which made it possible to generate the functional protection profile and verify its requirements for protection functions (security services) and guarantees. A structural model of the decision support system is proposed, which automates the compilation of source documents according to their templates. Algorithms and software have been developed that automate the examination of the complex information protection system and the detection of inconsistencies in the formation of the functional protection profile. Experimental studies of the software application, together with its deployment and successful practical use, have confirmed the reliability of the theoretical hypotheses and conclusions of the dissertation.
Kadri, Ahmed Abdelmoumene. "Simulation and optimization models for scheduling and balancing the public bicycle-sharing systems." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0268.
Повний текст джерела
Nowadays, developed countries face many public transport problems, including traffic congestion, air pollution, global oil prices and global warming. In this context, public bike-sharing systems are one of the solutions that have recently been implemented in many big cities around the world. Despite their apparent success, the exploitation and management of such transportation systems imply crucial operational challenges for operators, while few scientific works are available to support such complex dynamical systems. This thesis therefore addresses scheduling and balancing in public bicycle-sharing systems, the most crucial questions for their operational efficiency and economic viability. Bike-sharing systems are balanced by distributing bicycles from one station to another, a procedure generally carried out by specific redistribution vehicles. Two hard optimization problems can therefore be considered: finding the best tour for the redistribution vehicles (scheduling) and determining the numbers of bicycles to be assigned and of vehicles to be used (balancing of the stations). This thesis is thus a contribution to modelling and optimizing the performance of bicycle-sharing systems in order to ensure coherent scheduling and balancing strategies. Several optimization methods have been proposed and tested, incorporating different simulation and optimization approaches such as Petri nets, genetic algorithms, greedy search algorithms, local search algorithms, arborescent branch-and-bound algorithms, and the elaboration of upper and lower bounds. Different variants of the problem have been studied: the static mode, the dynamic mode, and scheduling and balancing with a single vehicle or multiple vehicles.
In order to demonstrate the coherence and the suitability of our approaches, the thesis includes several real applications and experiments.
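A single-vehicle balancing heuristic can be sketched in a few lines: always drive to the most imbalanced station and move as many bicycles as the vehicle's capacity and current load allow. This toy greedy is only an illustration; the thesis develops much stronger exact and metaheuristic methods for the same problem.

```python
def rebalance_route(stations, capacity):
    """Greedy single-vehicle sketch: repeatedly visit the most imbalanced
    station, picking up surplus bikes or dropping off bikes toward the target.
    `stations` maps name -> (current, target); returns [(station, moved), ...]
    where moved > 0 means bikes picked up and moved < 0 means bikes dropped off."""
    state = {s: cur for s, (cur, tgt) in stations.items()}
    target = {s: tgt for s, (cur, tgt) in stations.items()}
    load, route = 0, []
    while any(state[s] != target[s] for s in state):
        s = max(state, key=lambda x: abs(state[x] - target[x]))
        diff = state[s] - target[s]
        if diff > 0:                       # surplus: pick up bikes
            moved = min(diff, capacity - load)
        else:                              # deficit: drop off bikes
            moved = -min(-diff, load)
        if moved == 0:
            break                          # vehicle empty or full, cannot progress
        state[s] -= moved
        load += moved
        route.append((s, moved))
    return route

print(rebalance_route({"A": (8, 3), "B": (0, 5)}, capacity=3))
```

With capacity 3, moving 5 bikes from A to B takes two round trips, which already hints at why vehicle capacity and tour order make the scheduling problem hard.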
Говорущенко, Тетяна Олександрівна. "Підвищення достовірності процесу тестування програмних продуктів на основі нейромережних інформаційних технологій". Дисертація, 2007. http://elar.khnu.km.ua/jspui/handle/123456789/1516.
Повний текст джерелаBotelho, Paulo Daniel Azevedo. "Seguimento de pessoas em ambientes interiores." Master's thesis, 2017. http://hdl.handle.net/10348/7337.
Повний текст джерела
Location Based Services (LBS) are defined as services that use the geographic location of terminals. Users of these services may wish to learn about nearby services or about their own location at street or city level, as geographical coordinates, etc., or relative to other users. Satellite signals, e.g. those used by GPS (Global Positioning System), are severely degraded when the terminals are in indoor environments. In such environments, the most common technique is to analyse the signal of the wireless communications network already present locally, when there is no additional infrastructure dedicated to the localization system. Usually, these techniques consist of scene analysis by fingerprinting or of using propagation models. The precision requirements in indoor environments are usually higher. If there is no wireless infrastructure, the use of IMU (Inertial Measurement Unit) sensors may be the only way to estimate the location of individuals indoors, provided that the initial position is known. The estimation of locations with these sensors is based on the user's body behavior while walking. IMU sensors can identify human steps under certain conditions and, with the orientation information of the users, predict their path. In this work, the possibility of implementing an indoor tracking system based on the biomechanics of the users' motion is studied.
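Step detection from an IMU, the basis of the dead reckoning described above, is often approximated by peak detection on the acceleration magnitude. A deliberately simplified sketch (real detectors add filtering, adaptive thresholds and orientation fusion; the threshold and gap values here are illustrative):

```python
import math

def count_steps(samples, threshold=11.0, min_gap=3):
    """Toy pedestrian step counter: count local peaks of the acceleration
    magnitude above a threshold (m/s^2), at least `min_gap` samples apart.
    `samples` is a list of (ax, ay, az) accelerometer readings."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    steps, last_peak = 0, -min_gap
    for i in range(1, len(mags) - 1):
        is_peak = (mags[i] > threshold
                   and mags[i] >= mags[i - 1]
                   and mags[i] > mags[i + 1])
        if is_peak and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps
```

Each detected step, combined with an estimated stride length and heading, advances the position estimate from the known starting point.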
Баран, Галина Степанівна. "Методи і засоби забезпечення захисту комп’ютеризованих систем при використанні VPN". Master's thesis, 2018. http://elartu.tntu.edu.ua/handle/lib/26769.
Full text available.
Beires, Francisco António Alves. "Aplicação Android para monitorização e processamento de dados de pressão arterial" [Android application for monitoring and processing blood pressure data]. Master's thesis, 2017. http://hdl.handle.net/10348/7723.
Full text available.
The highest possible level of health is a fundamental right of any individual, yet it remains one of humanity's global challenges. According to the World Health Organization (WHO), hypertension, also known as high blood pressure, affects approximately 1 billion people worldwide and contributes to about 9.4 million deaths every year, increasing the risk of kidney failure and blindness. The WHO states that everyone can take simple steps to reduce the risk of hypertension, and that regular measurement of blood pressure is one of the most effective. Lately, health monitoring systems have attracted considerable attention from researchers and software developers: an estimated 95,000 health applications are available, and more than 200 million people have downloaded them to their smartphones. This dissertation set out to build one such health monitoring system: a mobile application that records and stores an individual's blood pressure values, which are later analyzed by a health professional who returns feedback to the patient through a web application. The patient then accesses this feedback in the mobile application, simplifying access and shortening the distance between patient and health care professional.
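The record-and-review loop described above could, for instance, pre-classify readings before the health professional gives feedback. A Python sketch using the commonly cited 140/90 mmHg hypertension cut-off; the thresholds and the `classify_bp`/`summarize` helpers are illustrative, not the dissertation's actual logic, and not medical advice:

```python
def classify_bp(systolic, diastolic):
    """Rough blood-pressure categories, following the commonly
    cited 140/90 mmHg hypertension cut-off (illustrative only)."""
    if systolic >= 140 or diastolic >= 90:
        return "hypertension"
    if systolic >= 120 or diastolic >= 80:
        return "elevated"
    return "normal"

def summarize(readings):
    """Aggregate (systolic, diastolic) readings into the category
    counts a clinician could review before giving feedback."""
    counts = {"normal": 0, "elevated": 0, "hypertension": 0}
    for s, d in readings:
        counts[classify_bp(s, d)] += 1
    return counts
```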
Carvalho, Nuno Miguel Moreira da Silva de. "Sonda de ultrassons para aplicações agropecuárias" [Ultrasound probe for agricultural and livestock applications]. Master's thesis, 2013. http://hdl.handle.net/10348/6917.
Full text available.
Given the growing interest in meat production and consumption, there is an increasing need for more efficient methodologies and technologies that provide a good analysis of meat composition and quality at lower cost. In recent years new systems have been developed for fat detection, but ultrasound remains the main technique used to assess the animal. Nowadays many laboratories perform such tests with old, poorly performing machines, so the results carry some uncertainty; the method is often complemented by palpation and visual inspection to confirm the analyses. This thesis tackles these problems by developing a more efficient, low-cost technology to analyze animal composition, both fat content and meat quality, intended as a faster method that better complies with the quality requirements of livestock production. To that end, after a careful study, a technology recently brought to market and enjoying wide acceptance was adopted: the FPGA (Field-programmable gate array). The data-acquisition system was built around the FPGA together with the ultrasound probes. Measurements were computed with the time-of-flight (TOF) technique, already used in many studies and recognized as highly reliable: a signal is sent and its reflection (echo) is detected, allowing "obstacles" to be found and their distance measured. Trials were also carried out in which the echo waveforms were recorded.
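The TOF principle behind the measurements reduces to halving the round-trip echo time; a minimal sketch assuming a propagation speed of 1540 m/s, a typical value for soft tissue that the abstract does not itself specify:

```python
def tof_distance(time_of_flight_s, speed_m_s=1540.0):
    """Distance to a reflecting interface from the round-trip echo
    time: the pulse travels there and back, hence the division by 2.
    1540 m/s is a typical speed of sound in soft tissue (an assumption
    here, not a value given in the thesis)."""
    return speed_m_s * time_of_flight_s / 2.0

def layer_thicknesses(echo_times_s, speed_m_s=1540.0):
    """Thickness of each tissue layer between successive echoes,
    e.g. the fat layer between skin and muscle interfaces."""
    depths = [tof_distance(t, speed_m_s) for t in echo_times_s]
    return [b - a for a, b in zip(depths, depths[1:])]
```

For example, a 20 µs round trip corresponds to a depth of about 15.4 mm under this assumed speed.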
Santos, Dário Jorge dos. "Sistema integrado de comunicações para uma rede de monitorização e controlo de tráfego" [Integrated communication system for a traffic monitoring and control network]. Master's thesis, 2016. http://hdl.handle.net/10348/5850.
Full text available.
It is almost impossible to talk about urban sustainability without addressing the problem of road traffic. There is a well-known lack of mobility, both while driving and when parking, so it is important to develop systems that help road traffic in cities. One possible solution is to monitor parking spaces using communication-network technologies. The CAN and IEEE 802.15.4 communication protocols have proven quite flexible for implementing monitoring and control systems. Based on these protocols, a communications system that monitors parking spaces in urban areas was developed. This document presents the implementation of the system using each of these protocols, as well as the development of the hardware, firmware, and software needed for its operation.
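A parking-monitoring payload of this kind can be made very compact. A hedged Python sketch packing one occupancy bit per space, so that up to 64 spaces fit in the 8 data bytes of a classic CAN frame; the layout (`pack_occupancy`, LSB = lowest-numbered space) is illustrative, not the system's actual frame format:

```python
def pack_occupancy(statuses):
    """Pack boolean parking-space states into bytes, 8 spaces per
    byte, least-significant bit = lowest-numbered space."""
    payload = bytearray((len(statuses) + 7) // 8)
    for i, occupied in enumerate(statuses):
        if occupied:
            payload[i // 8] |= 1 << (i % 8)
    return bytes(payload)

def unpack_occupancy(payload, n_spaces):
    """Inverse of pack_occupancy: recover the per-space booleans."""
    return [bool(payload[i // 8] >> (i % 8) & 1) for i in range(n_spaces)]
```

The same byte string could travel over either link: as the data field of a CAN frame on the wired segment, or inside an IEEE 802.15.4 MAC payload on the wireless one.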
Dias, Fernando Maurício Teixeira de Sousa. "Otimização de redes elétricas de distribuição: planeamento e operação" [Optimization of electricity distribution networks: planning and operation]. Doctoral thesis, 2015. http://hdl.handle.net/10348/5164.
Full text available.
In recent decades the electricity sector has undergone profound changes, both technical and economic. Today, economic considerations exert a growing influence on the management of the sector, often relegating technical issues to a secondary role. Rising load, the use of distributed generation managed by private entities, the technological evolution of network equipment and sophisticated control systems, the liberalization of the electricity sector, regulation with ever more demanding requirements, and future trends such as electric vehicles and smart grids make the management of the power grid an increasingly complex challenge for all stakeholders. In this context, the planning and operation of electrical networks, particularly the MV distribution network, are key to satisfying the technical constraints imposed by system operation and regulation, and also to obtaining the financial return on the investments made in the network. Given the complexity of the problem, which must deal with imprecise, vague, and sometimes ambiguous information, and must satisfy several objectives that often conflict with one another, advanced optimization techniques are required: complex mathematical algorithms supported by powerful computing resources (hardware and software) are used to obtain the best solution. Because of its importance to society, the distribution network planning and operation problem has interested operators and researchers since the mid-twentieth century. Numerous papers in technical and scientific journals propose methodologies, more or less close to reality, and algorithmic approaches to improve the solutions of the problem. In this thesis the problem is formulated with the nonlinearities inherent to its mathematical treatment and is solved with deterministic techniques.
The mathematical formulation of the planning problem and of the reconfiguration problem is presented, together with the full set of technical constraints the network must satisfy. Since the decision problem considered can be related hierarchically, it can also be decomposed into sub-problems; the decomposition technique used was Benders Decomposition. For the planning problem three objectives were considered: minimizing the cost of losses, minimizing the cost of energy not supplied, and minimizing the cost of investment in capacitor banks. For the operation problem, in order to obtain the optimal reconfiguration of the distribution network after a fault in one of its components, minimization of losses and of energy not supplied was considered. The results presented in this thesis reflect the application of the proposed methodology to the planning and reconfiguration of two networks: a 70-bus test network and a 201-bus real network. The results demonstrate the adequacy, robustness, and versatility of the methodology, which can be an important decision-support tool for both the planning and the operation of MV electricity distribution networks.
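The two recurring objective terms named above, cost of losses and energy not supplied, can be illustrated in their simplest form. A toy Python sketch; the thesis uses a full nonlinear network model and Benders Decomposition, not this simplification, and the helper names are assumptions:

```python
def feeder_losses_kw(sections):
    """Total I^2*R losses over feeder sections given as
    (current_amps, resistance_ohms) pairs, converted to kW.
    A toy stand-in for the losses term of the objective."""
    return sum(i * i * r for i, r in sections) / 1000.0

def energy_not_supplied_kwh(load_kw, outage_hours):
    """Energy not supplied to a load interrupted for the given
    duration -- the reliability term of the objective."""
    return load_kw * outage_hours
```

A reconfiguration after a fault would then be assessed by re-evaluating both terms for each candidate switching state and keeping the feasible state with the lowest combined cost.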
Matos, Hélio Tiago Pereira. "Deteção de ligações clandestinas à rede de saneamento" [Detection of illegal connections to the sewerage network]. Master's thesis, 2013. http://hdl.handle.net/10348/7036.
Full text available.
Nowadays, technological advances are immense and play an increasingly important role in our lives and in the way we live. One sector that has seen great evolution is that of microcontrollers and of communications between electronic components; they are the basis of this project and what makes it reliable and effective. Ever smaller components offer ever greater capabilities, giving users a huge range of features. Two of the aspects of greatest concern at present are the ecological and the economic ones, and they motivated the development of this data acquisition system. The system detects illegal connections to sewerage networks, helping to combat them, preserve the environment, and reduce the cost of water treatment, which has risen dramatically in recent years precisely because of these connections. The data acquisition system uses temperature sensors inside the domestic wastewater collector to measure the temperatures of the sewage and of the manholes. Data transmission between the sensors and the collection system uses IEEE 802.15.4.
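An illegal connection typically reveals itself as a sudden temperature drop when colder clean or storm water enters the warmer domestic sewage. A hedged Python sketch flagging such drops against a moving-average baseline; the window and threshold are illustrative, not the thesis's calibration:

```python
def flag_temperature_drops(readings, window=4, drop_c=3.0):
    """Flag sample indices where the temperature falls more than
    drop_c degrees below the mean of the previous `window` samples --
    a crude indicator of cold water entering the sewer through an
    unauthorized connection."""
    flags = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if baseline - readings[i] > drop_c:
            flags.append(i)
    return flags
```

In a deployment, each manhole sensor would report its readings over IEEE 802.15.4 and the collection system would run a check of this kind per measurement point.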