Theses on the topic « AI TECHNIQUE »

To see the other types of publications on this topic, follow this link: AI TECHNIQUE.

Create a correct reference in APA, MLA, Chicago, Harvard and many other styles.

Consult the 50 best theses for your research on the topic « AI TECHNIQUE ».

Next to each source in the list of references there is an « Add to bibliography » button. Click this button and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online when this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Hegemann, Lena. « Reciprocal Explanations : An Explanation Technique for Human-AI Partnership in Design Ideation ». Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281339.

Full text
Abstract:
Advancements in creative artificial intelligence (AI) are leading to systems that can actively work together with designers in tasks such as ideation, i.e. the creation, development, and communication of ideas. In human group work, making suggestions and explaining the reasoning behind them as well as comprehending other group member’s explanations aids reflection, trust, alignment of goals and inspiration through diverse perspectives. Despite their ability to inspire through independent suggestions, state-of-the-art creative AI systems do not leverage these advantages of group work due to missing or one-sided explanations. For other use cases, AI systems that explain their reasoning are already gathering wide research interest. However, there is a knowledge gap on the effects of explanations on creativity. Furthermore, it is unknown whether a user can benefit from also explaining their contributions to an AI system. This thesis investigates whether reciprocal explanations, a novel technique which combines explanations from and to an AI system, improve the designers’ and AI’s joint exploration of ideas. I integrated reciprocal explanations into an AI aided tool for mood board design, a common method for ideation. In our implementation, the AI system uses text to explain which features of its suggestions match or complement the current mood board. Occasionally, it asks for user explanations providing several options for answers that it reacts to by aligning its strategy. A study was conducted with 16 professional designers who used the tool to create mood boards followed by presentations and semi-structured interviews. The study emphasized a need for explanations that make the principles of the system transparent and showed that alignment of goals motivated participants to provide explanations to the system. Also, enabling users to explain their contributions to the AI system facilitated reflection on their own reasons.
Styles APA, Harvard, Vancouver, ISO, etc.
2

GUPTA, NIDHI. « AUTOMATIC GENERATION CONTROL OF INTERCONNECTED MULTI AREA POWER SYSTEM ». Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18414.

Full text
Abstract:
Currently, power system operation and control with AGC are undergoing fundamental changes due to the rapidly increasing amount of renewable sources, energy storage systems, restructuring, and the emergence of new types of power generation, consumption and power electronics technologies. Continuous growth in size and complexity, stochastically changing power demands, system modeling errors, alterations in electric power system structures and variations in the system parameters over time have turned AGC into a challenging task. The infrastructure of the intelligent power system should effectively support the provision of auxiliary services such as AGC from various sources through intelligent schemes. The literature survey shows that the performance of AGC of interconnected power systems with diverse sources is improved by changing the controller structure, using intelligent optimization techniques for controller parameters, adding storage systems and considering different participation of diverse sources in multi-area power systems. Hence, proposing and implementing new controller approaches that apply high-performance heuristic optimization algorithms to real-world problems is always welcome. The performance of many controllers depends on the proper selection of certain algorithms and specific control parameters. Hence, the goal of the present study is to propose different types of new supplementary controllers to achieve better dynamic performance in multi-area power systems with diverse sources, namely a two-area power system with and without non-linearity and a three-area power system with optimal control and an energy storage system. Based on the extensive literature review on the control designs of AGC of interconnected power systems, it has been felt that new control techniques are needed for the design of AGC regulators for interconnected power systems including renewable sources. The main objective of the proposed research work is to design new AGC regulators that are simple, robust and easy to implement compared with the available control techniques. The problem of nonlinearity in interconnected power systems with diverse sources has also been addressed with suitable control algorithms. The presented work is divided into nine chapters. Chapter 1 deals with the introduction of AGC of power systems; a wide-ranging review of the taxonomy of optimization algorithms is presented in this chapter. Chapter 2 presents a critical review of AGC schemes in interconnected multi-area power systems with diverse sources. Chapter 3 focuses on the modelling of the diverse-source power systems under consideration. The main simulation work starts from Chapter 4. In Chapter 4, the study first proposes a novel Jaya-based AGC of a two-area interconnected thermal-hydro-gas power system with varying participation of sources. In Chapter 5, the novel Jaya-based AI technique is further employed on a realistic power system by considering non-linearities such as Governor Dead Band (GDB), Generation Rate Constraint (GRC) and boiler dynamics. The study covers Jaya-based AGC of two-area interconnected thermal-hydro-wind and thermal-hydro-diesel power systems with and without nonlinearities, considering step load and random perturbations at different control areas. In Chapter 6, optimal AGC regulators are designed for three different three-area interconnected multi-source power systems; in each power system, optimal AGC regulators have been designed using different structures of the cost weighting matrices (Q and R). In Chapter 7, the implementation of a Superconducting Magnetic Energy Storage system (SMES) in the operation and control of AGC of three-area multi-source power systems is studied. An analysis of a PSO-tuned integral controller for AGC of three-area interconnected multi-source power systems with and without SMES, considering step load perturbation at different control areas, has been done, and the comparative performance of different bio-inspired artificial techniques is presented for AGC of a three-area interconnected power system with SMES. Chapter 8 presents AGC of three-area multi-source interconnected power systems including and excluding a Battery Energy Storage System (BESS) at step load perturbation in different control areas. In Chapter 9, the performance of the different control techniques presented for AGC of multi-area interconnected multi-source power systems is summarized and the scope of further work in this area is highlighted.
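As an illustration of the parameter-free Jaya update rule that the abstract refers to, the following minimal Python sketch minimises a toy cost. The population size, bounds and the stand-in objective (a placeholder for an AGC cost such as the ITAE of frequency and tie-line deviations) are assumptions, not values from the thesis.

```python
import numpy as np

def jaya(objective, bounds, pop_size=20, iters=200, seed=0):
    """Minimise `objective` with the Jaya algorithm (no algorithm-specific parameters)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    cost = np.array([objective(x) for x in pop])
    for _ in range(iters):
        best, worst = pop[cost.argmin()], pop[cost.argmax()]
        r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
        # Move toward the best solution and away from the worst one.
        cand = np.clip(pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop)), lo, hi)
        cand_cost = np.array([objective(x) for x in cand])
        improved = cand_cost < cost
        pop[improved], cost[improved] = cand[improved], cand_cost[improved]
    return pop[cost.argmin()], float(cost.min())

# Toy stand-in for an AGC cost surface over three controller gains.
itae = lambda k: float(np.sum((k - np.array([0.4, 1.2, 0.05])) ** 2))
gains, cost = jaya(itae, bounds=[(0, 2), (0, 2), (0, 1)])
print(gains, cost)
```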
Styles APA, Harvard, Vancouver, ISO, etc.
3

Giorgianni, Giulia. « Analisi dei principi e dei metodi per la valutazione della sostenibilità dei prodotti e dei processi con un'applicazione ai componenti per l’edilizia ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016.

Find full text
Abstract:
This thesis aims to analyse a topic that is widely discussed today but still in many respects unexplored: sustainability. It was written with the intention of providing a reference work that offers as complete an overview as possible of the studies and applied methodologies developed so far in connection with sustainability. The document first gives a general framing of the topic of sustainability, strongly linked to the concept of Life Cycle Thinking, and then focuses on progressively more specific aspects. The analysis concentrates on the individual life-cycle techniques and subsequently on their potential application to a specific sector: construction. Within this sector, particular attention is given to ceramic materials, for which a serious path towards the concrete application of the principles of sustainable development has been undertaken. To consolidate the topics covered, the thesis finally analyses two applied studies: a Life Cycle Assessment study and a Life Cycle Costing study carried out to investigate the environmental and economic profiles of ceramic tiles compared with marble tiles.
Styles APA, Harvard, Vancouver, ISO, etc.
4

Hopkins, Colin William. « Plan delegation in a multiagent environment ». Thesis, University of Essex, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235829.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
5

Persson, Martin. « Development of three AI techniques for 2D platform games ». Thesis, Karlstad University, Division for Information Technology, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-1.

Full text
Abstract:

This thesis serves as an introduction for anyone who has an interest in artificial intelligence in games and has experience in programming, or anyone who knows nothing of computer games but wants to learn about them. The first part presents a brief introduction to AI and then gives an introduction to games and game programming for someone who has little knowledge about games. This part includes game programming terminology, different game genres and a little history of games. It is followed by an introduction to a couple of common techniques used in game AI. The main contribution of this dissertation is in the second part, where three techniques that were never properly implemented before 3D games took over the market are introduced, and it is explained how they would be done if they were to live up to today's standards and demands. These are: line of sight, image recognition and pathfinding. These three techniques are used in today's 3D games, so if a 2D game were to be released today the demands on the AI would be much higher than they were ten years ago when 2D games stagnated. The last part is an evaluation of the three discussed topics.
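As a purely illustrative companion to the pathfinding technique mentioned in the abstract, the sketch below finds a shortest route on a 2D tile map with breadth-first search; the level layout is made up and this is not the thesis's implementation.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 2D tile map (grid[y][x] == 1 means a solid tile)."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:                      # walk the parent chain back to the start
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 and (nx, ny) not in prev:
                prev[(nx, ny)] = node
                frontier.append((nx, ny))
    return None                               # goal unreachable

level = [[0, 0, 0, 0],
         [1, 1, 0, 1],
         [0, 0, 0, 0]]
print(bfs_path(level, start=(0, 0), goal=(0, 2)))
```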

Styles APA, Harvard, Vancouver, ISO, etc.
6

Ko, Kai-Chung. « Protocol test sequence generation and analysis using AI techniques ». Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29192.

Full text
Abstract:
This thesis addresses two major issues in protocol conformance testing: test sequence generation and test result analysis. For test sequence generation, a new approach based on constraint satisfaction problem (CSP) techniques, which are widely used in the AI community, is presented. This method constructs a unique test sequence for a given FSM by using an initial test sequence, such as a transition tour or a UIO test sequence, and incrementally generating a set of test subsequences which together represent the constraints imposed on the overall structure of the FSM. The new method not only generates a test sequence with fault coverage which is at least as good as the one provided by the existing methods, but also allows the implementation under test (IUT) to have a larger number of states than that in the specification. In addition, the new method also lends itself naturally to both test result analysis and fault coverage measurement. For test result analysis, the CSP method uses the observed sequence as the initial sequence, constructs all fault models which satisfy the initial sequence and introduces additional subsequences to pinpoint the IUT fault model. In addition, a second method for test result analysis is proposed, which originates from a model of diagnostic reasoning from first principles, another well-known AI technique that produces all minimal diagnoses by considering the overall consistency of the system together with the observation. Unlike the first method, the second method does not require the computation of all fault models explicitly, and hence is considered to be more suitable for large systems. To our knowledge, the proposed methods in this thesis represent the first attempt at applying AI techniques to the problem of protocol test sequence generation and analysis.
Science, Faculty of
Computer Science, Department of
Graduate
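To make the CSP machinery the abstract relies on concrete, here is a minimal, generic backtracking solver in Python. The variables, domains and the two toy constraints (standing in for observed subsequences that restrict an IUT's fault models) are invented for illustration and do not come from the thesis.

```python
def solve_csp(domains, constraints, assignment=None):
    """Plain backtracking search over a CSP.

    domains: {variable: iterable of candidate values}
    constraints: list of (variables_tuple, predicate); a predicate is checked
    only once all of its variables have been assigned.
    """
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        consistent = all(pred(*[assignment[v] for v in vs])
                         for vs, pred in constraints
                         if all(v in assignment for v in vs))
        if consistent:
            result = solve_csp(domains, constraints, assignment)
            if result:
                return result
        del assignment[var]
    return None

# Toy illustration: assign an end state to each transition of a 3-state IUT so
# that two observed subsequences ("constraints") are respected.
domains = {"t0": (0, 1, 2), "t1": (0, 1, 2), "t2": (0, 1, 2)}
constraints = [(("t0", "t1"), lambda a, b: a != b),   # a subsequence separates t0 and t1
               (("t2",), lambda c: c == 0)]           # a UIO-like observation fixes t2
print(solve_csp(domains, constraints))
```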
Styles APA, Harvard, Vancouver, ISO, etc.
7

Bishop, J. M. « Anarchic techniques for pattern classification ». Thesis, University of Reading, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.234667.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
8

Schinner, Charles Edward 1957. « Electronic manufacturing test cell automation and configuration using AI techniques ». Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/278327.

Full text
Abstract:
This thesis utilizes artificial intelligence techniques and problem-specific knowledge to assist in the design of a manufacturing test cell for electronic products. The electronic printed circuit board (PCB) is subjected to one or more functional evaluations during the manufacturing process; the purpose of these evaluations is to assure product quality. This thesis focuses on using historical knowledge in the configuration of this testing environment and the associated fault isolation processes. By using such knowledge, an improvement in testing efficiency will be realized, which will allow the overall product cost to be minimized.
Styles APA, Harvard, Vancouver, ISO, etc.
9

Rafiq, M. Y. « Artificial intelligence techniques for the structural design of buildings ». Thesis, University of Strathclyde, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382446.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
10

Podder, Tanmay. « ANALYSIS & STUDY OF AI TECHNIQUES FOR AUTOMATIC CONDITION MONITORING OF RAILWAY TRACK INFRASTRUCTURE : Artificial Intelligence Techniques ». Thesis, Högskolan Dalarna, Datateknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:du-4757.

Full text
Abstract:
Since the last decade the problem of surface inspection has been receiving great attention from the scientific community; the quality control and the maintenance of products are key points in several industrial applications. Railway associations spend much money to check the railway infrastructure, a particular field in which periodic surface inspection can help the operator prevent critical situations. The maintenance and monitoring of this infrastructure is therefore an important concern for railway associations, and surface inspection also matters to the railroad authority for investigating track components, identifying problems and finding out how to solve them. In the railway industry, problems are usually found in railway sleepers, overhead, fasteners, rail heads, switches and crossings, and in the ballast section as well. In this thesis work, I have reviewed research papers based on AI techniques together with NDT techniques, which are able to collect data from the test object without causing any damage. The reviewed research demonstrates that by adopting AI-based systems it is possible to solve most of these problems, and that such systems are reliable and efficient for diagnosing problems in this transportation domain. I have also reviewed solutions provided by different companies based on AI techniques, their products, and some white papers provided by those companies. AI-based techniques like machine vision, stereo vision, laser-based techniques and neural networks are used in most cases to solve the problems otherwise performed by railway engineers. The problems in railways are handled by AI-based techniques within an NDT approach, a very broad, interdisciplinary field that plays a critical role in assuring that structural components and systems perform their function in a reliable and cost-effective fashion. The NDT approach ensures the uniformity, quality and serviceability of materials without causing any damage to the material being tested. These testing methods include visual and optical testing, radiography, magnetic particle testing, ultrasonic testing, penetrant testing, electromechanical testing and acoustic emission testing. The inspection procedure is carried out periodically for better maintenance, performed by railway engineers manually with the aid of AI-based techniques. The main idea of this thesis work is to demonstrate how the problems of this transportation area can be reduced, based on the work done by different researchers and companies, and I have also provided some ideas, comments and proposals for better inspection methods where they are needed. The scope of this thesis work is the automatic interpretation of data from NDT, with the goal of detecting flaws accurately and efficiently; AI techniques such as neural networks, machine vision, knowledge-based systems and fuzzy logic were applied to a wide spectrum of problems in this area. Another aim is to provide an insight into possible research methods concerning railway sleeper, fastener, ballast and overhead inspection by automatic interpretation of data.
In this thesis work, I have discussed problems which arise in railway sleepers, fasteners, overhead and ballasted track. For this reason I have reviewed research papers related to these areas and demonstrated how the proposed systems work and the results of those systems; the demonstrations highlight the advantages of using AI techniques in contrast with the manual systems that existed previously. This work aims to summarize the findings of a large number of research papers deploying artificial intelligence (AI) techniques for the automatic interpretation of data from non-destructive testing (NDT). Problems in the rail transport domain are mainly discussed, covering the inspection of railway sleepers, fasteners, ballast and overhead.
Styles APA, Harvard, Vancouver, ISO, etc.
11

Aljeri, Noura. « Efficient AI and Prediction Techniques for Smart 5G-enabled Vehicular Networks ». Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41497.

Full text
Abstract:
With the recent growth and wide availability of heterogeneous wireless access technologies, inter-vehicle communication systems are expected to culminate in integrating various wireless standards for the next generation of connected and autonomous vehicles. The role of 5G-enabled vehicular networks has become increasingly important, as current Internet clients and providers have urged robustness and effectiveness in digital services over wireless networks to cope with the latest advances in wireless mobile communication. However, to enable 5G wireless technologies' dense diversity, seamless and reliable wireless communication protocols need to be thoroughly investigated in vehicular networks. 5G-enabled vehicular networks applications and services such as routing, mobility management, and service discovery protocols can integrate mobility-based prediction techniques to elevate those applications' performance with various vehicles, applications, and network measurements. In this thesis, we propose a novel suite of 5G-enabled smart mobility prediction and management schemes and design a roadmap guide to mobility-based predictions for intelligent vehicular network applications and protocols. We present a thorough review and classification of vehicular network architectures and components, in addition to mobility management schemes, benchmarks advantages, and drawbacks. Moreover, multiple mobility-based schemes are proposed, in which vehicles' mobility is managed through the utilization of machine learning prediction and probability analysis techniques. We propose a novel predictive mobility management protocol that incorporates a new networks' infrastructure discovery and selection scheme. Next, we design an efficient handover trigger scheme based on time-series prediction and a novel online neural network-based next roadside unit prediction protocol for smart vehicular networks. Then, we propose an original adaptive predictive location management technique that utilizes vehicle movement projections to estimate the link lifetime between vehicles and infrastructure units, followed by an efficient movement-based collision detection scheme and infrastructure units localization strategy. Last but not least, the proposed techniques have been extensively evaluated and compared to several benchmark schemes with various networks' parameters and environments. Results showed the high potentials of empowering vehicular networks' mobility-based protocols with the vehicles' future projections and the prediction of the network's status.
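As a loose illustration of the online next-roadside-unit prediction idea described above, the sketch below trains an incremental linear classifier (scikit-learn's SGDClassifier via partial_fit, standing in for the thesis's online neural network) on streaming mobility snapshots. The feature set, RSU identifiers and synthetic data are assumptions made for the example only.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Each sample is a mobility snapshot [x, y, heading_deg, speed_mps];
# the label is the roadside unit (RSU) the vehicle attached to next.
rsu_ids = np.array([0, 1, 2])
model = SGDClassifier(loss="log_loss")           # online logistic regression

rng = np.random.default_rng(1)
for step in range(200):                          # streaming, incremental updates
    x = rng.uniform([0, 0, 0, 5], [500, 500, 360, 30], size=(8, 4))
    y = (x[:, 0] // 167).clip(0, 2).astype(int)  # toy ground truth based on x-position
    model.partial_fit(x, y, classes=rsu_ids)

snapshot = np.array([[420.0, 80.0, 90.0, 14.0]])
print("predicted next RSU:", model.predict(snapshot)[0])
```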
Styles APA, Harvard, Vancouver, ISO, etc.
12

Koukoulis, Constantinos G. « The application of knowledge based techniques to industrial maintenance problems ». Thesis, Queen Mary, University of London, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327306.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
13

Bartoli, Giacomo. « Edge AI : Deep Learning techniques for Computer Vision applied to embedded systems ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16820/.

Full text
Abstract:
In the last decade, Machine Learning techniques have been used in different fields, ranging from finance to healthcare and even marketing. Amongst all these techniques, the ones adopting a Deep Learning approach were revealed to outperform humans in tasks such as object detection, image classification and speech recognition. This thesis introduces the concept of Edge AI: that is the possibility to build learning models capable of making inference locally, without any dependence on expensive servers or cloud services. A first case study we consider is based on the Google AIY Vision Kit, an intelligent camera equipped with a graphic board to optimize Computer Vision algorithms. Then, we test the performances of CORe50, a dataset for continuous object recognition, on embedded systems. The techniques developed in these chapters will be finally used to solve a challenge within the Audi Autonomous Driving Cup 2018, where a mobile car equipped with a camera, sensors and a graphic board must recognize pedestrians and stop before hitting them.
Styles APA, Harvard, Vancouver, ISO, etc.
14

Korn, Stefan. « The combination of AI modelling techniques for the simulation of manufacturing processes ». Thesis, Glasgow Caledonian University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263139.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
15

Solanke, Abiodun Abdullahi <1983>. « Digital Forensics AI : on Practicality, Optimality, and Interpretability of Digital Evidence Mining Techniques ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10400/1/SOLANKE-ABIODUN-ABDULLAHI-Tesi.pdf.

Full text
Abstract:
Digital forensics as a field has progressed alongside technological advancements over the years, just as digital devices have gotten more robust and sophisticated. However, criminals and attackers have devised means for exploiting the vulnerabilities or sophistication of these devices to carry out malicious activities in unprecedented ways. Their belief is that electronic crimes can be committed without identities being revealed or trails being established. Several applications of artificial intelligence (AI) have demonstrated interesting and promising solutions to seemingly intractable societal challenges. This thesis aims to advance the concept of applying AI techniques in digital forensic investigation. Our approach involves experimenting with a complex case scenario in which suspects corresponded by e-mail and deleted, suspiciously, certain communications, presumably to conceal evidence. The purpose is to demonstrate the efficacy of Artificial Neural Networks (ANN) in learning and detecting communication patterns over time, and then predicting the possibility of missing communication(s) along with potential topics of discussion. To do this, we developed a novel approach and included other existing models. The accuracy of our results is evaluated, and their performance on previously unseen data is measured. Second, we proposed conceptualizing the term “Digital Forensics AI” (DFAI) to formalize the application of AI in digital forensics. The objective is to highlight the instruments that facilitate the best evidential outcomes and presentation mechanisms that are adaptable to the probabilistic output of AI models. Finally, we enhanced our notion in support of the application of AI in digital forensics by recommending methodologies and approaches for bridging trust gaps through the development of interpretable models that facilitate the admissibility of digital evidence in legal proceedings.
Styles APA, Harvard, Vancouver, ISO, etc.
16

Norrie, Christian. « Explainable AI techniques for sepsis diagnosis : Evaluating LIME and SHAP through a user study ». Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19845.

Full text
Abstract:
Artificial intelligence has had a large impact on many industries and transformed some domains quite radically. There is tremendous potential in applying AI to the field of medical diagnostics. A major issue with applying these techniques to some domains is an inability for AI models to provide an explanation or justification for their predictions. This creates a problem wherein a user may not trust an AI prediction, or there are legal requirements for justifying decisions that are not met. This thesis overviews how two explainable AI techniques (Shapley Additive Explanations and Local Interpretable Model-Agnostic Explanations) can establish a degree of trust for the user in the medical diagnostics field. These techniques are evaluated through a user study. User study results suggest that supplementing classifications or predictions with a post-hoc visualization increases interpretability by a small margin. Further investigation and research utilizing a user study survey or interview is suggested to increase interpretability and explainability of machine learning results.
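For readers unfamiliar with the two techniques compared in the study, the sketch below produces SHAP and LIME explanations for a single prediction of a tabular classifier. It is a minimal sketch only: the data are synthetic, the feature names are invented, and it assumes the third-party `shap` and `lime` packages are installed (their APIs may differ slightly between versions).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for tabular sepsis-style data: 4 vitals, binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=300) > 0).astype(int)
features = ["heart_rate", "resp_rate", "lactate", "temperature"]  # illustrative names

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: additive feature attributions for the first sample.
shap_values = shap.TreeExplainer(model).shap_values(X[:1])

# LIME: local surrogate explanation for the same sample.
lime_exp = LimeTabularExplainer(X, feature_names=features, mode="classification") \
    .explain_instance(X[0], model.predict_proba, num_features=4)

print(shap_values)
print(lime_exp.as_list())
```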
Styles APA, Harvard, Vancouver, ISO, etc.
17

Sugianto, Nehemia. « Responsible AI for Automated Analysis of Integrated Video Surveillance in Public Spaces ». Thesis, Griffith University, 2021. http://hdl.handle.net/10072/409586.

Full text
Abstract:
Understanding customer experience in real-time can potentially support people’s safety and comfort while in public spaces. Existing techniques, such as surveys and interviews, can only analyse data at specific times. Therefore, organisations that manage public spaces, such as local government or business entities, cannot respond immediately when urgent actions are needed. Manual monitoring through surveillance cameras can enable organisation personnel to observe people. However, fatigue and human distraction during constant observation cannot ensure reliable and timely analysis. Artificial intelligence (AI) can automate people observation and analyse their movement and any related properties in real-time. Analysing people’s facial expressions can provide insight into how comfortable they are in a certain area, while analysing crowd density can inform us of the area’s safety level. By observing the long-term patterns of crowd density, movement, and spatial data, the organisation can also gain insight to develop better strategies for improving people’s safety and comfort. There are three challenges to making an AI-enabled video surveillance system work well in public spaces. First is the readiness of AI models to be deployed in public space settings. Existing AI models are designed to work in generic/particular settings and will suffer performance degradation when deployed in a real-world setting. Therefore, the models require further development to tailor them for the specific environment of the targeted deployment setting. Second is the inclusion of AI continual learning capability to adapt the models to the environment. AI continual learning aims to learn from new data collected from cameras to adapt the models to constant visual changes introduced in the setting. Existing continuous learning approaches require long-term data retention and past data, which then raise data privacy issues. Third, most of the existing AI-enabled surveillance systems rely on centralised processing, meaning data are transmitted to a central/cloud machine for video analysis purposes. Such an approach involves data privacy and security risks. Serious data threats, such as data theft, eavesdropping or cyberattack, can potentially occur during data transmission. This study aims to develop an AI-enabled intelligent video surveillance system based on deep learning techniques for public spaces established on responsible AI principles. This study formulates three responsible AI criteria, which become the guidelines to design, develop, and evaluate the system. Based on the criteria, a framework is constructed to scale up the system over time to be readily deployed in a specific real-world environment while respecting people’s privacy. The framework incorporates three AI learning approaches to iteratively refine the AI models within the ethical use of data. First is the AI knowledge transfer approach to adapt existing AI models from generic deployment to specific real-world deployment with limited surveillance datasets. Second is the AI continuous learning approach to continuously adapt AI models to visual changes introduced by the environment without long-period data retention and the need for past data. Third is the AI federated learning approach to limit sensitive and identifiable data transmission by performing computation locally on edge devices rather than transmitting to the central machine. 
This thesis contributes to the study of responsible AI specifically in the video surveillance context from both technical and non-technical perspectives. It uses three use cases at an international airport as the application context to understand passenger experience in real-time to ensure people’s safety and comfort. A new video surveillance system is developed based on the framework to provide automated people observation in the application context. Based on real deployment using the airport’s selected cameras, the evaluation demonstrates that the system can provide real-time automated video analysis for three use cases while respecting people’s privacy. Based on comprehensive experiments, AI knowledge transfer can be an effective way to address limited surveillance datasets issue by transferring knowledge from similar datasets rather than training from scratch on surveillance datasets. It can be further improved by incrementally transferring knowledge from multi-datasets with smaller gaps rather than a one-stage process. Learning without Forgetting is a viable approach for AI continuous learning in the video surveillance context. It consistently outperforms fine-tuning and joint-training approaches with lower data retention and without the need for past data. AI federated learning can be a feasible solution to allow continuous learning in the video surveillance context without compromising model accuracy. It can obtain comparable accuracy with quicker training time compared to joint-training.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Dept Bus Strategy & Innovation
Griffith Business School
Full Text
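To illustrate the federated learning idea in the framework described above, here is a minimal federated averaging (FedAvg) sketch in plain numpy: each simulated edge device trains a small logistic-regression model on its private data and only model weights are averaged centrally. The data, client count and model are toy assumptions, not the thesis's deep learning system.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """One client's local training on its private data (logistic regression)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
clients = []                                   # each edge camera keeps its own features
for _ in range(3):
    X = rng.normal(size=(50, 4))
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(4)
for round_ in range(20):                       # FedAvg: broadcast, train locally, average
    local_weights = [local_step(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    w_global = np.average(local_weights, axis=0, weights=sizes)

print("global model after federated averaging:", w_global)
```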
Styles APA, Harvard, Vancouver, ISO, etc.
18

Guruswamy, Aarumugam Bhupathi Rajan. « Independent Domain of Symmetric Encryption using Least Significant Bit : Computer Vision, Steganography and Cryptography Techniques ». Thesis, Högskolan Dalarna, Datateknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:du-10063.

Full text
Abstract:
The rapid development of data transfer through the internet has made it easier to send data accurately and quickly to a destination. There are many transmission media for transferring data to a destination, such as e-mail; at the same time, it may be easy to modify and misuse valuable information through hacking. In order to transfer data securely to the destination without any modification, there are approaches such as cryptography and steganography. This thesis deals with image steganography as well as with different security issues, and gives a general overview of cryptography, steganography and digital watermarking approaches. The problem of copyright violation of multimedia data has increased due to the enormous growth of computer networks, which provide fast and error-free transmission of any unauthorized duplicate and possibly manipulated copy of multimedia information. To be effective for copyright protection, a digital watermark must be robust, i.e. difficult to remove from the object in which it is embedded despite a variety of possible attacks. To send the message safely and securely, watermarking is used: an invisible watermark embeds the message using the LSB (Least Significant Bit) steganographic technique. The standard LSB technique embeds the message in every pixel, but the contribution of the proposed watermarking is to use a hint so that the message is embedded only on the image edges. Even if an attacker knows that the system uses the LSB technique, they cannot decrypt the correct message. To make the system robust and secure, a cryptographic algorithm, the Vigenère square, is added, so the message is transmitted as cipher text, which is an added advantage of the proposed system. The standard Vigenère square algorithm works with either lower case or upper case letters only; the proposed algorithm extends it with numbers, so the crypto key can combine characters and numbers. By using these modifications to the existing algorithm and the combination of cryptography and steganography, a secure and strong watermarking method is developed. The performance of this watermarking scheme has been analysed by evaluating the robustness of the algorithm with PSNR (Peak Signal to Noise Ratio) and MSE (Mean Square Error) against the quality of the image for a large amount of data. The results of the proposed encryption show a high PSNR value of 89 dB with a small MSE value of 0.0017. The proposed watermarking system therefore appears secure and robust for hiding secret information in any digital system, because it combines the properties of both steganography and cryptography.
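The sketch below illustrates the two building blocks the abstract combines: a classic Vigenère cipher over A-Z and LSB embedding into an image array. It is a minimal sketch under simplifying assumptions; the thesis's edge-only embedding and its extended alphabet with digits are not reproduced here.

```python
import numpy as np

def vigenere(text, key, decrypt=False):
    """Classic Vigenère over A-Z (the thesis extends the alphabet with digits)."""
    out = []
    for i, ch in enumerate(text.upper()):
        k = ord(key[i % len(key)].upper()) - 65
        if decrypt:
            k = -k
        out.append(chr((ord(ch) - 65 + k) % 26 + 65) if ch.isalpha() else ch)
    return "".join(out)

def embed_lsb(pixels, message):
    """Hide `message` (bytes) in the least significant bit of the first pixels."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = pixels.flatten()                       # flatten() returns a copy
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels, n_bytes):
    bits = (pixels.flatten()[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
cipher = vigenere("MEETATNOON", key="LEMON")
stego = embed_lsb(cover, cipher.encode())
recovered = vigenere(extract_lsb(stego, len(cipher)).decode(), "LEMON", decrypt=True)
print(cipher, "->", recovered)
```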
Styles APA, Harvard, Vancouver, ISO, etc.
19

Maqsood, Shahid. « The scheduling of manufacturing systems using Artificial Intelligence (AI) techniques in order to find optimal/near-optimal solutions ». Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/6322.

Full text
Abstract:
This thesis aims to review and analyze the scheduling problem in general and the Job Shop Scheduling Problem (JSSP) in particular, together with the solution techniques applied to these problems. The JSSP is the most general and popular hard combinatorial optimization problem in manufacturing systems. For the past sixty years, an enormous amount of research has been carried out to solve these problems. The literature review showed the inherent shortcomings of solutions to scheduling problems. This has directed researchers to develop hybrid approaches, as no single technique for scheduling has yet been successful in providing optimal solutions to these difficult problems, with much potential for improvements in the existing techniques. The hybrid approach complements and compensates for the limitations of each individual solution technique for better performance and improves results in both static and dynamic production scheduling environments. Over the past years, hybrid approaches have generally outperformed simple Genetic Algorithms (GAs). Therefore, two novel priority heuristic rules are developed: the Index Based Heuristic and the Hybrid Heuristic. These rules are applied to benchmark JSSPs and compared with popular traditional rules. The results show that these new heuristic rules have outperformed the traditional heuristic rules over a wide range of benchmark JSSPs. Furthermore, a hybrid GA is developed as an alternate scheduling approach. The hybrid GA uses the novel heuristic rules in its key steps. The hybrid GA is applied to benchmark JSSPs, and is also tested on benchmark flow shop scheduling problems and industrial case studies. The hybrid GA successfully found solutions to JSSPs and is not problem dependent. The hybrid GA performance across the case studies has proved that the developed scheduling model can be applied to any real-world scheduling problem for achieving optimal or near-optimal solutions. This shows the effectiveness of the hybrid GA in real-world scheduling problems. In conclusion, all the research objectives are achieved. Finally, the future work for the developed heuristic rules and the hybrid GA is discussed and recommendations are made on the basis of the results.
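As a small illustration of priority-rule scheduling for the JSSP, the sketch below runs a greedy list scheduler driven by a dispatching rule and reports the resulting makespan. The shortest-processing-time rule and the tiny 3x3 instance are stand-ins; the thesis's Index Based and Hybrid heuristics are not specified here and are not reproduced.

```python
# Each job is a list of (machine, processing_time) operations, in technological order.
jobs = [[(0, 3), (1, 2), (2, 2)],
        [(0, 2), (2, 1), (1, 4)],
        [(1, 4), (2, 3), (0, 1)]]

def dispatch(jobs, priority):
    """Greedy list scheduling driven by a priority (dispatching) rule."""
    next_op = [0] * len(jobs)                 # index of each job's next operation
    job_ready = [0] * len(jobs)               # earliest start time of that operation
    machine_ready = [0] * (1 + max(m for job in jobs for m, _ in job))
    makespan = 0
    while any(n < len(job) for n, job in zip(next_op, jobs)):
        ready = [j for j, job in enumerate(jobs) if next_op[j] < len(job)]
        j = min(ready, key=lambda j: priority(jobs[j][next_op[j]], job_ready[j]))
        m, p = jobs[j][next_op[j]]
        start = max(job_ready[j], machine_ready[m])
        job_ready[j] = machine_ready[m] = start + p
        makespan = max(makespan, start + p)
        next_op[j] += 1
    return makespan

spt = lambda op, ready: op[1]                 # shortest processing time rule
print("SPT makespan:", dispatch(jobs, spt))
```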
Styles APA, Harvard, Vancouver, ISO, etc.
20

Sottara, Davide <1981>. « Integration of symbolic and connectionist AI techniques in the development of Decision Support Systems applied to biochemical processes ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amsdottorato.unibo.it/2972/1/Sottara_Davide_Tesi.pdf.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
21

Sottara, Davide <1981>. « Integration of symbolic and connectionist AI techniques in the development of Decision Support Systems applied to biochemical processes ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amsdottorato.unibo.it/2972/.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
22

Saini, Simardeep S. « Mimicking human player strategies in fighting games using game artificial intelligence techniques ». Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/16380.

Full text
Abstract:
Fighting videogames (also known as fighting games) are ever growing in popularity and accessibility. The isolated console experiences of 20th century gaming have been replaced by online gaming services that allow gamers to play from almost anywhere in the world with one another. This gives rise to competitive gaming on a global scale, enabling gamers to experience fresh play styles and challenges by playing someone new. Fighting games can typically be played either as a single player experience, or against another human player, whether it is via a network or a traditional multiplayer experience. However, there are two issues with these approaches. First, the single player offering in many fighting games is regarded as being simplistic in design, making the moves by the computer predictable. Secondly, while playing against other human players can be more varied and challenging, this may not always be achievable due to the logistics involved in setting up such a bout. Game Artificial Intelligence could provide a solution to both of these issues, allowing a human player's strategy to be learned and then mimicked by the AI fighter. In this thesis, game AI techniques have been researched to provide a means of mimicking human player strategies in strategic fighting games with multiple parameters. Various techniques and their current usages are surveyed, informing the design of two separate solutions to this problem. The first solution relies solely on leveraging k nearest neighbour classification to identify which move should be executed based on the in-game parameters, resulting in decisions being made at the operational level and being fed from the bottom-up to the strategic level. The second solution utilises a number of existing Artificial Intelligence techniques, including data driven finite state machines, hierarchical clustering and k nearest neighbour classification, in an architecture that makes decisions at the strategic level and feeds them from the top-down to the operational level, resulting in the execution of moves. This design is underpinned by a novel algorithm to aid the mimicking process, which is used to identify patterns and strategies within data collated during bouts between two human players. Both solutions are evaluated quantitatively and qualitatively. A conclusion summarising the findings, as well as future work, is provided. The conclusions highlight the fact that both solutions are proficient in mimicking human strategies, but each has its own strengths depending on the type of strategy played out by the human. More structured, methodical strategies are better mimicked by the data driven finite state machine hybrid architecture, whereas the k nearest neighbour approach is better suited to tactical approaches, or even random button bashing that does not always conform to a pre-defined strategy.
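The first solution described above amounts to classifying the current game state against logged human decisions with k nearest neighbours. The sketch below shows that idea with scikit-learn; the feature set, move names and tiny training log are illustrative assumptions, not the thesis's data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Logged game states from a human player:
# [distance_to_opponent, own_health, opp_health, opp_airborne]
states = np.array([[1.2, 90, 80, 0],
                   [0.4, 70, 85, 0],
                   [0.5, 60, 20, 1],
                   [3.0, 50, 90, 0],
                   [0.6, 30, 40, 0]])
moves = ["advance", "sweep", "anti_air", "fireball", "throw"]  # the human's recorded choices

bot = KNeighborsClassifier(n_neighbors=3).fit(states, moves)

# At run time the AI fighter mimics the player by classifying the current state.
current = np.array([[0.5, 65, 75, 0]])
print("mimicked move:", bot.predict(current)[0])
```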
Styles APA, Harvard, Vancouver, ISO, etc.
23

Hatoum, Makram. « Digital watermarking for PDF documents and images : security, robustness and AI-based attack ». Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCD016.

Full text
Abstract:
Technological development has its pros and cons. Nowadays, we can easily share, download, and upload digital content using the Internet. Also, malicious users can illegally change, duplicate, and distribute any kind of information, such as images and documents. Therefore, we should protect such contents and arrest the perpetrator. The goal of this thesis is to protect PDF documents and images using the Spread Transform Dither Modulation (STDM), as a digital watermarking technique, while taking into consideration the main requirements of transparency, robustness, and security. STDM watermarking scheme achieved a good level of transparency and robustness against noise attacks. The key to this scheme is the projection vector that aims to spreads the embedded message over a set of cover elements. However, such a key vector can be estimated by unauthorized users using the Blind Source Separation (BSS) techniques. In our first contribution, we present our proposed CAR-STDM (Component Analysis Resistant-STDM) watermarking scheme, which guarantees security while preserving the transparency and robustness against noise attacks. STDM is also affected by the Fixed Gain Attack (FGA). In the second contribution, we present our proposed N-STDM watermarking scheme that resists the FGA attack and enhances the robustness against the Additive White Gaussian Noise (AWGN) attack, JPEG compression attack, and variety of filtering and geometric attacks. Experimentations have been conducted distinctly on PDF documents and images in the spatial domain and frequency domain. Recently, Deep Learning and Neural Networks achieved noticeable development and improvement, especially in image processing, segmentation, and classification. Diverse models such as Convolutional Neural Network (CNN) are exploited for modeling image priors for denoising. CNN has a suitable denoising performance, and it could be harmful to watermarked images. In the third contribution, we present the effect of a Fully Convolutional Neural Network (FCNN), as a denoising attack, on watermarked images. STDM and Spread Spectrum (SS) are used as watermarking schemes to embed the watermarks in the images using several scenarios. This evaluation shows that such type of denoising attack preserves the image quality while breaking the robustness of all evaluated watermarked schemes
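For orientation, plain STDM works by projecting a block of cover coefficients onto a secret spreading vector and quantising that projection with a dithered quantiser selected by the message bit. The sketch below shows baseline embedding and blind extraction in numpy; it is an illustrative sketch only and does not reproduce the CAR-STDM or N-STDM variants proposed in the thesis, and the block size and quantisation step are arbitrary.

```python
import numpy as np

def stdm_embed(x, bit, p, delta):
    """Embed one bit into block x by quantising its projection onto p."""
    s = x @ p                                    # spread transform: scalar projection
    d = 0.0 if bit == 0 else delta / 2.0         # dither level selects the lattice
    s_q = np.round((s - d) / delta) * delta + d  # uniform quantiser with dither
    return x + (s_q - s) * p                     # move the block onto the quantised value

def stdm_extract(x, p, delta):
    s = x @ p
    err0 = abs(s - np.round(s / delta) * delta)
    err1 = abs(s - (np.round((s - delta / 2) / delta) * delta + delta / 2))
    return 0 if err0 <= err1 else 1              # nearest dither lattice wins

rng = np.random.default_rng(0)
p = rng.normal(size=16)
p /= np.linalg.norm(p)                           # secret unit spreading vector (the key)
block = rng.normal(size=16)                      # e.g. a block of transform coefficients
for bit in (0, 1):
    marked = stdm_embed(block, bit, p, delta=1.0)
    noisy = marked + rng.normal(scale=0.05, size=16)   # mild noise attack
    print(bit, "->", stdm_extract(noisy, p, delta=1.0))
```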
Styles APA, Harvard, Vancouver, ISO, etc.
24

Eroglu, Levent. « Modelling The Fresh Properties Of Self Compacting Concrete Utilizing Statistical Design Of Experiment Techniques ». Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608124/index.pdf.

Full text
Abstract:
Self-compacting concrete (SCC) was first developed in Japan in the late 1980s in order to overcome the consolidation problems associated with the presence of congested reinforcement. It is also termed a high-performance concrete, as it can flow under its own weight and completely fill the formwork. As the fresh properties of SCC are quite important, the mix design of an SCC is performed by considering various workability-related fresh properties. Therefore, a well-designed SCC should satisfy all requirements of a hardened concrete, besides its superior workability properties. The aim of this research is to assess the effects of some basic ingredients of SCC on its fresh properties. This is performed by applying design of experiment techniques and obtaining significant statistical models, which give valuable information about the effects of the model parameters on the rheology and fresh-state characteristics of SCC. In this research program, four different variables are considered in the experimental design: fly ash replacement, the dosage of high range water reducing admixture (HRWRA), the dosage of viscosity modifying admixture (VMA), and the water-cementitious material ratio. Central Composite Design (CCD), a design of experiment technique, is employed throughout the experimental program and a total of 21 concrete mixtures are cast. Slump flow, V-funnel, L-box, sieve segregation, and initial and final setting time tests are performed; furthermore, to investigate the effects of these variables on the rheology of SCC, the relative plastic viscosity and relative yield stress, which are the parameters of the Bingham model, are measured with the help of a concrete rheometer. As a result of the experimental program, the fresh-state properties of SCC are expressed by mathematical equations. Those equations are then used to explain the effects of fly ash replacement, HRWRA and VMA concentration, and the w/cm ratio on the fresh-state properties of SCC. According to the derived models, the water-cementitious material ratio of the concrete mixture is the most effective parameter on the flowability and passing ability of SCC, as its coefficient was the highest in the related models.
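To show how response models like those mentioned above can be fitted to designed-experiment data, the sketch below fits a second-order (quadratic plus interaction) surface by ordinary least squares. The two coded factors and the synthetic slump-flow values are placeholders; they are not the thesis's four factors or its 21 measured mixtures.

```python
import numpy as np

rng = np.random.default_rng(0)
# Coded factor levels for a toy two-factor design (e.g. HRWRA dose, w/cm ratio).
X = rng.uniform(-1, 1, size=(21, 2))
slump_flow = (700 + 40 * X[:, 0] - 60 * X[:, 1] + 15 * X[:, 0] * X[:, 1]
              - 10 * X[:, 1] ** 2 + rng.normal(scale=5, size=21))   # synthetic response

# Second-order model:  y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, slump_flow, rcond=None)
print("fitted coefficients:", np.round(coef, 1))
```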
Styles APA, Harvard, Vancouver, ISO, etc.
25

MONTEMURRO, MARILISA. « Algorithms for cancer genome data analysis - Learning techniques for ITH modeling and gene fusion classification ». Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2970978.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
26

Olsson, Johan. « A Client-Server Solution for Detecting Guns in School Environment using Deep Learning Techniques ». Thesis, Linköpings universitet, Medie- och Informationsteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162476.

Full text
Abstract:
With the progress of deep learning methods over the last couple of years, object detection related tasks are improving rapidly. Using object detection for detecting guns in schools removes the need for human supervision and hopefully reduces police response time. This paper investigates how a gun detection system can be built by reading frames locally and using a server for detection. The detector is based on a pre-trained SSD model and, through transfer learning, is taught to recognize guns. The detector obtained an Average Precision of 51.1%, and the server response time for a frame of size 1920 x 1080 was 480 ms, but frames could be scaled down to 240 x 135 to reach 210 ms without affecting the accuracy. A non-gun class was implemented to reduce the number of false positives, and on a set of 300 images containing 165 guns, the number of false positives dropped from 21 to 11.
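A minimal sketch of the client side of such a setup is shown below: a frame is read locally, downscaled, JPEG-encoded and posted to a detection server, and only confident detections labelled as guns are kept (the non-gun class is simply discarded). The endpoint URL and the JSON response format are assumptions made for the example; they are not taken from the thesis.

```python
import cv2                 # pip install opencv-python
import requests

SERVER_URL = "http://localhost:8000/detect"    # hypothetical detection endpoint

def detect_guns(frame, conf_threshold=0.5):
    """Send one frame to the server; keep confident detections of the 'gun' class."""
    frame = cv2.resize(frame, (240, 135))       # smaller frames cut server latency
    ok, jpeg = cv2.imencode(".jpg", frame)
    reply = requests.post(SERVER_URL, files={"frame": jpeg.tobytes()}, timeout=2.0)
    detections = reply.json()                   # assumed format: [{"label", "score", "box"}, ...]
    return [d for d in detections
            if d["label"] == "gun" and d["score"] >= conf_threshold]

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)                   # local camera
    ok, frame = cap.read()
    if ok:
        print(detect_guns(frame))
    cap.release()
```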
Styles APA, Harvard, Vancouver, ISO, etc.
27

TISATO, Flavia. « Study on Modern and Contemporary works of Art through non invasive integrated physical techniques ». Doctoral thesis, Università degli studi di Ferrara, 2014. http://hdl.handle.net/11392/2388948.

Full text
Abstract:
During my PhD I developed two parallel and complementary topics, concerning both works of art and materials. The first, focused on non-invasive investigations of works of art (ancient and contemporary), aimed to study their state of conservation, material composition and painting techniques, and to detect any deterioration at an early stage. This latter goal also guided the study of pictorial and restoration materials, mainly aimed at their characterization from the optical point of view. The diagnostic activities made use of different methods of investigation. Among the imaging techniques were photography and macrophotography in diffuse, specular and raking light, ultraviolet fluorescence photography, image spectroscopy, wide-band infrared reflectography, and digital and differential K-edge radiography. To obtain as much information as possible, to be properly integrated with the other data, spot diagnostic techniques, such as reflectance spectrophotometry, colorimetry and X-ray fluorescence, were also used.
Styles APA, Harvard, Vancouver, ISO, etc.
28

Khan, Muhammad. « A self-optimised cloud radio access network for emerging 5G architectures ». Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16050.

Full text
Abstract:
Network densification has become a dominant theme for capacity enhancement in cellular networks. However, it increases the operational complexity and expenditure for mobile network operators. Consequently, the essential features of Self-Organising Networks (SON) are considered to ensure the economic viability of the emerging cellular networks. This thesis focuses on quantifying the benefits of self-organisation in Cloud Radio Access Network (C-RAN) by proposing a flexible, energy efficient, and capacity optimised system. The Base Band Unit (BBU) and Remote Radio Head (RRH) map is formulated as an optimisation problem. A self-optimised C-RAN (SOCRAN) is proposed which hosts Genetic Algorithm (GA) and Discrete-Particle-Swarm-Optimisation algorithm (DPSO), developed for optimisation. Computational results based on different network scenarios demonstrate that DPSO delivers excellent performances for the key performance indicators compared to GA. The percentage of blocked users is reduced from 10.523% to 0.409% in a medium sized network scenario and 5.394% to 0.56% in a vast network scenario. Furthermore, an efficient resource utilisation scheme is proposed based on the concept of Cell Differentiation and Integration (CDI). The two-stage CDI scheme semi-statically scales the number of BBUs and RRHs to serve an offered load and dynamically defines the optimum BBU-RRH mapping to avoid unbalanced network scenarios. Computational results demonstrate significant throughput improvement in a CDI-enabled C-RAN compared to a fixed C-RAN, i.e., an average throughput increase of 45.53% and an average blocked users decrease of 23.149% is experienced. A power model is proposed to estimate the overall power consumption of C-RAN. Approximately 16% power reduction is calculated in a CDI-enabled C-RAN when compared to a fixed C-RAN, both serving the same geographical area. Moreover, a Divide-and-Sort load balancing scheme is proposed and compared to the SOCRAN scheme. Results show excellent performances by the Divide-and-Sort algorithm in small networks when compared to SOCRAN and K-mean clustering algorithm.
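To make the BBU-RRH mapping problem concrete, the sketch below uses a small genetic algorithm (one of the two metaheuristics the thesis applies) to assign RRHs to BBUs while minimising the load that exceeds each BBU's capacity. The loads, capacities and fitness function are toy assumptions and do not reproduce the SOCRAN formulation or the DPSO variant.

```python
import numpy as np

rng = np.random.default_rng(0)
rrh_load = rng.integers(5, 30, size=12)         # offered load per RRH (toy numbers)
n_bbu, capacity = 3, 90                         # identical BBUs with a capacity cap

def blocked(assign):
    """Fitness: total load exceeding any BBU's capacity (to be minimised)."""
    loads = np.bincount(assign, weights=rrh_load, minlength=n_bbu)
    return float(np.maximum(loads - capacity, 0).sum())

pop = rng.integers(0, n_bbu, size=(30, rrh_load.size))      # chromosome: BBU index per RRH
for gen in range(100):
    fit = np.array([blocked(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:10]]                      # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, rrh_load.size)                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        mut = rng.random(child.size) < 0.05                  # random-reset mutation
        child[mut] = rng.integers(0, n_bbu, size=mut.sum())
        children.append(child)
    pop = np.array(children)

best = pop[np.argmin([blocked(ind) for ind in pop])]
print("BBU-RRH map:", best, "blocked load:", blocked(best))
```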
Styles APA, Harvard, Vancouver, ISO, etc.
29

Zadeh, Saman Akbar. « Application of advanced algorithms and statistical techniques for weed-plant discrimination ». Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2020. https://ro.ecu.edu.au/theses/2352.

Texte intégral
Résumé :
Precision agriculture requires automated systems for weed detection as weeds compete with the crop for water, nutrients, and light. The purpose of this study is to investigate the use of machine learning methods to classify weeds/crops in agriculture. Statistical methods, support vector machines (SVMs) and convolutional neural networks (CNNs) are introduced, investigated and optimized as classifiers to provide high accuracy at high vehicular speed for weed detection. Initially, Support Vector Machine (SVM) algorithms are developed for weed-crop discrimination and their accuracies are compared with a conventional data-aggregation method based on the evaluation of discrete Normalised Difference Vegetation Indices (NDVIs) at two different wavelengths. The results of this work show that the discrimination performance of the Gaussian-kernel SVM algorithm, with either raw reflected intensities or NDVI values used as inputs, is better than that of the conventional discrete NDVI-based aggregation algorithm. Then, we investigate a fast statistical method for CNN parameter optimization, which can be applied in many CNN applications and provides more explainable results. This study specifically applies Taguchi-based experimental designs for network optimization in a basic network, a simplified Inception network and a simplified ResNet network, and conducts a comparative analysis to assess their respective performance and then to select the hyperparameters and networks that facilitate faster training and provide better accuracy. Results show that, for all investigated CNN architectures, there is a measurable improvement in accuracy in comparison with unoptimized CNNs, and that the Inception network yields the highest improvement (~6%) in accuracy compared to the simple CNN (~5%) and ResNet CNN (~2%) counterparts. Aimed at achieving weed-crop classification in real time at high speeds, while maintaining high accuracy, the algorithms are deployed on both a small embedded NVIDIA Jetson TX1 board for real-time precision agriculture applications, and a larger high-throughput GeForce GTX 1080Ti board for aerial crop analysis applications. Experimental results show that for a simplified CNN algorithm implemented on a Jetson TX1 board, an improvement in detection speed of thirty times (60 km/hr) can be achieved by using spectral reflectance data rather than imaging data. Furthermore, with an Inception algorithm implemented on a GeForce GTX 1080Ti board for aerial weed detection, an improvement in detection speed of 11 times (~2300 km/hr) can be achieved, while maintaining an adequate detection accuracy above 80%. These high speeds are attained by reducing the data size, choosing spectral components with high information content at lower resolution, pre-processing efficiently, optimizing the deep learning networks through the use of simplified faster networks for feature detection and classification, and matching the computational load to the available power and embedded resources, to identify the best-fit hardware platforms.
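The SVM branch of such a study can be pictured in a few lines of scikit-learn. The NDVI-style features below are synthetic stand-ins rather than the thesis dataset, and the kernel parameters are illustrative only.

    # Minimal sketch of a Gaussian-kernel SVM for weed/crop discrimination on NDVI-style features.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Hypothetical NDVI values at two wavelength pairs: crops tend higher, weeds lower.
    crops = rng.normal(loc=[0.75, 0.70], scale=0.05, size=(200, 2))
    weeds = rng.normal(loc=[0.55, 0.60], scale=0.05, size=(200, 2))
    X = np.vstack([crops, weeds])
    y = np.array([0] * 200 + [1] * 200)            # 0 = crop, 1 = weed

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))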
Styles APA, Harvard, Vancouver, ISO, etc.
30

Rego, Máñez Albert. « Intelligent multimedia flow transmission through heterogeneous networks using cognitive software defined networks ». Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/160483.

Texte intégral
Résumé :
[ES] La presente tesis aborda el problema del encaminamiento en las redes definidas por software (SDN). Específicamente, aborda el problema del diseño de un protocolo de encaminamiento basado en inteligencia artificial (AI) para garantizar la calidad de servicio (QoS) en transmisiones multimedia. En la primera parte del trabajo, el concepto de SDN es introducido. Su arquitectura, protocolos y ventajas son comentados. A continuación, el estado del arte es presentado, donde diversos trabajos acerca de QoS, encaminamiento, SDN y AI son detallados. En el siguiente capítulo, el controlador SDN, el cual juega un papel central en la arquitectura propuesta, es presentado. Se detalla el diseño del controlador y se compara su rendimiento con otro controlador comúnmente utilizado. Más tarde, se describe las propuestas de encaminamiento. Primero, se aborda la modificación de un protocolo de encaminamiento tradicional. Esta modificación tiene como objetivo adaptar el protocolo de encaminamiento tradicional a las redes SDN, centrado en las transmisiones multimedia. A continuación, la propuesta final es descrita. Sus mensajes, arquitectura y algoritmos son mostrados. Referente a la AI, el capítulo 5 detalla el módulo de la arquitectura que la implementa, junto con los métodos inteligentes usados en la propuesta de encaminamiento. Además, el algoritmo inteligente de decisión de rutas es descrito y la propuesta es comparada con el protocolo de encaminamiento tradicional y con su adaptación a las redes SDN, mostrando un incremento de la calidad final de la transmisión. Finalmente, se muestra y se describe algunas aplicaciones basadas en la propuesta. Las aplicaciones son presentadas para demostrar que la solución presentada en la tesis está diseñada para trabajar en redes heterogéneas.
[CA] La present tesi tracta el problema de l'encaminament en les xarxes definides per programari (SDN). Específicament, tracta el problema del disseny d'un protocol d'encaminament basat en intel·ligència artificial (AI) per a garantir la qualitat de servici (QoS) en les transmissions multimèdia. En la primera part del treball, s'introdueix les xarxes SDN. Es comenten la seva arquitectura, els protocols i els avantatges. A continuació, l'estat de l'art és presentat, on es detellen els diversos treballs al voltant de QoS, encaminament, SDN i AI. Al següent capítol, el controlador SDN, el qual juga un paper central a l'arquitectura proposta, és presentat. Es detalla el disseny del controlador i es compara el seu rendiment amb altre controlador utilitzat comunament. Més endavant, es descriuen les propostes d'encaminament. Primer, s'aborda la modificació d'un protocol d'encaminament tradicional. Aquesta modificació té com a objectiu adaptar el protocol d'encaminament tradicional a les xarxes SDN, centrat a les transmissions multimèdia. A continuació, la proposta final és descrita. Els seus missatges, arquitectura i algoritmes són mostrats. Pel que fa a l'AI, el capítol 5 detalla el mòdul de l'arquitectura que la implementa, junt amb els mètodes intel·ligents usats en la proposta d'encaminament. A més a més, l'algoritme intel·ligent de decisió de rutes és descrit i la proposta és comparada amb el protocol d'encaminament tradicional i amb la seva adaptació a les xarxes SDN, mostrant un increment de la qualitat final de la transmissió. Finalment, es mostra i es descriuen algunes aplicacions basades en la proposta. Les aplicacions són presentades per a demostrar que la solució presentada en la tesi és dissenyada per a treballar en xarxes heterogènies.
[EN] This thesis addresses the problem of routing in Software Defined Networks (SDN). Specifically, the problem of designing a routing protocol based on Artificial Intelligence (AI) for ensuring Quality of Service (QoS) in multimedia transmissions. In the first part of the work, SDN is introduced. Its architecture, protocols and advantages are discussed. Then, the state of the art is presented, where several works regarding QoS, routing, SDN and AI are detailed. In the next chapter, the SDN controller, which plays the central role in the proposed architecture, is presented. The design of the controller is detailed and its performance compared to another common controller. Later, the routing proposals are described. First, a modification of a traditional routing protocol is discussed. This modification intends to adapt a traditional routing protocol to SDN, focused on multimedia transmissions. Then, the final proposal is described. Its messages, architecture and algorithms are depicted. As regards AI, chapter 5 details the module of the architecture that implements it, along with all the intelligent methods used in the routing proposal. Furthermore, the intelligent route decision algorithm is described and the final proposal is compared to the traditional routing protocol and its adaptation to SDN, showing an increment of the end quality of the transmission. Finally, some applications based on the routing proposal are described. The applications are presented to demonstrate that the proposed solution can work with heterogeneous networks.
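A hedged sketch of the kind of QoS-aware path selection an SDN controller could apply: each link's delay, loss and bandwidth are folded into a single routing cost and a shortest path is computed with networkx. The topology and the weighting factors are invented for the example and are not taken from the thesis.

    # QoS-aware route selection sketch (illustrative, not the routing protocol proposed in the thesis).
    import networkx as nx

    G = nx.Graph()
    links = [  # (u, v, delay_ms, loss_fraction, bandwidth_mbps), hypothetical values
        ("s1", "s2", 5, 0.001, 100), ("s2", "s4", 8, 0.005, 50),
        ("s1", "s3", 7, 0.002, 80),  ("s3", "s4", 4, 0.001, 100),
        ("s2", "s3", 2, 0.003, 40),
    ]
    for u, v, delay, loss, bw in links:
        # Lower delay/loss and higher bandwidth should reduce the cost of a link.
        cost = 0.5 * delay + 0.3 * (loss * 1000) + 0.2 * (1000 / bw)
        G.add_edge(u, v, cost=cost)

    path = nx.shortest_path(G, "s1", "s4", weight="cost")
    print("selected multimedia path:", path)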
Rego Máñez, A. (2020). Intelligent multimedia flow transmission through heterogeneous networks using cognitive software defined networks [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/160483
TESIS
Styles APA, Harvard, Vancouver, ISO, etc.
31

Oliver, Desmond Mark. « Cultural appropriation in Messiaen's rhythmic language ». Thesis, University of Oxford, 2016. http://ora.ox.ac.uk/objects/uuid:54799b39-3185-4db8-9111-77a8b284b2e7.

Texte intégral
Résumé :
Bruhn (2008) and Griffiths (1978) have referred in passing to Messiaen's use of non-Western content as an appropriation, but a consideration of its potential moral and aesthetic failings within the scope of modern literature on artistic cultural appropriation is an underexplored topic. Messiaen's first encounter with India came during his student years, by way of a Sanskrit version of Saṅgītaratnākara (c. 1240 CE) written by the thirteenth-century Hindu musicologist Śārṅgadeva. I examine Messiaen's use of Indian deśītālas within a cultural appropriation context. Non-Western music provided a safe space for him to explore the familiar, and served as validation for previously held creative interests, prompting the expansion and development of rhythmic techniques from the unfamiliar. Chapter 1 examines the different forms of artistic cultural appropriation, drawing on the ideas of James O. Young and Conrad G. Brunk (2012) and Bruce H. Ziff and Pratima V. Rao (1997). I consider the impact of power dynamic inequality between 'insider' and 'outsider' cultures. I evaluate the relation between aesthetic errors and authenticity. Chapter 2 considers the internal and external factors and that prompted Messiaen to draw on non-Western rhythm. I examine Messiaen's appropriation of Indian rhythm in relation to Bloomian poetic misreading, and whether his appropriation of Indian rhythm reveals an authentic intention. Chapter 3 analyses Messiaen's interpretation of Śārṅgadeva's 120 deśītālas and its underlying Hindu symbolism. Chapter 4 contextualises Messiaen's Japanese poem Sept haïkaï (1962) in relation to other European Orientalist artworks of the late-nineteenth and early-twentieth centuries, and also in relation to Michael Sullivan's (1987: 209) three-tiered definitions of japonism.
Styles APA, Harvard, Vancouver, ISO, etc.
32

Marroquín, Cortez Roberto Enrique. « Context-aware intelligent video analysis for the management of smart buildings ». Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCK040/document.

Texte intégral
Résumé :
Les systèmes de vision artificielle sont aujourd'hui limités à l'extraction de données issues de ce que les caméras « voient ». Cependant, la compréhension de ce qu'elles voient peut être enrichie en associant la connaissance du contexte et la connaissance d'interprétation d'un humain. Dans ces travaux de thèse, nous proposons une approche associant des algorithmes de vision artificielle à une modélisation sémantique du contexte d'acquisition. Cette approche permet de réaliser un raisonnement sur la connaissance extraite des images par les caméras en temps réel. Ce raisonnement offre une réponse aux problèmes d'occlusion et d'erreurs de détections inhérents aux algorithmes de vision artificielle. Le système complet permet d'offrir un ensemble de services intelligents (guidage, comptage...) tout en respectant la vie privée des personnes observées. Ces travaux forment la première étape du développement d'un bâtiment intelligent qui peut automatiquement réagir et évoluer en observant l'activité de ses usagers, i.e., un bâtiment intelligent qui prend en compte les informations contextuelles. Le résultat, nommé WiseNET, est une intelligence artificielle en charge des décisions au niveau du bâtiment (qui pourrait être étendue à un groupe de bâtiments ou même à l'échelle d'une ville intelligente). Elle est aussi capable de dialoguer avec l'utilisateur ou l'administrateur humain de manière explicite.
To date, computer vision systems are limited to extracting digital data from what the cameras "see". However, the meaning of what they observe could be greatly enhanced by knowledge of the environment and human interpretive skills. In this work, we propose a new approach that cross-fertilizes computer vision with contextual information, based on a semantic model defined by an expert. This approach extracts knowledge from images and uses it to perform real-time reasoning according to the contextual information, events of interest and logic rules. Reasoning with image knowledge makes it possible to overcome some problems of computer vision, such as occlusion and missed detections, and to offer services such as people guidance and people counting. The proposed approach is the first step towards developing an "all-seeing" smart building that can automatically react according to its evolving information, i.e., a context-aware smart building. The proposed framework, named WiseNET, is an artificial intelligence (AI) in charge of taking decisions in a smart building (which could be extended to a group of buildings or even a smart city). This AI enables communication between the building itself and its users, using a language understandable by humans.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Teng, Sin Yong. « Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries ». Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-433427.

Texte intégral
Résumé :
As new technologies for energy-intensive industries continue to evolve, existing plants gradually fall behind in efficiency and productivity. Fierce market competition and environmental legislation push these traditional plants towards closure and shutdown. Process improvement and retrofit projects are therefore essential to maintaining their operational performance. Current approaches to process improvement are mainly process integration, process optimization and process intensification. These fields generally rely on mathematical optimization, the practitioner's experience and operational heuristics, and they form the foundation for process improvement; their performance, however, can be further enhanced with modern computational intelligence. The purpose of this work is therefore to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. The work tackles the problem through simulation of industrial systems and contributes the following: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modelling and optimization of individual units; (ii) application of dimensionality reduction (e.g. principal component analysis, autoencoders) for multi-objective optimization of multi-unit processes; (iii) design of a new tool, bottleneck tree analysis (BOTA), for analysing problematic parts of a system so that they can be removed, together with an extension that handles multi-dimensional problems with a data-driven approach; (iv) demonstration of the effectiveness of Monte Carlo simulation, neural networks and decision trees for decision-making when integrating a new process technology into existing processes; (v) comparison of Hierarchical Temporal Memory (HTM) and dual optimization with several predictive tools for supporting real-time operational management; (vi) implementation of an artificial neural network within an interface for the conventional process graph (P-graph); (vii) highlighting the future of artificial intelligence and process engineering in biosystems through a commercially driven multi-omics paradigm.
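To make contribution (ii) concrete, the sketch below reduces a set of correlated process variables with PCA before fitting a simple data-driven surrogate of a unit's energy use. The variables, the correlation structure and the surrogate model are synthetic placeholders, not the dissertation's plant data.

    # PCA-based dimensionality reduction before fitting a data-driven unit surrogate (illustrative).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 12))                             # 12 hypothetical sensor tags
    X[:, 6:] = X[:, :6] + 0.05 * rng.normal(size=(500, 6))     # redundant, correlated tags
    energy = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=500) # hypothetical energy use

    pca = PCA(n_components=0.95).fit(X)                        # keep 95 % of the variance
    Z = pca.transform(X)
    surrogate = LinearRegression().fit(Z, energy)
    print("components kept:", pca.n_components_, "R^2:", surrogate.score(Z, energy))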
Styles APA, Harvard, Vancouver, ISO, etc.
34

Roy, Sayan, et Soumya Ranjan Sahoo. « Path Planning of Mobile Agents using AI Technique ». Thesis, 2007. http://ethesis.nitrkl.ac.in/84/1/10303036.pdf.

Texte intégral
Résumé :
In this paper, we study coordinated motion in a swarm robotic system called a swarm-bot. A swarm-bot is a self-assembling and self-organizing artifact composed of a swarm of s-bots, mobile robots with the ability to connect to and disconnect from each other. The swarm-bot concept is particularly suited for tasks that require all-terrain navigation abilities, such as space exploration or rescue in collapsed buildings. As a first step toward the development of more complex control strategies, we investigate the case in which a swarm-bot has to explore an arena while avoiding falling into holes. In such a scenario, individual s-bots have sensory-motor limitations that prevent them from navigating efficiently. These limitations can be overcome if the s-bots are made to cooperate. In particular, we exploit the s-bots' ability to physically connect to each other. In order to synthesize the s-bots' controller, we rely on artificial evolution, which we show to be a powerful tool for producing simple and effective solutions to the hole avoidance task.
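The evolutionary synthesis of a controller can be pictured with a toy example: a linear controller maps two ground sensors to a turn command, and a simple truncation-selection loop evolves its weights so that a simulated robot crosses an arena without driving into holes. The arena, sensors and fitness function are invented purely to show the optimisation loop, not the swarm-bot simulator used in the work.

    # Toy artificial-evolution loop for a hole-avoidance controller (illustrative only).
    import random

    HOLES = {(3, 4), (6, 2), (7, 7), (2, 8)}        # hypothetical hole cells in a 10x10 arena

    def simulate(weights, steps=40):
        x, y, fitness = 0, 5, 0.0
        for _ in range(steps):
            ahead = 1.0 if (x + 1, y) in HOLES else 0.0      # "ground sensor" readings
            side = 1.0 if (x + 1, y + 1) in HOLES else 0.0
            turn = weights[0] * ahead + weights[1] * side + weights[2]
            y += 1 if turn > 0.5 else (-1 if turn < -0.5 else 0)
            x += 1
            y = max(0, min(9, y))
            if (x, y) in HOLES or x > 9:
                break
            fitness = x                              # reward forward progress without falling
        return fitness

    def evolve(pop_size=20, generations=50, sigma=0.3):
        pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=simulate, reverse=True)
            parents = pop[: pop_size // 4]
            pop = parents + [[w + random.gauss(0, sigma) for w in random.choice(parents)]
                             for _ in range(pop_size - len(parents))]
        return max(pop, key=simulate)

    best = evolve()
    print("best fitness:", simulate(best), "weights:", [round(w, 2) for w in best])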
Styles APA, Harvard, Vancouver, ISO, etc.
35

GIU, HONG-SHENG, et 邱宏昇. « The application of AI technique in designing CAI courseware ». Thesis, 1986. http://ndltd.ncl.edu.tw/handle/70903572532892088755.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
36

Tsai, Chung-Huei, et 蔡中暉. « DIAGNOSIS OF DAMAGE IN RC STRUCTURE BASED ON STRUCTURAL RESPONSES VIA THE AI TECHNIQUE ». Thesis, 2000. http://ndltd.ncl.edu.tw/handle/87587896286413367945.

Texte intégral
Résumé :
Doctorate
National Cheng Kung University
Department of Civil Engineering
88
This dissertation develops a feasible diagnostic model for reinforced concrete (RC) structures through the Artificial Intelligence (AI) technique, based on structural responses, to assess the severity and location of defects. Four kinds of structural response, i.e., acceleration time history (ATH), displacement time history (DTH), natural frequencies (NF), and static displacement (SD), separately serve as the input characteristics of the neural network (NN) in the diagnostic model. A simply supported RC beam with a specified size and assumed defects is theoretically analyzed with a finite element program to produce the structural responses. The structural responses are then combined with the corresponding damage conditions to generate the training and testing numerical examples necessary for assessing the damage to the RC structure with the NN. A two-stage diagnostic procedure is then used in the NN application to identify the damage scenarios of the relevant structures. Furthermore, the structural responses mentioned above are also measured in tests to demonstrate whether the ANN-based diagnostic model presented herein can be successfully applied to real structures. A set of RC test beams with various extents of artificial damage is constructed and tested so that the magnitude and location of damage can be diagnosed by the well-trained NNs. Finally, this study attempts to draw an objective, synthesized conclusion from the various damage diagnosis results that the NNs produce for a given location of each test RC beam. Fuzzy logic is applied to reconcile differences between situations, especially when seemingly conflicting damage levels exist; moreover, fuzzy logic can state the final diagnostic results linguistically so that they express or reflect the real state of the test RC beam. This study therefore establishes a feasible and efficient diagnostic model, which is needed for real-world damage assessment applications.
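A minimal sketch of the natural-frequency (NF) branch of such a diagnostic model: a small multilayer perceptron is trained on frequency shifts labelled with the damage location. The "measurements" below are synthetic; in the dissertation they come from finite element analyses and laboratory tests.

    # MLP damage-location classifier on synthetic natural-frequency data (illustrative).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(2)
    baseline = np.array([12.0, 48.0, 108.0])         # hypothetical first three NFs (Hz)
    X, y = [], []
    for location in range(3):                        # damage at three zones of the beam
        for severity in np.linspace(0.05, 0.4, 40):
            drop = np.zeros(3)
            drop[location] = severity                # damage mainly lowers one mode here
            nf = baseline * (1.0 - drop) + rng.normal(0, 0.2, 3)
            X.append(nf)
            y.append(location)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(np.array(X), np.array(y))
    print("training accuracy:", clf.score(np.array(X), np.array(y)))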
Styles APA, Harvard, Vancouver, ISO, etc.
37

Wu, Chin-Hui, et 吳智暉. « University-level Automated Course Scheduling by Integrating AI Technique and Group Decision Support System - the Preceding Process ». Thesis, 1994. http://ndltd.ncl.edu.tw/handle/25388860988100308858.

Texte intégral
Résumé :
Master's degree
Da-Yeh University
Graduate Institute of Electrical Engineering
82
Computing university course schedules is very hard. Course scheduling is basically a multiple-constraint satisfaction problem, in which finding a solution is NP-complete. Operations-research-oriented approaches simplified the problem to facilitate mathematical model building and to reduce computation time. AI/expert-system-oriented approaches took advantage of powerful configuration tools and supplied reasoning methods, but did not completely resolve the conflicts between multiple constraints. Via literature review and system analysis, this research proposes simple heuristic rules to guide a "generate, test and debug" strategy to automate course scheduling. A prototype system has been developed, tested and evaluated. Course scheduling with heuristic rules can reduce computation time significantly, and the heuristic rules themselves are easier to understand than mathematical models.
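The "generate, test and debug" strategy can be illustrated with a toy timetabling sketch: a random timetable is generated, constraint violations are counted, and conflicting courses are repeatedly re-placed in the least-conflicting slot. The courses, teachers, rooms and slots are hypothetical and the rules are far simpler than the heuristics proposed in the thesis.

    # Generate-test-debug course timetabling sketch (illustrative).
    import random

    COURSES = {                    # course -> (teacher, room), hypothetical
        "Calculus": ("Lee", "R101"), "Physics": ("Lee", "R102"),
        "Circuits": ("Wang", "R101"), "Programming": ("Wang", "R103"),
        "Control": ("Chen", "R102"), "Networks": ("Chen", "R103"),
    }
    SLOTS = ["Mon-1", "Mon-2", "Tue-1"]

    def violations(timetable):
        count, items = 0, list(timetable.items())
        for i, (c1, s1) in enumerate(items):
            for c2, s2 in items[i + 1:]:
                same_teacher = COURSES[c1][0] == COURSES[c2][0]
                same_room = COURSES[c1][1] == COURSES[c2][1]
                if s1 == s2 and (same_teacher or same_room):
                    count += 1
        return count

    def generate_test_debug(max_iters=200):
        timetable = {c: random.choice(SLOTS) for c in COURSES}      # generate
        for _ in range(max_iters):
            if violations(timetable) == 0:                          # test
                break
            course = random.choice(list(COURSES))                   # debug: re-place one course
            timetable[course] = min(SLOTS, key=lambda s: violations({**timetable, course: s}))
        return timetable

    print(generate_test_debug())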
Styles APA, Harvard, Vancouver, ISO, etc.
38

Lay, Young-Jinn, et 賴永進. « University-level Automated Course Scheduling by Integrating AI Technique and Group Decision Support System - Group Negotiation Timetabling ». Thesis, 1994. http://ndltd.ncl.edu.tw/handle/88277612855266149559.

Texte intégral
Résumé :
Master's degree
Da-Yeh University
Graduate Institute of Electrical Engineering
82
University-level course scheduling is basically a multiple-constraint satisfaction problem. It relies on a preceding process to obtain a feasible solution that satisfies almost all constraints, and on a negotiation process to reach a solution that satisfies everyone. Research in automated course scheduling has proposed various algorithms, empirical rules and reasoning methods. The proposals differed in computation time and memory usage, but none was guaranteed to find a solution. The final stage of course scheduling is achieved through negotiation, i.e., a group decision process. This research proposes a course-specific group decision support system to ease the negotiation activities inherent in course scheduling. Such a system needs major functions such as information query, group negotiation, course adjustment, course scheduling, explanation, constraint relaxation and system help. A prototype system under this general architecture has been developed, tested and evaluated; the testing and evaluation of the system received positive feedback.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Chang, Ching-Yang, et 張清陽. « Integration of Adaptive IIR Filtering and AI Technique for Fault Detection and Diagnosis of the Mass Flow Controller ». Thesis, 1999. http://ndltd.ncl.edu.tw/handle/83676220485172086458.

Texte intégral
Résumé :
Master's degree
National Chiao Tung University
Department of Electrical and Control Engineering
87
In this thesis, we combine the techniques of adaptive IIR filtering, fuzzy inference, and Dempster-Shafer theory to create an on-line, real-time fault detection and diagnosis system, which is tested on a mass flow controller (MFC) operated in semiconductor manufacturing. The characteristic of the MFC is modelled as a time-varying IIR filter, realized with a normalized lattice structure to prevent unstable modelling. The IIR coefficients are updated using the well-known recursive least squares (RLS) algorithm to increase the convergence speed. The resulting IIR coefficients are used to extract the physically meaningful parameters (the damping factor, the natural frequency, and the steady-state error) as the features or symptoms for fault detection and isolation. We then apply fuzzy inference and Dempster-Shafer theory to realize the fault detection and isolation functions. We also present experimental results of the proposed diagnosis system for the MFC.
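A simplified sketch of the signal-processing front end described above: recursive least squares identifies a second-order ARX model of the flow response, and the damping ratio, natural frequency and steady-state error are read off the estimated coefficients. The lattice realisation and the fuzzy/Dempster-Shafer stages are omitted, and the "measured" response is simulated from assumed MFC dynamics.

    # RLS identification of a 2nd-order model and extraction of damping / natural frequency (illustrative).
    import numpy as np

    T = 0.05                                          # sample time (s), assumed
    wn_true, zeta_true = 6.0, 0.5                     # hypothetical MFC dynamics
    wd = wn_true * np.sqrt(1 - zeta_true**2)
    a1 = -2 * np.exp(-zeta_true * wn_true * T) * np.cos(wd * T)
    a2 = np.exp(-2 * zeta_true * wn_true * T)
    b0 = 1 + a1 + a2                                  # unity steady-state gain

    # Simulate a noisy step response y[n] = -a1 y[n-1] - a2 y[n-2] + b0 u[n-1]
    N, u = 400, np.ones(400)
    y = np.zeros(N)
    for n in range(2, N):
        y[n] = -a1 * y[n - 1] - a2 * y[n - 2] + b0 * u[n - 1]
    y += np.random.default_rng(0).normal(0, 1e-3, N)

    # RLS estimation of theta = [a1, a2, b0]
    theta, P, lam = np.zeros(3), np.eye(3) * 1e3, 0.99
    for n in range(2, N):
        phi = np.array([-y[n - 1], -y[n - 2], u[n - 1]])
        k = P @ phi / (lam + phi @ P @ phi)
        theta += k * (y[n] - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam

    a1_hat, a2_hat, b0_hat = theta
    r = np.sqrt(a2_hat)                               # pole radius
    ang = np.arccos(np.clip(-a1_hat / (2 * r), -1, 1))
    sigma, wd_hat = -np.log(r) / T, ang / T
    wn_hat = np.hypot(sigma, wd_hat)                  # natural frequency
    zeta_hat = sigma / wn_hat                         # damping ratio
    sse_hat = 1 - b0_hat / (1 + a1_hat + a2_hat)      # steady-state error for a unit step
    print(f"wn={wn_hat:.2f} rad/s  zeta={zeta_hat:.2f}  steady-state error={sse_hat:.3f}")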
Styles APA, Harvard, Vancouver, ISO, etc.
40

Liu, Ching-Yung, et 劉靜勇. « A Study Using AI-based Heuristic Technique in Dynamic Simulation System Multi-objective Optimization-A Case Study of Movable Scaffolding System ». Thesis, 2007. http://ndltd.ncl.edu.tw/handle/68486791412122403258.

Texte intégral
Résumé :
Master's degree
National Yunlin University of Science and Technology
Master's Program, Department of Construction Engineering
95
Construction projects are rarely repeatable, because project requirements and environmental factors differ from case to case. Simulation in construction therefore tends to target individual cases and cannot provide long-term, sustained improvement of poor engineering productivity, high machine idle rates and excessive material stocks. In view of construction practice gradually moving towards automated, mechanized production in recent years, this research set out to build a reusable production-prediction and resource-combination optimization system that supports improvement over the long term. The Movable Scaffolding System (MSS) is a highly mechanized construction method with a cyclic workflow, so the coordination of progress with its substructure and the supply and allocation of resources are particularly important. This research develops a fuzzy multi-objective inference system for MSS resource combinations using SIMPROCESS, dynamically representing the processes and resource-supply behaviour of MSS structural works. Fuzzy inference is introduced to revise the estimates of the system objectives, compensating for imperfections caused by system complexity and human uncertainty. Tabu Search and Artificial Neural Networks are combined into a search mechanism inside SIMPROCESS that evaluates the time-cost trade-off under the two objectives and searches for the most suitable resource combination for an MSS project. An actual MSS case is used to further verify the logical soundness and credibility of the model. Finally, a database and user interface are built and linked to the system model; the database automatically records the input data of each simulation run for convenient long-term management and improvement.
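The search mechanism can be pictured with a bare-bones tabu search over resource combinations under a weighted time/cost score. The duration and cost formulas below are invented placeholders for the SIMPROCESS simulation and the ANN surrogate used in the study.

    # Tabu search over MSS-style resource combinations with a time/cost trade-off (illustrative).
    import random

    CREWS, TRUCKS = range(1, 6), range(1, 6)          # hypothetical resource ranges

    def evaluate(crews, trucks):
        duration = 120 / (crews + 0.5 * trucks)       # placeholder time model (days)
        cost = 8000 * crews + 3000 * trucks + 500 * duration
        return 0.6 * duration + 0.4 * cost / 1000     # weighted two-objective score

    def tabu_search(iterations=50, tabu_len=6):
        current = best = (random.choice(CREWS), random.choice(TRUCKS))
        tabu = []
        for _ in range(iterations):
            neighbours = [(c, t) for c in CREWS for t in TRUCKS
                          if abs(c - current[0]) + abs(t - current[1]) == 1 and (c, t) not in tabu]
            if not neighbours:
                break
            current = min(neighbours, key=lambda rt: evaluate(*rt))   # best admissible move
            tabu = (tabu + [current])[-tabu_len:]                     # short-term memory
            if evaluate(*current) < evaluate(*best):
                best = current
        return best

    crews, trucks = tabu_search()
    print("best combination:", crews, "crews,", trucks, "trucks, score", round(evaluate(crews, trucks), 2))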
Styles APA, Harvard, Vancouver, ISO, etc.
41

Sonkar, Rakesh Kumar. « Navigation of Automatic Vehicle using AI Techniques ». Thesis, 2013. http://ethesis.nitrkl.ac.in/5414/1/211ME1164_(1).pdf.

Texte intégral
Résumé :
Navigation has been studied as an important task for a new generation of mobile robot, the Corobot, here considered in unknown environments. We use the four-wheeled Corobot vehicle for path planning as an autonomous robot with sensor-based obstacle and collision avoidance. We propose keeping a predefined distance from the robot to the target, making the robot follow the target at this distance and improving the trajectory-tracking characteristics. The robot then navigates among obstacles without hitting them and reaches the specified goal point. To achieve these goals we use different neural network techniques, namely the radial basis function network and the back-propagation algorithm. A robotic arm is assembled on the Corobot, a kinematic analysis of the arm is carried out, and, with the help of the Phidget Control Panel, the wheels are driven in both forward and reverse directions by a two-motor controller. The kinematic analysis gives the relationships between the positions and orientations of the links of the manipulator. These studies present artificial intelligence techniques and their control strategies, with potential applications in industry, security, defense, investigation, and other fields. Finally, simulation results obtained with a neural network in Webots are compared with experimental data for different training patterns.
Styles APA, Harvard, Vancouver, ISO, etc.
42

Kao, Lin-Yung, et 高琳詠. « Application of AI Techniques in Optimizing TLA Workflow ». Thesis, 2009. http://ndltd.ncl.edu.tw/handle/00485335100283805608.

Texte intégral
Résumé :
Master's degree
China Medical University
Master's Program, Graduate Institute of Health Services Administration
97
Total laboratory automation (TLA) is a system that integrates laboratory instruments under unified control with little or no human intervention. It has been demonstrated to be efficient in reducing operational costs and working time, especially when integrated with a consolidated network. Recently, a central laboratory equipped with a TLA system was set up in central Taiwan as a platform for performance evaluation. The preliminary study showed that testing and processing time have been reduced by about 60% and the number of personnel has decreased from 60 to 45 since operation began in March 2006. However, workflow performance still needs to be further enhanced to meet the increasing number of samples as more and more hospitals request TLA services. Currently, specimens collected from satellite hospitals are sorted randomly before entering the TLA system, which greatly decreases its working efficiency. The objective of this investigation is to design a genetic algorithm, using the MATLAB toolbox, to find the best solution for optimizing the TLA workflow and thereby increasing its efficiency. The experiment was done on 5 batches of specimens. The results show that applying the genetic algorithm to arrange specimen sequences decreased TLA operation time by 16% on average. In conclusion, the genetic algorithm is useful for increasing TLA efficiency by arranging the sequences of blood specimens before loading them into the TLA system.
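A hedged sketch of using a genetic algorithm to re-order specimens before loading: the fitness below simply counts instrument change-overs between consecutive specimens, standing in for the simulated operation time used in the study, and the panel labels are invented.

    # Permutation GA for specimen loading order (illustrative, not the thesis's MATLAB implementation).
    import random

    PANELS = ["chem", "chem", "hema", "immuno", "chem", "hema", "immuno", "hema", "chem", "immuno"]

    def changeovers(order):
        return sum(PANELS[a] != PANELS[b] for a, b in zip(order, order[1:]))

    def order_crossover(p1, p2):
        i, j = sorted(random.sample(range(len(p1)), 2))
        child = [None] * len(p1)
        child[i:j] = p1[i:j]                                   # keep a slice of parent 1
        fill = [g for g in p2 if g not in child[i:j]]          # fill the rest in parent-2 order
        positions = [k for k in range(len(p1)) if not (i <= k < j)]
        for k, g in zip(positions, fill):
            child[k] = g
        return child

    def genetic_sequence(pop_size=30, generations=100):
        base = list(range(len(PANELS)))
        pop = [random.sample(base, len(base)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=changeovers)
            parents = pop[: pop_size // 2]
            children = []
            while len(parents) + len(children) < pop_size:
                child = order_crossover(*random.sample(parents, 2))
                if random.random() < 0.2:                      # swap mutation
                    a, b = random.sample(range(len(child)), 2)
                    child[a], child[b] = child[b], child[a]
                children.append(child)
            pop = parents + children
        best = min(pop, key=changeovers)
        return best, changeovers(best)

    order, cost = genetic_sequence()
    print("loading order:", order, "change-overs:", cost)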
Styles APA, Harvard, Vancouver, ISO, etc.
43

Kumar, Ravindra C. « An investigation on application of AI techniques on GIS ». Thesis, 1995. http://hdl.handle.net/2009/710.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
44

Moura, Ana Carolina Ribeiro. « Fostering motivation through AI techniques in educational serious games ». Master's thesis, 2017. https://repositorio-aberto.up.pt/handle/10216/107057.

Texte intégral
Résumé :
O termo jogo sério refere-se a jogos que têm um impacto maior do que o entretenimento e têm crescido ao longo dos últimos anos. Os jogos sérios geralmente são desenvolvidos para um público-alvo e um objetivo de ensino . Isso resulta em um desenvolvimento de alto custo para uma pequena audiência. Um dos problemas mais relevantes em jogos sérios é a necessidade de adaptar e equilibrar o jogo, esse equilíbrio é restrito a uma pessoa e não pode ser extrapolado para uma audiência maior, transformando o pequeno público em apenas algumas pessoas.Para ter um controle eficiente da aprendizagem, é necessário compreender a relação latente entre a capacidade cognitiva, motivação e desempenho de cada pessoa. Quanto mais personalizado o material dado a cada aluno, melhor será a sua aprendizagem, porque o desafio é adaptado às suas necessidades.Machine Learning, especialmente reinforcement learning (RL), pode ser usado para geração de comportamento NPC automatizado e pode ser aplicado a um agente que controle a dificuldade de um jogo em um ambiente desconhecido e não supervisionado. Por essa razão, a aplicação de algoritmos como Q-learning pode ajudar na criação de curvas de aprendizagem personalizadas num jogo.O objetivo geral desta tese é explorar como a Inteligência Artificial pode ajudar a monitorar e adaptar um jogo em tempo real às necessidades e ao perfil de um jogador. Especificamente, queremos estudar o estado da arte no contexto de jogos sérios, bem como de jogos adaptativos. Pretendemos criar um jogo com adaptação em tempo real na área da Matemática, que pode criar um perfil confiável de qualquer jogador dentro do nosso público-alvo adaptar-se às necessidades desse jogador. Idealmente, este será o próximo passo no e-learning e no desenvolvimento de jogos sério, pois podemos expandir um jogo para um público maior sem gastar mais recursos.Há uma ampla gama de trabalho em jogos sérios e como eles são a resposta para motivar os alunos, no entanto, é necessário manter um estado de flow no aluno para obter melhores resultados. Alguns artigos também exploram como a AI pode ajudar a manter esse estado, no entanto não há uma resposta concreta a essa necessidade e nenhum estudo definitivo de como fazê-lo. O trabalho de pesquisa ajuda-nos a definir alguns parâmetros necessários para o sucesso deste trabalho de dissertação.Este trabalho começa analisando o estado da arte em educação e jogos, bem como IA usada nestes jogos. Depois, haverá o layout e o plano do design do jogo que foi feito com a ajuda de um especialista. Depois disso, a exploração da integração dos algoritmos Q-learning no jogo, fornecemos algumas alterações ao Q-learning normal. Finalmente, há a análise dos dados coletados do público-alvo e conclusões.
The term serious game refers to games whose impact goes beyond entertainment; they have been growing over recent years. Serious games are usually developed for a target audience and a target teaching goal, which results in high-cost development for a small audience. One of the most relevant problems in serious games is the need to adapt and balance the game; this balance is restricted to one person and cannot be extrapolated to a bigger audience, shrinking the already small audience to just a few people. In order to control learning efficiently, it is necessary to understand the latent relation between the cognitive capacity, motivation and performance of each person. The more personalized the material given to each student, the better their learning will be, because the challenge is tailored to their needs. Machine learning, especially reinforcement learning (RL), can be used for automated NPC behaviour generation, and it can be applied to an agent that controls the difficulty of a game in an unknown, unsupervised environment. For that reason, algorithms like Q-learning may help create personalized learning curves in a game. The broad objective of this thesis is to explore how Artificial Intelligence can help monitor and adapt a game in real time to a player's needs and profile. Specifically, we want to study the state of the art in the context of serious games as well as of adaptive games. We aim to create a game with real-time adaptation in the area of Mathematics that can build a reliable profile of any player within our target audience and adapt to the needs of that profile. Ideally, this will be the next step in e-learning and serious game development, as we can reach a bigger audience with the same resources. There is a broad range of work on serious games and how they can motivate students; however, a state of flow must be maintained in the student for better results. Some papers also explore how AI can help with these questions, but there is no concrete answer to this need and no definitive study of how to do it. Nonetheless, the related research helped us define some parameters needed for the success of this dissertation. This work starts by analysing the state of the art in education and games, as well as the AI used in such games. It then presents the layout and plan of the game design, which was made with the help of an expert. After that, it explores the integration of Q-learning algorithms within the game, for which we provide some alterations to standard Q-learning. Finally, the data collected from the target audience are analysed and conclusions are drawn.
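A compact Q-learning sketch of the kind of difficulty adaptation discussed above: the state is a rough skill estimate, the action is the next exercise difficulty, and the reward favours answers that are correct but not trivially easy. The player model used to generate answers here is invented for illustration and is not the dissertation's game.

    # Q-learning for difficulty adaptation (illustrative).
    import random

    STATES = ["struggling", "steady", "mastering"]
    ACTIONS = ["easy", "medium", "hard"]
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    alpha, gamma, epsilon = 0.2, 0.9, 0.1

    def simulate_answer(state, action):
        # Hypothetical player: success probability depends on skill vs. difficulty.
        p = {"struggling": 0.8, "steady": 0.6, "mastering": 0.4}[state]
        p -= {"easy": 0.0, "medium": 0.2, "hard": 0.4}[action]
        correct = random.random() < max(0.05, p)
        reward = (1.0 if correct else -1.0) + (0.5 if correct and action != "easy" else 0.0)
        next_state = ("mastering" if correct and state != "struggling" else
                      "steady" if correct else "struggling")
        return reward, next_state

    state = "steady"
    for _ in range(5000):
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        reward, nxt = simulate_answer(state, action)
        Q[(state, action)] += alpha * (reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
                                       - Q[(state, action)])
        state = nxt

    for s in STATES:
        print(s, "-> recommended difficulty:", max(ACTIONS, key=lambda a: Q[(s, a)]))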
Styles APA, Harvard, Vancouver, ISO, etc.
46

Datta, Sandip, et Sabir Kumar Samad. « Data mining of machine design elements using AI techniques ». Thesis, 2007. http://ethesis.nitrkl.ac.in/4232/1/Data_Mining_of_Machine_Design_Elements_Using_AI_Techniques.pdf.

Texte intégral
Résumé :
Data mining is the process of extracting knowledge hidden in large volumes of raw data, and AI is about simulating human intelligence. The project aims to show that data mining methods can be applied in mechanical engineering industries. To demonstrate this, we created a database of machine design elements and built a process to retrieve design elements according to the requirements. Applied in industry, the same approach can prove beneficial and cut down much of the time wasted through human inefficiency. The work involves several steps: (1) designing a database that stores the data, and (2) using ASP.NET code to retrieve it. Data mining techniques are viable tools for discovering interesting patterns, clustering the parameter space, detecting anomalies in simulation results, and designing improved physical models. In mechanical industries, this technology can be used for a variety of jobs such as forecasting, file management, providing information on material availability in production processes, and failure analysis in maintenance. The project mainly deals with building a large database, collecting data from a design data book, using SQL Server 2000; retrieval is done with ASP.NET programming on the Windows 2003 Enterprise Edition platform.
Styles APA, Harvard, Vancouver, ISO, etc.
47

Levow, Gina-Anne. « Corpus-Based Techniques for Word Sense Disambiguation ». 1998. http://hdl.handle.net/1721.1/5934.

Texte intégral
Résumé :
The need for robust and easily extensible systems for word sense disambiguation coupled with successes in training systems for a variety of tasks using large on-line corpora has led to extensive research into corpus-based statistical approaches to this problem. Promising results have been achieved by vector space representations of context, clustering combined with a semantic knowledge base, and decision lists based on collocational relations. We evaluate these techniques with respect to three important criteria: how their definition of context affects their ability to incorporate different types of disambiguating information, how they define similarity among senses, and how easily they can generalize to new senses. The strengths and weaknesses of these systems provide guidance for future systems which must capture and model a variety of disambiguating information, both syntactic and semantic.
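A small decision-list sketch in the spirit of the collocation-based systems the abstract evaluates: each collocation receives a smoothed log-odds score and the strongest matching collocation decides the sense. The toy corpus below is invented for illustration.

    # Decision-list word sense disambiguation on collocations (illustrative).
    import math
    from collections import defaultdict

    # Tiny labelled corpus for the ambiguous word "bank".
    train = [
        (["river", "steep", "bank", "erosion"], "GROUND"),
        (["bank", "loan", "interest"], "FINANCE"),
        (["deposit", "bank", "account"], "FINANCE"),
        (["bank", "of", "the", "river"], "GROUND"),
        (["savings", "bank", "branch"], "FINANCE"),
    ]

    counts = defaultdict(lambda: defaultdict(float))
    for context, sense in train:
        for word in context:
            if word != "bank":
                counts[word][sense] += 1

    decision_list = []
    for word, per_sense in counts.items():
        for sense in per_sense:
            other = sum(v for s, v in per_sense.items() if s != sense)
            score = math.log((per_sense[sense] + 0.1) / (other + 0.1))   # smoothed log-odds
            decision_list.append((score, word, sense))
    decision_list.sort(reverse=True)

    def disambiguate(context):
        for score, word, sense in decision_list:       # strongest matching rule wins
            if word in context:
                return sense
        return "FINANCE"                               # fall back to the most frequent sense

    print(disambiguate(["the", "muddy", "bank", "of", "the", "river"]))
    print(disambiguate(["open", "a", "bank", "account"]))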
Styles APA, Harvard, Vancouver, ISO, etc.
48

JIANG, YUAN-QI, et 江元麒. « Solving production line balancing problems by using system simulation and AI techniques ». Thesis, 1992. http://ndltd.ncl.edu.tw/handle/79047436714595743585.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
49

Jha, Alok Kumar. « Intelligent Control and Path Planning of Multiple Mobile Robots Using Hybrid Ai Techniques ». Thesis, 2016. http://ethesis.nitrkl.ac.in/7416/1/2016_PhD_AKJha_510ME109.pdf.

Texte intégral
Résumé :
This work addresses the problem of intelligent control and path planning of multiple mobile robots. Soft computing methods based on three main approaches, i.e. (1) the Bacterial Foraging Optimization Algorithm, (2) the Radial Basis Function Network and (3) the Bees Algorithm, are presented. Initially, the Bacterial Foraging Optimization Algorithm (BFOA) with a constant step size is analyzed for the navigation of mobile robots. The step size is then made adaptive to develop an Adaptive Bacterial Foraging Optimization (ABFO) controller. Further, another controller using a radial basis function neural network (RBFN) is developed for mobile robot navigation; a number of training patterns are used to train the RBFN controller for the different conditions that arise during navigation. Moreover, the Bees Algorithm is used for path planning of the mobile robots in unknown environments, with a new fitness function used to perform the essential navigational tasks effectively and efficiently. In addition to the selected standalone approaches, hybrid models are also proposed to improve the capability of independent navigation. Five hybrid models are presented and analyzed for navigation of one, two and four mobile robots in various scenarios, and comparisons are made of the distance travelled and the time taken by the robots in simulation and in real time. All the proposed approaches are found capable of solving the basic issues of path planning for mobile robots during navigation. The controllers are designed, developed and analyzed for various situations analogous to possible applications of the robots in indoor environments. Computer simulations are presented for all cases with single and multiple mobile robots in different environments to show the effectiveness of the proposed controllers, and various exercises are performed, analyzed and compared in physical environments to exhibit the effectiveness of the developed controllers.
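A minimal bacterial-foraging sketch for choosing a robot's next waypoint: bacteria are candidate positions, the cost combines distance to the goal with an obstacle penalty, and the chemotaxis step size shrinks over the run as a crude stand-in for the adaptive step of the ABFO controller. The arena, goal and obstacles are hypothetical, and the reproduction and elimination-dispersal phases of the full algorithm are omitted.

    # Chemotaxis-only bacterial foraging sketch for waypoint selection (illustrative).
    import math, random

    GOAL, OBSTACLES = (9.0, 9.0), [(4.0, 4.5), (6.5, 7.0)]

    def cost(p):
        d_goal = math.dist(p, GOAL)
        penalty = sum(max(0.0, 1.5 - math.dist(p, o)) * 10 for o in OBSTACLES)
        return d_goal + penalty

    def bfoa_step(start, n_bacteria=20, chemotaxis_steps=30):
        bacteria = [(start[0] + random.uniform(-1, 1), start[1] + random.uniform(-1, 1))
                    for _ in range(n_bacteria)]
        for step in range(chemotaxis_steps):
            step_size = 0.5 * (1 - step / chemotaxis_steps) + 0.05   # decaying ("adaptive") step
            moved = []
            for b in bacteria:
                angle = random.uniform(0, 2 * math.pi)               # tumble
                cand = (b[0] + step_size * math.cos(angle), b[1] + step_size * math.sin(angle))
                for _ in range(4):                                   # limited swim while improving
                    if cost(cand) >= cost(b):
                        break
                    b = cand
                    cand = (b[0] + step_size * math.cos(angle), b[1] + step_size * math.sin(angle))
                moved.append(b)
            bacteria = moved
        return min(bacteria, key=cost)

    print("next waypoint:", tuple(round(c, 2) for c in bfoa_step((0.0, 0.0))))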
Styles APA, Harvard, Vancouver, ISO, etc.
50

Nedunuri, Srinivas. « Theory and techniques for synthesizing efficient breadth-first search algorithms ». Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-08-6242.

Texte intégral
Résumé :
The development of efficient algorithms to solve a wide variety of combinatorial and planning problems is a significant achievement in computer science. Traditionally each algorithm is developed individually, based on flashes of insight or experience, and then (optionally) verified for correctness. While computer science has formalized the analysis and verification of algorithms, the process of algorithm development remains largely ad-hoc. The ad-hoc nature of algorithm development is especially limiting when developing algorithms for a family of related problems. Guided program synthesis is an existing methodology for systematic development of algorithms. Specific algorithms are viewed as instances of very general algorithm schemas. For example, the Global Search schema generalizes traditional branch-and-bound search, and includes both depth-first and breadth-first strategies. Algorithm development involves systematic specialization of the algorithm schema based on problem-specific constraints to create efficient algorithms that are correct by construction, obviating the need for a separate verification step. Guided program synthesis has been applied to a wide range of algorithms, but there is still no systematic process for the synthesis of large search programs such as AI planners. Our first contribution is the specialization of Global Search to a class we call Efficient Breadth-First Search (EBFS), by incorporating dominance relations to constrain the size of the frontier of the search to be polynomially bounded. Dominance relations allow two search spaces to be compared to determine whether one dominates the other, thus allowing the dominated space to be eliminated from the search. We further show that EBFS is an effective characterization of greedy algorithms, when the breadth bound is set to one. Surprisingly, the resulting characterization is more general than the well-known characterization of greedy algorithms, namely the Greedy Algorithm parametrized over algebraic structures called greedoids. Our second contribution is a methodology for systematically deriving dominance relations, not just for individual problems but for families of related problems. The techniques are illustrated on numerous well-known problems. Combining this with the program schema for EBFS results in efficient greedy algorithms. Our third contribution is application of the theory and methodology to the practical problem of synthesizing fast planners. Nearly all the state-of-the-art planners in the planning literature are heuristic domain-independent planners. They generally do not scale well and their space requirements also become quite prohibitive. Planners such as TLPlan that incorporate domain-specific information in the form of control rules are orders of magnitude faster. However, devising the control rules is labor-intensive task and requires domain expertise and insight. The correctness of the rules is also not guaranteed. We introduce a method by which domain-specific dominance relations can be systematically derived, which can then be turned into control rules, and demonstrate the method on a planning problem (Logistics).
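An illustrative sketch of breadth-first search with a dominance relation and a bounded frontier, in the spirit of the EBFS class described above; the knapsack-style instance and the dominance test are illustrative additions, not the dissertation's derivations. With a frontier bound of 1 the search degenerates into a greedy algorithm, as the abstract notes.

    # Bounded breadth-first search with dominance pruning (illustrative).
    from dataclasses import dataclass

    ITEMS = [(6, 30), (3, 14), (4, 16), (2, 9)]       # (weight, value), hypothetical
    CAPACITY = 10

    @dataclass(frozen=True)
    class Node:
        index: int            # next item to decide on
        weight: int
        value: int

    def dominates(a, b):
        # a dominates b if it has decided the same items with no more weight and no less value.
        return a.index == b.index and a.weight <= b.weight and a.value >= b.value

    def ebfs(frontier_bound):
        frontier, best = [Node(0, 0, 0)], Node(0, 0, 0)
        while frontier:
            children = []
            for node in frontier:
                if node.value > best.value:
                    best = node
                if node.index == len(ITEMS):
                    continue
                w, v = ITEMS[node.index]
                children.append(Node(node.index + 1, node.weight, node.value))          # skip item
                if node.weight + w <= CAPACITY:                                         # take item
                    children.append(Node(node.index + 1, node.weight + w, node.value + v))
            undominated = [c for c in children
                           if not any(dominates(o, c) and o != c for o in children)]
            undominated.sort(key=lambda n: n.value, reverse=True)
            frontier = undominated[:frontier_bound]                 # bounded (greedy when bound = 1)
        return best.value

    print("greedy (bound 1):", ebfs(1), " wider search (bound 8):", ebfs(8))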
text
Styles APA, Harvard, Vancouver, ISO, etc.