
Dissertations / Theses on the topic 'Artificial intelligence (ML/DL)'



Consult the top 31 dissertations / theses for your research on the topic 'Artificial intelligence (ML/DL).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Laguili, Oumaima. "Smart management of combined electric water heaters and self-consumption photovoltaic solar panels (SmartECS)." Electronic Thesis or Diss., Perpignan, 2024. https://theses-public.univ-perp.fr/2024PERP0045.pdf.

Full text
Abstract:
While the building sector is becoming increasingly energy efficient, the need for domestic hot water (DHW) is growing, especially in newer homes. It is therefore necessary to improve the efficiency of DHW production solutions, to better understand DHW needs, and to involve the user in decision-making. The project deals with the development of algorithms for the smart control of installations combining electric water heaters and self-consumption photovoltaic solar panels. A model-based predictive control strategy will be developed and implemented, leveraging machine learning tools. The strategy will be generalized to multi-water-heater systems sharing a photovoltaic production, through the development of a distributed and hierarchical control approach. An experiment will make it possible to assess the conditions of acceptability of the developed solution and the impact of information on decision-making.
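A minimal receding-horizon sketch of the kind of predictive control described in this abstract, assuming a toy linear tank model and hypothetical PV and hot-water-draw forecasts; it is not the controller developed in the thesis.

```python
# Minimal receding-horizon (MPC-style) sketch for a single electric water heater
# coupled with PV self-consumption. The tank model, forecasts and coefficients
# below are illustrative assumptions, not the thesis's identified models.
import itertools

P_HEATER = 2.0          # kW, heater electrical power when switched on
DT = 0.5                # h, control time step
A_HEAT = 4.0            # degC temperature gain per kWh injected (toy value)
B_LOSS = 0.02           # per-step standing losses toward ambient
T_AMB, T_MIN, T_MAX = 20.0, 45.0, 65.0

def simulate(T0, schedule, draw, k_draw=3.0):
    """Propagate tank temperature for a given on/off schedule."""
    T, temps = T0, []
    for u, d in zip(schedule, draw):
        T = T + A_HEAT * u * P_HEATER * DT - B_LOSS * (T - T_AMB) - k_draw * d
        temps.append(T)
    return temps

def mpc_step(T0, pv_forecast, draw_forecast):
    """Pick the on/off plan minimising grid import, then return its first action."""
    horizon = len(pv_forecast)
    best_u, best_cost = 0, float("inf")
    for schedule in itertools.product((0, 1), repeat=horizon):
        temps = simulate(T0, schedule, draw_forecast)
        if any(t < T_MIN or t > T_MAX for t in temps):
            continue                      # comfort / safety constraint violated
        grid = sum(max(u * P_HEATER - pv, 0.0) * DT
                   for u, pv in zip(schedule, pv_forecast))
        if grid < best_cost:
            best_cost, best_u = grid, schedule[0]
    return best_u

# Hypothetical 3-hour forecasts (6 steps of 30 min): PV production (kW) and hot-water draws
pv = [0.0, 0.5, 1.5, 2.0, 1.0, 0.2]
draw = [0.0, 0.1, 0.0, 0.3, 0.0, 0.2]
print("first control action:", mpc_step(T0=50.0, pv_forecast=pv, draw_forecast=draw))
```

In a real deployment the forecasts would come from the learned models mentioned in the abstract, and the optimization would be re-run at every time step on the latest measurements.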
APA, Harvard, Vancouver, ISO, and other styles
2

Giuliani, Luca. "Extending the Moving Targets Method for Injecting Constraints in Machine Learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23885/.

Full text
Abstract:
Informed Machine Learning is an umbrella term that comprises a set of methodologies in which domain knowledge is injected into a data-driven system in order to improve its accuracy, satisfy some external constraint, and in general serve the purposes of explainability and reliability. The topic has been widely explored in the literature by means of many different techniques. Moving Targets is one such technique, particularly focused on constraint satisfaction: it is based on decomposition and bi-level optimization and proceeds by iteratively refining the target labels through a master step, which is in charge of enforcing the constraints, while the training phase is delegated to a learner. In this work, we extend the algorithm in order to deal with semi-supervised learning and soft constraints. In particular, we focus our empirical evaluation on both regression and classification tasks involving monotonicity shape constraints. We demonstrate that our method is robust with respect to its hyperparameters, as well as being able to generalize very well while reducing the number of violations of the enforced constraints. Additionally, the method can even outperform, in terms of both accuracy and constraint satisfaction, other state-of-the-art techniques such as Lattice Models and Semantic-based Regularization with a Lagrangian Dual approach for automatic hyperparameter tuning.
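A toy sketch of the alternating master/learner idea behind Moving Targets under a monotonicity constraint; the master step here uses isotonic regression as a stand-in for the algorithm's actual constrained optimization, and the data, learner and blending weight are illustrative assumptions.

```python
# Toy sketch of an alternating master/learner scheme in the spirit of Moving Targets:
# the master step projects the current targets onto a constraint-satisfying set
# (monotonicity along feature x, approximated here with isotonic regression), and
# the learner step refits a standard regressor on the adjusted targets.
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = x + 0.3 * rng.normal(size=200)          # noisy, roughly increasing targets

X = x.reshape(-1, 1)
targets = y.copy()
learner = GradientBoostingRegressor(random_state=0)

for iteration in range(5):
    # learner step: fit on the current (possibly adjusted) targets
    learner.fit(X, targets)
    pred = learner.predict(X)
    # master step: pull predictions toward the labels while enforcing monotonicity
    blended = 0.5 * pred + 0.5 * y
    targets = IsotonicRegression(increasing=True).fit_transform(x, blended)

final = learner.predict(X)
violations = int(np.sum(np.diff(final) < -1e-6))   # inputs are already sorted
print("monotonicity violations of the learned model:", violations)
```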
APA, Harvard, Vancouver, ISO, and other styles
3

Lundin, Lowe. "Artificial Intelligence for Data Center Power Consumption Optimisation." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447627.

Full text
Abstract:
The aim of the project was to implement a machine learning model to optimise the power consumption of Ericsson's Kista data center. The approach taken was to use a Reinforcement Learning agent trained in a simulation environment based on data specific to the data center. In this manner, the machine learning model could find interactions between parameters, both general and site-specific, in ways that a sophisticated algorithm designed by a human never could. In this work it was found that a neural network can effectively mimic a real data center and that the Reinforcement Learning policy "TD3" could, within the simulated environment, consistently and convincingly outperform the control policy currently in use at Ericsson's Kista data center.
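A hedged sketch of this kind of training setup, assuming a hypothetical gym-style stand-in for the data-center simulation and the off-the-shelf TD3 implementation from stable-baselines3 (version 2.x with gymnasium); the dynamics and reward below are illustrative, not Ericsson's model.

```python
# Toy cooling environment plus TD3 training loop. The dynamics and reward are
# invented stand-ins for the data-driven simulator described in the thesis.
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import TD3

class CoolingEnv(gym.Env):
    """The agent sets a cooling effort; the penalty grows with cooling power and
    with the hall temperature drifting outside a comfort band."""
    def __init__(self):
        super().__init__()
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(low=0.0, high=50.0, shape=(1,), dtype=np.float32)
        self.temp, self.steps = 25.0, 0

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.temp = 25.0 + self.np_random.uniform(-2, 2)
        self.steps = 0
        return np.array([self.temp], dtype=np.float32), {}

    def step(self, action):
        self.steps += 1
        it_load = self.np_random.uniform(0.5, 1.5)           # varying IT heat load
        self.temp += it_load - 1.0 - 2.0 * float(action[0])  # cooling effect of the action
        self.temp = float(np.clip(self.temp, 0.0, 50.0))
        cooling_power = abs(float(action[0]))
        comfort_penalty = max(0.0, self.temp - 27.0) + max(0.0, 18.0 - self.temp)
        reward = -(cooling_power + 2.0 * comfort_penalty)
        truncated = self.steps >= 200                         # fixed-length episodes
        return np.array([self.temp], dtype=np.float32), reward, False, truncated, {}

env = CoolingEnv()
model = TD3("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=5_000)          # short demo run
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
print("suggested cooling action:", action)
```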
APA, Harvard, Vancouver, ISO, and other styles
4

Karlsson, Frida. "The opportunities of applying Artificial Intelligence in strategic sourcing." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281306.

Full text
Abstract:
Artificial Intelligence technology has become increasingly important from a business perspective. In strategic sourcing, the technology has not been explored much; however, 67% of CPOs in one survey indicated that AI is one of their top priorities for the next 10 years. AI can be used to identify patterns, predict prices and provide support in decision-making. A qualitative case study has been performed in the strategic sourcing function of a large global industrial company, where the purpose has been to investigate how applicable AI is in the strategic sourcing process at the Case Company. In order to achieve the purpose of this study, it has been important to understand the strategic sourcing process, what AI technology is, and what it is capable of in strategic sourcing. Based on the empirical data collection combined with the literature, opportunities for applying AI in strategic sourcing have been identified and key areas for an implementation have been suggested. These include forecasting, spend analysis and savings tracking, supplier risk management, supplier identification and selection, the RFQ process, the negotiation process, contract management and supplier performance management. These key areas have followed the framework identified in the literature study while identifying and adding new factors. It also proved important to consider challenges and risks, readiness and maturity, as well as factors that enable an implementation. To assess how mature and ready the strategic sourcing function is for an implementation, some of the previous digital projects involving AI technologies have been mapped and analysed. Based on the identified key areas, use cases and corresponding benefits of applying AI have been suggested, and a guideline covering important factors to consider when applying the technology has been provided. It was concluded that it may be beneficial to start with a smaller use case and then scale it up. As the strategic sourcing function has been establishing a spend analytics platform for the indirect team, a good start might be to evaluate that project and then apply AI on top of the existing solution. Other factors to consider are ensuring data quality and security, aligning with top management, and demonstrating the advantages AI can provide in terms of increased efficiency and cost savings. The entire strategic sourcing function should be involved in an AI project, and the focus should not only be on technological aspects but also on soft factors, including change management and working agile, in order to successfully apply AI in strategic sourcing.
APA, Harvard, Vancouver, ISO, and other styles
5

Djaidja, Taki Eddine Toufik. "Advancing the Security of 5G and Beyond Vehicular Networks through AI/DL." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. http://www.theses.fr/2024UBFCK009.

Full text
Abstract:
The emergence of Fifth Generation (5G) and Vehicle-to-Everything (V2X) networks has ushered in an era of unparalleled connectivity and associated services. These networks facilitate seamless interactions among vehicles, infrastructure, and more, providing a range of services through network slices, each tailored to specific requirements. Future generations are expected to bring further advancements to these networks. However, this remarkable progress also exposes them to a myriad of security threats, many of which current measures struggle to detect and mitigate effectively. This underscores the need for advanced intrusion detection mechanisms to ensure the integrity, confidentiality, and availability of data and services. One area of increasing interest in both academia and industry is Artificial Intelligence (AI), particularly its application to cybersecurity threats. Notably, neural networks (NNs) have demonstrated promise in this context, although AI-based solutions come with inherent challenges. These challenges can be summarized as concerns about effectiveness and efficiency: the former pertains to the need for Intrusion Detection Systems (IDSs) to accurately detect threats, while the latter involves achieving time efficiency and early threat detection. This dissertation represents the culmination of our research on the aforementioned challenges of AI-based IDSs in 5G systems in general and 5G-V2X in particular. We initiated our investigation by conducting a comprehensive review of the existing literature. Throughout this thesis, we explore the use of Fuzzy Inference Systems (FISs) and NNs, with a specific emphasis on the latter. We leveraged state-of-the-art NN learning, referred to as Deep Learning (DL), including the incorporation of recurrent neural networks and attention mechanisms. These techniques are innovatively harnessed to make significant progress in enhancing the effectiveness and efficiency of IDSs. Moreover, our research delves into additional challenges related to data privacy when employing DL-based IDSs, which we address by leveraging and experimenting with state-of-the-art federated learning (FL) algorithms.
APA, Harvard, Vancouver, ISO, and other styles
6

Nystad, Marcus, and Lukas Lindblom. "Artificial Intelligence in the Pulp and Paper Industry : Current State and Future Trends." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279574.

Full text
Abstract:
The advancements in Artificial Intelligence (AI) have received large attention in recent years, and increased awareness has led to massive societal benefits and new opportunities for industries able to capitalize on these emerging technologies. The pulp and paper industry is going through one of the most considerable transformations into Industry 4.0. Integrating AI technology in the manufacturing processes of the pulp and paper industry has shown great potential, but there is uncertainty about which direction companies are heading. This study is an investigation of the pulp and paper industry, in collaboration with IBM, that aims to fill a gap between academia and the progress companies are making. More specifically, this thesis is a multiple case study of the current state and barriers of AI technology in the Swedish pulp and paper industry, the future trends and expectations of AI, and the way organizations are managing AI initiatives. Semi-structured interviews were conducted with 11 participants from three perspectives, and the data was thematically coded. Our analysis shows that the use of AI varies and that companies are primarily experimenting with a still immature technology. Several trends and areas with future potential were identified, and it was shown that digital innovation management is highly regarded. We conclude that there are several barriers hindering further use of AI; however, continued progress with AI will provide large benefits in the long term in areas such as predictive maintenance and process optimization. Several measures taken to support initiatives with AI were identified and discussed. We encourage managers to take appropriate actions in the continued work toward AI integration and encourage further research in the area of potential reworks in R&D.
APA, Harvard, Vancouver, ISO, and other styles
7

Hanski, Jari, and Kaan Baris Biçak. "An Evaluation of the Unity Machine Learning Agents Toolkit in Dense and Sparse Reward Video Game Environments." Thesis, Uppsala universitet, Institutionen för speldesign, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444982.

Full text
Abstract:
In computer games, one use case for artificial intelligence is to create interesting problems for the player. To do this, new techniques such as reinforcement learning allow game developers to create artificial intelligence agents with human-like or superhuman abilities. The Unity ML-Agents Toolkit is a plugin that provides game developers with access to reinforcement learning algorithms without requiring expertise in machine learning. In this paper, we compare reinforcement learning methods and provide empirical training data from two different environments. First, we describe the chosen reinforcement learning methods and then explain the design of both training environments. We compared the benefits in both dense and sparse reward environments. The reinforcement learning methods were evaluated by comparing the training speed and cumulative rewards of the agents. The goal was to evaluate how much the combination of extrinsic and intrinsic rewards accelerated the training process in the sparse rewards environment. We hope this study helps game developers utilize reinforcement learning more effectively, saving time during the training process by choosing the most fitting training method for their video game environment. The results show that when training reinforcement learning agents in sparse reward environments, the agents trained faster with the combination of extrinsic and intrinsic rewards, whereas an agent trained in a sparse reward environment with only extrinsic rewards failed to learn to complete the task.
APA, Harvard, Vancouver, ISO, and other styles
8

Klingvall, Emelie. "Artificiell intelligens som ett beslutsstöd inom mammografi : En kvalitativ studie om radiologers perspektiv på icke-tekniska utmaningar." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-18768.

Full text
Abstract:
Artificial intelligence (AI) has become more commonly used to support people when making decisions. Machine learning (ML) is a sub-area of AI that has become more frequently used in health care. Patient data is increasing in healthcare, and an AI system can help to process this increased amount of data, which can further develop decision support that helps doctors. AI technology is becoming more common in radiology, and specifically in mammography, as a decision support. The usage of AI technology in mammography has many benefits, but there are also challenges that are not connected to the technology. Non-technical challenges are important to consider and review in order to generate a successful practice. The purpose of this thesis is therefore to review non-technical challenges when using AI as a decision support in mammography from a radiological perspective. Radiologists with experience in mammography were interviewed in order to increase knowledge about their views on the usage. The results identified and developed the non-technical challenges based on the themes: responsibility, human abilities, acceptance, education/knowledge and collaboration. The study also found indications within these themes that there are non-technical challenges with associated aspects that are more prominent than others. This study increases the knowledge of radiologists' views on the usage of AI and contributes to future research for all the actors involved. Future research can address these non-technical challenges even before the technology is implemented, to reduce the risk of complications.
APA, Harvard, Vancouver, ISO, and other styles
9

Bengtsson, Theodor, and Jonas Hägerlöf. "Stora mängder användardata för produktutveckling : Möjligheter och utmaningar vid integrering av stora mängder användardata i produktutvecklingsprocesser." Thesis, KTH, Integrerad produktutveckling, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297966.

Full text
Abstract:
Technological development has contributed to an increased amount of user data that companies have access to, and this amount is expected to continue to increase. Companies that integrate user data into their product development processes are expected to gain competitive advantages. The purpose of this work is to investigate opportunities and challenges when integrating large amounts of user data. By answering two research questions, the study fulfils this purpose, and the consequences for decision-making are also addressed. The work process began with a literature study that formed the basis for both the problematization and the purpose, which identifies a gap in the research about user data in product development processes. The literature study created a broader understanding of the subject. The empirical part consisted of a qualitative semi-structured interview study with four participating companies and an equal number of respondents with knowledge in the field. Coding of the material identified areas among the respondents which contributed with insights that were processed to contribute to the research area. The results highlight opportunities and challenges companies face when integrating large amounts of user data into product development processes. The study highlights the user as central to product development, where increased data enables complex data analysis. Efficient analysis of data enables faster iteration processes, and repetitive jobs can be replaced by more stimulating ones. In addition, the basis for decision-making becomes more extensive and can generate new strategies and designs for offers. The study also determines that increased data places demands on companies, where the relevance of the data is important and processes for handling it must be able to define the relevant data. Furthermore, companies need to mature in the role of integrating user data. In order to ensure a safe basis for decision-making from user data, qualitative and quantitative analyses should be promoted to work together to confirm each other's identified patterns. This study determines that integrating large amounts of user data into product development processes requires acquiring competence so that data management processes can ensure relevance by defining which data to collect. With successful integration, companies that integrate user data achieve competitive advantages and capitalization opportunities that are beneficial in the long term.
APA, Harvard, Vancouver, ISO, and other styles
10

Pouy, Léo. "OpenNas : un cadre adaptable de recherche automatique d'architecture neuronale." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG089.

Full text
Abstract:
When creating a neural network, the "fine-tuning" stage is essential. During this fine-tuning, the neural network developer must adjust the hyperparameters and the architecture of the network so that it meets the requirements. This is a time-consuming and tedious phase, and it requires experience on the part of the developer. To make it easier to create neural networks, a discipline called Automatic Machine Learning (Auto-ML) seeks to automate the creation of machine learning models. This thesis is part of this Auto-ML approach and proposes a method for creating and optimizing neural network architectures (Neural Architecture Search, NAS). To this end, a new search space based on block nesting has been formalized. This space makes it possible to create a neural network from elementary blocks connected in series or in parallel to form compound blocks, which can themselves be connected to form an even more complex network. The advantage of this search space is that it can be easily customized to steer the search toward specific architecture families (VGG, Inception, ResNet, etc.) and to control the optimization time. Moreover, it is not constrained to any particular optimization algorithm. In this thesis, the formalization of the search space is first described, along with encoding techniques to represent a network from the search space by a natural number (or a list of natural numbers). Optimization strategies applicable to this search space are then proposed. Finally, neural architecture search experiments on different datasets and with different objectives using the developed tool (named OpenNas) are presented.
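A small illustration of how a nested series/parallel block architecture can be decoded from a list of natural numbers; the gene values and block vocabulary are hypothetical and do not reproduce the actual OpenNas encoding.

```python
# Illustrative decoding of an integer genome into a nested series/parallel block
# architecture, in the spirit of the search space described above.
import random

ELEMENTARY = ["conv3x3", "conv5x5", "maxpool", "identity"]

def decode(genome):
    """Each gene: 0..3 -> elementary block, 4 -> open parallel group, 5 -> close group."""
    stack, current = [], []
    for gene in genome:
        if gene < len(ELEMENTARY):
            current.append(ELEMENTARY[gene])
        elif gene == 4:                      # open a parallel group
            stack.append(current)
            current = []
        elif gene == 5 and stack:            # close it and nest the result
            parent = stack.pop()
            parent.append(("parallel", current))
            current = parent
    while stack:                             # close any unbalanced groups
        parent = stack.pop()
        parent.append(("parallel", current))
        current = parent
    return ("series", current)

random.seed(1)
genome = [random.randint(0, 5) for _ in range(12)]
print("genome :", genome)
print("network:", decode(genome))
```

Because every genome maps to a valid nested architecture, any black-box optimizer (evolutionary, Bayesian, random search) can operate directly on the integer lists, which matches the abstract's point that the search space is not tied to a particular optimization algorithm.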
APA, Harvard, Vancouver, ISO, and other styles
11

Stellmar, Justin. "Predicting the Deformation of 3D Printed ABS Plastic Using Machine Learning Regressions." Youngstown State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1587462911261523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Reiling, Anthony J. "Convolutional Neural Network Optimization Using Genetic Algorithms." University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1512662981172387.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Ahlm, Kristoffer. "IDENTIFIKATION AV RISKINDIKATORER I FINANSIELL INFORMATION MED HJÄLP AV AI/ML : Ökade möjligheter för myndigheter att förebygga ekonomisk brottslighet." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184818.

Full text
Abstract:
Economic crime is more lucrative than other crime such as drug dealing, fencing stolen goods and trafficking. Early measures that make it more difficult for criminals to use companies for criminal purposes can avoid large costs for society. A literature review also showed that there are large weaknesses in the collaboration between Swedish authorities to detect serious economic crime. Today, most crimes among companies that commit fraud are discovered only after a company has declared bankruptcy. In previous studies, machine learning models have been tested to detect economic crime, some Swedish authorities are now using machine learning methods to detect crimes, and more advanced methods are used by the Danish authorities. Bolagsverket (the Swedish Companies Registration Office) has a large register of companies in Sweden, and the aim of this study is to investigate whether machine learning can be used to identify suspicious companies, using digitally submitted annual reports and information from Bolagsverket's register to train classification models. To train the models, lawsuits were collected from the Swedish Economic Crime Authority that could be connected to specific companies through their digitally submitted annual reports. Principal component analysis was used to visually show differences between the groups of suspected and non-suspected companies; the analysis showed an overlap between the groups and no clear clustering. Because the dataset was unbalanced, with 38 suspicious companies out of 1009, the oversampling technique SMOTE was used to create synthetic data and increase the number of suspects in the dataset. Two machine learning models, Random Forest and support vector machine (SVM), were compared in 10-fold cross-validation. Both models showed a recall of around 0.91, but Random Forest had a much higher precision and higher accuracy. Random Forest was chosen, retrained, and showed a recall of 0.75 when tested on unseen data with 8 suspects out of 202 companies. Lowering the decision threshold resulted in a higher recall but with a larger portion of wrongly classified companies. The study clearly shows the problem with an unbalanced dataset and the challenges of working with little data. A larger dataset would have made it possible to make a more selective choice of crime types, which could have resulted in a more robust model that Bolagsverket could use to more easily identify suspicious companies in its register.
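A sketch of the evaluation pipeline outlined above: SMOTE oversampling applied inside each fold of a 10-fold cross-validation, comparing Random Forest and SVM on a synthetic imbalanced dataset standing in for the annual-report features.

```python
# SMOTE + classifier comparison under 10-fold cross-validation. The data is synthetic
# and imbalanced (~4% positives), mimicking the class imbalance described in the thesis.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.96, 0.04],
                           random_state=0)           # ~4% "suspicious" class

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": SVC(kernel="rbf", random_state=0),
}

for name, clf in models.items():
    pipe = Pipeline([("scale", StandardScaler()),
                     ("smote", SMOTE(random_state=0)),   # oversample inside each fold only
                     ("clf", clf)])
    scores = cross_validate(pipe, X, y, cv=10,
                            scoring=["recall", "precision", "accuracy"])
    print(name,
          "recall=%.2f" % scores["test_recall"].mean(),
          "precision=%.2f" % scores["test_precision"].mean(),
          "accuracy=%.2f" % scores["test_accuracy"].mean())
```

Placing SMOTE inside the imblearn pipeline ensures the synthetic samples are generated only from each fold's training data, so the cross-validated recall and precision are not inflated by leakage.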
APA, Harvard, Vancouver, ISO, and other styles
14

Olivieri, Emily, and Loredana Isacsson. "Exploring guidelines for human-centred design in the wake of AI capabilities : A qualitative study." Thesis, Tekniska Högskolan, Jönköping University, JTH, Datateknik och informatik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-48072.

Full text
Abstract:
Purpose – Artificial Intelligence has seen important growth in the digital area in recent years. Our aim is to explore possible guidelines that make use of AI advances to design good user experiences for digital products. Method – The proposed methods to gather the necessary qualitative data involve open-ended interviews with UX/UI designers working in the industry, in order to gain a deeper understanding of their thoughts and experiences. In addition, a literature review is conducted to identify the knowledge gap and build the base of our new theory. Findings – Our findings suggest a need to embrace new technological developments in favour of enhancing UX designers' workflow. Additionally, basic AI and ML knowledge is needed to utilise these capabilities to their full potential. A crucial area of impact where AI can augment a designer's reach is personalization: together with smart algorithms, designers may target their creations to specific user needs and demands. UX designers even have the opportunity for innovation as mundane tasks are automated by intelligent assistants, which broadens the possibility of acquiring further skills to enhance their work. One result that is both innovative and unexpected is the notion that AI and ML can augment a designer's creativity by taking over mundane tasks, as well as providing assistance with certain graphics and inputs. Implications – These results indicate that AI and ML may potentially impact the UX industry in a positive manner, as long as designers make use of the technology for the benefit of the user in true human-centred practice. Limitations – Our study presents its own limitations; due to the scope and time frame of this dissertation, we are bound to the knowledge gathered from a small sample of professionals in Sweden. The presented guidelines are a suggestion based on our research and not a definitive workflow.
APA, Harvard, Vancouver, ISO, and other styles
15

Mele, Matteo. "Convolutional Neural Networks for the Classification of Olive Oil Geographical Origin." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
This work proposes a deep learning approach to a multi-class classification problem. In particular, our project goal is to establish whether there is a connection between olive oil molecular composition and its geographical origin. To accomplish this, we implement a method to transform structured data into meaningful images (exploring the existing literature) and develop a fine-tuned Convolutional Neural Network able to perform the classification. We also implement a series of tailored techniques to improve the model.
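A naive sketch of the structured-data-to-image idea: each feature vector is padded and reshaped into a small single-channel image and classified by a compact CNN. The reshaping is only a placeholder for the transformations explored in the thesis, and the data is synthetic.

```python
# Tabular-to-image pipeline sketch: pad each sample's feature vector, reshape it
# into a small single-channel "image", and train a compact CNN classifier.
import numpy as np
import tensorflow as tf

N_SAMPLES, N_FEATURES, N_CLASSES, SIDE = 600, 60, 4, 8   # 60 features padded to 8x8

rng = np.random.default_rng(0)
X = rng.normal(size=(N_SAMPLES, N_FEATURES)).astype("float32")
y = rng.integers(0, N_CLASSES, size=N_SAMPLES)

pad = SIDE * SIDE - N_FEATURES
X_img = np.pad(X, ((0, 0), (0, pad))).reshape(-1, SIDE, SIDE, 1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SIDE, SIDE, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
hist = model.fit(X_img, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)
print("validation accuracy:", hist.history["val_accuracy"][-1])
```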
APA, Harvard, Vancouver, ISO, and other styles
16

Appelstål, Michael. "Multimodal Model for Construction Site Aversion Classification." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-421011.

Full text
Abstract:
Aversions on construction sites can be everything from missing material to fire hazards or insufficient cleaning. These aversions appear very often on construction sites, and the construction company needs to report and take care of them in order for the site to run correctly. The reports consist of an image of the aversion and a text describing the aversion. Report categorization is currently done manually, which is both time- and cost-ineffective. The task for this thesis was to implement and evaluate an automatic multimodal machine learning classifier for the reported aversions that utilized both the image and text data from the reports. The model presented is a late-fusion model consisting of a Swedish BERT text classifier and a VGG16 for image classification. The results showed that an automated classifier is feasible for this task and could be used in real life to make the classification task more time- and cost-efficient. The model scored a 66.2% accuracy and 89.7% top-5 accuracy on the task, and the experiments revealed some areas of improvement on the data and model that could be further explored to potentially improve the performance.
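A schematic late-fusion classifier in the spirit of the model described above, with small placeholder branches standing in for the Swedish BERT text classifier and the VGG16 image classifier; the mixing strategy and dimensions are illustrative assumptions.

```python
# Late fusion of a text branch and an image branch: each branch produces class
# logits, which are combined with a learned mixing weight. The branches below are
# tiny placeholders, not the actual BERT / VGG16 models used in the thesis.
import torch
import torch.nn as nn

N_CLASSES = 10

class TextBranch(nn.Module):                 # stand-in for a BERT text classifier
    def __init__(self, vocab=5000, dim=64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)
        self.head = nn.Linear(dim, N_CLASSES)
    def forward(self, tokens):
        return self.head(self.emb(tokens))

class ImageBranch(nn.Module):                # stand-in for a VGG16 image classifier
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(8, N_CLASSES)
    def forward(self, img):
        return self.head(self.features(img))

class LateFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.text, self.image = TextBranch(), ImageBranch()
        self.alpha = nn.Parameter(torch.tensor(0.5))   # learned mixing weight
    def forward(self, tokens, img):
        return self.alpha * self.text(tokens) + (1 - self.alpha) * self.image(img)

model = LateFusion()
tokens = torch.randint(0, 5000, (4, 32))      # batch of 4 token sequences
images = torch.rand(4, 3, 64, 64)             # batch of 4 RGB images
logits = model(tokens, images)
print(logits.shape)                           # torch.Size([4, 10])
```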
APA, Harvard, Vancouver, ISO, and other styles
17

Lagerkvist, Love. "Neural Novelty — How Machine Learning Does Interactive Generative Literature." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-21222.

Full text
Abstract:
Every day, machine learning (ML) and artificial intelligence (AI) embed themselves further into domestic and industrial technologies. Interaction designers have historically struggled to engage directly with the subject, facing a shortage of appropriate methods and abstractions. There is a need to find ways through which interaction design practitioners might integrate ML into their work, in order to democratize and diversify the field. This thesis proposes a mode of inquiry that considers the interactive qualities of what machine learning does, as opposed to the technical specifications of what machine learning is. A shift in focus from the technicality of ML to the artifacts it creates allows interaction designers to situate their existing skill set, affording them to engage with machine learning as a design material. A Research-through-Design process explores different methodological adaptions, evaluated through user feedback and an in-depth case analysis. An elaborated design experiment, Multiverse, examines the novel, non-anthropomorphic aesthetic qualities of generative literature. It prototypes interactions with bidirectional literature and studies how these transform the reader into a cybertextual "user-reader". The thesis ends with a discussion on the implications of machine-written literature and proposes a number of future investigations into the research space unfolded through the prototype.
APA, Harvard, Vancouver, ISO, and other styles
18

Sazonau, Viachaslau. "General terminology induction in description logics." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/general-terminology-induction-in-description-logics(63142865-d610-4041-84fa-764af1759554).html.

Full text
Abstract:
In computer science, an ontology is a machine-processable representation of knowledge about some domain. Ontologies are encoded in ontology languages, such as the Web Ontology Language (OWL) based on Description Logics (DLs). An ontology is a set of logical statements, called axioms. Some axioms make universal statements, e.g. all fathers are men, while others record data, i.e. facts about specific individuals, e.g. Bob is a father. A set of universal statements is called TBox, as it encodes terminology, i.e. schema-level conceptual relationships, and a set of facts is called ABox, as it encodes instance-level assertions. Ontologies are extensively developed and widely used in domains such as biology and medicine. Manual engineering of a TBox is a difficult task that includes modelling conceptual relationships of the domain and encoding those relationships in the ontology language, e.g. OWL. Hence, it requires the knowledge of domain experts and skills of ontology engineers combined together. In order to assist engineering of TBoxes and potentially automate it, acquisition (or induction) of axioms from data has attracted research attention and is usually called Ontology Learning (OL). This thesis investigates the problem of OL from general principles. We formulate it as General Terminology Induction that aims at acquiring general, expressive TBox axioms (called general terminology) from data. The thesis addresses and investigates in depth two main questions: how to rigorously evaluate the quality of general TBox axioms and how to efficiently construct them. We design an approach for General Terminology Induction and implement it in an algorithm called DL-Miner. We extensively evaluate DL-Miner, compare it with other approaches, and run case studies together with domain experts to gain insight into its potential applications. The thesis should be of interest to ontology developers seeking automated means to facilitate building or enriching ontologies. In addition, as our experiments show, DL-Miner can deliver valuable insights into the data, i.e. can be useful for data analysis and debugging.
APA, Harvard, Vancouver, ISO, and other styles
19

Lagerkvist, Love. "Computation as Strange Material : Excursions into Critical Accidents." Thesis, Malmö universitet, Institutionen för konst, kultur och kommunikation (K3), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43639.

Full text
Abstract:
Waking up in a world where everyone carries a miniature supercomputer, interaction designers find themselves in their forerunners' dreams. Faced with the reality of planetary-scale computation, we have to confront the task of articulating approaches responsive to this accidental ubiquity of computation. This thesis attempts such a formulation by defining computation as a strange material, a plasticity shaped equally by its technical properties and the mode of production by which it is continuously re-produced. The definition is applied through a methodology of excursions: participatory explorations into two seemingly disparate sites of computation, connected in the ways they manifest a labor of care. First, we visit the social infrastructures that constitute the Linux kernel, examining strange entanglements of programming and care in the world's largest design process. This is followed by a tour into the thorny lands of artificial intelligence, situated in the smart replies of LinkedIn. Here, we investigate the fluctuating border between the artificial and the human, with participants performing AI and formulating new Turing tests in the process. These excursions afford an understanding of computation as fundamentally re-produced through interaction, a strange kind of affective work whose understanding is crucial if we aim to disarm the critical accidents of our present future.
APA, Harvard, Vancouver, ISO, and other styles
20

MASSOUD, RANA. "Eco-friendly Naturalistic Vehicular Sensing and Driving Behaviour Profiling." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1003486.

Full text
Abstract:
Internet of Things (IoT) technologies are spurring the development of serious games that support training directly in the field. This PhD implements field user performance evaluators usable in reality-enhanced serious games (RESGs) for promoting fuel-efficient driving. This work proposes two modules, implemented by processing information related to fuel-efficient driving, to be employed as real-time virtual sensors in RESGs. The first module estimates and assesses fuel consumption instantly; here I compared the performance of three configured machine learning algorithms: support vector regression, random forest and artificial neural networks. The experiments show that the algorithms have similar performance, with random forest slightly outperforming the others. The second module provides instant recommendations using fuzzy logic when inefficient driving patterns are detected. For the game design, I resorted to the on-board diagnostics II standard interface to access diagnostic information circulating on vehicular buses, allowing a wide diffusion of the game and avoiding manufacturer proprietary solutions. The approach has been implemented and tested with data from the enviroCar server site. The data is not calibrated for a specific car model and is recorded in different driving environments, which made the work challenging and robust for real-world conditions. The proposed approach to virtual sensor design is general and thus applicable to various application domains other than fuel-efficient driving. An important word of caution concerns users' privacy, as the modules rely on sensitive data and provide information that by no means should be misused.
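A sketch of the first module's comparison, fitting support vector regression, random forest and a small neural network to estimate instantaneous fuel consumption from OBD-II-like signals; the features and data below are synthetic, not the enviroCar recordings.

```python
# Virtual-sensor regression comparison on synthetic OBD-II-like signals.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 2000
speed = rng.uniform(0, 130, n)          # km/h
rpm = rng.uniform(700, 4500, n)
throttle = rng.uniform(0, 100, n)       # %
X = np.column_stack([speed, rpm, throttle])
# toy ground-truth fuel rate in L/h, not a calibrated vehicle model
fuel = 0.5 + 0.002 * rpm + 0.03 * throttle + 0.01 * speed + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, fuel, random_state=0)

models = {
    "svr": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "neural_net": make_pipeline(StandardScaler(),
                                MLPRegressor(hidden_layer_sizes=(32, 32),
                                             max_iter=1000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "MAE (L/h): %.3f" % mean_absolute_error(y_te, model.predict(X_te)))
```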
APA, Harvard, Vancouver, ISO, and other styles
21

Gattoni, Giacomo. "Improving the reliability of recurrent neural networks while dealing with bad data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
In practical applications, machine learning and deep learning models can have difficulty achieving generalization, especially when dealing with training samples that are either noisy or limited in quantity. Standard neural networks do not guarantee the monotonicity of the input features with respect to the output; therefore, they lack interpretability and predictability when it is known a priori that the input-output relationship should be monotonic. This problem can be encountered in the CPG industry, where it is not possible to ensure that a deep learning model will learn the increasing monotonic relationship between promotional mechanics and sales. To overcome this issue, the combined usage of recurrent neural networks, a type of artificial neural network specifically designed to deal with data structured as sequences, with lattice networks, conceived to guarantee monotonicity of the desired input features with respect to the output, is proposed. The proposed architecture has proven to be more reliable when new samples are fed to the neural network, demonstrating its ability to infer the evolution of sales depending on the promotions, even when it is trained on bad data.
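A simplified sketch of combining a recurrent encoder with a monotone-by-construction path for the promotional input. The thesis relies on lattice networks for the monotonicity guarantee; the non-negative weight constraint used here is a lighter stand-in that illustrates the same idea on toy data.

```python
# LSTM encoder for the sales history plus a monotone path for the promotion input:
# the promotion enters only through a non-negative linear weight, so the predicted
# sales cannot decrease as the promotion depth increases.
import numpy as np
import tensorflow as tf

T, F = 8, 4                                  # sequence length, non-monotone features

history = tf.keras.Input(shape=(T, F), name="sales_history")
promo = tf.keras.Input(shape=(1,), name="promotion_depth")   # feature to keep monotone

context = tf.keras.layers.LSTM(16)(history)
promo_effect = tf.keras.layers.Dense(
    1, use_bias=False, kernel_constraint=tf.keras.constraints.NonNeg())(promo)
baseline = tf.keras.layers.Dense(1)(context)
output = tf.keras.layers.Add()([baseline, promo_effect])

model = tf.keras.Model([history, promo], output)
model.compile(optimizer="adam", loss="mse")

# toy data: noisy sales sequences plus a genuinely increasing promotion effect
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(512, T, F)).astype("float32")
X_promo = rng.uniform(0, 1, size=(512, 1)).astype("float32")
y = (X_hist[:, -1, 0] + 2.0 * X_promo[:, 0] + rng.normal(0, 0.1, 512)).astype("float32")
model.fit([X_hist, X_promo], y, epochs=3, verbose=0)

low = model.predict([X_hist[:5], np.zeros((5, 1), "float32")], verbose=0)
high = model.predict([X_hist[:5], np.ones((5, 1), "float32")], verbose=0)
print("monotone in promotion depth:", bool(np.all(high >= low)))
```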
APA, Harvard, Vancouver, ISO, and other styles
22

Schultze, Jakob. "Digital transformation: How does physician’s work become affected by the use of digital health technologies?" Thesis, Mittuniversitetet, Institutionen för data- och systemvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-41260.

Full text
Abstract:
Digital transformation is evolving, and it is at the helm of the digital evolution. The amount of information accessible to us has revolutionized the way we gather information. Mobile technology and the immediate and ubiquitous access to information have changed how we engage with services, including healthcare. Digital technology and digital transformation have given people the ability to self-manage through different technologies, in ways that differ from face-to-face and paper-based methods. This study focuses on exploring the use of the most commonly used digital health technologies in the healthcare sector and how they affect physicians' daily routine practice. The study presents findings from a qualitative methodology involving semi-structured, personal interviews with physicians from Sweden and one physician from Spain. The interviews capture how physicians feel about digital transformation and digital health technologies and how these affect their work. In a field where there is a lack of information regarding how physicians' work is affected by digital health technologies, this study reveals a general picture of how reality looks for physicians. A new way of conducting medicine and the changed role of the physician are presented, along with the societal implications for physicians and the healthcare sector. The findings demonstrate that physicians' role and work, and the digital transformation in healthcare at a societal level, are important in shaping the future of the healthcare industry and the role of the physician in that future.
APA, Harvard, Vancouver, ISO, and other styles
23

Talevi, Luca, and Luca Talevi. "“Decodifica di intenzioni di movimento dalla corteccia parietale posteriore di macaco attraverso il paradigma Deep Learning”." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17846/.

Full text
Abstract:
Invasive Brain Computer Interfaces (BCIs) can restore mobility to patients who have lost control of their limbs: this is achieved by decoding bioelectric signals recorded from cortical areas of interest in order to drive a prosthetic limb. Neural signal decoding is therefore a critical point in BCIs, requiring the development of high-performing, reliable and robust algorithms. These requirements are met in many fields by Deep Neural Networks, adaptive algorithms whose performance scales with the amount of data provided, in line with the growing number of electrodes in implants. Using signals pre-recorded from the cortex of two macaques during reach-to-grasp movements toward 5 different objects, I tested three basic, representative DNNs, namely a multilayer dense network, a Convolutional Neural Network (CNN) and a Recurrent NN (RNN), on the task of discriminating, continuously and in real time, the movement intention toward each object. In particular, I tested each model's ability to decode a generic intention (single-class), the performance of the best resulting network in discriminating between them (multi-class) with or without ensemble learning methods, and its response to degradation of the input signal. To facilitate the comparison, each network was built and subjected to hyperparameter search following common criteria. The CNN architecture obtained particularly interesting results, achieving F-scores above 0.6 and AUCs above 0.9 in the single-class case with half the parameters of the other networks, and yet greater robustness. It also showed a quasi-linear relationship with signal degradation, free of unpredictable performance collapses. The DNNs employed proved high-performing and robust despite their simplicity, making purpose-built ad-hoc architectures promising candidates for establishing a new state of the art in neuroprosthetic control.
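As a rough illustration of the CNN decoder described in the abstract (not the actual architecture used in the thesis), the following Python/PyTorch sketch classifies short windows of multi-electrode activity as containing, or not, a movement intention toward a given object; the channel count, window length and layer sizes are assumed for the example.

```python
import torch
import torch.nn as nn

class IntentCNN(nn.Module):
    """Single-class detector: is a reach-to-grasp intention toward a given object
    present in a short window of multi-electrode activity?"""
    def __init__(self, n_channels=96, n_classes=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, n_timesteps), e.g. binned spike counts
        z = self.features(x).squeeze(-1)        # (batch, 64)
        return self.classifier(z)               # logits; apply sigmoid for a probability

window = torch.randn(16, 96, 100)               # 16 windows, 96 electrodes, 100 time bins
logits = IntentCNN()(window)
print(torch.sigmoid(logits).shape)              # torch.Size([16, 1])
```

A multi-class variant would simply set `n_classes=5` and use a softmax over the five objects; the point of the sketch is only to show how a 1D CNN consumes electrode-by-time windows.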
APA, Harvard, Vancouver, ISO, and other styles
24

Busacca, Fabio Antonino. "AI for Resource Allocation and Resource Allocation for AI: a two-fold paradigm at the network edge." Doctoral thesis, Università degli Studi di Palermo, 2022. https://hdl.handle.net/10447/573371.

Full text
Abstract:
5G-and-beyond and Internet of Things (IoT) technologies are pushing a shift from the classic cloud-centric view of the network to a new edge-centric vision. In such a perspective, computation, communication and storage resources are moved closer to the user, to the benefit of network responsiveness/latency and of improved context-awareness, that is, the ability to tailor network services to the live user experience. However, these improvements do not come for free: edge networks are highly constrained and do not match the resource abundance of their cloud counterparts. In such a perspective, proper management of the few available resources is of crucial importance to improve network performance in terms of responsiveness, throughput, and power consumption. However, networks in the so-called Age of Big Data result from the dynamic interactions of massive amounts of heterogeneous devices. As a consequence, traditional model-based Resource Allocation algorithms fail to cope with such dynamic and complex networks, and are being replaced by more flexible AI-based techniques. In this way, it is possible to design intelligent resource allocation frameworks, able to quickly adapt to the ever-changing dynamics of the network edge and to best exploit the few available resources. Hence, Artificial Intelligence (AI) and, more specifically, Machine Learning (ML) techniques can clearly play a fundamental role in boosting and supporting resource allocation techniques at the edge. But can AI/ML benefit from optimal Resource Allocation? Recently, the evolution towards Distributed and Federated Learning approaches, i.e. where the learning process takes place in parallel at several devices, has brought important advantages in terms of the computational load of the ML algorithms, the amount of information transmitted by the network nodes, and privacy. However, the scarcity of energy, processing and, possibly, communication resources at the edge, especially in the IoT case, calls for proper resource management frameworks. In this view, the available resources should be assigned so as to reduce the learning time, while also keeping an eye on the energy consumption of the network nodes. According to this perspective, a two-fold paradigm can emerge at the network edge, where AI can boost the performance of Resource Allocation and, vice versa, optimal Resource Allocation techniques can speed up the learning process of AI algorithms. Part I of this thesis explores the first topic, i.e. the use of AI to support Resource Allocation at the edge, with a specific focus on two use cases, namely UAV-assisted cellular networks and vehicular networks. Part II deals instead with Resource Allocation for AI and, specifically, with the integration of Federated Learning techniques with the LoRa LPWAN protocol. The designed integration framework has been validated both in simulation environments and, most importantly, on the Colosseum platform, the world's largest channel emulator.
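To make the "Resource Allocation for AI" side of the argument concrete, the following Python sketch shows a plain Federated Averaging loop over a few simulated edge nodes, with the global model aggregated in proportion to each node's local data. This is a generic FedAvg illustration, not the thesis framework; any scheduling of which nodes (for example, LoRa end devices with limited energy) participate in a given round would sit on top of this loop.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One round of local training on an edge node: plain logistic regression via gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def federated_round(global_w, nodes):
    """FedAvg: aggregate local models weighted by how much data each node holds."""
    updates, sizes = [], []
    for X, y in nodes:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.asarray(sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
# Three hypothetical edge nodes with unbalanced local datasets.
nodes = [(rng.normal(size=(n, 4)), rng.integers(0, 2, size=n)) for n in (50, 200, 20)]
w = np.zeros(4)
for _ in range(10):                 # 10 communication rounds
    w = federated_round(w, nodes)
print(w)
```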
APA, Harvard, Vancouver, ISO, and other styles
25

Shrivastwa, Ritu Ranjan. "Enhancements in Embedded Systems Security using Machine Learning." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT051.

Full text
Abstract:
The list of connected devices (or IoT) is growing longer with time, and so is their vulnerability to targeted attacks originating from network or physical penetration, popularly known as Cyber Physical Security (CPS) attacks. While security sensors and obfuscation techniques exist to counteract and enhance security, it is possible to fool these classical countermeasures with sophisticated attack equipment and methodologies, as shown in recent literature. Additionally, end-node embedded system design is bound by area and is required to be scalable, making it difficult to adjoin complex sensing mechanisms against cyber-physical attacks. The solution may lie in an Artificial Intelligence (AI) security core (soft or hard) that monitors data behaviour internally across various components. The AI core can also monitor the overall device behaviour, including attached sensors, to detect any outlier activity and provide a smart sensing approach to attacks. AI in the hardware security domain is still not widely accepted due to the probabilistic behaviour of advanced deep learning techniques, although there have been works showing practical implementations. This work aims to establish a proof of concept and build trust in AI for security through a detailed analysis of different Machine Learning (ML) techniques and their use cases in hardware security, followed by a series of case studies providing a practical framework and guidelines for using AI on various embedded security fronts. Applications include PUF predictability assessment, sensor fusion, Side Channel Attacks (SCA), Hardware Trojan detection, control-flow integrity, adversarial AI, etc.
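A hedged illustration of the smart-sensing idea, not the framework developed in the thesis: fused readings from hypothetical on-chip sensors collected during normal operation train an unsupervised outlier detector, which then flags glitch-like deviations. The sensor names, value ranges and the choice of scikit-learn's IsolationForest are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical fused readings from on-chip sensors (voltage, temperature,
# clock jitter, bus-activity counter) sampled during normal operation.
normal = rng.normal(loc=[1.0, 45.0, 0.02, 300.0],
                    scale=[0.02, 1.5, 0.002, 20.0],
                    size=(5000, 4))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A glitch-like event: voltage dip combined with a jitter spike.
suspicious = np.array([[0.82, 46.0, 0.09, 310.0]])
print(detector.predict(suspicious))   # -1 flags an outlier, +1 means inlier
```

In a deployed system the flagged outliers would feed a reaction policy (for example, key zeroisation or a safe reset), which is outside the scope of this sketch.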
APA, Harvard, Vancouver, ISO, and other styles
26

Narmack, Kirilll. "Dynamic Speed Adaptation for Curves using Machine Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233545.

Full text
Abstract:
The vehicles of tomorrow will be more sophisticated, intelligent and safer than the vehicles of today. The future is leaning towards fully autonomous vehicles. This degree project provides a data-driven solution for a speed adaptation system that can be used to compute a vehicle speed for curves, suitable for the underlying driving style of the driver, the road properties and the weather conditions. A speed adaptation system for curves aims to compute a vehicle speed suitable for curves that can be used in Advanced Driver Assistance Systems (ADAS) or in Autonomous Driving (AD) applications. This degree project was carried out at Volvo Car Corporation. Literature in the field of speed adaptation systems and the factors affecting vehicle speed in curves was reviewed. Naturalistic driving data was both collected by driving and extracted from Volvo's database, and further processed. A novel speed adaptation system for curves was invented, implemented and evaluated. This speed adaptation system is able to compute a vehicle speed suited to the underlying driving style of the driver, the road properties and the weather conditions. Two different artificial neural networks and two mathematical models were used to compute the desired vehicle speed in curves. These methods were compared and evaluated.
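For illustration only (the thesis itself uses naturalistic Volvo data and its own models), the sketch below combines the standard lateral-friction limit v = sqrt(mu * g * R) as a physics baseline with a small scikit-learn neural network trained on a synthetic table of curve radius, road wetness and driver-style score; all feature choices and the synthetic target are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def physics_speed(radius_m, mu=0.7, g=9.81):
    """Baseline curve speed (m/s) from the lateral-friction limit v = sqrt(mu * g * R)."""
    return np.sqrt(mu * g * radius_m)

rng = np.random.default_rng(2)
# Hypothetical training table: curve radius (m), wet-road flag, driver-style score in [0, 1].
X = np.column_stack([rng.uniform(30, 500, 2000),
                     rng.integers(0, 2, 2000),
                     rng.uniform(0, 1, 2000)])
# Synthetic target: physics baseline scaled down in the wet, up for sporty drivers.
y = physics_speed(X[:, 0]) * (1 - 0.25 * X[:, 1]) * (0.85 + 0.2 * X[:, 2])

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
print(model.predict([[120.0, 1, 0.3]]))   # advised speed (m/s) for a cautious driver on a wet 120 m curve
```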
APA, Harvard, Vancouver, ISO, and other styles
27

Furtado, Maria Leonor Caetano Soares. "Retrogressive thaw slump identification using U-Net and satellite image inputs: remote sensing imagery segmentation usingdeep learning techniques." Master's thesis, 2022. http://hdl.handle.net/10362/134134.

Full text
Abstract:
Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science
Global warming has been a topic of discussion for many decades; however, its impact on the thaw of permafrost, and vice versa, has not been well captured or documented in the past. This may be because most permafrost lies in the Arctic and similarly vast, remote areas, which makes data collection difficult and costly. A partial solution to this problem is the use of Remote Sensing imagery, which has been widely used for decades to document the changes in permafrost regions. Despite its many benefits, this methodology has still required manual assessment of images, which can be a slow and laborious task for researchers. Over the last decade, the growth of Deep Learning has helped address these limitations. The use of Deep Learning on Remote Sensing imagery has risen in popularity, mainly due to the increased availability and scale of Remote Sensing data. This has been fuelled in the last few years by open-source multi-spectral, high spatial resolution data, such as the Sentinel-2 data used in this project. Notwithstanding the growth of Deep Learning for Remote Sensing imagery, its use for the particular case of identifying the thaw of permafrost, addressed in this project, has not been widely studied. To address this gap, the semantic segmentation model proposed in this project performs pixel-wise classification on satellite images to identify Retrogressive Thaw Slumps (RTSs), using a U-Net architecture. In this project, RTSs are successfully identified from satellite images with an average Dice score of 95% over the 39 test images evaluated, leading to the conclusion that it is possible to pre-process such images and achieve satisfactory results using 10-meter spatial resolution and as few as four spectral bands. Since these landforms can serve as a proxy for the thaw of permafrost, the aim is that this project can help make progress towards mitigating the impact of such a powerful geophysical phenomenon.
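Since the headline result above is reported as a Dice score, a short Python/NumPy helper makes the metric explicit; the toy masks below are synthetic and only illustrate how a one-pixel misalignment of a predicted RTS footprint translates into the score.

```python
import numpy as np

def dice_score(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Toy example: a predicted RTS footprint shifted by one pixel against the label.
truth = np.zeros((64, 64), dtype=np.uint8)
truth[20:40, 20:40] = 1
pred = np.roll(truth, shift=1, axis=1)
print(round(float(dice_score(pred, truth)), 3))   # 0.95 for this one-pixel offset
```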
APA, Harvard, Vancouver, ISO, and other styles
28

Rasool, Raihan Ur. "CyberPulse: A Security Framework for Software-Defined Networks." Thesis, 2020. https://vuir.vu.edu.au/42172/.

Full text
Abstract:
Software-Defined Networking (SDN) technology provides a new perspective on traditional network management by separating the infrastructure plane from the control plane, which facilitates a higher level of programmability and management. While centralized control provides lucrative benefits, the control channel becomes a bottleneck and home to numerous attacks. We conduct a detailed study and find that crossfire Link Flooding Attacks (LFAs) are among the most lethal attacks on SDN due to their use of low-rate traffic and their persistent attacking nature. LFAs can be launched by malicious adversaries to congest the control plane with low-rate traffic, which can obstruct flow rule installation and ultimately bring down the whole network. Similarly, an adversary can employ bots to generate low-rate traffic to congest the control channel and ultimately bring down the connection between the control plane and the data plane, causing service disruption. We present a systematic and comparative study of the vulnerability of all SDN planes to LFAs, and elaborate in detail the LFA types, techniques and their behavior in all the variants of SDN. We then illustrate the importance of a defense mechanism employing a distributed strategy against LFAs and propose a Machine Learning (ML) based framework, namely CyberPulse. Its detailed design, components and their interaction, working principles, implementation and in-depth evaluation are presented subsequently. This research presents a novel approach to writing anomaly patterns and makes a significant contribution by developing a pattern-matching engine as the first line of defense against known attacks at line speed. The second important contribution is the effective detection and mitigation of LFAs in SDN through deep learning techniques. We perform twofold experiments to classify and mitigate LFAs. In the initial experimental setup, we utilize the Artificial Neural Network backward propagation technique to effectively classify the incoming traffic. In the second set of experiments, we employ a holistic approach in which CyberPulse demonstrates algorithm-agnostic behavior and employs a pre-trained ML repository for precise classification. As an important scientific contribution, the CyberPulse framework has been developed from the ground up using modern software engineering principles and hence incurs very limited bandwidth and computational overhead. It has several useful features such as large-scale network-level monitoring, real-time network status information, and support for a wide variety of ML algorithms. An extensive evaluation is performed using the open-source Floodlight controller, which shows that CyberPulse incurs limited bandwidth and computational overhead and proactively detects and defends against LFAs in real time. This thesis contributes to the state of the art by presenting a novel framework for the defense, detection and mitigation of LFAs in SDN utilizing ML-based classification techniques. Existing solutions in the area mandate complex hardware for detection and defense, but the presented solution offers a unique advantage in the sense that it operates on real-time traffic scenarios and utilizes multiple ML classification algorithms for LFA traffic classification without necessitating complex and expensive hardware. In the future, we plan to implement it on a large testbed and extend it by training on multiple datasets for multiple types of attacks.
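As a simplified stand-in for the ANN-based classification step described above (not the CyberPulse implementation), the following Python sketch trains a small scikit-learn multilayer perceptron, i.e. a backpropagation-trained neural network, to separate synthetic low-rate, persistent LFA-like flows from benign flows; the per-flow features and their distributions are invented for the example.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(3)
# Hypothetical per-flow features: packet rate, mean packet size, flow duration,
# and number of distinct destinations touched by the source.
benign = rng.normal([800, 900, 30, 3], [200, 150, 10, 1], size=(3000, 4))
lfa    = rng.normal([60, 120, 300, 40], [20, 30, 60, 8], size=(3000, 4))   # low-rate, persistent, widely spread
X = np.vstack([benign, lfa])
y = np.r_[np.zeros(3000), np.ones(3000)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["benign", "lfa"]))
```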
APA, Harvard, Vancouver, ISO, and other styles
29

Gustav, Lindström, and Lerbom Ludvig. "AI - ett framtida verktyg för terrorism och organiserad brottslighet? : En framtidsstudie." Thesis, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44890.

Full text
Abstract:
This paper explores the future of Artificial Intelligence (AI) and how it can be used by organised crime or terrorist organisations. It explores the fundamentals of AI, its history and how its use is affecting the way police operate. The paper shows how the development rate of AI is increasing and predicts how it will continue to evolve based on different parameters. A study of different types of AI shows the different uses these systems have, and their potential misuse in the near future. By using the six pillars approach, a prediction concerning AI and the development of Artificial General Intelligence (AGI) is explored, along with its ramifications for our society. The results show that in a world with AGI, AI-enabled crime as we know it would cease to exist, but up until that point, the use of AI in crime will continue to impact our daily lives and security.
APA, Harvard, Vancouver, ISO, and other styles
30

Muwawa, Jean Nestor Dahj. "Data mining and predictive analytics application on cellular networks to monitor and optimize quality of service and customer experience." Diss., 2018. http://hdl.handle.net/10500/25875.

Full text
Abstract:
This research study focuses on application models of Data Mining and Machine Learning covering cellular network traffic, with the objective of arming Mobile Network Operators with a full view of the performance branches (Services, Device, Subscribers). The purpose is to optimize and minimize the time needed to detect service and subscriber behaviour patterns. Different data mining techniques and predictive algorithms have been applied to real cellular network datasets to uncover different data usage patterns using specific Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs). The following tools are used to develop the concept: RStudio for Machine Learning and process visualization, Apache Spark and SparkSQL for data and big data processing, and clicData for service visualization. Two use cases have been studied during this research. In the first study, Data and Predictive Analytics are fully applied in the field of Telecommunications to efficiently address users' experience, with the goal of increasing customer loyalty and decreasing churn or customer attrition. Using real cellular network transactions, predictive analytics are used to identify customers who are likely to churn, which can result in revenue loss. Prediction algorithms and models including Classification Tree, Random Forest, Neural Networks and Gradient Boosting have been used, together with an exploratory Data Analysis determining the relationships between predictor variables. The data is segmented into two sets: a training set to train the model and a testing set to test the model. The evaluation of the best-performing model is based on prediction accuracy, sensitivity, specificity and the confusion matrix on the test set. The second use case analyses Service Quality Management using modern data mining techniques and the advantages of in-memory big data processing with Apache Spark and SparkSQL to save cost on tool investment; thus, a low-cost Service Quality Management model is proposed and analyzed. With the increase in smartphone adoption and access to mobile internet services, applications such as streaming and interactive chat require a certain service level to ensure customer satisfaction. As a result, an SQM framework is developed with a Service Quality Index (SQI) and a Key Performance Index (KPI). The research concludes with recommendations and future studies around modern technology applications in Telecommunications, including the Internet of Things (IoT), Cloud and recommender systems.
Cellular networks have evolved and are still evolving, from the traditional circuit-switched GSM (Global System for Mobile Communication), which only supported voice services and extremely low data rates, to all-packet LTE networks accommodating high-speed data used for various service applications such as video streaming, video conferencing and heavy torrent downloads, and, in the near future, the roll-out of fifth-generation (5G) cellular networks, intended to support complex technologies such as the IoT (Internet of Things) and High Definition video streaming, and projected to cater for massive amounts of data. With high demand for network services and easy access to mobile phones, billions of transactions are performed by subscribers. The transactions appear in the form of SMSs, handovers, voice calls, web browsing activities, video and audio streaming, and heavy downloads and uploads. Nevertheless, the rapid growth in data traffic and the high requirements of new services introduce bigger challenges for Mobile Network Operators (MNOs) in analysing the big data traffic flowing in the network. Therefore, Quality of Service (QoS) and Quality of Experience (QoE) turn into a challenge. Inefficiency in mining and analysing data and in applying predictive intelligence to network traffic can produce a high rate of unhappy customers or subscribers, revenue loss and a negative perception of services. Researchers and Service Providers are investing in Data Mining, Machine Learning and AI (Artificial Intelligence) methods to manage services and experience. This research study focuses on application models of Data Mining and Machine Learning covering network traffic, with the objective of arming Mobile Network Operators with a full view of the performance branches (Services, Device, Subscribers). The purpose is to optimize and minimize the time needed to detect service and subscriber behaviour patterns. Different data mining techniques and predictive algorithms will be applied to cellular network datasets to uncover different data usage patterns using specific Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs). The following tools will be used to develop the concept: RStudio for Machine Learning, Apache Spark and SparkSQL for data processing, and clicData for visualization.
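The churn use case described above can be mirrored in a few lines of Python with scikit-learn (the thesis itself works in RStudio and Spark): a Random Forest is trained on hypothetical per-subscriber KPIs and evaluated with a confusion matrix on a held-out test set. The feature names and the synthetic churn label are assumptions made for the sketch.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(4)
n = 5000
# Hypothetical per-subscriber KPIs/KQIs aggregated from network transactions.
df = pd.DataFrame({
    "dropped_call_rate": rng.beta(2, 50, n),
    "avg_throughput_mbps": rng.gamma(4, 2.0, n),
    "complaints_90d": rng.poisson(0.3, n),
    "tenure_months": rng.integers(1, 120, n),
})
# Synthetic churn label loosely driven by poor quality and complaints.
risk = 5 * df.dropped_call_rate - 0.05 * df.avg_throughput_mbps + 0.8 * df.complaints_90d
churn = (risk + rng.normal(0, 0.5, n) > 0.4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(df, churn, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(confusion_matrix(y_te, pred))
print("accuracy:", round(accuracy_score(y_te, pred), 3))
```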
Electrical and Mining Engineering
M. Tech (Electrical Engineering)
APA, Harvard, Vancouver, ISO, and other styles
31

Shields, Philip John. "Nurse-led ontology construction: A design science approach." Thesis, 2016. https://vuir.vu.edu.au/32620/.

Full text
Abstract:
Most nursing quality studies based on the structure-process-outcome paradigm have concentrated on structure-outcome associations and have not explained the nursing process domain. This thesis turns the spotlight on the process domain and visualises nursing processes or ‘what nurses do’ by using ‘semantics’ which underpin Linking Of Data (LOD) technologies such as ontologies. Ontology construction has considerable limitations that make direct input of nursing process semantics difficult. Consequently, nursing ontologies being constructed to date use nursing process semantics collected by non-clinicians. These ontologies may have undesirable clinical implications when they are used to map nurse processes to patient outcomes. To address this issue, this thesis places nurses at the centre of semantic collection and ontology construction.
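To show what direct input of nursing process semantics could look like in LOD terms, here is a small Python sketch using rdflib that records one nursing process as RDF triples; the namespace, class and property names are hypothetical and not taken from the thesis ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace and terms, purely illustrative of nurse-supplied semantics.
NURS = Namespace("http://example.org/nursing-process#")
g = Graph()
g.bind("nurs", NURS)

g.add((NURS.WoundAssessment, RDF.type, NURS.NursingProcess))
g.add((NURS.WoundAssessment, NURS.performedBy, NURS.RegisteredNurse))
g.add((NURS.WoundAssessment, NURS.hasObservation, Literal("wound edges approximated")))

print(g.serialize(format="turtle"))
```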
APA, Harvard, Vancouver, ISO, and other styles