Dissertations / Theses on the topic 'Other Artificial Intelligence'

Consult the top 50 dissertations / theses for your research on the topic 'Other Artificial Intelligence.'


1

Vaseigaran, Ajanth, and Gobi Sripathy. "Artificial Intelligence in Healthcare." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296643.

Full text
Abstract:
Healthcare systems play a critical role in ensuring people's health, and establishing accurate diagnoses is a vital element of this process. As sources highlight misdiagnoses and missed diagnoses as a common issue, a solution must be sought. Diagnostic errors are common in emergency departments, which are recognized as stressful work environments. Today's industries are forced to deal with rapidly changing technological advances that reshape systems, products, and services. Artificial Intelligence (AI) is one such technology that can serve as a solution to diagnostic problems, but it comes with technical, ethical and legal challenges. Hence, the thesis investigates how AI can affect the accuracy of diagnosis and how its integration into healthcare relates to technical, ethical and legal aspects. The thesis begins with a literature review, which serves as a theoretical foundation and allows a conceptual framework to be formed. The conceptual framework is used to select interviewees, resulting in 12 interviews with professors, researchers, doctors and politicians. In addition, a survey is conducted to obtain the general public's opinion on the matter. The findings show that AI is already mature enough to make more accurate diagnoses than doctors and to relieve medical practitioners of administrative tasks. One obstacle is the incompleteness of available data, since laws hinder the sharing of patient data. Furthermore, the AI algorithms must be fit for all social minorities and must not exhibit racial discrimination. The European AI Alliance was established in 2018 with the aim of keeping the technology in check. Similar initiatives can be created at national and regional levels to maintain some form of control over its proper use.
2

Micael, Frideros. "Artificial Intelligence : Progress in business and society." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16483.

Full text
Abstract:
This thesis investigates the progress of corporate implementations of Artificial Intelligence and discusses the effects this might have on the corporate sector, as well as some implications at a societal level. The analysis is based on data from surveys conducted by Accenture, Bain & Company, Capgemini Digital Transformation Institute, Deloitte, Gartner Inc., McKinsey Global Institute and MIT Sloan Management Review & Boston Consulting Group. Over the last three years the adoption of Artificial Intelligence has increased 2-3 times, and the trend is expected to continue in the coming years, since 40-55% of the surveyed companies are in the initial stages of AI adoption. Further, the growth rate in AI investments has been even more radical, increasing 15-20 times over the last seven years. Companies that have implemented AI report significant benefits, and companies with a proactive strategy for Artificial Intelligence report higher profit margins than their industry competitors. The data further indicates that companies that are successful in implementing AI have better general organizational capabilities, higher data and skills readiness, and more AI focus in leadership and strategic planning. Another result of the study is that most managers in companies implementing AI expect the technology to enable them to enter new markets, and also that new competitors will enter their own markets. This will probably lead to increased competition, and results from other technology transitions indicate that this might force more companies to adopt AI to stay competitive. Regarding competence strategy, some theorists have argued that companies without AI experience should compensate by acquiring a high-tech start-up with the needed technology and competence. However, the data indicates that the most limiting factors for companies without AI experience are related to leadership and technical capabilities, not access to competence. It is only in later stages of the adoption process that access to competence becomes the primary limiting factor. The data gives mixed indications on the consequences of AI for employment. Half of the companies implementing AI expect job losses in the organization in the coming three years, while almost a third expect AI to lead to new jobs. However, the data also suggests that existing employees will need to change their skill sets. Therefore, both the public and private sectors will need to adapt and find ways to support employees who need to re-educate themselves.
3

Ramsahai, Roland Ryan. "Causal inference with instruments and other supplementary variables." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:df2961da-0843-421f-8be4-66a92e6b0d13.

Full text
Abstract:
Instrumental variables have been used for a long time in the econometrics literature for the identification of the causal effect of one random variable, B, on another, C, in the presence of unobserved confounders. In the classical continuous linear model, the causal effect can be point identified by studying the regression of C on A and B on A, where A is the instrument. An instrument is an instance of a supplementary variable which is not of interest in itself but aids identification of causal effects. The method of instrumental variables is extended here to generalised linear models, for which only bounds on the causal effect can be computed. For the discrete instrumental variable model, bounds have been derived in the literature for the causal effect of B on C in terms of the joint distribution of (A,B,C). Using an approach based on convex polytopes, bounds are computed here in terms of the pairwise (A,B) and (A,C) distributions, in direct analogy to the classic use but without the linearity assumption. The bounding technique is also adapted to instrumental models with stronger and weaker assumptions. The computation produces constraints which can be used to invalidate the model. In the literature, constraints of this type are usually tested by checking whether the relative frequencies satisfy them. This is unsatisfactory from a statistical point of view as it ignores the sampling uncertainty of the data. Given the constraints for a model, a proper likelihood analysis is conducted to develop a significance test for the validity of the instrumental model and a bootstrap algorithm for computing confidence intervals for the causal effect. Applications are presented to illustrate the methods and the advantage of a rigorous statistical approach. The use of covariates and intermediate variables for improving the efficiency of causal estimators is also discussed.
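For the classical continuous linear case the abstract starts from, the point identification it mentions can be illustrated in a few lines. The following is a minimal sketch with synthetic data and a hypothetical effect size, not code from the thesis:

```python
# Classical linear instrumental-variable (Wald) estimation: the causal effect
# of B on C is recovered from the regressions of C on A and of B on A.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.normal(size=n)                 # unobserved confounder of B and C
a = rng.binomial(1, 0.5, size=n)       # instrument A: affects B, not C directly
b = 1.0 * a + u + rng.normal(size=n)
c = 2.0 * b + u + rng.normal(size=n)   # true causal effect of B on C is 2.0

# Wald/IV estimator: ratio of the C-on-A and B-on-A regression slopes,
# which reduces to cov(C, A) / cov(B, A).
beta_iv = np.cov(c, a)[0, 1] / np.cov(b, a)[0, 1]
print(f"IV estimate:  {beta_iv:.3f}")   # close to 2.0

# Naive regression of C on B is biased upward by the confounder u.
beta_ols = np.cov(c, b)[0, 1] / np.cov(b, b)[0, 1]
print(f"OLS estimate: {beta_ols:.3f}")  # noticeably above 2.0
```

The thesis's contribution concerns the discrete case, where only bounds (not a point estimate like the one above) are available.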
4

Zhao, Haixiang. "Artificial Intelligence Models for Large Scale Buildings Energy Consumption Analysis." Phd thesis, Ecole Centrale Paris, 2011. http://tel.archives-ouvertes.fr/tel-00658767.

Full text
Abstract:
The energy performance of buildings is influenced by many factors, such as ambient weather conditions, building structure and characteristics, occupancy and occupant behavior, and the operation of sub-level components like the Heating, Ventilation and Air-Conditioning (HVAC) system. This complexity makes the prediction, analysis, and fault detection/diagnosis of building energy consumption difficult to perform accurately and quickly. This thesis focuses on up-to-date artificial intelligence models and their application to these problems. First, we review recently developed models for solving these problems, including detailed and simplified engineering methods, statistical methods and artificial intelligence methods. We then simulate energy consumption profiles for single and multiple buildings and, based on these datasets, train and test support vector machine models for prediction. Results from extensive experiments demonstrate the high prediction accuracy and robustness of these models. Second, a Recursive Deterministic Perceptron (RDP) neural network model is used to detect and diagnose faulty building energy consumption. Abnormal consumption is simulated by manually introducing performance degradation to electric devices. In the experiments, the RDP model shows very high detection ability. A new approach is proposed to diagnose faults, based on the evaluation of a set of RDP models, each of which is able to detect a particular equipment fault. Third, we investigate how the selection of feature subsets influences model performance. The optimal features are selected based on the feasibility of obtaining them and on the scores they receive under two filter methods. Experimental results confirm the validity of the selected subset and show that the proposed feature selection method can guarantee model accuracy while reducing computational time. One challenge in predicting building energy consumption is accelerating model training when the dataset is very large. This thesis proposes an efficient parallel implementation of support vector machines, based on the decomposition method, for solving such problems. The parallelization targets the most time-consuming part of training, namely updating the gradient vector f. The inner problems are handled by a sequential minimal optimization solver. The underlying parallelism is implemented with a shared-memory version of the Map-Reduce paradigm, making the system particularly suitable for multi-core and multiprocessor systems. Experimental results show that our implementation offers a large speed increase over Libsvm and is superior to the state-of-the-art MPI implementation Pisvm in both speed and storage requirements.
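As a rough illustration of the prediction setup the abstract describes, the sketch below trains a support vector regressor on synthetic building-energy data; scikit-learn stands in for the thesis's own parallel SVM implementation, and all features and coefficients are invented:

```python
# Support-vector prediction of building energy consumption (illustrative only).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(42)
n = 2000
outdoor_temp = rng.uniform(-10, 35, n)       # ambient weather condition
occupancy = rng.integers(0, 200, n)          # number of occupants
hvac_setpoint = rng.uniform(18, 26, n)       # HVAC operation
X = np.column_stack([outdoor_temp, occupancy, hvac_setpoint])
# Hypothetical consumption: heating/cooling load plus occupant load plus noise.
y = 50 + 2.5 * np.abs(outdoor_temp - hvac_setpoint) + 0.8 * occupancy \
    + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X_tr, y_tr)
print("MAPE:", mean_absolute_percentage_error(y_te, model.predict(X_te)))
```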
5

Nordahl, Per. "Attitudes to decision-making under risk supported by artificial intelligence and humans : Perceived risk, reliability and acceptance." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-29384.

Full text
Abstract:
The purpose of this investigation was to explore how decision situations with varying degrees of perceived risk affect people's attitudes to human and artificial intelligence (AI) decision-making support. While previous studies have focused on the trust, fairness, reliability and fear associated with artificial intelligence, robots and algorithms in relation to decision support, the risk inherent in the decision situation has been largely ignored. An online survey with a mixed approach was conducted to investigate artificial intelligence and human decision support in risky situations. Two scenarios were presented to the survey participants. In the scenario where the perceived situational risk was low, selecting a restaurant, people expressed a positive attitude towards relying on and accepting recommendations provided by an AI. In contrast, in the perceived high-risk scenario, purchasing a home, people expressed an equal reluctance to rely on or accept both AI and human recommendations. The limitations of this investigation are primarily related to the challenge of creating a common understanding of concepts such as AI, and to a relatively homogeneous survey group. The implication of this study is that AI may currently be best applied in situations characterized by perceived low risk if the intention is to convince people to rely on and accept AI recommendations, and, should AI become autonomous in the future, to accept its decisions.
6

Dittmar, George William. "Object Detection and Recognition in Natural Settings." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/926.

Full text
Abstract:
Much research as of late has focused on biologically inspired vision models that are based on our understanding of how the visual cortex processes information. One prominent example of such a system is HMAX [17]. HMAX attempts to simulate the biological process for object recognition in cortex based on the model proposed by Hubel & Wiesel [10]. This thesis investigates the ability of an HMAX-like system (GLIMPSE [20]) to perform object-detection in cluttered natural scenes. I evaluate these results using the StreetScenes database from MIT [1, 8]. This thesis addresses three questions: (1) Can the GLIMPSE-based object detection system replicate the results on object-detection reported by Bileschi using HMAX? (2) Which features computed by GLIMPSE lead to the best object-detection performance? (3) What effect does elimination of clutter in the training sets have on the performance of our system? As part of this thesis, I built an object detection and recognition system using GLIMPSE [20] and demonstrate that it approximately replicates the results reported in Bileschi's thesis. In addition, I found that extracting and combining features from GLIMPSE using different layers of the HMAX model gives the best overall invariance to position, scale and translation for recognition tasks, but comes with a much higher computational overhead. Further contributions include the creation of modified training and test sets based on the StreetScenes database, with removed clutter in the training data and extending the annotations for the detection task to cover more objects of interest that were not in the original annotations of the database.
7

Liliequist, Erik. "Artificial Intelligence - Are there any social obstacles? : An empirical study of social obstacles." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229506.

Full text
Abstract:
Artificial Intelligence is currently one of the most talked-about topics in technical development. The possibilities are enormous, and it might revolutionize how we live our lives. There is talk of robots and AI removing the need for human workers. At the same time, there are also those who view this as deeply troublesome, either from an individual perspective, asking what we should do once we no longer need to work, or from an existential perspective, raising issues of what responsibilities we have as humans and what it means to be human. This study does not aim to answer these grand questions, but rather shifts the focus to the near future of three to five years, while retaining a focus on the social aspects of the development of AI. What are the perceived greatest social issues and obstacles for a continued implementation of AI solutions in society? To answer these questions, interviews have been conducted with representatives of Swedish society, ranging from politicians, unions and employers' organizations to philosophers and AI researchers. Further, a literature study has been made of similar work, comparing and reflecting its findings against the views of the interviewees. In short, the interviewees have a very positive view of AI in the near future, believing that a continued implementation would go relatively smoothly. Yet they pointed to a few key obstacles that might need to be addressed. Chiefly, there is a risk of increased polarization of wages and power due to AI, although the interviewees stressed that this depends on how we use the technology rather than on the technology itself. Another obstacle was connected to individual uncertainty about the development of AI, causing a fear of what might happen. Further, several ethical issues were raised. There was agreement that we need to address these as soon as possible, but the interviewees did not view this as an obstacle.
8

Mendonca, Sean Christopher. "WRITING FOR EACH OTHER: DYNAMIC QUEST GENERATION USING IN SESSION PLAYER BEHAVIORS IN MMORPG." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2146.

Full text
Abstract:
Role-playing games (RPGs) rely on interesting and varied experiences to maintain player attention. These experiences are often provided through quests, which give players tasks that advance stories or events unfolding in the game. Traditional quests in video games require very specific conditions to be met and require participating members to advance them by carrying out pre-defined actions. These quests are generated with perfect knowledge of the game world and are able to force desired behaviors out of the relevant non-player characters (NPCs). This becomes a major issue in massively multiplayer online (MMO) games, where other players can disrupt the conditions needed for quests to unfold in a believable and immersive way, leading to the absence of a genuine multiplayer RPG experience. Our proposed solution is to dynamically create quests from real-time information on the unscripted actions of other NPCs and players in a game. This thesis shows that it is possible to create logical quests without global information knowledge, pre-defined story trees, or prescribed player and NPC behavior. This allows players to become involved in storylines without having to perform any specific actions. Results are shown through a game scenario created with the Panoptyk Engine, a game engine in early development designed to test AI reasoning with information and to remove the distinction between NPC and human players. We focus on quests issued by the NPC leaders of several in-game groups known as factions. Generated quests are created logically from the pre-defined personality of each NPC leader, their memory of previous events, and information given to them by in-game sources. Long-spanning conflicts are seen to emerge from factions issuing quests against each other; these conflicts can be represented in a coherent narrative. A user study shows that players felt the quests were logical, recognized that quests were based on events happening in the game, and experienced follow-up consequences from their actions in quests.
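The following toy sketch illustrates the idea of quests generated from an NPC leader's personality and remembered events; the classes and fields are hypothetical and are not the Panoptyk Engine's API:

```python
# Illustrative quest generation from personality + memory + received information.
from dataclasses import dataclass, field

@dataclass
class Event:
    actor: str        # who did it (player or NPC)
    action: str       # e.g. "stole", "attacked"
    target: str       # e.g. "Iron Guild caravan"

@dataclass
class FactionLeader:
    name: str
    faction: str
    vengefulness: float                 # personality trait in [0, 1]
    memory: list[Event] = field(default_factory=list)

    def receive_information(self, event: Event) -> None:
        self.memory.append(event)

    def issue_quests(self) -> list[str]:
        """Turn remembered offenses against the faction into quests."""
        quests = []
        for e in self.memory:
            if self.faction in e.target and self.vengefulness > 0.5:
                quests.append(f"{self.name} asks: punish {e.actor} "
                              f"for the raid on {e.target}.")
        return quests

leader = FactionLeader("Mara", "Iron Guild", vengefulness=0.8)
leader.receive_information(Event("player_7", "stole", "Iron Guild caravan"))
print(leader.issue_quests())
```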
9

McCullough, Kevin. "EXPLORING THE RELATIONSHIP OF THE CLOSENESS OF A GENETIC ALGORITHM’S CHROMOSOME ENCODING TO ITS PROBLEM SPACE." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/247.

Full text
Abstract:
For historical reasons, implementers of genetic algorithms often use a haploid binary primitive type for chromosome encoding. I will demonstrate that one can reduce development effort and achieve higher fitness by designing a genetic algorithm with an encoding scheme that closely matches the problem space. I will show that implicit parallelism does not result in binary-encoded chromosomes obtaining higher fitness scores than other encodings. I will also show that Hamming distances should be understood as part of the relationship between the closeness of an encoding to the problem, instead of assuming they should always be held constant. Closeness to the problem includes leveraging structures that are intended to model a specific aspect of the environment. I will show that diploid chromosomes leverage abeyance to benefit their adaptability in dynamic environments. Finally, I will show that if not all of the parts of the GA are close to the problem, the benefits of the parts that are can be negated by the parts that are not.
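As a small illustration of an encoding that is "close to the problem space", the sketch below uses a real-valued chromosome with Gaussian mutation on a toy continuous problem; the operators and parameters are illustrative, not the thesis's experimental setup:

```python
# A problem-matched real-valued encoding: one float gene per parameter, with
# mutation acting directly in the problem space (no Hamming cliffs).
import random

def fitness(x: list[float]) -> float:
    # Toy problem: maximize -sum((x_i - 0.5)^2); optimum at x_i = 0.5.
    return -sum((v - 0.5) ** 2 for v in x)

def evolve(pop_size=50, genes=5, generations=100, sigma=0.05):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)      # one-point crossover
            child = a[:cut] + b[cut:]
            # Gaussian mutation, clipped to the feasible range [0, 1].
            child = [min(1.0, max(0.0, g + random.gauss(0, sigma))) for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```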
10

Ates, Mehmet. "Artificial intelligence in banking : A case study of the introduction of a virtual assistant into customer service." Thesis, Högskolan i Jönköping, Internationella Handelshögskolan, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-41144.

Full text
Abstract:
The usage of artificial intelligence in banking is an important theme within entrepreneurial research. The purpose of the study was to analyse the motivations, challenges and opportunities for Swedish banking institutes implementing artificial-intelligence-based solutions in their customer service process. The research is based on a case study of the Swedish banking institute Swedbank AB, which introduced an AI-based virtual assistant (Nina) to deal with customer requests. For the qualitative study, interviews with Swedish banking customers and experts were conducted. Further, to understand Swedbank's managerial motivations, a theory of Moore (2008) regarding innovation management was applied. The findings show that Nina improved Swedbank's service spectrum with the potential of decreasing costs while maintaining customer satisfaction. Further, the results displayed a high acceptance of new technologies from the customer perspective. This provides the foundation for Swedbank to introduce further artificial-intelligence-based services. Banking institutes and other service-oriented organisations with high customer interaction can use the implications of the thesis when considering how to handle customer requests more effectively.
11

Hiester, Luke. "File Fragment Classification Using Neural Networks with Lossless Representations." Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/honors/454.

Full text
Abstract:
This study explores the use of neural networks as universal models for classifying file fragments. This approach differs from previous work in its lossless feature representation, with fragments’ bits as direct input, and its use of feedforward, recurrent, and convolutional networks as classifiers, whereas previous work has only tested feedforward networks. Due to the study’s exploratory nature, the models were not directly evaluated in a practical setting; rather, easily reproducible experiments were performed to attempt to answer the initial question of whether this approach is worthwhile to pursue further, especially due to its high computational cost. The experiments tested classification of fragments of homogeneous file types as an idealized case, rather than using a realistic set of types, because the types of interest are highly application-dependent. The recurrent networks achieved 98 percent accuracy in distinguishing 4 file types, suggesting that this approach may be capable of yielding models with sufficient performance for practical applications. The potential applications depend mainly on the model performance gains achievable by future work but include binary mapping, deep packet inspection, and file carving.
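A minimal sketch of the kind of classifier the study describes is given below, using PyTorch with a recurrent encoder over raw bytes (embedded per byte value rather than expanded to bits, one of several lossless choices); the sizes, file types and data are placeholders, not the study's models:

```python
# Neural file-fragment classification from a lossless byte representation.
import torch
import torch.nn as nn

NUM_TYPES = 4          # e.g. jpg / pdf / html / csv fragments
FRAG_LEN = 512         # bytes per fragment

class FragmentClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(256, 32)           # one vector per byte value
        self.rnn = nn.GRU(32, 64, batch_first=True)  # recurrent encoder
        self.head = nn.Linear(64, NUM_TYPES)

    def forward(self, frags: torch.Tensor) -> torch.Tensor:
        x = self.embed(frags)           # (batch, FRAG_LEN, 32)
        _, h = self.rnn(x)              # h: (1, batch, 64)
        return self.head(h.squeeze(0))  # class logits

model = FragmentClassifier()
batch = torch.randint(0, 256, (8, FRAG_LEN))   # stand-in for real fragments
logits = model(batch)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, NUM_TYPES, (8,)))
loss.backward()                                 # one illustrative training step
print(logits.shape)                             # torch.Size([8, 4])
```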
12

Colon, Matthew J. "Controlling the Uncontrollable: A New Approach to Digital Storytelling using Autonomous Virtual Actors and Environmental Manipulation." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/261.

Full text
Abstract:
In most video games today that focus on a single story, scripting languages are used to control the artificial intelligence of the virtual actors. While scripting is a great tool for reliably performing a story, it has many disadvantages: it is limited to responding to those situations that were explicitly declared, causing unreliable responses to unknown situations, and the believability of the virtual actor is hindered by possible conflicts between scripted actions and appropriate responses as perceived by the viewer. This paper presents a novel method of storytelling that manipulates the environment, whether physically or in the agent's perception of it, around the goals and behaviors of the virtual actor in order to advance the story, rather than controlling the virtual actor explicitly. The virtual actor in this method is completely autonomous, and the environment is manipulated by a story manager so that the virtual actor chooses to satisfy its goals in accordance with the direction of the story. Comparisons are made between scripting, traditional autonomy, Lionhead Studio's Black & White, Mateas and Stern's Façade, and autonomy with environmental manipulation in terms of design, performance, believability, and reusability. It was concluded that molding an environment around a virtual actor with the help of a story manager gives the actor the ability to reliably perform event-based stories while preserving the believability and reusability of the actor and environment. While autonomous actors have traditionally been used solely for emergent storytelling, this new storytelling method enables them to be used reliably and efficiently to tell event-based stories as well, while reaping the benefits of their autonomous nature. In addition, the separation of the virtual actors from the environment and story manager in terms of design promotes a cleaner, reusable architecture that also allows for independent development and improvement. By modeling artificial intelligence design after Herbert Simon's "artifact," emphasizing the encapsulation of the inner mechanisms of virtual actors, the next era of digital storytelling can be driven by the design and development of reusable storytelling components and the interaction between the virtual actor and its environment.
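The division of labor the abstract describes (an autonomous actor, plus a story manager that shapes the environment) can be caricatured in a few lines; everything below is a hypothetical illustration, not the thesis's architecture:

```python
# The actor stays autonomous and simply pursues its goals; the story manager
# advances the plot by changing what the environment offers.
class Action:
    def __init__(self, name: str, satisfies: str):
        self.name, self.satisfies = name, satisfies

class Actor:
    """Fully autonomous: picks whichever available action best serves its goals."""
    def __init__(self, goals: dict[str, float]):
        self.goals = goals

    def act(self, environment: list[Action]) -> Action:
        return max(environment, key=lambda a: self.goals.get(a.satisfies, 0.0))

class StoryManager:
    """Never commands the actor; it seeds the environment so that the actor's
    own goal pursuit happens to advance the plot."""
    def direct(self, environment: list[Action], plot_beat: str) -> list[Action]:
        # Offer an action that satisfies the actor's strongest goal.
        return environment + [Action(plot_beat, satisfies="eat")]

actor = Actor({"eat": 0.9, "explore": 0.4})
env = [Action("wander the woods", satisfies="explore")]
env = StoryManager().direct(env, plot_beat="visit the tavern for a meal")
print(actor.act(env).name)   # the tavern: chosen by the actor, not scripted
```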
13

Brandt, Mathias, and Stefan Stefansson. "The personality venture capitalists look for in an entrepreneur : An artificial intelligence approach to personality analysis." Thesis, KTH, Industriell Marknadsföring och Entreprenörskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230956.

Full text
Abstract:
To date, the usual analysis of an entrepreneur's personality is primarily a gut feeling of the venture capitalist and is hard to codify. This paper aims to explore in a qualitative way what it is about the characteristics and the personality of the entrepreneur that influences the investment made by the venture capitalists. These findings are then used to discuss whether an artificial intelligence application can be used to analyze the personality of entrepreneurs. The primary source of information for this paper is interviews with venture capitalists. The authors searched for similarities within the available literature on entrepreneurial personalities and found that the majority of the personality traits mentioned by the venture capitalists can be found in the literature. The research findings suggest that all venture capitalists value an entrepreneur who has passion for what she is doing and has the ability to get the job done. Additionally, most of the venture capitalists interviewed value an entrepreneur who is coachable, flexible, visionary, and able to communicate that vision well. Finally, based on the results, the authors propose a framework for how an artificial intelligence system can be structured to assess the personalities of entrepreneurs.
14

Boccio, Jacob C. "Digital Integration." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6183.

Full text
Abstract:
Artificial intelligence is an emerging technology: something far beyond smartphones, cloud integration, or surgical microchip implantation. Utilizing the work of Ray Kurzweil, Nick Bostrom, and Steven Shaviro, this thesis investigates technology and artificial intelligence through the lens of the cinema. It does this by mapping contemporary concepts and the imagined worlds in film as an intersection of reality and fiction that examines issues of individual identity and alienation. I look at a non-linear timeline of films involving machine advancement, machine intelligence, and stages of post-human development: Elysium (2013) and Surrogates (2009) treat technology as an extension of the self; The Terminator franchise (1984-2015), Blade Runner (1982), and Bicentennial Man (1999) portray artificially intelligent androids and cyborgs; Transcendence (2014) is a contemporary depiction of human consciousness fusing with technology; and Chappie and Ex Machina, both released in 2015, are situated in contemporary society with sentient artificial intelligence. Looking at these films' portrayals of man's relationship with machines creates a discourse on contemporary society's anxiety surrounding technology. I argue that recent films' depictions of artificial intelligence signal a contemporary change in our perception of technology, urging that we reevaluate the ways we define our identity.
15

Jaitha, Anant. "An Introduction to the Theory and Applications of Bayesian Networks." Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/cmc_theses/1638.

Full text
Abstract:
Bayesian networks are a means of studying data. A Bayesian network gives structure to data by creating a graphical model of it and then develops probability distributions over the model's variables. It explores the variables in the problem space, examines the probability distributions related to those variables, and conducts statistical inference over those distributions to draw meaning from them. Bayesian networks are an efficient means of exploring large data sets to make inferences. A number of real-world applications already exist and are being actively researched. This paper discusses the theory and applications of Bayesian networks.
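As a concrete taste of the graphical modeling and inference the abstract describes, here is a small worked example; it uses the pgmpy library as one possible implementation (the thesis itself is not tied to any particular package):

```python
# A classic sprinkler network: structure, local probability tables, inference.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Structure: Rain -> WetGrass <- Sprinkler
model = BayesianNetwork([("Rain", "WetGrass"), ("Sprinkler", "WetGrass")])

cpd_rain = TabularCPD("Rain", 2, [[0.8], [0.2]])            # P(Rain)
cpd_sprinkler = TabularCPD("Sprinkler", 2, [[0.6], [0.4]])  # P(Sprinkler)
# P(WetGrass | Rain, Sprinkler); columns follow the evidence state order.
cpd_wet = TabularCPD(
    "WetGrass", 2,
    [[1.00, 0.10, 0.20, 0.01],   # WetGrass = 0
     [0.00, 0.90, 0.80, 0.99]],  # WetGrass = 1
    evidence=["Rain", "Sprinkler"], evidence_card=[2, 2],
)
model.add_cpds(cpd_rain, cpd_sprinkler, cpd_wet)
assert model.check_model()

# Inference: probability it rained, given the grass is wet.
infer = VariableElimination(model)
print(infer.query(["Rain"], evidence={"WetGrass": 1}))
```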
16

Ling, TR. "An Incremental Learning Method for Data Mining from Large Databases." Thesis, Honours thesis, University of Tasmania, 2006. https://eprints.utas.edu.au/793/1/trling_Honours_Thesis.pdf.

Full text
Abstract:
Knowledge Discovery techniques seek to find new information about a domain through a combination of existing domain knowledge and data examples from the domain. These techniques can either be manually performed by an expert, or automated using software algorithms (Machine Learning). However some domains, such as the clinical field of Lung Function testing, contain volumes of data too vast and detailed for manual analysis to be effective, and existing knowledge too complex for Machine Learning algorithms to be able to adequately discover relevant knowledge. In many cases this data is also unclassified, with no previous analysis having been performed. A better approach for these domains might be to involve a human expert, taking advantage of their expertise to guide the process, and to use Machine Learning techniques to assist the expert in discovering new and meaningful relationships in the data. It is hypothesised that Knowledge Acquisition methods would provide a strong basis for such a Knowledge Discovery method, particularly methods which can provide incremental verification and validation of knowledge as it is obtained. This study examines how the MCRDR (Multiple Classification Ripple-Down Rules) Knowledge Acquisition process can be adapted to develop a new Knowledge Discovery method, Exposed MCRDR, and tests this method in the domain of Lung Function. Preliminary results suggest that the EMCRDR method can be successfully applied to discover new knowledge in a complex domain, and reveal many potential areas of study and development for the MCRDR method.
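To make the ripple-down-rules mechanism concrete, here is a minimal single-classification sketch (MCRDR extends this to multiple simultaneous conclusions); the rule contents and class names are invented for illustration, not drawn from the thesis:

```python
# A ripple-down rule tree: each rule has an exception branch (tried when the
# rule fires) and an else branch (tried when it does not). Knowledge is
# acquired incrementally by attaching a new exception exactly where the
# expert disagrees with the current conclusion.
from dataclasses import dataclass
from typing import Callable, Optional

Case = dict  # a case is a set of attribute values, e.g. lung-function readings

@dataclass
class Rule:
    condition: Callable[[Case], bool]
    conclusion: str
    if_true: Optional["Rule"] = None    # exception rule, refines this conclusion
    if_false: Optional["Rule"] = None   # tried when the condition fails

def classify(rule: Optional[Rule], case: Case, default: str = "normal") -> str:
    conclusion = default
    while rule is not None:
        if rule.condition(case):
            conclusion = rule.conclusion   # fires, but exceptions may override
            rule = rule.if_true
        else:
            rule = rule.if_false
    return conclusion

rules = Rule(
    condition=lambda c: c["fev1_ratio"] < 0.7,
    conclusion="obstruction",
    if_true=Rule(lambda c: c["reversible"], "asthma"),  # expert-added refinement
)
print(classify(rules, {"fev1_ratio": 0.6, "reversible": True}))   # asthma
print(classify(rules, {"fev1_ratio": 0.6, "reversible": False}))  # obstruction
print(classify(rules, {"fev1_ratio": 0.8, "reversible": False}))  # normal
```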
17

Johnsson, Pucic Antonio, and Patrik Mott. "Artificiell intelligens påverkan : En omstrukturering av den digitala aspekten av byggbranschen." Thesis, Uppsala universitet, Institutionen för samhällsbyggnad och industriell teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413014.

Full text
Abstract:
Digitalization is proceeding at a high rate and is being implemented widely in society; the construction industry, however, shows a degree of digital development that does not compare with other technology-dependent industries. The construction industry turns over large amounts of money, and its long construction process generates high costs and has an impact on the environment. There is therefore a need to analyze and investigate how the digital aids that come with an increased degree of digitalization can make the construction industry more efficient. The efficiency gains considered include the advent of AI and its potential in the construction industry, as well as the opportunities and challenges the industry faces in pursuing digital development. The study is based on a comparison between a literature study and an interview study covering the digital implementation carried out by various organizations and companies, as well as the future potential of their initiatives. It emerged that there is a general benefit in pursuing digitalization at an organizational level. Whether the newly developed digital means can be implemented is questioned, given the complex relationship between the subcontractors and the clients. The services and methods of execution offered by the subcontractors require investments to implement digital means and thereby increase the degree of digitalization. The larger actors have the power and the economic potential to increase the degree of digitalization within their organizations, and the smaller actors must relate to this development. The uneven degree of digitalization is visible today, as the power to price procurements lies with subcontractors in regionally priced services that have not undergone further digital development. To turn over more money in the construction industry, the industry needs to achieve globally competitive procurement, as the manufacturing industry has. For the smaller actors to distinguish their way of working, a unique implementation of digital means is needed, allowing them to relate to the larger actors and to offer digital services that will be in demand in the future.
18

Johansson, Anja. "Modelling Expectations and Trust in Virtual Agents." Thesis, Linköping University, Department of Science and Technology, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9704.

Full text
Abstract:

Computer graphics has long been the foremost area of advancement in both the gaming and the motion picture industry. Nowadays, as further advances in computer graphics become harder to achieve, other areas have begun to interest developers. One of these areas is artificial intelligence. The gaming industry has begun to create far more intelligent virtual characters that are no longer as predictable as they used to be. Mixing character animation with intelligent-agent techniques results in a vastly more interesting experience for the gamer as well as for the developer.

This project focuses on introducing expectational behaviour and trust in intelligent virtual characters. The area is highly interesting as it enables a vastly more complex emotional structure for virtual agents than that of reactive, rational behaviour. Although expectations can indeed be rational, often they are not when it comes to humans. This project studies the effects of expectations on the emotional state of agents and the effect that the emotions have on the reasoning abilities and the action selection mechanism. It also examines how trust influences emotions and vice versa and how trust influences the action selection mechanism.

One of the requirements of this work is that the computations concerning the triggering of emotions have to be done in real time. While it is possible to do off-line computations for simulations (as is often done in the movie industry), that is not what we desire here. Our goal is to create interesting virtual characters that can be interacted with in real time. Therefore, expectations and trust must also be calculated and managed in real time.
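One way such real-time updates could look is sketched below; the update rules and constants are invented for illustration and are not the thesis's model. Each update is constant-time per observation, which is what makes per-frame evaluation plausible.

```python
# Expectation violations drive emotion and trust updates in real time.
from dataclasses import dataclass

@dataclass
class AgentState:
    trust: float = 0.5        # trust in another character, in [0, 1]
    valence: float = 0.0      # crude emotional state, in [-1, 1]

def observe(state: AgentState, expected: float, outcome: float,
            lr: float = 0.2) -> AgentState:
    """Update emotion and trust from an expectation violation.

    `expected` and `outcome` rate how favorable the other character's action
    was predicted to be vs. actually was, both in [0, 1].
    """
    surprise = outcome - expected
    # Positive surprises please the agent; negative ones upset it.
    state.valence = max(-1.0, min(1.0, state.valence + lr * surprise))
    # Trust drifts toward the observed reliability of the other character.
    state.trust = max(0.0, min(1.0, state.trust + lr * surprise))
    return state

s = AgentState()
s = observe(s, expected=0.9, outcome=0.2)   # a betrayal: trust and mood drop
print(f"trust={s.trust:.2f}, valence={s.valence:.2f}")
```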

19

Rönnqvist, Mats, and Magnus Johansson. "Artificiell Intelligens inom medicinsk bilddiagnostik : En allmän litteraturstudie." Thesis, Luleå tekniska universitet, Institutionen för hälsovetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-72969.

Full text
Abstract:
Background: Artificial Intelligence (AI) is entering our society and homes to an ever greater extent. In medical care and radiology, AI can provide an aid for radiologists and radiographers in their professions. Research on AI continues in search of better and more functional algorithms that can meet that challenge. Purpose: The purpose of this literature study is to compile the modalities for which AI is used as support. Method: The study was conducted as a general literature review, which yielded fifteen articles that were quality-reviewed and categorized after analysis. Results: The methods used to train the AI varied depending on when the articles were written, as did how the images were pre-processed before training. The images must undergo noise reduction and segmentation before the AI can classify the pathological change. That process was facilitated in later versions of AI, where all these steps were performed at once. Conclusion: Major changes will occur in radiology, and the changes are likely to affect everyone in an X-ray department. The authors observe that the development has only just begun and that research must continue for many years to come.
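The noise-reduction and segmentation steps mentioned in the results can be sketched as follows; scikit-image and a synthetic image stand in for the clinical pipelines the reviewed articles describe:

```python
# Pre-processing before classification: denoise, then segment candidate regions.
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label, regionprops

rng = np.random.default_rng(1)
image = rng.normal(0.2, 0.05, (128, 128))   # noisy background
image[40:60, 50:80] += 0.5                  # bright "lesion" region

denoised = gaussian(image, sigma=2)                 # noise reduction
mask = denoised > threshold_otsu(denoised)          # Otsu segmentation
regions = regionprops(label(mask))                  # candidate findings
for r in regions:
    print(f"candidate region at {r.centroid}, area {r.area} px")
# Crops around the segmented regions would then be passed to a classifier
# that labels the suspected pathological change.
```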
20

Ghayoumi, Mehdi. "FACIAL EXPRESSION ANALYSIS USING DEEP LEARNING WITH PARTIAL INTEGRATION TO OTHER MODALITIES TO DETECT EMOTION." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1501273062260458.

Full text
21

Daghustani, Sara Hussain. "USING AUTOENCODER TO REDUCE THE LENGTH OF THE AUTISM DIAGNOSTIC OBSERVATION SCHEDULE (ADOS)." CSUSB ScholarWorks, 2018. https://scholarworks.lib.csusb.edu/etd/620.

Full text
Abstract:
This thesis uses autoencoders to explore the possibility of reducing the length of the Autism Diagnostic Observation Schedule (ADOS), a series of tests and observations used to diagnose autism spectrum disorders in children, adolescents, and adults of different developmental levels. The length of the ADOS directly and indirectly creates barriers to access for many individuals, meaning that individuals who need testing are unable to get it. Reducing the length of the ADOS without significantly sacrificing its accuracy would increase its accessibility. The autoencoders used in this thesis have specific connections between layers that mimic the sectional structure of the original ADOS. They shorten the ADOS by reducing its dimensionality, combining the original variables into new ones. By examining the weights of variables entering the reduced diagnostic, this thesis explores which variables are prioritized and deprioritized by the autoencoder. This information yields insights into which variables, and underlying concepts, should be prioritized in a shorter ADOS. After training, all autoencoders used were able to reduce dimensionality with minimal accuracy losses. Examination of the weights yielded many keen insights into which ADOS variables are least important to their modules and can thus be eliminated or deprioritized in a reduced diagnostic. In particular, the observation of self-injurious behavior was found to be entirely unnecessary in the first three modules of the ADOS, a finding that corroborates other recent experimental results in the domain. This suggests that the solutions converged upon by the model have real-world significance.
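A minimal sketch of the approach (a plain autoencoder plus weight inspection; the thesis's section-mimicking connectivity and the real ADOS data are not reproduced here) might look like this:

```python
# Autoencoder compression of questionnaire items, then weight-based importance.
import torch
import torch.nn as nn

N_ITEMS, N_LATENT = 30, 8          # e.g. 30 ADOS-like items -> 8 composite scores

model = nn.Sequential(
    nn.Linear(N_ITEMS, N_LATENT), nn.Tanh(),      # encoder
    nn.Linear(N_LATENT, N_ITEMS),                 # decoder
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
scores = torch.rand(500, N_ITEMS)                 # stand-in for real item scores

for _ in range(200):                              # reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(scores), scores)
    loss.backward()
    opt.step()

# Items whose encoder columns carry little weight contribute little to the
# compressed representation, and are candidates for removal from a shorter test.
importance = model[0].weight.abs().sum(dim=0)     # one value per original item
print(importance.argsort()[:5])                   # five least-informative items
```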
22

Heidlund, Marcus. "IT-konsulters positionering inom artificiell intelligens nu och i framtiden." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-34693.

Full text
Abstract:
Artificial intelligence is a concept based on making machines think. The disciplines that study it range from behavioral science to computer science and mathematics. The different approaches have helped categorize the field of artificial intelligence into different areas, and the definitions of artificial intelligence are many. The aim of this study has been to investigate how IT consulting companies define and position themselves in relation to artificial intelligence. To achieve this goal, literature on the subject has been studied to define artificial intelligence and to categorize its different areas. With the help of these areas, a survey has been conducted with IT consultancies as the target group. It turned out that most respondents work with different types of algorithms based on logic computations and big data, which differ from the areas experts believe will be significant for the future of AI. Both the experts and the IT consultancies are convinced that neural networks will be one of the fields with the greatest potential in the future.
23

Hammarström, Tobias. "Towards Explainable Decision-making Strategies of Deep Convolutional Neural Networks : An exploration into explainable AI and potential applications within cancer detection." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-424779.

Full text
Abstract:
The influence of Artificial Intelligence (AI) on society is increasing, with applications in highly sensitive and complicated areas. Examples include using Deep Convolutional Neural Networks within healthcare for diagnosing cancer. However, the inner workings of such models are often unknown, limiting the much-needed trust in the models. To combat this, Explainable AI (XAI) methods aim to provide explanations of the models' decision-making. Two such methods, Spectral Relevance Analysis (SpRAy) and Testing with Concept Activation Vectors (TCAV), were evaluated on a deep learning model classifying cat and dog images that contained introduced artificial noise. The task was to assess the methods' capabilities to explain the importance of the introduced noise for the learnt model. The task was constructed as an exploratory step, with the future aim of using the methods on models diagnosing oral cancer. In addition to using the TCAV method as introduced by its authors, this study also utilizes the CAV-sensitivity to introduce and perform a sensitivity magnitude analysis. Both methods proved useful in discerning between the model's two decision-making strategies, based on either the animal or the noise. However, greater insight into the intricacies of said strategies is desired. Additionally, the methods provided a deeper understanding of the model's learning, as the model did not seem to properly distinguish between the noise and the animal conceptually. The methods thus accentuated the limitations of the model, thereby increasing our trust in its abilities. In conclusion, the methods show promise for the task of detecting visually distinctive noise in images, which could extend to other distinctive features present in more complex problems. Consequently, more research should be conducted on applying these methods to more complex areas with specialized models and tasks, e.g. oral cancer.
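The CAV part of the analysis can be sketched as follows, with synthetic activations standing in for a real network's internal layer; this follows the general TCAV recipe, not the study's exact code:

```python
# Learn a concept activation vector (CAV) as the normal of a linear classifier
# separating concept examples from random ones in activation space, then score
# how sensitive a class logit is to that concept direction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
D = 64                                           # activation dimensionality
concept_acts = rng.normal(0.5, 1.0, (200, D))    # activations of concept images
random_acts = rng.normal(0.0, 1.0, (200, D))     # activations of random images

# CAV: normal vector of the separating hyperplane.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 200 + [0] * 200)
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)

# Conceptual sensitivity of one input: directional derivative of the class
# logit along the CAV. Here the gradient is a stand-in for an autograd result.
grad_logit = rng.normal(size=D)       # d(class logit)/d(layer activations)
sensitivity = grad_logit @ cav
print(f"CAV sensitivity: {sensitivity:.3f}")
# TCAV aggregates the sign and magnitude of such scores over many inputs.
```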
24

Ahmadi, Mostafa, and Afram Chamoun. "Artificiell intelligens som beslutsfattande medel vid ROT-arbeten : En kvalitativ intervjustudie för kartläggning av beslutfattningsprocessen." Thesis, Uppsala universitet, Institutionen för samhällsbyggnad och industriell teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447256.

Full text
Abstract:
The construction industry accounts for a third of total global carbon dioxide emissions, including both the building process and the use of the buildings. This, in combination with the need to renovate older buildings in Sweden, can lead to additional stress on the environment. This project aims to map out the decision-making process of property owners and consultants in renovation projects. The purpose of the study is to identify problematic areas that hinder efficient decision-making and sustainability efforts. With the problem areas mapped out, different solutions involving artificial intelligence are explored and discussed, as are the ethical implications of implementing artificial intelligence. In this qualitative study, a set of steps was taken to answer the research questions. First, a literature review was conducted to explore existing research. Second, semi-structured interviews were held to gather empirical data. Last, the interviews were transcribed and analyzed with thematic analysis to identify problematic areas in the renovation projects. The research strategy applied in this study is abductive. Due to the ongoing pandemic, software such as Zoom and Microsoft Teams was used to conduct the interviews. The results show that the factors hindering the decision-making process were regulations and laws, economy and sustainability, lack of recycling, and lack of documentation. These were areas that both the consultants and property owners described as bottlenecks in the decision-making process. Artificial intelligence solutions were discussed for the problem areas of regulations and laws and of economy and sustainability, because statistical data exists that could be used to train an artificial intelligence, unlike the other two problem areas, which must be handled manually because the gaps stem from worker neglect. The artificial intelligence solutions presented in this study can be considered ethical due to their assisting purpose. Although this research provides ideas for the implementation of artificial intelligence, they are brief and theoretical; further development and exploration are needed to implement the solutions.
25

Handler, Abram. "An empirical study of semantic similarity in WordNet and Word2Vec." ScholarWorks@UNO, 2014. http://scholarworks.uno.edu/td/1922.

Full text
Abstract:
This thesis performs an empirical analysis of Word2Vec by comparing its output to WordNet, a well-known, human-curated lexical database. It finds that Word2Vec tends to uncover more of certain types of semantic relations than others, returning more hypernyms, synonyms and hyponyms than meronyms or holonyms. It also shows the probability that neighbors separated by a given cosine distance in Word2Vec are semantically related in WordNet. This result both adds to our understanding of the still poorly understood Word2Vec and helps to benchmark new semantic tools built from word vectors.
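The comparison can be reproduced in miniature with gensim and NLTK (assuming the WordNet corpus has been downloaded via nltk); the toy training corpus below is a placeholder for a real one:

```python
# Take a word's nearest Word2Vec neighbors and name any WordNet relations.
from gensim.models import Word2Vec
from nltk.corpus import wordnet as wn

sentences = [["the", "dog", "chased", "the", "cat"],
             ["a", "puppy", "is", "a", "young", "dog"]] * 100
model = Word2Vec(sentences, vector_size=50, min_count=1, seed=1)

def wordnet_relations(w1: str, w2: str) -> set[str]:
    """Name the WordNet relations linking any senses of w1 and w2."""
    rels = set()
    for s1 in wn.synsets(w1):
        for s2 in wn.synsets(w2):
            if s1 == s2:
                rels.add("synonym")
            if s2 in s1.hypernyms():
                rels.add("hypernym")
            if s2 in s1.hyponyms():
                rels.add("hyponym")
            if s2 in s1.member_holonyms() + s1.part_holonyms():
                rels.add("holonym")
    return rels

for neighbor, cosine in model.wv.most_similar("dog", topn=5):
    print(neighbor, f"{cosine:.2f}", wordnet_relations("dog", neighbor) or "none")
```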
26

Brolin, John, and Malmborg Alexander Hörsne. "Positionering med hjälp av Accesspunkter i ett slutet WiFi-nätverk : En delstudie för Sjöfartshögskolan i Kalmar." Thesis, Linnéuniversitetet, Sjöfartshögskolan (SJÖ), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-103577.

Full text
Abstract:
Artificial intelligence refers to a machine's ability to make its own decisions. The machine is then supposed to take action based on the decision, all without human involvement. The positional accuracy required of ships has become increasingly demanding, especially in the offshore industry. With the aid of a dynamic positioning system, greater accuracy can be achieved. This project investigates which system is best suited for a positioning system for a model of the training ship Calmare Nyckel. The project evaluates positioning with the aid of four access points evenly distributed over two networks, and surveys a number of techniques based on data signals that are then modulated by a hardware unit. Because of the low-cost aim, an ESP32 and WiFi were chosen. Laboratory tests demonstrated a well-functioning system, but the measured accuracy was not sufficient to be used directly in the continued project.
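A common way to turn four access points' signal strengths into a position is sketched below in Python, with hypothetical calibration constants and AP coordinates (the thesis's ESP32 firmware and exact method may differ):

```python
# WiFi positioning: RSSI -> distance via a log-distance path-loss model,
# then trilateration by nonlinear least squares over four access points.
import numpy as np
from scipy.optimize import least_squares

AP_POS = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])  # meters
TX_POWER = -40.0   # RSSI at 1 m (dBm), a calibration constant
PATH_LOSS_N = 2.5  # environment-dependent exponent

def rssi_to_distance(rssi_dbm: np.ndarray) -> np.ndarray:
    """Invert the log-distance model: RSSI = TX_POWER - 10 n log10(d)."""
    return 10 ** ((TX_POWER - rssi_dbm) / (10 * PATH_LOSS_N))

def locate(rssi_dbm: np.ndarray) -> np.ndarray:
    d = rssi_to_distance(rssi_dbm)
    residuals = lambda p: np.linalg.norm(AP_POS - p, axis=1) - d
    return least_squares(residuals, x0=AP_POS.mean(axis=0)).x

# Example: readings consistent with a tag near (2, 3).
measured = TX_POWER - 10 * PATH_LOSS_N * np.log10(
    np.linalg.norm(AP_POS - np.array([2.0, 3.0]), axis=1))
print(locate(measured))   # approximately [2. 3.]
```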
27

Lagerstam, Cristopher, and Fabian Lundgren. "Sjövägsregler och autonoma fartyg : Hur sjövägsreglerna skulle kunna fungera i möte med autonoma fartyg." Thesis, Linnéuniversitetet, Sjöfartshögskolan (SJÖ), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-90700.

Full text
Abstract:
There is a direction of development in the shipping industry towards autonomous vessels. How many autonomous vessels there will be at sea is currently unknown. The premise of the study was that there will be a mix of autonomous and manned vessels at sea. The study sought to find out how active maritime officers would act if they met an autonomous vessel, based on today's regulations for preventing collisions at sea (Colregs). Furthermore, the study investigated whether the respondents felt that the rules had to be reformed for autonomous vessels. A qualitative method was selected and implemented with standardized questions. The conclusion was that the collision regulations are complex due to their structure: it is possible to violate the rules, but at the same time it is possible to follow them. The background to the respondents' reasoning can be found in previous research on the relationship between man and machine: the less information a person has about a system, the less trust the person has in that system. The interviews revealed that the respondents favor a change in the Colregs. The most obvious rule change was a change in the definitions, because they wanted information that the ship they encountered was autonomous.
APA, Harvard, Vancouver, ISO, and other styles
28

Elvelind, Sofia. "Mönsterigenkänning och trendanalys i elnät : Prognostisering av elkvalitet samt effektuttag inom industrin." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-159799.

Full text
Abstract:
Interest in power quality has increased as electrical equipment, such as inverters, nowadays emits more disturbances. Electrical equipment has also become more sensitive to disturbances, while industries have become less tolerant of disturbances in production. Traditionally, fault diagnosis and handling have been performed once the fault has already arisen, based on historical data. Through its application PQ4Cast, Metrum has introduced pattern recognition to forecast power quality parameters and active power, thereby contributing to proactive maintenance. The application creates a forecast for the coming week based on data for the last few weeks; also under development is a function for trend analysis of, among other things, power consumption and voltage level. The objective of implementing PQ4Cast is to achieve higher availability and to minimize costs for maintenance and unplanned interruptions. A second objective is to increase control over variations in power consumption. The aim of this thesis is to determine which deviations are important for Sandvik to control, to develop methods for evaluating the application's functionality, and to provide a basis for how forecasts from the application should be managed. The aim is also to determine the usefulness of the trend analysis function. For Sandvik, the greatest benefit lies in gaining control over future values of active power, reactive power, and variations in the RMS value of the voltage. Of these, variations in active and reactive power should be the most suitable for PQ4Cast to identify. For examining the conformity between forecast and actual outcome, the use of the correlation coefficient, the coefficient of determination, and a significance level of five percent is recommended. The MAPE, Mean Absolute Percentage Error, is also recommended to quantify the forecast error. Given good conformity, the forecasts of active power from PQ4Cast are recommended for the weekly forecasts sent to the electricity trading company Statkraft, in combination with a temperature forecast and a forecast of the coming week's production. The trend analysis function shows a MAPE of a few percent for the active power. Further investigation of the function is recommended; given good conformity, it is recommended as the basis for forecasts given to Statkraft and for a new power agreement with Vattenfall, in combination with a production forecast. For the trend of the voltage's RMS value, the deviation from the forecasted value is only a few tenths of a percent; further studies are recommended here, specifically in the part of the grid where installation of a solar power plant is planned. The application PQ4Cast and the trend analysis function are expected to lead to economic benefits, such as reduced costs for the purchase of electricity, reduced grid charges, and significant savings if disturbances that may lead to interruptions can be detected and prevented in time. Disturbances of short duration, such as voltage dips, are however hard to detect with the current version of the application.
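For readers unfamiliar with the recommended error measure, a minimal MAPE sketch follows; the numbers are invented, whereas the thesis applies the metric to weekly active-power forecasts.

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error over paired observations.

    Undefined when an actual value is zero; such points are skipped here.
    """
    pairs = [(a, f) for a, f in zip(actual, forecast) if a != 0]
    return 100.0 * sum(abs((a - f) / a) for a, f in pairs) / len(pairs)

# A forecast within a few percent of the outcome, as reported for active power:
print(round(mape([100.0, 110.0, 95.0], [103.0, 108.0, 97.0]), 2))
```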
APA, Harvard, Vancouver, ISO, and other styles
29

Bevans, Brandon. "Categorizing Blog Spam." DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1623.

Full text
Abstract:
The internet has matured into the focal point of our era. Its ecosystem is vast, complex, and in many regards unaccounted for. One of the most prevalent aspects of the internet is spam. Like the rest of the internet, spam has evolved from simply meaning 'unwanted emails' to a blanket term that encompasses any unsolicited or illegitimate content that appears in the wide range of media that exists on the internet. Many forms of spam permeate the internet, and spam architects continue to develop tools and methods to avoid detection. On the other side, cyber security engineers continue to develop more sophisticated detection tools to curb the harmful effects that come with spam. This virtual arms race has no end in sight. Most efforts thus far have been toward accurately separating spam from ham, and rightfully so, since initial detection is essential. However, research is lacking in understanding the current ecosystem of spam, spam campaigns, and the behavior of the botnets that drive the majority of spam traffic. This thesis focuses on characterizing spam, particularly the spam that appears in forums, where the spam is delivered by bots posing as legitimate users. Forum spam is used primarily to push advertisements or to boost other websites' perceived popularity by including HTTP links in the content of the post. We conduct an experiment to collect a sample of the blog posts and network activity of the spambots that exist on the internet. We then present a corpus available for analysis and proceed with our own analysis. We cluster associated groups of users and IP addresses into entities, which we accept as a model of the underlying botnets that interact with our honeypots. We use Natural Language Processing (NLP) and Machine Learning (ML) to determine that creating semantic-based models of botnets is sufficient for distinguishing them from one another. We also find that the syntactic structure of posts has little variation from botnet to botnet. Finally, we confirm that to a large degree botnet behavior and content hold across different domains.
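As a hedged illustration of one building block of such an analysis, and not the thesis's actual pipeline: semantic grouping of posts can be sketched with TF-IDF vectors and k-means, where posts pushing the same campaign land in the same cluster. The posts below are toy stand-ins for honeypot data.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for forum-spam posts; real inputs would come from honeypot logs.
posts = [
    "cheap watches buy now http://example.com/w",
    "buy cheap watches online http://example.com/w2",
    "earn money from home fast http://example.com/m",
    "work from home and earn money http://example.com/m2",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # posts pushing the same campaign should share a cluster
```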
APA, Harvard, Vancouver, ISO, and other styles
30

El-Gohary, Mahmoud Ahmed. "Joint Angle Tracking with Inertial Sensors." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/661.

Full text
Abstract:
The need to characterize normal and pathological human movement has consistently driven researchers to develop new tracking devices and to improve movement analysis systems. Movement has traditionally been captured by optical, magnetic, mechanical, structured-light, or acoustic systems. All of these systems have inherent limitations. Optical systems are costly, require fixed cameras in a controlled environment, and suffer from problems of occlusion. Similarly, acoustic and structured-light systems suffer from the occlusion problem. Magnetic and radio frequency systems suffer from electromagnetic disturbances, noise, and multipath problems. Mechanical systems have physical constraints that limit natural body movement. Recently, the availability of low-cost wearable inertial sensors containing accelerometers, gyroscopes, and magnetometers has provided an alternative means to overcome the limitations of other motion capture systems. Inertial sensors can be used to track human movement in and outside of a laboratory, cannot be occluded, and are low cost. To calculate changes in orientation, researchers often integrate the angular velocity. However, a relatively small error or drift in the measured angular velocity leads to large integration errors. This restricts the time of accurate measurement and tracking to a few seconds. To compensate for that drift, complementary data from accelerometers and magnetometers are normally integrated in tracking systems that utilize the Kalman filter (KF) or the extended Kalman filter (EKF) to fuse the nonlinear inertial data. Orientation estimates are only accurate for brief moments when the body is not moving and acceleration is due only to gravity. Moreover, the success of using magnetometers to compensate for drift about the vertical axis is limited by magnetic field disturbances. We combine kinematic models designed for the control of robotic arms with state-space methods to estimate angles of the human shoulder and elbow using two wireless wearable inertial measurement units. The same method can be used to track the movement of other joints using a minimal sensor configuration with one sensor on each segment. Each limb is modeled as one kinematic chain. Velocity and acceleration are recursively tracked and propagated from one limb segment to another using Newton-Euler equations implemented in state-space form. To mitigate the effect of sensor drift on tracking accuracy, our system incorporates natural physical constraints on the range of motion for each joint, models gyroscope and accelerometer random drift, and uses zero-velocity updates. The combined effect of imposing physical constraints on state estimates and modeling the sensor random drift results in superior joint angle estimates. The tracker utilizes the unscented Kalman filter (UKF), an improvement on the EKF that removes the need for linearization of the system equations, which introduces tracking errors. We validate the performance of the inertial tracking system over long durations of slow, normal, and fast movements. Joint angles obtained from our inertial tracker are compared to those obtained from an optical tracking system and a high-precision industrial robot arm. Results show excellent agreement between joint angles estimated by the inertial tracker and those obtained from the two reference systems.
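The full UKF tracker is beyond a short sketch, but the drift problem and its remedy can be illustrated with a one-angle complementary filter, a deliberate simplification of the sensor fusion performed in the dissertation; all signals below are synthetic.

```python
import math

def complementary_filter(gyro_rates, accels, dt=0.01, alpha=0.98):
    """Fuse gyro rate (rad/s) with accelerometer tilt (ax, az in g) for one angle.

    The gyro term integrates angular velocity (and hence drifts); the
    accelerometer term anchors the estimate whenever acceleration is
    mostly gravity, standing in for the corrections a KF/UKF would apply.
    """
    angle = 0.0
    for rate, (ax, az) in zip(gyro_rates, accels):
        accel_angle = math.atan2(ax, az)  # tilt implied by gravity
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
    return angle

# A stationary sensor with a small gyro bias: the estimate settles near zero
# instead of drifting toward 0.5 rad, because the accelerometer pulls it back.
n = 1000
print(round(complementary_filter([0.05] * n, [(0.0, 1.0)] * n), 3))
```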
APA, Harvard, Vancouver, ISO, and other styles
31

Hård, af Segerstad Per. "Artificiella neurala nät för datorseende hos en luftmålsrobot." Thesis, Försvarshögskolan, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:fhs:diva-7530.

Full text
Abstract:
This study aims to increase knowledge within the Armed Forces by providing information on the possibilities of modern artificial intelligence (AI). The motivation comes from observations of civilian applications of AI in the field of computer vision, which show performance approaching the level of human vision when using deep learning with artificial neural networks. In air combat, the pilot's vision is used to recognize the aircraft that is about to be engaged. For example, when a helmet-mounted display is used, the seeker of an air-target missile is directed at the aircraft the pilot's eyes are looking at. When air-target missiles are used beyond visual range, however, the pilot's vision cannot help direct the seeker at a specific target. Therefore, computer vision within an air-target missile is studied. The results of the study support that the technology of neural networks can be used in an air-target missile and that computer vision based on this technology can perform the task of recognizing a combat aircraft.
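As a rough illustration of the kind of model involved, and not the architecture evaluated in the study: a small convolutional network for binary aircraft classification can be sketched in Keras. The input size, layer widths, and single-output head are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Minimal CNN sketch for combat-aircraft-vs-other classification.
model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # P(image shows a combat aircraft)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```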
APA, Harvard, Vancouver, ISO, and other styles
32

Sierra, Brandon Luis. "COMPARING AND IMPROVING FACIAL RECOGNITION METHOD." CSUSB ScholarWorks, 2017. https://scholarworks.lib.csusb.edu/etd/575.

Full text
Abstract:
Facial recognition is the process in which a sample face can be correctly identified by a machine among a group of different faces. With the never-ending need for improvement in the fields of security, surveillance, and identification, facial recognition is becoming increasingly important. Given this importance, it is imperative that the correct faces are recognized and that the error rate is as low as possible. Despite the wide variety of current methods for facial recognition, there is no clear-cut best method. This project reviews and examines three different methods for facial recognition: Eigenfaces, Fisherfaces, and Local Binary Patterns, to determine which method has the highest prediction accuracy. The three methods are reviewed and then compared via experiments, with OpenCV, CMake, and Visual Studio used as tools. Analyses were conducted to identify which method has the highest prediction accuracy under various experimental factors. A number of sample images of different people serve as experimental subjects: the machine is first trained to generate features for each person among the training subjects, and a new image is then tested against the "learned" data and labeled as one of the subjects. The experimental data analysis determined that the Eigenfaces method had the highest prediction rate of the three algorithms tested, while the Local Binary Pattern Histogram (LBPH) method had the lowest. LBPH was therefore selected for algorithm improvement. In this project, LBPH was improved by identifying the most significant regions of the histograms for each person at training time. The weights of each region are assigned depending on the grayscale contrast. At recognition time, given a new face, different weights are assigned to different regions to increase the prediction rate and also speed up real-time recognition. The experimental results confirmed the performance improvement.
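Since the project uses OpenCV, the three recognizers compared are all available through the cv2.face module of opencv-contrib-python. A minimal LBPH sketch follows, with random arrays standing in for real grayscale face crops; swapping in cv2.face.EigenFaceRecognizer_create or cv2.face.FisherFaceRecognizer_create reproduces the comparison.

```python
import numpy as np
import cv2

# Random placeholders standing in for 100x100 grayscale face crops
# of two subjects (labels 0 and 1).
faces = [np.random.randint(0, 256, (100, 100), dtype=np.uint8) for _ in range(4)]
labels = np.array([0, 0, 1, 1])

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, labels)

label, confidence = recognizer.predict(faces[0])
print(label, confidence)  # for LBPH, lower confidence means a closer match
```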
APA, Harvard, Vancouver, ISO, and other styles
33

Paclisan, Dana-Maria. "Optimisation par la modélisation de l'expérimentation vibratoire des systèmes pile à combustible pour le transport terrestre." Phd thesis, Université de Technologie de Belfort-Montbeliard, 2013. http://tel.archives-ouvertes.fr/tel-00977688.

Full text
Abstract:
Scientific research on the proton exchange membrane fuel cell (PEMFC) has, until recently, focused almost exclusively on fundamental aspects related to electrochemistry, particularly design, sizing, performance, and diagnosis. Recently, lifetime objectives have opened a new line of research into the mechanical behavior of the PEMFC, which should lead to its static and dynamic optimization. In parallel, the vibro-climatic installations of the "Fuel Cell Systems" test platform in Belfort have been developed. The thesis of Vicky ROUSS, defended in 2008, shows the interest and potential of "black box" modeling for simulating the mechanical behavior of the PEMFC, and of the technique of experimental mechanical signatures for revealing the presence of physical phenomena inside the PEMFC. In this context, the work of the present thesis concerned the control of durability tests by real-time black-box simulation and its exploitation with a view to discovering physical phenomena inside the PEMFC. Neural network modeling of simple systems of the harmonic oscillator type was the first step towards defining a neural model for controlling durability tests in real time. The case of a mechanical system excited through its base, which corresponds to a fuel cell mounted on the vibration platform, was considered. The optimal neural architecture was defined in several stages using different algorithms. It takes as input the system's command signal and the response measured on the fuel cell at time t, and outputs the predicted response of the fuel cell's behavior at time t+1. This architecture was developed and validated by tests on the platform. Further tests revealed different behaviors of the fuel cell as a function of the excitation amplitude, the pressure, and the temperature of the fuel cell. The mechanical signatures obtained from the durability tests complete the already existing library of signatures and reveal new behaviors of the fuel cell.
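A minimal sketch of the one-step-ahead structure described above (inputs: the command signal and the measured response at time t; output: the predicted response at t+1), using synthetic signals and a small scikit-learn network in place of the thesis's neural model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic command u(t) and response y(t) standing in for platform data.
t = np.arange(2000) * 0.01
u = np.sin(2 * np.pi * 1.5 * t)              # command signal
y = 0.8 * np.sin(2 * np.pi * 1.5 * t - 0.4)  # measured response

X = np.column_stack([u[:-1], y[:-1]])        # (u(t), y(t)) pairs
target = y[1:]                               # y(t+1)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:1500], target[:1500])
print(round(model.score(X[1500:], target[1500:]), 3))  # R^2 on held-out steps
```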
APA, Harvard, Vancouver, ISO, and other styles
34

Jakobsson, Oscar. "Förbättringsförslag för prognostisering inom detaljhandeln." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-388249.

Full text
Abstract:
Sales forecasting within organizations can be a complex process in which everything from human estimates to advanced high-tech systems is used. Today's increased access to data, improved computing power, better cloud services, more tools, and growing interest have together created better conditions than before for applying artificial intelligence (AI) to activities that were previously human-controlled. An observation was made at a Swedish retail company that the organization had generally not yet benefited from AI and its technology. The sales forecasting tool in the staffing process was considered by the organization to have improvement potential, whereupon this study aimed to see how AI could be applied to the area, what such a change might look like, and how different factors affect sales. Through interviews, a literature study, document observation, data collection, and data analysis, the study examined these questions together with theory on change management, AI, and quality management. The results show that the organization could apply the AI method of supervised machine learning by using historical data on factors such as date, weekday, month, paydays, and holidays that affect the work volume. The tool could replace today's sales forecasting tool and help staffing controllers in their work to support store managers with scheduling. A possible implementation of the method can benefit from the proposed activity list, established on the basis of different change management theories.
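A hedged sketch of the suggested supervised-learning setup, with synthetic sales data in place of the company's history; the payday rule and effect sizes are invented placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Calendar factors (weekday, month, payday) as features, sales as target.
rng = np.random.default_rng(0)
dates = pd.date_range("2018-01-01", periods=730, freq="D")
df = pd.DataFrame({
    "weekday": dates.weekday,
    "month": dates.month,
    "payday": (dates.day == 25).astype(int),  # assumed payday convention
})
df["sales"] = (100 + 15 * (df["weekday"] == 5) + 30 * df["payday"]
               + rng.normal(0, 5, len(df)))

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(df[["weekday", "month", "payday"]], df["sales"])
print(dict(zip(["weekday", "month", "payday"],
               model.feature_importances_.round(2))))
```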
APA, Harvard, Vancouver, ISO, and other styles
35

Cooper, Wyatt. "Discovering Hidden Networks Using Topic Modeling." Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/cmc_theses/1659.

Full text
Abstract:
This paper explores topic modeling via unsupervised non-negative matrix factorization. This technique is used on a variety of sources in order to extract salient topics. From these topics, hidden entity networks are discovered and visualized in a graph representation. In addition, other visualization techniques such as examining the time series of a topic and examining the top words of a topic are used for evaluation and analysis. There is a large software component to this project, and so this paper will also focus on the design decisions that were made in order to make the program developed as versatile and extensible as possible.
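A minimal sketch of topic extraction with unsupervised NMF in scikit-learn, using toy documents in place of the paper's sources; this shows the factorization step only, not the entity-network construction.

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the senate passed the budget bill after a long debate",
    "the budget debate in the senate continued overnight",
    "the team won the championship game in overtime",
    "fans celebrated after the team won the final game",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)
nmf = NMF(n_components=2, random_state=0).fit(tfidf)

terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(nmf.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:4]]
    print(f"topic {i}: {top}")  # the salient words defining each topic
```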
APA, Harvard, Vancouver, ISO, and other styles
36

Zanlongo, Sebastian A. "Multi-Robot Coordination and Scheduling for Deactivation & Decommissioning." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3897.

Full text
Abstract:
Large quantities of high-level radioactive waste were generated during WWII. This waste is being stored in facilities such as double-shell tanks in Washington, and the Waste Isolation Pilot Plant in New Mexico. Due to the dangerous nature of radioactive waste, these facilities must undergo periodic inspections to ensure that leaks are detected quickly. In this work, we provide a set of methodologies to aid in the monitoring and inspection of these hazardous facilities. This allows inspection of dangerous regions without a human operator, and for the inspection of locations where a person would not be physically able to enter. First, we describe a robot equipped with sensors which uses a modified A* path-planning algorithm to navigate in a complex environment with a tether constraint. This is then augmented with an adaptive informative path planning approach that uses the assimilated sensor data within a Gaussian Process distribution model. The model's predictive outputs are used to adaptively plan the robot's path, to quickly map and localize areas from an unknown field of interest. The work was validated in extensive simulation testing and early hardware tests. Next, we focused on how to assign tasks to a heterogeneous set of robots. Task assignment is done in a manner which allows for task-robot dependencies, prioritization of tasks, collision checking, and more realistic travel estimates among other improvements from the state-of-the-art. Simulation testing of this work shows an increase in the number of tasks which are completed ahead of a deadline. Finally, we consider the case where robots are not able to complete planned tasks fully autonomously and require operator assistance during parts of their planned trajectory. We present a sampling-based methodology for allocating operator attention across multiple robots, or across different parts of a more sophisticated robot. This allows few operators to oversee large numbers of robots, allowing for a more scalable robotic infrastructure. This work was tested in simulation for both multi-robot deployment, and high degree-of-freedom robots, and was also tested in multi-robot hardware deployments. The work here can allow robots to carry out complex tasks, autonomously or with operator assistance. Altogether, these three components provide a comprehensive approach towards robotic deployment within the deactivation and decommissioning tasks faced by the Department of Energy.
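A bare-bones A* sketch on a 4-connected grid may help situate the first contribution; the tether-length constraint and the Gaussian Process layer described above are additions the dissertation makes on top of a planner like this and are not modeled here.

```python
import heapq

def a_star(grid, start, goal):
    """Plain A* on a 4-connected grid (0 = free cell, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (cost + 1 + h(step), cost + 1, step, path + [step]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # routes around the obstacle row
```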
APA, Harvard, Vancouver, ISO, and other styles
37

Anderson, Jason Lionel. "Autonomous Satellite Operations For CubeSat Satellites." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/256.

Full text
Abstract:
In the world of educational satellites, student teams manually conduct operations daily, sending commands and collecting downlinked data. Educational satellites typically travel in a Low Earth Orbit allowing line of sight communication for approximately thirty minutes each day. This is manageable for student teams as the required manpower is minimal. The international Global Educational Network for Satellite Operations (GENSO), however, promises satellite contact upwards of sixteen hours per day by connecting earth stations all over the world through the Internet. This dramatic increase in satellite communication time is unreasonable for student teams to conduct manual operations and alternatives must be explored. This thesis first introduces a framework for developing different Artificial Intelligences to conduct autonomous satellite operations for CubeSat satellites. Three different implementations are then compared using Cal Poly's CP6 CubeSat and the University of Tokyo's XI-IV CubeSat to determine which method is most effective.
APA, Harvard, Vancouver, ISO, and other styles
38

Josimovic, Aleksandra. "AI as a Radical or Incremental Technology Tool Innovation." Thesis, KTH, Industriell Management, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230603.

Full text
Abstract:
Researchers have found that, throughout history, a common challenge for companies across different industries when it comes to leveraging and capturing value from a technology innovation is that the assessment is strongly influenced by the company's dominant business model, the established framework through which assessment takes place. The overall purpose of this study is to provide a deeper understanding of the role that a company's dominant business model plays in assessing the impact that a new technology innovation, in this case AI, will have on the company and the market in which the company operates. This thesis is partially exploratory and partially descriptive, with a qualitative and deductive nature. To fulfill this purpose, a case study research strategy was used, in which empirical data was collected from interviews with 47 of the company's top executives from different hierarchical levels and business units, from Sweden, Switzerland, the USA, Germany, and Finland. A theoretical framework was created that describes how AI as a new technology tool is perceived from Company X's perspective, either as a radical, game-changing innovation tool or as an incremental one, and that examines the role the dominant business model plays in this perception. The developed framework was founded on previous research concerning innovation and business model theories. The data collected from the company's executives was then analyzed and compared to the model. The most significant findings suggest that AI as a new technology tool is perceived as a game-changing, radical innovation tool for some areas within Company X, and that the company's dominant business model profoundly influences this perception.
APA, Harvard, Vancouver, ISO, and other styles
39

Gaumer, Madelyn. "Using Neural Networks to Classify Discrete Circular Probability Distributions." Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/hmc_theses/226.

Full text
Abstract:
Given the rise in the application of neural networks to all sorts of interesting problems, it seems natural to apply them to statistical tests. This senior thesis studies whether neural networks built to classify discrete circular probability distributions can outperform a class of well-known statistical tests for uniformity for discrete circular data that includes the Rayleigh Test [1], the Watson Test [2], and the Ajne Test [3]. Each neural network used is relatively small, with no more than 3 layers: an input layer taking in discrete data sets on a circle, a hidden layer, and an output layer outputting probability values between 0 and 1, with 0 mapping to uniform and 1 mapping to nonuniform. In evaluating performances, I compare the accuracy, type I error, and type II error of this class of statistical tests and of the neural networks built to compete with them.
[1] Jammalamadaka, S. Rao; SenGupta, A. Topics in Circular Statistics. Series on Multivariate Analysis, 5. World Scientific Publishing Co., Inc., River Edge, NJ, 2001. ISBN 981-02-3778-2.
[2] Watson, G. S. Goodness-of-fit tests on a circle. II. Biometrika 49 (1962), 57–63.
[3] Ajne, B. A simple test for uniformity of a circular distribution. Biometrika 55 (1968), 343–354.
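For reference, the Rayleigh statistic that the networks are compared against is simple to compute. A sketch with invented sample angles follows, using Z = n·R̄² and the first-order approximation e^(-Z) for the p-value.

```python
import math

def rayleigh_test(angles_rad):
    """Rayleigh test for uniformity of circular data: returns (Z, approx. p).

    A small p-value suggests a nonuniform (unimodal) set of directions.
    """
    n = len(angles_rad)
    c = sum(math.cos(a) for a in angles_rad)
    s = sum(math.sin(a) for a in angles_rad)
    r_bar = math.hypot(c, s) / n  # mean resultant length
    z = n * r_bar ** 2
    return z, math.exp(-z)        # first-order p-value approximation

# Tightly clustered directions give a large Z and a tiny p-value:
clustered = [0.1, 0.2, 0.05, 0.15, 0.12, 0.18, 0.08, 0.11]
print(rayleigh_test(clustered))
```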
APA, Harvard, Vancouver, ISO, and other styles
40

Eisenberg, Joshua Daniel. "Automatic Extraction of Narrative Structure from Long Form Text." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3912.

Full text
Abstract:
Automatic understanding of stories is a long-time goal of the artificial intelligence and natural language processing research communities. Stories literally explain the human experience. Understanding our stories promotes the understanding of both individuals and groups of people; various cultures, societies, families, organizations, governments, and corporations, to name a few. People use stories to share information. Stories are told, by narrators, in linguistic bundles of words called narratives. My work has given computers awareness of narrative structure: specifically, where the boundaries of a narrative lie in a text. This is the task of determining where a narrative begins and ends, a non-trivial task, because people rarely tell one story at a time. People don't explicitly announce when they are starting or stopping their stories: they interrupt each other; they tell stories within stories. Before my work, computers had no awareness of narrative boundaries, essentially where stories begin and end. My programs can extract narrative boundaries from novels and short stories with an F1 of 0.65. Before this, I worked on teaching computers to identify which paragraphs of text have story content, with an F1 of 0.75 (which is state of the art). Additionally, I have taught computers to identify the narrative point of view (POV; how the narrator identifies themselves) and diegesis (how involved in the story's action the narrator is), with F1 over 0.90 for both narrative characteristics. For the narrative POV, diegesis, and narrative level extractors, I ran annotation studies, with high agreement, that allowed me to teach computational models to identify structural elements of narrative through supervised machine learning. My work has given computers the ability to find where stories begin and end in raw text. This allows for further automatic analysis, like extraction of plot, intent, event causality, and event coreference. These tasks are impossible when the computer can't distinguish which stories are told in what spans of text. There are two key contributions in my work: 1) my identification of features that accurately extract elements of narrative structure and 2) the gold-standard data and reports generated from running annotation studies on identifying narrative structure.
APA, Harvard, Vancouver, ISO, and other styles
41

Leoni, Cristian. "Interpretation of Dimensionality Reduction with Supervised Proxies of User-defined Labels." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105622.

Full text
Abstract:
Research on Machine learning (ML) explainability has received a lot of focus in recent times. The interest, however, mostly focused on supervised models, while other ML fields have not had the same level of attention. Despite its usefulness in a variety of different fields, unsupervised learning explainability is still an open issue. In this paper, we present a Visual Analytics framework based on eXplainable AI (XAI) methods to support the interpretation of Dimensionality reduction methods. The framework provides the user with an interactive and iterative process to investigate and explain user-perceived patterns for a variety of DR methods by using XAI methods to explain a supervised method trained on the selected data. To evaluate the effectiveness of the proposed solution, we focus on two main aspects: the quality of the visualization and the quality of the explanation. This challenge is tackled using both quantitative and qualitative methods, and due to the lack of pre-existing test data, a new benchmark has been created. The quality of the visualization is established using a well-known survey-based methodology, while the quality of the explanation is evaluated using both case studies and a controlled experiment, where the generated explanation accuracy is evaluated on the proposed benchmark. The results show a strong capacity of our framework to generate accurate explanations, with an accuracy of 89% over the controlled experiment. The explanation generated for the two case studies yielded very similar results when compared with pre-existing, well-known literature on ground truths. Finally, the user experiment generated high quality overall scores for all assessed aspects of the visualization.
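A minimal sketch of the supervised-proxy idea: train a classifier to reproduce user-perceived labels from the original features, then read off which features explain the pattern. The iris species below stand in for user-defined labels, and plain feature importances stand in for the richer XAI methods the framework employs.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# The iris species play the role of labels a user assigns to clusters
# they perceive in a dimensionality-reduction scatterplot.
X, user_labels = load_iris(return_X_y=True)

proxy = RandomForestClassifier(n_estimators=100, random_state=0)
proxy.fit(X, user_labels)

# Features with high importance are those that "explain" the grouping.
print(proxy.feature_importances_.round(2))
```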
APA, Harvard, Vancouver, ISO, and other styles
42

Haen, Christophe. "Phronesis, a diagnosis and recovery tool for system administrators." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00950700.

Full text
Abstract:
The LHCb online system relies on a large and heterogeneous IT infrastructure made up of thousands of servers on which many different applications are running. They perform a great variety of tasks: critical ones such as data taking and secondary ones like web servers. Administering such a system and making sure it works properly represents a very important workload for the small expert-operator team. Research has been performed to try to automate (some) system administration tasks, starting in 2001 when IBM defined the so-called "self-*" objectives supposed to lead to "autonomic computing". In this context, we present a framework that makes use of artificial intelligence and machine learning to monitor and diagnose, at a low level and in a non-intrusive way, Linux-based systems and their interaction with software. Moreover, the shared-experience approach we use, coupled with an object-oriented architecture, greatly increases our learning speed and highlights relations between problems.
APA, Harvard, Vancouver, ISO, and other styles
43

Melki, Gabriella A. "Novel Support Vector Machines for Diverse Learning Paradigms." VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5630.

Full text
Abstract:
This dissertation introduces novel support vector machines (SVM) for the following traditional and non-traditional learning paradigms: online classification, multi-target regression, multiple-instance classification, and data stream classification. Three multi-target support vector regression (SVR) models are first presented. The first involves building independent, single-target SVR models for each target. The second builds an ensemble of randomly chained models using the first single-target method as a base model. The third calculates the targets' correlations and forms a maximum correlation chain, which is used to build a single chained SVR model, improving the model's prediction performance while reducing computational complexity. Under the multi-instance paradigm, a novel SVM multiple-instance formulation and an algorithm with a bag-representative selector, named Multi-Instance Representative SVM (MIRSVM), are presented. The contribution trains the SVM based on bag-level information and is able to identify instances that highly impact classification, i.e. bag representatives, for both positive and negative bags, while finding the optimal class-separation hyperplane. Unlike other multi-instance SVM methods, this approach eliminates possible class imbalance issues by allowing both positive and negative bags to have at most one representative, which constitutes the most contributing instance to the model. Due to the shortcomings of current popular SVM solvers, especially in the context of large-scale learning, the third contribution presents a novel stochastic, i.e. online, learning algorithm for solving the L1-SVM problem in the primal domain, dubbed OnLine Learning Algorithm using Worst-Violators (OLLAWV). This algorithm, unlike other stochastic methods, provides a novel stopping criterion and eliminates the need for a regularization term, using early stopping instead. Because of these characteristics, OLLAWV was proven to efficiently produce sparse models while maintaining competitive accuracy. OLLAWV's online nature and its success in traditional classification inspired the implementation of it, and of its predecessor OnLine Learning Algorithm - List 2 (OLLA-L2), in the batch data stream classification setting. Unlike other existing methods, these two algorithms were chosen because their properties are a natural remedy for the time and memory constraints that arise from the data stream problem. OLLA-L2's low space complexity deals with the memory constraints imposed by the data stream setting, while OLLAWV's fast run time, early self-stopping capability, and ability to produce sparse models meet both memory and time constraints. The preliminary results for OLLAWV showed superior performance to its predecessor, and it was chosen for the final set of experiments against current popular data stream methods. Rigorous experimental studies and statistical analyses over various metrics and datasets were conducted in order to comprehensively compare the proposed solutions against modern, widely used methods from all paradigms. The experimental studies and analyses confirm that the proposals achieve better performance and more scalable solutions than the compared methods, making them competitive in their respective fields.
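OLLAWV itself is the dissertation's contribution and is not reproduced here; for orientation only, a standard stochastic hinge-loss learner of the kind such methods are benchmarked against can be sketched with scikit-learn.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification data in place of the benchmark datasets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stochastic (online-style) training of a linear SVM via the hinge loss.
svm = SGDClassifier(loss="hinge", alpha=1e-4, random_state=0)
svm.fit(X_tr, y_tr)
print(round(svm.score(X_te, y_te), 3))  # held-out accuracy
```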
APA, Harvard, Vancouver, ISO, and other styles
44

Chekol, Melisachew Wudage. "Analyse Statique de Requête pour le Web Sémantique." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00834448.

Full text
Abstract:
Query containment is a problem that has been well studied over several decades of research. In general, it is defined as the problem of determining whether the result of one query is included in the result of another query for any dataset. It has important applications in query optimization and knowledge-base verification. The main objective of this thesis is to provide sound and complete procedures for determining containment of SPARQL queries under axioms expressed in description logics. In addition, we implement these procedures to support the theoretical results by experimentation. To date, query containment has been tested using different techniques: graph homomorphism, canonical databases, automata-theoretic techniques, and reduction to the validity problem of a logic. In this thesis, we use the latter technique to test the containment of SPARQL queries using an expressive logic called the μ-calculus. To do so, RDF graphs are encoded as transition systems, and queries and schema axioms are encoded as μ-calculus formulas. Query containment can thus be reduced to testing the validity of a logical formula. In this thesis, I identify various fragments of SPARQL (and PSPARQL) and description-logic schema languages for which containment is decidable, and provide theoretically and experimentally proven procedures for checking containment of these decidable fragments. Finally, this thesis proposes a benchmark for containment solvers, which is used to test and compare the current state of the art.
APA, Harvard, Vancouver, ISO, and other styles
45

Frideros, Micael. "Artificiell intelligens som beslutsmetod." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-23899.

Full text
Abstract:
This thesis examines artificial intelligence as a decision-making method. After introductory discussions of the overall differences between how the brain and the computer function, different development directions within artificial intelligence, and different methods for creating artificial intelligence, strategies are identified for how artificial intelligence can be used as a decision-making method depending on factors such as transparency, efficiency, and the amount of available test data. For example, some typical decision situations are identified where automated decision-making based on artificial intelligence can be assumed to have great potential, as well as situations where the method can be assumed to be less suitable. The technological development within artificial intelligence is then analyzed, both in general and within four specific application areas: autonomous vehicles, finance, medicine, and the military. Both the overall survey of the general technological development and the study of the four application areas indicate continued very rapid development in the field. For example, an analysis of the patent database Espacenet shows that the number of patents in the field is growing almost exponentially. At the same time, several technical breakthroughs have recently been made, such as the development of increasingly efficient algorithms through the use of hierarchical structures with several levels of non-linear information processing, often referred to as Deep Learning. One example is the artificial intelligence method developed by DeepMind, which has proven applicable in many different areas, from playing classic computer games such as Space Invaders and Breakout at a superhuman level to making significant efficiency improvements in the operation of Google's data centers. From a hardware perspective, too, the development is almost exponential, driven by continuous advances in manufacturing processes; significant progress has also recently been made with specialized circuits for artificial intelligence, which will likely result in even faster development of more powerful artificial intelligence in the near future. Given the effectiveness of the technology and the very rapid development in the field, some specific issues often raised in the discussion of artificial intelligence are also considered, such as its impact on the labor market and the global security balance, in order to then discuss artificial intelligence as a decision-making method in a wider perspective as well.
APA, Harvard, Vancouver, ISO, and other styles
46

Lundström, Emelie, and Martyna Nasilowska. "Revisionens digitalisering : en kvalitativ studie om hur digitaliseringen har påverkat revisionsprocessen och revisorsrollen." Thesis, Högskolan Kristianstad, Fakulteten för ekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-21093.

Full text
Abstract:
Studies indicate major changes in the audit industry as digitalization continues to develop; audit firms can no longer ignore the fact that digitalization has taken an important role in our society. Previous research shows that the audit industry is one of the many industries affected by digitalization, because many of its tasks have been replaced by new technology. The purpose of this study has been to investigate how digitalization has affected the audit industry. A qualitative method has been used in which five employees at PwC were interviewed. The empirical material collected during the interviews was analyzed through the theories presented, with the aim of answering the study's two main questions. Subsequently, a content analysis was performed in which correlations found in the collected material were compared to the theories of the study. The result of the study is that digitalization has already contributed to changes in the phases of the audit process and in the auditor's role. Furthermore, digitalization has resulted in more efficient working methods and has increased the credibility of the audit. The development of digitalization is predicted to continue and also to affect value creation in the audit industry; the collected theory and empirical material show that auditors will focus more on creating value for customers. Finally, digitalization is expected to have an impact on the demand for audits and on the qualities needed in an auditor, namely social and technical skills as well as judgment.
APA, Harvard, Vancouver, ISO, and other styles
47

Alklid, Jonathan. "Time to Strike: Intelligent Detection of Receptive Clients : Predicting a Contractual Expiration using Time Series Forecasting." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-106217.

Full text
Abstract:
In recent years, with the advances in machine learning and artificial intelligence, the demand for ever smarter automation solutions can seem insatiable. One such demand was identified by Fortnox AB, and is undoubtedly shared by many other industries dealing with contractual services: an intelligent solution capable of predicting the expiration date of a contractual period. As there was no clear evidence suggesting that machine learning models are capable of learning the patterns necessary to predict a contract's expiration, it was deemed desirable to determine the feasibility of the approach, while also investigating whether it would perform better than a commonplace rule-based solution, something Fortnox had already investigated in the past. To do this, two different solutions capable of predicting a contractual expiration were implemented. The first was a rule-based solution used as a baseline, and the second was a machine-learning-based solution featuring a decision tree classifier as well as neural network models. The results suggest that machine learning models are indeed capable of learning and recognizing patterns relevant to the problem, with an average accuracy generally on the high end. Unfortunately, due to a lack of available data for testing and training, the results were too inconclusive to make a reliable assessment of overall accuracy beyond the learning capability. The conclusion of the study is that machine-learning-based solutions show promising results, with the caveat that the results should be seen as indicative of overall viability rather than representative of actual performance.
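A hedged sketch of the classification framing, with invented features and an invented label rule standing in for Fortnox's data; the point is the shape of the task, not the actual model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder features: account age and a usage trend; the label rule
# below is fabricated purely so the tree has something learnable.
rng = np.random.default_rng(0)
n = 1000
months_active = rng.integers(1, 48, n)
activity_trend = rng.normal(0, 1, n)  # falling usage -> negative values
expires_soon = ((months_active >= 34) & (activity_trend < 0)).astype(int)

X = np.column_stack([months_active, activity_trend])
X_tr, X_te, y_tr, y_te = train_test_split(X, expires_soon, random_state=0)

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 3))  # held-out accuracy
```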
APA, Harvard, Vancouver, ISO, and other styles
48

Kvarnström, Jonas. "TALplanner and other extensions to Temporal Action Logic /." Linköping : Dept. of Computer and Information Science, Univ, 2005. http://www.bibl.liu.se/liupubl/disp/disp2005/tek937s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Anderson, James Howard. "A Spatially Explicit Agent Based Model of Muscovy Duck Home Range Behavior." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/3950.

Full text
Abstract:
Research in GIScience has identified agent-based simulation methodologies as effective in the study of complex adaptive spatial systems (CASS). CASS are characterized by the emergent nature of their spatial expressions and by the changing relationships between their constituent variables and how those variables act on the system's spatial expression over time. Here, emergence refers to a CASS property where small-scale, individual action results in macroscopic or system-level patterns over time. This research develops and executes a spatially explicit agent-based model of Muscovy Duck home range behavior. For this research, Muscovy Duck home range behavior is regarded as a complex adaptive spatial system that can be explained and studied with simulation techniques. The general animal movement model framework presented in this research explicitly considers spatial characteristics of the landscape in its formulation, and provides for spatial cognition in the behavior of its agents. Specification of the model followed a three-phase framework: behavioral data collection in the field, construction of a model substrate depicting land cover features found in the study area, and the informing of model agents with products derived from field observations. This framework was applied in the construction of a spatially explicit agent-based model (SE-ABM) of Muscovy Duck home range behavior. The model was run 30 times to simulate point location distributions of an individual duck's daily activity. These simulated datasets were collected, and home ranges were constructed using Characteristic Hull Polygon (CHP) and Minimum Convex Polygon (MCP) techniques. Descriptive statistics of the CHP and MCP polygons were calculated to characterize the home ranges produced and establish internal model validity. As a theoretical framework for the construction of animal movement SE-ABMs, and as a demonstration of the potential of geosimulation methodologies in support of animal home range estimator validation, the model represents an original contribution to the literature. Implications of the model's utility as a validation tool for home range extents derived from GPS or radio telemetry positioning data are discussed.
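The MCP estimator mentioned above reduces to a convex hull over the simulated point locations. A sketch with random placeholder points follows; note that for 2-D input SciPy reports the hull's area in the `volume` attribute.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Random placeholders for one simulated day of duck point locations (metres).
rng = np.random.default_rng(0)
points = rng.normal(loc=[0.0, 0.0], scale=25.0, size=(200, 2))

hull = ConvexHull(points)
print(f"MCP area: {hull.volume:.1f} m^2")        # 'volume' is area in 2-D
print(f"boundary points: {len(hull.vertices)}")  # vertices of the polygon
```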
APA, Harvard, Vancouver, ISO, and other styles
50

Lienemann, Matthew A. "Automated Multi-Modal Search and Rescue using Boosted Histogram of Oriented Gradients." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1507.

Full text
Abstract:
Unmanned Aerial Vehicles (UAVs) provide a platform for many automated tasks, and with ever-increasing advances in computing, these tasks can become more complex. The use of UAVs is expanded in this thesis with the goal of Search and Rescue (SAR), where a UAV can assist first responders in searching for a lost person and relay possible search areas back to SAR teams. To identify a person from an aerial perspective, low-level Histogram of Oriented Gradients (HOG) feature descriptors are used over a segmented region, provided from thermal data, to increase classification speed. This thesis also introduces a dataset to support a Bird's-Eye-View (BEV) perspective and tests the viability of low-level HOG feature descriptors on this dataset. The low-level feature descriptors are known as Boosted Histogram of Oriented Gradients (BHOG) features, which discretize gradients over varying sized cells and blocks that are trained with a cascaded Gentle AdaBoost classifier using our compiled BEV dataset. The classification is supported by multiple sensing modes, with color and thermal videos, to increase classification speed. The thermal video is segmented to indicate any Region of Interest (ROI), which is mapped to the color video where classification occurs. The ROI decreases the classification time needed on the aerial platform by eliminating a per-frame sliding window. Testing reveals that with the use of only color data and a classifier trained for a profile of a person, there is an average recall of 78%, while thermal detection results in an average recall of 76%, with a speedup factor of 2 on video of 240x320 resolution. The BEV testing reveals that higher resolutions are favored, with a recall rate of 71% using BHOG features and 92% using Haar features. In the lower-resolution BEV testing, the recall rates are 42% and 55% for BHOG and Haar features, respectively.
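A sketch of plain HOG descriptor extraction with scikit-image; the cell and block sizes shown are the kind of parameters the BHOG training varies, and this is the unboosted descriptor, not the thesis's full boosted cascade.

```python
from skimage import color, data
from skimage.feature import hog

# A stock test image stands in for an aerial color frame.
image = color.rgb2gray(data.astronaut())
features = hog(image,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm="L2-Hys")
print(features.shape)  # one long gradient-orientation descriptor
```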
APA, Harvard, Vancouver, ISO, and other styles
