Doctoral dissertations on the topic "Addressing Machines"

Click this link to see other types of publications on this topic: Addressing Machines.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the top 16 doctoral dissertations for your research on the topic "Addressing Machines".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in ".pdf" format and read the abstract of the work online, if the relevant details are available in the metadata.

Browse doctoral dissertations from many different fields of study and compile an appropriate bibliography.

1

Munnich, Nicolas. "Operational and categorical models of PCF : addressing machines and distributing semirings". Electronic Thesis or Diss., Paris 13, 2024. http://www.theses.fr/2024PA131015.

Full text source
Abstract:
Despite being introduced over 60 years ago, PCF remains of interest. Though the quest for a satisfactory fully abstract model of PCF was resolved around the turn of the millennium, new models of PCF still frequently appear in the literature, investigating unexplored avenues or using PCF as a lens or tool to investigate some other mathematical construct. In this thesis, we build upon our knowledge of models of PCF in two distinct ways: Constructing a brand new model, and building upon existing models. Addressing Machines are a relatively new type of abstract machine taking inspiration from Turing Machines. These machines have been previously shown to model the full untyped λ-calculus. We build upon these machines to construct Extended Addressing Machines (EAMs) and endow them with a type system. We then show that these machines can be used to obtain a new and distinct fully abstract model of PCF: We show that the machines faithfully simulate PCF in such a way that a PCF term terminates in a numeral exactly when the corresponding Extended Addressing Machine terminates in the same numeral. Likewise, we show that every typed Extended Addressing Machine can be transformed into a PCF program with the same observational behaviour. From these two results, it follows that the model of PCF obtained by quotienting typable EAMs by a suitable logical relation is fully abstract. There exist a plethora of sound categorical models of PCF, due to its close relationship with the λ-calculus. We consider two similar models (which are also models of Linear Logic) that are based on semirings: Weighted models, using semirings to quantify some internal value, and Multiplicity models, using semirings to linearly model functions (model the exponential !). We investigate the intersection between these two models by investigating the conditions under which two monads derived from specific semirings distribute. We discover that whether or not a semiring has an idempotent sum makes a large difference in its ability to distribute. Our investigation leads us to discover the notion of an unnatural distribution, which forms a monad on a Kleisli category. Finally, we present precise conditions under which a particular distribution can form between two semirings.
APA, Harvard, Vancouver, ISO, and other styles
2

RICCI, FRANCESCO. "Effective Product Lifecycle Management: the role of uncertainties in addressing design, manufacturing and verification processes". Doctoral thesis, Politecnico di Torino, 2012. http://hdl.handle.net/11583/2501694.

Full text source
Abstract:
The aim of this thesis is to use the concept of uncertainty to improve the effectiveness of Product Lifecycle Management (PLM) systems. Uncertainty is a rather new concept in PLM that has been introduced with the new technical language, drawn up by ISO, to manage Geometrical Product Specification and Verification (GPS) in the challenging environment of modern manufacturing. GPS standards concern design and verification environments in particular, and aim to guarantee consistency of information through a technical language which defines both specification and verification on sound logical and mathematical bases. In this context, uncertainty is introduced as the instrument that measures consistency: between the designer's intentions (specifications) and the manufactured artefact (as it is observed through measurement), as well as between the measurand definition provided by designers (the specification again) and that used by metrologists. The implications of such an approach have been analyzed through a case study dealing with flatness tolerance and paying particular attention to verification processes based on Coordinate Measuring Machines (CMM). A Design of Experiments (DoE) has been used, and the results have been analyzed and used to build a regression model that allows generalization within the experiment's validity domain. Then, using Category Theory, a categorical data model has been defined which represents the operation-based structure of the GPS language and uses the flatness research results in order to design software able to concretize the GPS vision of geometrical product specification management. This software is able to translate specification requirements into verification instructions, estimate the uncertainty introduced by simplified verification operations, and evaluate the costs and risks of verification operations. It provides an important tool for designers, as it allows a responsible definition of specifications (the designer can simulate the interpretation of specifications and get an idea of the costs related to their verification), and for metrologists, as it can be a guide for designing GPS-compliant verification missions or handling the usual verification procedures according to the GPS standards. However, during the study the realization matured that this approach, even if correct and valuable, was not the most suitable to fully exploit the real potential of CMMs. Therefore, alongside the GPS-oriented work, an adaptive sampling strategy based on Kriging modelling has been proposed, with very encouraging results.
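To make the closing idea concrete, here is a minimal sketch (not the thesis' implementation; the grid, kernel, and surface function are invented) of Kriging-based adaptive sampling for flatness verification: fit a Gaussian process to the points probed so far and choose the next probe location where the predictive uncertainty is largest.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Candidate probe locations on the nominal plane (an x-y grid, in mm).
grid = np.array([[x, y] for x in np.linspace(0, 100, 25)
                        for y in np.linspace(0, 100, 25)])

def true_surface(xy):
    # Stand-in for the real part: a gently warped plane.
    return 0.002 * np.sin(xy[:, 0] / 15.0) + 0.001 * (xy[:, 1] / 100.0) ** 2

measured = list(rng.choice(len(grid), size=10, replace=False))
for _ in range(20):                       # adaptive sampling loop
    X = grid[measured]
    z = true_surface(X) + rng.normal(0.0, 1e-4, size=len(X))  # noisy CMM readings
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0), alpha=1e-8)
    gp.fit(X, z)
    _, std = gp.predict(grid, return_std=True)
    std[measured] = -np.inf               # never re-probe an already measured point
    measured.append(int(np.argmax(std)))  # probe where the Kriging model is least sure

z_hat = gp.predict(grid)
# Crude flatness indicator (range of the predicted surface; a best-fit plane
# would normally be subtracted first).
print("estimated deviation range:", z_hat.max() - z_hat.min())
```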
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Fulton. "Addressing two issues in machine learning : interpretability and dataset shift". Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/122870.

Full text source
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 71-77).
In this thesis, I create solutions to two problems. In the first, I address the problem that many machine learning models are not interpretable, by creating a new form of classifier, called the Falling Rule List. This is a decision list classifier where the predicted probabilities are decreasing down the list. Experiments show that the gain in interpretability need not be accompanied by a large sacrifice in accuracy on real world datasets. I then briefly discuss possible extensions that allow one to directly optimize rank statistics over rule lists, and handle ordinal data. In the second, I address a shortcoming of a popular approach to handling covariate shift, in which the training distribution and that for which predictions need to be made have different covariate distributions. In particular, the existing importance weighting approach to handling covariate shift suffers from high variance if the two covariate distributions are very different. I develop a dimension reduction procedure that reduces this variance, at the expense of increased bias. Experiments show that this tradeoff can be worthwhile in some situations.
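As an illustration of the first contribution, the toy sketch below shows how a falling rule list is applied at prediction time: rules are checked in order and the attached probabilities decrease down the list, so the highest-risk cases match the top rules. The rules and numbers are invented for illustration, not taken from the thesis.

```python
# An ordered list of IF-THEN rules with monotonically decreasing probabilities.
falling_rule_list = [
    (lambda x: x["bmi"] >= 40 and x["age"] >= 60, 0.81),
    (lambda x: x["bmi"] >= 40,                    0.45),
    (lambda x: x["age"] >= 60,                    0.28),
    (lambda x: True,                              0.07),  # default rule
]

def predict_risk(x):
    # Return the probability attached to the first rule the case satisfies.
    for condition, prob in falling_rule_list:
        if condition(x):
            return prob
    raise ValueError("the default rule should always match")

print(predict_risk({"bmi": 42, "age": 65}))  # 0.81
print(predict_risk({"bmi": 23, "age": 30}))  # 0.07
```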
by Fulton Wang.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
4

Vendra, Soujanya. "Addressing corner detection issues for machine vision based UAV aerial refueling". Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4551.

Full text source
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains xi, 121 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 90-95).
APA, Harvard, Vancouver, ISO, and other styles
5

Heffernan, Rhys. "Addressing One-Dimensional Protein Structure Prediction Problems with Machine Learning Techniques". Thesis, Griffith University, 2018. http://hdl.handle.net/10072/381401.

Full text source
Abstract:
In this thesis we tackle the protein structure prediction subproblems listed previously, by applying state-of-the-art deep learning techniques. The work in chapter 2 presents the method SPIDER. In this method, state-of-the-art deep learning is applied iteratively to the task of predicting backbone torsion and dihedral angles, using evolutionarily derived sequence profiles and physicochemical properties of amino acid residues. This work is the first method for the sequence-based prediction of the θ and τ angles. Chapter 3 presents the method SPIDER2. This method takes the state-of-the-art iterative deep learning applied in SPIDER and extends it to the prediction of three-state secondary structure, solvent accessible surface area, and the φ, ψ, θ, and τ angles, and achieves the best reported prediction accuracies for all of them (at the date of publication). Chapter 4 further builds on the work done in the previous chapters, and adds the prediction of half-sphere exposure (both Cα- and Cβ-based) and contact numbers to SPIDER2, in a method called SPIDER2-HSE. In Chapter 5, Long Short-Term Memory Bidirectional Recurrent Neural Networks (LSTM-BRNNs) were applied to the prediction of three-state secondary structure, solvent accessible surface area, and the φ, ψ, θ, and τ angles, as well as half-sphere exposure and contact numbers. Methods previously used for these predictions (including SPIDER2) were typically window based. That is to say that the input data made available to the model for a given residue comprises information for only that residue and a number of residues on either side in the sequence (in the range of 10-20 residues on each side). The use of LSTM-BRNNs in this method allows SPIDER3 to better learn both long- and short-range interactions within proteins. This advancement again led to the best reported accuracies for all predicted structural properties. In Chapter 6, the LSTM-BRNN model used in SPIDER3 is applied to the prediction of the same structural properties, plus the prediction of eight-state secondary structure, using only single-sequence inputs. That is, structural properties were predicted without using any evolutionary information. This provides not only the best reported single-sequence secondary structure and solvent accessible surface area predictions, but also the first reported method for the single-sequence-based prediction of half-sphere exposure, contact numbers, and the φ, ψ, θ, and τ angles. This study is important as most proteins have few homologous sequences and their evolutionary profiles are inaccurate and time-consuming to calculate. This single-sequence-based technique allows for fast genome-scale screening analysis of protein one-dimensional structural properties.
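The sketch below illustrates the kind of sequence-to-sequence architecture described for Chapter 5: a bidirectional LSTM that reads the whole residue sequence, so each per-residue prediction can draw on both short- and long-range context rather than a fixed window. The layer sizes and feature/output counts are assumptions, not the published SPIDER3 configuration.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, n_features=57, hidden=256, n_outputs=11):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_outputs)  # per-residue structural outputs

    def forward(self, x):              # x: (batch, sequence_length, n_features)
        h, _ = self.lstm(x)            # h: (batch, sequence_length, 2 * hidden)
        return self.head(h)            # (batch, sequence_length, n_outputs)

model = BiLSTMTagger()
protein = torch.randn(1, 120, 57)      # one protein of 120 residues (dummy features)
print(model(protein).shape)            # torch.Size([1, 120, 11])
```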
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Eng & Built Env
Science, Environment, Engineering and Technology
Full Text
APA, Harvard, Vancouver, ISO, and other styles
6

RAGONESI, RUGGERO. "Addressing Dataset Bias in Deep Neural Networks". Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1069001.

Full text source
Abstract:
Deep Learning has achieved tremendous success in recent years in several areas such as image classification, text translation, and autonomous agents, to name a few. Deep Neural Networks are able to learn non-linear features in a data-driven fashion from complex, large-scale datasets in order to solve tasks. However, some fundamental issues remain to be fixed: the kind of data that is provided to the neural network directly influences its capability to generalize. This is especially true when training and test data come from different distributions (the so-called domain gap or domain shift problem): in this case, the neural network may learn a data representation that is representative of the training data but not of the test data, thus performing poorly when deployed in actual scenarios. The domain gap problem is addressed by so-called Domain Adaptation, for which a large literature has recently been developed. In this thesis, we first present a novel method to perform Unsupervised Domain Adaptation. Starting from the typical scenario in which we have a labeled source distribution and an unlabeled target distribution, we pursue a pseudo-labeling approach to assign labels to the target data, and then, in an iterative way, we refine them using Generative Adversarial Networks. Subsequently, we face the debiasing problem. Simply put, bias occurs when there are factors in the data which are spuriously correlated with the task label, e.g., the background, which might be a strong clue for guessing what class is depicted in an image. When this happens, neural networks may erroneously learn such spurious correlations as predictive factors, and may therefore fail when deployed in different scenarios. Learning a debiased model can be done using supervision regarding the type of bias affecting the data, or without any annotation about what the spurious correlations are. We tackle the problem of supervised debiasing, where a ground-truth annotation for the bias is given, under the lens of information theory. We design a neural network architecture that learns to solve the task while at the same time achieving statistical independence of the data embedding with respect to the bias label. We finally address the unsupervised debiasing problem, in which no bias annotation is available. We address this challenging problem with a two-stage approach: we first coarsely split the training dataset into two subsets, samples that exhibit spurious correlations and those that do not. Second, we learn a feature representation that can accommodate both subsets and an augmented version of them.
APA, Harvard, Vancouver, ISO, and other styles
7

Al-Shahib, Ali Walid. "Addressing the core challenges in predicting protein function from sequence using machine learning". Thesis, University of Glasgow, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425167.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Rendleman, Michael. "Machine learning with the cancer genome atlas head and neck squamous cell carcinoma dataset: improving usability by addressing inconsistency, sparsity, and high-dimensionality". Thesis, University of Iowa, 2019. https://ir.uiowa.edu/etd/6841.

Full text source
Abstract:
In recent years, more data is becoming available for historical oncology case analysis. A large dataset describing over 500 patient cases of Head and Neck Squamous Cell Carcinoma is a potential goldmine for finding ways to improve oncological decision support. Unfortunately, the best approaches for finding useful inferences are unknown. With so much information, from DNA and RNA sequencing to clinical records, we must use computational learning to find associations and biomarkers. The available data is sparse, inconsistent, and, for some datatypes, very large. We processed clinical records with an expert oncologist and used complex modeling methods to substitute (impute) data for cases missing treatment information. We used machine learning algorithms to see whether imputed data is useful for predicting patient survival. We saw no difference in the ability to predict patient survival with the imputed data, though imputed treatment variables were more important to survival models. To deal with the large number of features in RNA expression data, we used two approaches: using all the data with High Performance Computers, and transforming the data into a smaller set of features (sparse principal components, or SPCs). We compared the performance of survival models with both datasets and saw no differences. However, the SPC models trained more quickly while also allowing us to pinpoint the biological processes each SPC is involved in, informing future biomarker discovery. We also examined ten processed molecular features for survival prediction ability and found some predictive power, though not enough to be clinically useful.
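The following is a simplified sketch of the sparse-principal-component route described above, with random data and a binary "survived" label standing in for the actual survival outcome: compress the RNA-expression matrix into a small number of sparse components, then train a model on the component scores.

```python
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
expression = rng.normal(size=(400, 1000))    # 400 cases x 1000 genes (dummy data)
survived = rng.integers(0, 2, size=400)      # binary stand-in for the survival outcome

spca = SparsePCA(n_components=10, alpha=1.0, random_state=0)
scores = spca.fit_transform(expression)      # 400 x 10 sparse component scores

# Each component loads on a small subset of genes, so it can be traced back to
# the biological processes those genes take part in.
print("non-zero genes in component 0:", np.count_nonzero(spca.components_[0]))

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(model, scores, survived, cv=5).mean())
```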
APA, Harvard, Vancouver, ISO, and other styles
9

Mollen, Anne. "Addressing the ghost in the machine or “Is engagement a sustainable intermediate variable between the website drivers of consumer experience and consumers’ attitudinal and behavioural outputs?”". Thesis, Cranfield University, 2007. http://hdl.handle.net/1826/2047.

Full text source
Abstract:
Background and Purpose: In response to the cost transparency of the internet, which has facilitated consumer switching behaviour, marketing practitioners have used the umbrella term of engagement to describe the experiential response to mechanisms by which consumers can be enticed and co-opted into behaviour presumed to be conducive to purchase or future purchase. It is a concept that, until recently, has been largely circumvented by the marketing academic world. Therefore, the purpose of this systematic review is to generate a workable definition of consumer brand engagement online, predicated on a research model that builds on extant academic and practitioner evidence, which by virtue of its construction: 1. Shifts the locus of theoretical attention from a mechanistic/structuralist view of online consumer experience, increasingly recognized by the academic world as insufficient in its explanatory power, to a more unitary approach that aligns behaviourist causality with ‘experiential intensity’; 2. Establishes a common discourse, thereby reconciling academic and practitioner perspectives; 3. Provides the theoretical base for preliminary work on experiential metrics, and creates a platform for future research. Methodology: The review uses ‘realist synthesis’ to refine theory from a broad range of heterogeneous sources. The chapter on methodology provides a clear audit trail showing how decisions were made, evidence scrutinised and evaluated, and findings synthesised. Findings: The review provides support for the model and the definition of online consumer brand engagement, as well as some steps towards operationalising the construct. The limitations of the methodology and learning points are discussed, as well as the contribution to future research and practice.
APA, Harvard, Vancouver, ISO, and other styles
10

Konishcheva, Kseniia. "Novel strategies for identifying and addressing mental health and learning disorders in school-age children". Electronic Thesis or Diss., Université Paris Cité, 2023. http://www.theses.fr/2023UNIP7083.

Full text source
Abstract:
The prevalence of mental health and learning disorders in school-age children is a growing concern. Yet a significant delay exists between the onset of symptoms and referral for intervention, contributing to long-term challenges for affected children. The current mental health system is fragmented, with teachers possessing valuable insights into their students' well-being but limited knowledge of mental health, while clinicians often only encounter more severe cases. Inconsistent implementation of existing screening programs in schools, mainly due to resource constraints, suggests the need for more effective solutions. This thesis presents two novel approaches for improving the mental health and learning outcomes of children and adolescents. The first approach uses data-driven methods, leveraging the Healthy Brain Network dataset, which contains item-level responses from over 50 assessments, consensus diagnoses, and cognitive task scores from thousands of children. Using machine learning techniques, item subsets were identified to predict common mental health and learning disability diagnoses. The approach demonstrated promising performance, offering potential utility for both mental health and learning disability detection. Furthermore, our approach provides an easy-to-use starting point for researchers to apply our method to new datasets. The second approach is a framework aimed at improving the mental health and learning outcomes of children by addressing the challenges faced by teachers in heterogeneous classrooms. This framework enables teachers to create tailored teaching strategies based on the identified needs of individual students and, when necessary, suggests referral to clinical care. The first step of the framework is an instrument designed to assess each student's well-being and learning profile. FACETS is a 60-item scale built through partnerships with teachers and clinicians. Teacher acceptance and the psychometric properties of FACETS are investigated. A preliminary pilot study demonstrated overall acceptance of FACETS among teachers. In conclusion, this thesis presents a framework to bridge the gap in the detection and support of mental health and learning disorders in school-age children. Future studies will further validate and refine our tools, offering more timely and effective interventions to improve the well-being and learning outcomes of children in diverse educational settings.
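One plausible way to realize the item-subset idea is sketched below, with invented data and a method assumption (L1-penalised logistic regression rather than whatever the thesis actually used): items whose coefficients remain non-zero form the candidate short screening form.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_children, n_items = 1000, 200
items = rng.integers(0, 4, size=(n_children, n_items))      # Likert-style responses
# Synthetic diagnosis driven by a handful of items (items 3, 17, and 42).
logit = items[:, [3, 17, 42]].sum(axis=1) - 5
diagnosis = rng.random(n_children) < 1 / (1 + np.exp(-logit))

# The L1 penalty shrinks most item coefficients to exactly zero.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
model.fit(items, diagnosis)
selected = np.flatnonzero(model.coef_[0])
print("items retained for the short form:", selected)
```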
APA, Harvard, Vancouver, ISO, and other styles
11

Jiang, Yingwei, and 江盈緯. "A Study on the Addressing Issue in 3GPP Machine Type Communications". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/48158187681165993681.

Full text source
Abstract:
Master's thesis
National Ilan University
Master's Program, Institute of Computer Science and Information Engineering
Academic year: 100 (ROC calendar, 2011–2012)
Machine-to-Machine (M2M) communications, also known as Machine Type Communications (MTC), are widely discussed in the 3rd Generation Partnership Project (3GPP). One of the key issues for MTC is IP addressing. Internet Protocol version 6 (IPv6) is the preferred choice for MTC; however, Internet Protocol version 4 (IPv4) addresses are currently used in Internet environments. Since MTC requires a large number of network addresses to identify the trillions of M2M devices, Network Address Translation (NAT) is deployed when IPv4 is used. Thus, the NAT traversal problem has to be resolved for MTC in 3GPP. In this thesis we first introduce the call flows of the IPv6 addressing solution and the three NAT solutions defined in 3GPP TR 23.888: the NAT Traversal through Tunneling (NATTT) solution, the Managed NAT solution, and the Non-managed NAT solution. To improve the performance of the NAT solutions, this thesis proposes an Enhanced Port Forwarding (EPF) mechanism. In the EPF mechanism, the Packet Data Network Gateway (P-GW) assigns an IP address to the MTC device and sets up the corresponding mapping in the NAT simultaneously; the EPF mechanism therefore reduces the binding-setup latency and the signaling cost. Finally, this thesis compares and evaluates the performance of the IPv6 and NAT solutions in terms of signaling cost and packet delivery.
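A purely illustrative toy simulation of the EPF idea follows (the addresses, ports, and data structures are assumptions, not the 3GPP call flow): the P-GW assigns the private address and installs the NAT port-forwarding entry in one step, so no separate binding setup is needed before a downlink packet can be delivered.

```python
import itertools

nat_table = {}                      # (public_ip, public_port) -> (private_ip, device_port)
public_ip = "203.0.113.10"
port_pool = itertools.count(40000)
address_pool = (f"10.0.0.{i}" for i in itertools.count(2))

def pgw_attach(device_id, device_port=5683):
    """Assign a private address and set up the NAT mapping in the same step."""
    private_ip = next(address_pool)
    public_port = next(port_pool)
    nat_table[(public_ip, public_port)] = (private_ip, device_port)
    return {"device": device_id, "private_ip": private_ip,
            "reachable_at": (public_ip, public_port)}

def deliver_downlink(public_addr):
    """An MTC server sends to the public address; the NAT forwards it inward."""
    return nat_table[public_addr]

binding = pgw_attach("mtc-device-001")
print(binding)
print(deliver_downlink(binding["reachable_at"]))
```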
APA, Harvard, Vancouver, ISO, and other styles
12

Chaw, Shaw Yi. "Addressing the brittleness of knowledge-based question-answering". Thesis, 2009. http://hdl.handle.net/2152/ETD-UT-2009-12-580.

Full text source
Abstract:
Knowledge base systems are brittle when the users of the knowledge base are unfamiliar with its content and structure. Querying a knowledge base requires users to state their questions in precise and complete formal representations that relate the facts in the question with relevant terms and relations in the underlying knowledge base. This requirement places a heavy burden on users to become deeply familiar with the contents of the knowledge base and prevents novice users from effectively using the knowledge base for problem solving. As a result, the utility of knowledge base systems is often restricted to the developers themselves. The goal of this work is to help users, who may possess little domain expertise, to use unfamiliar knowledge bases for problem solving. Our thesis is that the difficulty in using unfamiliar knowledge bases can be addressed by an approach that funnels natural questions, expressed in English, into formal representations appropriate for automated reasoning. The approach uses a simplified English controlled language, a domain-neutral ontology, a set of mechanisms to handle a handful of well-known question types, and a software component, called the Question Mediator, to identify relevant information in the knowledge base for problem solving. With our approach, a knowledge base user can work with a variety of unfamiliar knowledge bases by posing questions in simplified English to retrieve the information relevant for problem solving. We studied the thesis in the context of a system called ASKME. We evaluated ASKME on the task of answering exam questions for college-level biology, chemistry, and physics. The evaluation consists of successive experiments to test whether ASKME can help novice users employ unfamiliar knowledge bases for problem solving. The initial experiment measures ASKME's level of performance under ideal conditions, where the knowledge base is built and used by the same knowledge engineers. Subsequent experiments measure ASKME's level of performance under increasingly realistic conditions. In the final experiment, we measure ASKME's level of performance under conditions where the knowledge base is independently built by subject matter experts and the users of the knowledge base are a group of novices who are unfamiliar with the knowledge base. Results from the evaluation show that ASKME works well on different knowledge bases and answers a broad range of questions posed by novice users in a variety of domains.
text
APA, Harvard, Vancouver, ISO, and other styles
13

Montgomery, Lloyd Robert Frank. "Escalation prediction using feature engineering: addressing support ticket escalations within IBM’s ecosystem". Thesis, 2017. https://dspace.library.uvic.ca//handle/1828/8478.

Full text source
Abstract:
Large software organizations handle many customer support issues every day in the form of bug reports, feature requests, and general misunderstandings as submitted by customers. Strategies to gather, analyze, and negotiate requirements are complemented by efforts to manage customer input after products have been deployed. For the latter, support tickets are key in allowing customers to submit their issues, bug reports, and feature requests. Whenever insufficient attention is given to support issues, there is a chance customers will escalate their issues, and escalation to management is time-consuming and expensive, especially for large organizations managing hundreds of customers and thousands of support tickets. This thesis provides a step towards simplifying the job for support analysts and managers, particularly in predicting the risk of escalating support tickets. In a field study at our large industrial partner, IBM, a design science methodology was employed to characterize the support process and data available to IBM analysts in managing escalations. Through iterative cycles of design and evaluation, support analysts’ expert knowledge about their customers was translated into features of a support ticket model to be implemented into a Machine Learning model to predict support ticket escalations. The Machine Learning model was trained and evaluated on over 2.5 million support tickets and 10,000 escalations, obtaining a recall of 79.9% and an 80.8% reduction in the workload for support analysts looking to identify support tickets at risk of escalation. Further on-site evaluations were conducted through a tool developed to implement the Machine Learning techniques in industry, deployed during weekly support-ticket-management meetings. The features developed in the Support Ticket Model are designed to serve as a starting place for organizations interested in implementing the model to predict support ticket escalations, and for future researchers to build on to advance research in Escalation Prediction.
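A minimal sketch of the modelling step described above, with invented ticket features and synthetic labels standing in for the industrial data: train a classifier on engineered support-ticket features and report recall on held-out tickets.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
tickets = pd.DataFrame({
    "days_open": rng.exponential(10, n),
    "customer_escalations_past_year": rng.poisson(1.0, n),
    "severity": rng.integers(1, 5, n),
    "days_since_last_contact": rng.exponential(5, n),
})
# Synthetic ground truth: long-idle, severe tickets escalate more often.
p = 1 / (1 + np.exp(-(0.1 * tickets["days_since_last_contact"]
                      + 0.5 * tickets["severity"] - 4)))
escalated = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(tickets, escalated, test_size=0.3,
                                          random_state=0, stratify=escalated)
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0)
clf.fit(X_tr, y_tr)
print("recall on escalated tickets:", recall_score(y_te, clf.predict(X_te)))
```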
Graduate
APA, Harvard, Vancouver, ISO, and other styles
14

(9873176), Quang Dao. "Addressing the Recommender System Data Solicitation Problem with Engaging User Interfaces". Thesis, 2020.

Find the full text source
Abstract:

With autonomous systems bringing greater demand for user data, some applications also present an opportunity to solicit that data from users. To exploit this, a user interface needs to be designed to coax the user into achieving system goals, such as data solicitation. One approach is to design a system that leverages the already present tendency for people to interact socially with technology. In this thesis, I argue that such an approach would involve incorporating interaction concepts that facilitate engagement into the design of recommender system interfaces, improving the likelihood of obtaining data from users. To support this claim, I synthesize past work on human-computer interaction and recommender systems to derive a framework to guide scientific investigations into interface design concepts that will address the data solicitation problem.

APA, Harvard, Vancouver, ISO, and other styles
15

Bako, Abdulaziz Tijjani. "The Role of Social Workers in Addressing Patients' Unmet Social Needs in the Primary Care Setting". Diss., 2021. http://hdl.handle.net/1805/25986.

Full text source
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Unmet social needs pose a significant risk to both patients and healthcare organizations by increasing morbidity, mortality, utilization, and costs. Health care delivery organizations are increasingly employing social workers to address social needs, given the growing number of policies mandating them to identify and address their patients' social needs. However, social workers largely document their activities using unstructured or semi-structured textual descriptions, which may not provide information that is useful for modeling, decision-making, and evaluation. Therefore, without the ability to convert these social work documentations into usable information, the utility of these textual descriptions may be limited. While manual reviews are costly, time-consuming, and require technical skills, text mining algorithms such as natural language processing (NLP) and machine learning (ML) offer cheap and scalable solutions for extracting meaningful information from large text data. Moreover, the ability to extract information on social needs and social work interventions from free-text data within electronic health records (EHR) offers the opportunity to comprehensively evaluate the outcomes of specific social work interventions. However, the use of text mining tools to convert these text data into usable information has not been well explored, and only a few studies have sought to comprehensively investigate the outcomes of specific social work interventions in a safety-net population. To investigate the role of social workers in addressing patients' social needs, this dissertation: 1) utilizes NLP to extract and categorize the social needs that lead to referral to social workers, and market basket analysis (MBA) to investigate the co-occurrence of these social needs; 2) applies NLP, ML, and deep learning techniques to extract and categorize the interventions instituted by social workers to address patients' social needs; and 3) measures the effects of receiving a specific type of social work intervention on healthcare utilization outcomes.
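The snippet below illustrates the market-basket component of aim 1 on invented data: each patient's set of documented social needs is treated as a "basket", and support and lift are computed for pairs of needs to surface which needs tend to co-occur.

```python
from itertools import combinations
from collections import Counter

# Invented example baskets: one set of documented needs per patient.
baskets = [
    {"housing", "food insecurity"},
    {"housing", "transportation"},
    {"food insecurity", "transportation"},
    {"housing", "food insecurity", "utilities"},
    {"transportation"},
]

n = len(baskets)
item_counts = Counter(need for b in baskets for need in b)
pair_counts = Counter(frozenset(p) for b in baskets for p in combinations(sorted(b), 2))

for pair, count in pair_counts.most_common(3):
    a, b = tuple(pair)
    support = count / n
    lift = support / ((item_counts[a] / n) * (item_counts[b] / n))
    print(f"{a} & {b}: support={support:.2f}, lift={lift:.2f}")
```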
APA, Harvard, Vancouver, ISO, and other styles
16

"Addressing the Variable Selection Bias and Local Optimum Limitations of Longitudinal Recursive Partitioning with Time-Efficient Approximations". Doctoral diss., 2019. http://hdl.handle.net/2286/R.I.54792.

Full text source
Abstract:
Longitudinal recursive partitioning (LRP) is a tree-based method for longitudinal data. It takes a sample of individuals that were each measured repeatedly across time, and it splits them based on a set of covariates such that individuals with similar trajectories become grouped together into nodes. LRP does this by fitting a mixed-effects model to each node every time that it becomes partitioned and extracting the deviance, which is the measure of node purity. LRP is implemented using the classification and regression tree algorithm, which suffers from a variable selection bias and does not guarantee reaching a global optimum. Additionally, fitting mixed-effects models to each potential split only to extract the deviance and discard the rest of the information is a computationally intensive procedure. Therefore, in this dissertation, I address the high computational demand, the variable selection bias, and the local optimum solutions. I propose three approximation methods that reduce the computational demand of LRP and, at the same time, allow for a straightforward extension to recursive partitioning algorithms that do not have a variable selection bias and can reach the global optimum solution. In the three proposed approximations, a mixed-effects model is fit to the full data, and the growth curve coefficients for each individual are extracted. Then, (1) a principal component analysis is fit to the set of coefficients and the principal component score is extracted for each individual, (2) a one-factor model is fit to the coefficients and the factor score is extracted, or (3) the coefficients are summed. The three methods result in each individual having a single score that represents the growth curve trajectory. Now that the outcome is a single score for each individual, any tree-based method may be used for partitioning the data and grouping the individuals together. Once the individuals are assigned to their final nodes, a mixed-effects model is fit to each terminal node with the individuals belonging to it. I conduct a simulation study in which I show that the approximation methods achieve the proposed goals while maintaining a level of out-of-sample prediction accuracy similar to LRP. I then illustrate and compare the methods using applied data.
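A compact sketch of approximation (1), with one flagged simplification: per-individual least-squares growth curves stand in for the mixed-effects coefficients. Each person's coefficient vector is reduced to a single principal-component score, and an off-the-shelf decision tree then partitions individuals on their covariates using that score as the outcome. All data below is simulated.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n_people, n_waves = 300, 5
time = np.arange(n_waves)

covariate = rng.integers(0, 2, n_people)              # e.g. treatment group
slopes = 0.5 + 1.5 * covariate + rng.normal(0, 0.3, n_people)
intercepts = 10 + rng.normal(0, 1, n_people)
y = intercepts[:, None] + slopes[:, None] * time + rng.normal(0, 0.5, (n_people, n_waves))

# Step 1: one growth-curve coefficient vector per individual (slope, intercept).
coefs = np.array([np.polyfit(time, y[i], deg=1) for i in range(n_people)])

# Step 2: collapse the coefficients to a single score per individual.
score = PCA(n_components=1).fit_transform(coefs).ravel()

# Step 3: any standard tree can now partition individuals on their covariates.
tree = DecisionTreeRegressor(max_depth=2).fit(covariate.reshape(-1, 1), score)
print(tree.tree_.feature, tree.tree_.threshold)   # inspect the learned splits
```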
Dissertation/Thesis
Doctoral Dissertation Psychology 2019
APA, Harvard, Vancouver, ISO, and other styles