
Theses on the topic "MEG data"



Consult the top 50 theses for your research on the topic "MEG data".


You can also download the full text of each academic publication in PDF format and read its abstract online whenever these are available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Schönherr, Margit. "Development and Evaluation of Data Processing Techniques in Magnetoencephalography". Doctoral thesis, Universitätsbibliothek Leipzig, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-96832.

Abstract
With MEG, the tiny magnetic fields produced by neuronal currents within the brain can be measured completely non-invasively. However, the signals are very small (~100 fT) and often obscured by spontaneous brain activity and external noise, so a recurrent issue in MEG data analysis is the identification and elimination of this unwanted interference within the recordings. Various strategies exist to meet this purpose. In this thesis, two of these strategies are scrutinized in detail. The first is the commonly used procedure of averaging over trials, which is a successfully applied data reduction method in many neurocognitive studies. However, the brain does not always respond identically to repeated stimuli, so averaging can eliminate valuable information. Alternative approaches aiming at single-trial analysis are difficult to realize and many of them focus on temporal patterns. Here, a compromise involving random subaveraging of trials and repeated source localization is presented. A simulation study with numerous examples demonstrates the applicability of the new method. As a result, inferences about the generators of single trials can be drawn, which allows deeper insight into neuronal processes of the human brain. The second technique examined in this thesis is a preprocessing tool termed Signal Space Separation (SSS). It is widely used for preprocessing of MEG data, including noise reduction by suppression of external interference, as well as movement correction. Here, the mathematical principles of the SSS series expansion and the rules for its application are investigated. The most important mathematical precondition is a source-free sensor space. Using three data sets, the influence of a violation of this convergence criterion on source localization accuracy is demonstrated. The analysis reveals that the SSS method works reliably, even when the convergence criterion is not fully obeyed. This leads to utilizing the SSS method for the transformation of MEG data to virtual sensors on the scalp surface. Having MEG data directly on the individual scalp surface would ease sensor-space analysis across subjects and comparability with EEG. A comparison study of the transformation results obtained with SSS and those produced by inverse and subsequent forward computation is performed. It shows strong dependence on the relative position of sources and sensors. In addition, the latter approach yields superior results for the intended purpose of data transformation.
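The abstract gives no implementation details, so the following is only a rough NumPy sketch of the subaveraging idea it describes: draw many random subsets of trials, average each subset, and feed every subaverage to an ordinary source localization routine. Array shapes and parameters are made up for illustration.

```python
import numpy as np

def random_subaverages(epochs, n_subaverages=50, trials_per_average=20, seed=0):
    """Average randomly drawn subsets of trials.

    epochs : array, shape (n_trials, n_channels, n_times)
    Returns an array of shape (n_subaverages, n_channels, n_times); each
    subaverage could then be localized like an ordinary evoked response.
    """
    rng = np.random.default_rng(seed)
    n_trials = epochs.shape[0]
    subsets = [rng.choice(n_trials, size=trials_per_average, replace=False)
               for _ in range(n_subaverages)]
    return np.stack([epochs[idx].mean(axis=0) for idx in subsets])

# Toy data: 120 trials, 30 channels, 200 time samples.
epochs = np.random.default_rng(1).standard_normal((120, 30, 200))
subaverages = random_subaverages(epochs)
print(subaverages.shape)  # (50, 30, 200)
```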
2

Zumer, Johanna Margarete. "Probabilistic methods for neural source reconstruction from MEG data". Diss., Search in ProQuest Dissertations & Theses. UC Only, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3289309.

Abstract
Thesis (Ph.D.)--University of California, San Francisco with the University of California, Berkeley, 2007.
Source: Dissertation Abstracts International, Volume: 68-11, Section: B, page: 7485. Adviser: Srikantan Nagarajan.
3

YU, LIJUN. "Sequential Monte Carlo for Estimating Brain Activity from MEG Data". Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1459528441.

4

Zaremba, Wojciech. "Modeling the variability of EEG/MEG data through statistical machine learning". Habilitation à diriger des recherches, Ecole Polytechnique X, 2012. http://tel.archives-ouvertes.fr/tel-00803958.

Abstract
Brain neural activity generates electrical discharges, which manifest as electrical and magnetic potentials around the scalp. Those potentials can be registered with magnetoencephalography (MEG) and electroencephalography (EEG) devices. Data acquired by M/EEG is extremely difficult to work with due to the inherent complexity of underlying brain processes and low signal-to-noise ratio (SNR). Machine learning techniques have to be employed in order to reveal the underlying structure of the signal and to understand the brain state. This thesis explores a diverse range of machine learning techniques which model the structure of M/EEG data in order to decode the mental state. It focuses on measuring a subject's variability and on modeling intrasubject variability. We propose to measure subject variability with a spectral clustering setup. Further, we extend this approach to a unified classification framework based on Laplacian regularized support vector machine (SVM). We solve the issue of intrasubject variability by employing a model with latent variables (based on a latent SVM). Latent variables describe transformations that map samples into a comparable state. We focus mainly on intrasubject experiments to model temporal misalignment.
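The thesis builds Laplacian-regularized and latent SVMs on top of this structure; only the first ingredient it names, spectral clustering of a subject-similarity matrix, is sketched below. The affinity matrix is made up purely for illustration (in practice it would encode how similar two subjects' M/EEG patterns are).

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
n_subjects = 12

# Hypothetical symmetric subject-by-subject affinity matrix with two groups.
A = rng.uniform(0.0, 0.3, size=(n_subjects, n_subjects))
A[:6, :6] += 0.6          # the first six subjects resemble each other
A[6:, 6:] += 0.6          # so do the remaining six
A = (A + A.T) / 2         # affinities must be symmetric and non-negative
np.fill_diagonal(A, 1.0)

clustering = SpectralClustering(n_clusters=2, affinity="precomputed", random_state=0)
labels = clustering.fit_predict(A)
print(labels)             # two groups of subjects with similar responses
```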
5

Chowdhury, Rasheda. "Localization of the generators of epileptic activity using Magneto-EncephaloGraphy (MEG) data". Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103740.

Abstract
Understanding the mechanisms and locating the brain regions involved during epileptic activity is a primary diagnostic interest during the pre-surgical investigation of patients with pharmaco-resistant epilepsy. Epileptic spikes are abnormal spontaneous neuronal discharges that are characteristic of the epilepsy of each patient and not associated with clinical manifestations. They are detectable outside the head using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG), recording scalp electric potentials and scalp magnetic fields respectively, both generated by populations of neurons firing synchronously. EEG or MEG epileptic spikes can be visually detected from background brain activity only if they are associated with spatially extended generators. Source localization methods can accurately localize within the brain the generators of epileptic discharges, but the main challenge lies in localizing the generators while recovering their spatial extent. In this context, the topic of this Master's research is to provide source localization techniques that can recover the spatial extent of the generators of epileptic spikes with high accuracy. Maximum Entropy on the Mean (MEM) is a source localization technique that has demonstrated its ability to localize the generators of epileptic activity with their spatial extent along the cortical surface when using EEG data. The objective of this project was to adapt and validate the behaviour of MEM when using MEG data. MEM incorporates realistic priors that model the epileptic generators. Based on these priors, new variants of MEM were proposed and compared with new methods implemented using Hierarchical Bayesian models, for which inference is obtained through Restricted Maximum Likelihood (ReML) techniques. The objective was to compare the relevance of models based on realistic prior knowledge within two statistical regularization frameworks (MEM and ReML). Using realistic simulations of epileptic activity, these new methods were investigated for their sensitivity to the extent and location of the source. Results showed that MEM-based methods were the most accurate in localizing the sources with their spatial extent. These new variants were further applied on clinical data to assess their pertinence as a valuable tool to provide information during the pre-surgical investigation. Finally, as a preliminary step of an additional study, I studied the potential of developing model comparison metrics within both ReML and MEM frameworks and applied them to evaluate the impact of forward models on the localization accuracy. The findings suggested that a realistic boundary element model is more relevant than a spherical model when localizing MEG data.
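For readers less familiar with the setting, all of the source localization approaches compared in this work start from the same linear forward model; the notation below is generic and not taken from the thesis. With sensor measurements M, lead field (gain) matrix G, unknown source amplitudes J and noise E:

```latex
M = G\,J + E
```

MEM and the hierarchical Bayesian/ReML variants differ essentially in how the distribution of J is modelled (in particular, how spatially extended generators are encoded) and in how that prior regularizes the ill-posed inversion of G.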
6

Molins, Jiménez Antonio. "Multimodal integration of EEG and MEG data using minimum ℓ₂-norm estimates". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40528.

Abstract
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (leaves 69-74).
The aim of this thesis was to study the effects of multimodal integration of electroencephalography (EEG) and magnetoencephalography (MEG) data on the minimum ℓ₂-norm estimates of cortical current densities. We investigated analytically the effect of including EEG recordings in MEG studies versus the addition of new MEG channels. To further confirm these results, clinical datasets comprising concurrent MEG/EEG acquisitions were analyzed. Minimum ℓ₂-norm estimates were computed using MEG alone, EEG alone, and the combination of the two modalities. Localization accuracy of responses to median-nerve stimulation was evaluated to study the utility of combining MEG and EEG.
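As background for the estimator named in the title, the minimum ℓ₂-norm solution has a standard closed form; the notation here is generic (measurements M, gain matrix G, noise covariance C, regularization parameter λ), and combining MEG and EEG amounts to stacking their rows in M, G and C after whitening each modality by its own noise covariance:

```latex
\hat{J} \;=\; \arg\min_{J}\ \lVert C^{-1/2} (M - G J) \rVert_2^2 \;+\; \lambda\, \lVert J \rVert_2^2
        \;=\; G^{\top} \left( G G^{\top} + \lambda C \right)^{-1} M .
```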
7

Papadopoulo, Théodore. "Contributions and perspectives to computer vision, image processing and EEG/MEG data analysis". Habilitation à diriger des recherches, Université Nice Sophia Antipolis, 2011. http://tel.archives-ouvertes.fr/tel-00847782.

Abstract
In a first part, I will illustrate some of my work in computer vision and image processing. This work concerns, in particular, multi-view geometry, the use of geometric reasoning to integrate constraints on the scene, image matching, and image segmentation. Without necessarily going into the details, I will present the fundamental ideas underlying this work, which is now a few years old, and propose some perspectives on possible extensions. A second part will address certain problems related to electro- and magneto-encephalography (M/EEG), a topic I have become interested in more recently. In particular, I will describe an algorithm for detecting events of interest on a trial-by-trial basis, as well as some techniques we have developed for modelling the M/EEG forward problem. As in the first part, I will attempt to propose some possible directions in which this line of work could evolve.
8

Zavala, Fernandez Heriberto. "Evaluation and comparison of the independent components of simultaneously measured MEG and EEG data /". Berlin : Univ.-Verl. der TU, 2009. http://www.ub.tu-berlin.de/index.php?id=2260#c9917.

9

Dubarry, Anne-Sophie. "Linking neurophysiological data to cognitive functions : methodological developments and applications". Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM5017.

Abstract
A major issue in Cognitive Psychology is to describe human cognitive functions. From the neuroscientific perspective, measurements of brain activity are collected and processed in order to grasp, at their best resolution, the relevant spatio-temporal features of the signal that can be linked with cognitive operations. The work of this thesis consisted of designing and implementing strategies to overcome spatial and temporal limitations of the signal processing procedures used to address cognitive issues. In a first study we demonstrated that the distinction between the classical temporal organizations of picture naming (serial versus parallel) should be addressed at the level of single trials and not on the averaged signals. We designed and conducted the analysis of SEEG signals from 5 patients to show that the temporal organization of picture naming involves a parallel processing architecture to a limited degree only. In a second study, we combined SEEG, EEG and MEG into a simultaneous trimodal recording session. A patient was presented with a visual stimulation paradigm while the three types of signals were simultaneously recorded. Averaged activities at the sensor level were shown to be consistent across the three techniques. More importantly, a fine-grained coupling between the amplitudes of the three recording techniques is detected at the level of single evoked responses. This thesis proposes various relevant methodological and conceptual developments. It opens up several perspectives in which neurophysiological signals can better inform Cognitive Neuroscience theories.
10

Abbasi, Omid [Verfasser], Georg [Gutachter] Schmitz and Markus [Gutachter] Butz. "Retrieving neurophysiological information from strongly distorted EEG and MEG data / Omid Abbasi ; Gutachter: Georg Schmitz, Markus Butz". Bochum : Ruhr-Universität Bochum, 2017. http://d-nb.info/1140223119/34.

11

Bamidis, Panagiotis D. "Spatio-temporal evolution of interictal epileptic activity : a study with unaveraged multichannel MEG data in association with MRIs". Thesis, Open University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318685.

12

Whinnett, Mark. "Analysis of face specific visual processing in humans by applying independent components analysis (ICA) to magnetoencephalographic (MEG) data". Thesis, Open University, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.607160.

Abstract
Face recognition is a key human brain function, as faces convey a wealth of information about a person's mood, intentions, interest, health, direction of gaze, intelligence and trustworthiness, among many factors. Previous behavioural, functional magnetic resonance imaging (fMRI), electroencephalographic (EEG) and MEG studies have shown that face processing involves activity in many specialised areas of the brain, which are known collectively as the face processing system. The aim of this thesis has been to develop, apply and assess a novel technique of analysis in order to gain information about the face processing system. The new technique involves using Independent Components Analysis (ICA) to identify significant features in the data for each subject and then using k-means clustering to aggregate results across subjects. A key feature of this new technique is that it does not impose a priori assumptions on the localisation of the face processing system in either time or space, and in particular does not assume that the latency of evoked responses is the same between subjects. The new technique is evaluated for robustness, stability and validity by comparing it quantitatively to the well-established technique of weighted Minimum Norm Estimation (wMNE). This thesis describes a visually evoked response experiment involving 23 healthy adult subjects in which MEG data was recorded as subjects viewed a sequence of images from three categories: human faces, monkey faces or motorbikes. The MEG data was co-registered with a standard head model (the MNI305 brain) so that inter-subject comparisons could be made. We identify six clusters of brain activity with peak responses in the latency range from 100 ms to 350 ms and give the relative weighting for each cluster for each of the three image categories. We use a bootstrap technique to assess the significance of these weightings and find that the only cluster where the human face response was significantly stronger than the motorbike image response was a cluster with peak latency of 172 ms, which confirms earlier studies. For this cluster the response to monkey face images was not significantly different from the human face image response at the 99% confidence level. Other significant differences between the brain responses to the image categories are reported. For each cluster of brain activity we estimate the activity within each labelled region of the MNI305 brain and again use a bootstrap technique to determine brain areas where activity is significantly above the median level of activity. In a similar way we investigate whether activity shows a hemispheric bias by reporting the probability that we reject the null hypothesis that the left and right hemispheres have the same level of activation. For the cluster with peak latency at 172 ms mentioned above we find that the response is right-lateralised, which again confirms earlier studies. In addition to this information about the location of brain activity, the techniques used give detailed information about time evolution (and sequencing) that other techniques such as fMRI are unable to provide. This time evolution of the clusters shows some evidence for priming activity that may give advance notice of the importance of a new visual stimulus, and also some support for a theory of anterior temporal lobe involvement in face identification (Kriegeskorte, 2007). We also describe activity that could be attributed to executive systems and memory access.
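The clustering pipeline is only named here, so the sketch below shows just the two mechanical steps (ICA per subject, then k-means over component maps) with toy data and scikit-learn; pooling maps from many subjects and choosing the number of clusters are left out.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy single-subject recording: 2000 samples of 20 "sensors" mixing 5 sources.
sources = rng.laplace(size=(2000, 5))       # non-Gaussian sources, as ICA assumes
mixing = rng.standard_normal((5, 20))
data = sources @ mixing

# Step 1 (per subject): unmix the recording into independent components.
ica = FastICA(n_components=5, random_state=0)
time_courses = ica.fit_transform(data)      # (n_samples, n_components)
maps = ica.mixing_.T                        # (n_components, n_sensors) sensor maps

# Step 2 (across subjects): cluster the pooled component maps.
# Here a single subject's maps are clustered just to show the mechanics.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(maps)           # one cluster label per component map
print(labels)
```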
13

Ablin, Pierre. "Exploration of multivariate EEG /MEG signals using non-stationary models". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT051.

Abstract
Independent Component Analysis (ICA) models a set of signals as linear combinations of independent sources. This analysis method plays a key role in electroencephalography (EEG) and magnetoencephalography (MEG) signal processing. Applied to such signals, it makes it possible to isolate interesting brain sources, localize them, and separate them from artifacts. ICA belongs to the toolbox of many neuroscientists, and is a part of the processing pipeline of many research articles. Yet, the most widely used algorithms date back to the 90's. They are often quite slow, and stick to the standard ICA model, without more advanced features. The goal of this thesis is to develop practical ICA algorithms to help neuroscientists. We follow two axes. The first one is that of speed. We consider the optimization problems solved by two of the ICA algorithms most widely used by practitioners: Infomax and FastICA. We develop a novel technique based on preconditioning the L-BFGS algorithm with Hessian approximations. The resulting algorithm, Picard, is tailored for real data applications, where the independence assumption is never entirely true. On M/EEG data, it converges faster than the 'historical' implementations. Another possibility to accelerate ICA is to use incremental methods, which process a few samples at a time instead of the whole dataset. Such methods have gained huge interest in recent years due to their ability to scale well to very large datasets. We propose an incremental algorithm for ICA, with important descent guarantees. As a consequence, the proposed algorithm is simple to use and does not have a critical and hard-to-tune parameter like a learning rate. In a second axis, we propose to incorporate noise in the ICA model. Such a model is notoriously hard to fit under the standard non-Gaussian hypothesis of ICA, and would make estimation extremely slow. Instead, we rely on a spectral diversity assumption, which leads to a practical algorithm, SMICA. The noise model opens the door to new possibilities, like finer estimation of the sources, and the use of ICA as a statistically sound dimension reduction technique. Thorough experiments on M/EEG datasets demonstrate the usefulness of this approach. All algorithms developed in this thesis are open-sourced and available online. The Picard algorithm is included in MNE, the largest M/EEG processing Python library, and in the Matlab library EEGLAB.
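Since the abstract notes that Picard ships with MNE-Python, here is a minimal usage sketch. The recording is synthetic and stands in for a real Raw object; running it requires the mne and python-picard packages.

```python
import numpy as np
import mne
from mne.preprocessing import ICA

# Synthetic 16-channel recording standing in for real M/EEG data.
rng = np.random.default_rng(0)
info = mne.create_info(ch_names=[f"EEG{i:02d}" for i in range(16)],
                       sfreq=250.0, ch_types="eeg")
raw = mne.io.RawArray(rng.standard_normal((16, 5000)), info)
raw.filter(1.0, 40.0, verbose="error")   # ICA is usually run on high-pass filtered data

# Picard-based ICA as exposed through MNE's ICA class.
ica = ICA(n_components=10, method="picard", random_state=0)
ica.fit(raw)
print(ica)
```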
14

Ewald, Arne [Verfasser], Klaus-Robert [Akademischer Betreuer] Müller, Andreas [Akademischer Betreuer] Daffertshofer and Guido [Akademischer Betreuer] Nolte. "Novel multivariate data analysis techniques to determine functionally connected networks within the brain from EEG or MEG data / Arne Ewald. Gutachter: Klaus-Robert Müller ; Andreas Daffertshofer ; Guido Nolte". Berlin : Technische Universität Berlin, 2014. http://d-nb.info/1067387773/34.

15

Roux, Frédéric [Verfasser], Peter J. [Akademischer Betreuer] Uhlhaas, Wolf [Akademischer Betreuer] Singer and Christian [Akademischer Betreuer] Fiebach. "Alpha and gamma-band oscillations in MEG-data: networks, function and development / Frédéric Roux. Gutachter: Wolf Singer ; Christian Fiebach. Betreuer: Peter J. Uhlhaas". Frankfurt am Main : Univ.-Bibliothek Frankfurt am Main, 2013. http://d-nb.info/1043978194/34.

16

Tucciarelli, Raffaele. "Characterizing the spatiotemporal profile and the level of abstractness of action representations: neural decoding of magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) data". Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/368799.

Abstract
When we observe other people's actions, a network of temporal, parietal and frontal regions is recruited, known as action observation network (AON). This network includes areas that have been reported to be involved when we perform actions ourselves. Such findings support the view that action understanding occurs by simulating actions in our own motor system (motor theories of action understanding). Alternatively, it has been argued that actions are understood based on a perceptual analysis, with access to action knowledge stored in the conceptual system (cognitive theories of action understanding). It has been argued earlier that areas that play a crucial role for action understanding should be able to (a) distinguish between different actions, and (b) generalize across the ways in which the action is performed (e.g. Dinstein, Thomas, Behrmann, & Heeger, 2008; Oosterhof, Tipper, & Downing, 2013; Caramazza, Anzelotti, Strnad, & Lingnau, 2014). Here we argue that one additional criterion needs to be met: an area that plays a crucial role for action understanding should have access to such abstract action information early, around the time when the action is recognized. An area that has access to abstract action information after the action has been recognized is unlikely to contribute to the process of action understanding. In this thesis, I report three neuroimaging studies in which we used magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) to characterize the temporal dynamics of abstract representations of observed actions (Study 1 and 2), meaning that generalize across lower level dimensions, and to characterize the type of information encoded in the regions of the AON (Study 3). Specifically, in Study 1 we examined where in the brain and at which point in time it is possible to distinguish between pointing and grasping actions irrespective of the way in which they are performed (reach direction, effector) using MEG in combination with multivariate pattern analysis (MVPA) and source analysis. We show that regions in the left lateral occipitotemporal cortex (LOTC) have the earliest access to abstract action representations. By contrast, precentral regions, though recruited relatively early, have access to abstract action representations substantially later than left LOTC. In Study 2, we tested the temporal dynamics of the neural decoding related to the oscillatory activity induced by observation of actions performed with different effectors (hand, foot). We observed that temporal regions are able to discriminate all the presented actions before effector-related decoding within effector-specific motor regions. Finally, in Study 3 we investigated what aspect of an action is encoded within the regions of the AON. Object-directed actions induce a change of states, e.g. opening a bottle means changing its state from closed to open. It is still unclear how and in which brain regions these neural representations are encoded. Using fMRI-based multivoxel pattern decoding, we aimed at dissociating the neural representations of states and action functions. Participants observed stills of objects (e.g., window blinds) that were in either open or closed states, and videos of actions involving the same objects, i.e., open or close window. Action videos could show the object manipulation only (invisible change), or the complete action scene (visible change). 
This design allowed us to detect neural representations of action scenes, states and action functions independently of each other. We found different sub-regions within LOTC containing information related to object states, action functions, or both. These findings provide important information regarding the organization of action semantics in the brain and the role of LOTC in action understanding.
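The decoding analyses are not spelled out in the abstract; the block below is only a generic sketch of cross-validated MVPA decoding with scikit-learn on made-up trial patterns, of the kind applied at each time point in such studies. Time-resolved decoding repeats this at every sample, and the "abstractness" tests described above amount to training on trials from one way of performing an action and testing on another.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy trials-by-sensors matrix standing in for MEG patterns at one time point,
# with labels for two observed action categories (e.g. pointing vs. grasping).
n_trials, n_sensors = 200, 64
X = rng.standard_normal((n_trials, n_sensors))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :8] += 0.5                       # inject a weak class-dependent pattern

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated decoding accuracy
print(scores.mean())
```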
17

Tucciarelli, Raffaele. "Characterizing the spatiotemporal profile and the level of abstractness of action representations: neural decoding of magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) data". Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1592/1/PhD_thesis-_Raffaele_Tucciarelli.pdf.

18

Jas, Mainak. "Contributions pour l'analyse automatique de signaux neuronaux". Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0021.

Abstract
Electrophysiology experiments have long relied upon small cohorts of subjects to uncover statistically significant effects of interest. However, the small sample size translates into low statistical power, which leads to a high false discovery rate and hence a low rate of reproducibility. Addressing this issue means solving two related problems: first, how do we facilitate data sharing and reusability to build large datasets; and second, once big datasets are available, what tools can we build to analyze them? In the first part of the thesis, we introduce a new standard for sharing data known as the Brain Imaging Data Structure (BIDS), and its extension MEG-BIDS. Next, we introduce the reader to a typical electrophysiological pipeline analyzed with the MNE software package. We consider the different choices that users have to deal with at each stage of the pipeline and provide standard recommendations. Next, we focus our attention on tools to automate the analysis of large datasets. We propose an automated tool to remove segments of data corrupted by artifacts. We develop an outlier detection algorithm based on tuning rejection thresholds. More importantly, we use the HCP data, which is manually annotated, to benchmark our algorithm against existing state-of-the-art methods. Finally, we use convolutional sparse coding to uncover structures in neural time series. We reformulate the existing approach in computer vision as a maximum a posteriori (MAP) inference problem to deal with heavy-tailed distributions and high-amplitude artifacts. Taken together, this thesis represents an attempt to shift from slow and manual methods of analysis to automated, reproducible analysis.
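The thesis's rejection tool tunes its thresholds automatically; the sketch below only illustrates the underlying peak-to-peak rejection step, with a hand-picked threshold and toy data.

```python
import numpy as np

def reject_epochs(epochs, threshold):
    """Keep epochs whose peak-to-peak amplitude stays below `threshold` on every channel.

    epochs : array, shape (n_epochs, n_channels, n_times)
    """
    ptp = epochs.max(axis=-1) - epochs.min(axis=-1)   # (n_epochs, n_channels)
    keep = (ptp < threshold).all(axis=1)
    return epochs[keep], keep

# Toy data: 100 epochs, 32 channels, 200 samples, with a spike added to a few epochs.
rng = np.random.default_rng(42)
epochs = rng.standard_normal((100, 32, 200)) * 1e-6   # ~1 µV background noise
epochs[::17, :, 100] += 50e-6                         # artifact-like transients

# In the cited work the threshold is learned (e.g. by cross-validation); here it is fixed.
clean, keep = reject_epochs(epochs, threshold=20e-6)
print(f"kept {keep.sum()} of {len(epochs)} epochs")
```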
19

Carrara, Igor. "Méthodes avancées de traitement des BCI-EEG pour améliorer la performance et la reproductibilité de la classification". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4033.

Abstract
Electroencephalography (EEG) non-invasively measures the brain's electrical activity through electromagnetic fields generated by synchronized neuronal activity. This allows for the collection of multivariate time series data, capturing a trace of the brain's electrical activity at the level of the scalp. At any given time instant, the measurements recorded by these sensors are linear combinations of the electrical activities from a set of underlying sources located in the cerebral cortex. These sources interact with one another according to a complex biophysical model, which remains poorly understood. In certain applications, such as surgical planning, it is crucial to accurately reconstruct these cortical electrical sources, a task known as solving the inverse problem of source reconstruction. While intellectually satisfying and potentially more precise, this approach requires the development and application of a subject-specific model, which is both expensive and technically demanding to achieve. However, it is often possible to directly use the EEG measurements at the level of the sensors and extract information about brain activity. This significantly reduces the data analysis complexity compared to source-level approaches. These measurements can be used for a variety of applications, including monitoring cognitive states, diagnosing neurological conditions, and developing brain-computer interfaces (BCI). In fact, even though we do not have a complete understanding of brain signals, it is possible to establish direct communication between the brain and an external device using BCI technology. This work is centered on EEG-based BCIs, which have several applications in various medical fields, like rehabilitation and communication for disabled individuals, or in non-medical areas, including gaming and virtual reality. Despite its vast potential, BCI technology has not yet seen widespread use outside of laboratories. The primary objective of this PhD research is to try to address some of the current limitations of BCI-EEG technology. Autoregressive models, even though they are not completely justified by biology, offer a versatile framework to effectively analyze EEG measurements. By leveraging these models, it is possible to create algorithms that combine nonlinear systems theory with the Riemannian-based approach to classify brain activity. The first contribution of this thesis is in this direction, with the creation of the Augmented Covariance Method (ACM). Building upon this foundation, the Block-Toeplitz Augmented Covariance Method (BT-ACM) represents a notable evolution, enhancing computational efficiency while maintaining its efficacy and versatility. Finally, the Phase-SPDNet work enables the integration of such methodologies into a Deep Learning approach that is particularly effective with a limited number of electrodes. Additionally, we proposed the creation of a pseudo-online framework to better characterize the efficacy of BCI methods, and the largest EEG-based BCI reproducibility study using the Mother of all BCI Benchmarks (MOABB) framework. This research seeks to promote greater reproducibility and trustworthiness in BCI studies. In conclusion, we address two critical challenges in the field of EEG-based brain-computer interfaces (BCIs): enhancing performance through advanced algorithmic development at the sensor level and improving reproducibility within the BCI community.
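The precise construction of the augmented and Block-Toeplitz covariances is defined in the thesis; the sketch below only illustrates the general idea of time-delay embedding a trial before estimating its covariance, after which standard Riemannian classifiers can be applied. Parameters and shapes are made up.

```python
import numpy as np

def augmented_covariance(trial, order=2, lag=1):
    """Covariance of a time-delay-embedded multichannel trial.

    trial : array, shape (n_channels, n_times)
    order : number of lagged copies stacked on top of the original signal
    """
    n_channels, n_times = trial.shape
    usable = n_times - order * lag
    embedded = np.vstack([trial[:, k * lag:k * lag + usable]
                          for k in range(order + 1)])
    return np.cov(embedded)

rng = np.random.default_rng(0)
trial = rng.standard_normal((8, 512))        # one toy trial: 8 channels, 512 samples
C_aug = augmented_covariance(trial, order=2)
print(C_aug.shape)                           # (24, 24): (order + 1) * n_channels
```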
20

Högberg, Martin and Victor Olofsson. "Innovera med data : Bidra till en mer hållbar livsmedelsindustri med hjälp av Design thinking". Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-78336.

Abstract
Sustainable shopping has in recent years become a growing trend among consumers. However, it can be difficult to know how to go about shopping sustainably, and therefore many consumers seek guidance. The goals of this project are to help consumers contribute to a more sustainable food industry by making more sustainable food choices, as well as to awaken an interest in new types of food and reduce their food waste. Using the Design thinking method, a final prototype in the form of a mobile application has been developed through iterations of prototyping and testing. The final prototype has functionality to randomize recipes based on user preferences and to send push notifications to inspire consumers to try new recipes. Based on tests of the initial prototype, a ranked list of functionality requirements was developed, which formed the basis for the implementation of the final prototype.
21

Ziehe, Andreas. "Blind source separation based on joint diagonalization of matrices with applications in biomedical signal processing". Phd thesis, [S.l. : s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=976710331.

22

Cederberg, Petter. "Kan e-tjänster förenklas och bli mer motiverande med gamification och öppna data? : En kvalitativ studie". Thesis, Karlstads universitet, Handelshögskolan, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-42940.

Abstract
Ahead of the summer of 2015, 40% of all building permit applications submitted to Karlstad municipality were either incorrect or incomplete, which leads to long delays between a submitted application and a decision from the municipality's case officers. This bachelor thesis aims to investigate whether, and in what way, e-services can be simplified with the help of gamification and open data, using the building permit application as an example of a municipal e-service at Karlstad municipality. Gamification means applying game mechanics or game experiences in non-game contexts in order to increase users' motivation and engagement. Open data is data that was previously not accessible being made available for the public to use, reuse and distribute with no restriction other than attribution of the source. The thesis was carried out with a literature review as its basis and qualitative interviews with people who work with gamification, open data or municipal e-services. The conclusions of the study show that e-services may need to become more self-explanatory and simpler, so that more citizens are able to use them. By applying simple game mechanics that suit the context of an e-service, together with the information that can be obtained through open data, it is possible to simplify e-services and make them more motivating for the end user. Gamification can make e-services simpler and more motivating by, for example, giving the user more feedback, providing a save function so that the user can save their progress, and providing a progress indicator so that the user can follow how far in the process they have come. Open data can make e-services simpler and more motivating through, for example, geographic data and cross-linking of different open data sources.
23

Hörberg, Eric. "Förutsäga data för lastbilstrafik med maskininlärning". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205190.

Abstract
Artificial neural networks are frequently used today to find patterns in large amounts of data. If one can see the patterns, one can to some extent see the future, and how well this works for truck traffic is investigated in this report. Historical data about truck traffic is used with a feed-forward artificial neural network to create forecasts of truck arrivals at a logistics location. With a program created to test which parameters give the best results for the artificial neural network, it is investigated which data structure and which type of forecast give the best result. The two forecasts that are tested are the time to the next truck's arrival and the intensity of truck arrivals over the next hour. The best forecasts were created when the intensity of trucks for the next hour was predicted, and these forecasts were shown to be better than the forecasts that current statistical methods can give.
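The report's exact network and features are not given in this abstract; the following is a generic scikit-learn sketch of the setup it describes (previous hourly counts as inputs, next-hour arrival intensity as the target), on synthetic data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Toy stand-in for historical arrival counts: one value per hour over ~4 months.
rng = np.random.default_rng(1)
hours = np.arange(24 * 120)
intensity = 10 + 5 * np.sin(2 * np.pi * hours / 24) + rng.poisson(2, hours.size)

# Features: the previous 24 hourly counts; target: the next hour's count.
window = 24
X = np.array([intensity[i:i + window] for i in range(len(intensity) - window)])
y = intensity[window:]
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out hours:", round(model.score(X_test, y_test), 3))
```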
24

Konnskog, Magnus. "Nyttor med öppna data : Sydvästlänken som fallstudie". Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-21700.

Abstract
This study presents information about the benefits of open geodata. The results indicate that large societal gains arise when open data is made available. The study reports the gains from investments in open geodata in the Nordic countries; reports from these countries indicate large societal benefits through new or improved innovations and services. The case study that was carried out presents the average delivery times for Svenska kraftnät and their ongoing infrastructure project, Sydvästlänken. Adding up the orders Svenska kraftnät placed between 2011 and 2015, this study shows that they waited a total of 1457 working days for deliveries. The average delivery time for geographic information that Svenska kraftnät ordered from Lantmäteriet is 5.2 working days per order. More open data would therefore mean large time savings for Svenska kraftnät and also for other public authorities.
25

Mellberg, Amanda and Emma Skog. "Artificiell Intelligens inom rekryteringsprocessen : objektivitet med subjektiv data?" Thesis, Högskolan i Borås, Akademin för bibliotek, information, pedagogik och IT, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-15078.

Texto completo
Resumen
Artificiell Intelligens (AI) har flera användningsområden som bland annat robotik, ansiktsigenkänning och stöd vid beslutsfattande. Organisationer kommer använda AI mer för att möta utmaningar inom Human Resources (HR) de närmaste fem åren vilket pekar på att AI sannolikt kommer bli en vanligare förekomst inom rekryteringsprocessen. En av de viktigaste tillgångarna i ett företag är dess anställda och felaktiga rekryteringar kan komma att medföra stora kostnader. Med maskininlärning och AI-system som beslutsfattare kan det vara av vikt att fundera på vad det är för data som dessa system förses med då en av riskerna med maskininlärning inom AI är att man inte vet vad maskinerna lär sig när den lär sig själv. En större datamängd behöver i sig inte medföra mer subjektiva resultat men risken att direkt koda in diskriminering finns fortfarande eftersom den data AI-system förses med i sig kan innehålla bias. Det har även visat sig att kandidater inte vill bli bedömda på politiska åsikter, relationer eller annat som kan tas fram via big data och data mining. Syftet med studien är att skapa en djupare förståelse för vad som behövs för att automatisera rekryteringsprocessen med hjälp av AI och maskininlärning samt till att utforma en lista på hur AI kan vara ett stöd för företag att ha i åtanke vid en möjlig implementering. Till studien har tre metoder till empiriinsamling valts ut varav samtliga med en kvalitativ ansats. Intervjuer och en enkät har samlat in de data som analyserats i Excel 2016 samt Google Docs. Intervjuerna utfördes i flera skeden och riktade sig mot två anställda på varsitt rekryteringsföretag. Enkäten riktade sig främst till individer som kommer att ta/har tagit examen inom det närmaste året. Urvalet har skett enligt studiens syfte och vid enstaka tillfällen har ett bekvämlighetsurval gjorts. Resultatet visar att rekryterarna lägger mycket tid på att screena kandidater och gör det manuellt. Enkäten visar att kommande kandidater främst är neutrala i sin tillit till att screening utförs av ett AI-system. Respondenten i uppföljningsintervjun säger att en automatisering med AI hade underlättat arbetet och håller med det enkätrespondenterna anser om fördelar och nackdelar med AI men skulle samtidigt inte lita på resultatet. Vidare tror respondenten att det är den automatiserade vägen rekryteringsprocessen kommer att gå. Resultatet av studien kan komma att nyttjas av rekryteringsföretag som funderar på att införa AI i sina rekryteringsprocesser.
Artificial Intelligence (AI) has several areas of use, such as robotics, facial recognition and decision-making support. Organizations will use AI more to meet challenges within Human Resources (HR) over the next five years, indicating that AI is likely to become a more common occurrence in the recruitment process. One of the most important assets of a company is its employees, and incorrect recruitments can lead to high costs. With machine learning and AI systems as decision makers it may be important to think about what data is provided to these systems, since one of the risks of machine learning within AI is that you do not know what the machines learn as they learn by themselves. A larger amount of data does not necessarily lead to more subjective results, but the risk of directly encoding discrimination still exists, because the data the AI system is provided with can itself contain bias. It has also been found that candidates do not want to be judged on political views, relationships or anything else that can be derived through big data and data mining. The purpose of the study is to provide a deeper understanding of what is needed to automate the recruitment process using AI and machine learning, and to design a list of how AI can be a support, for companies to keep in mind during a possible implementation. Three methods of empirical data collection, all of them qualitative, were chosen for the study. Interviews and a survey were used to collect the data, which were analyzed in Excel 2016 and Google Docs. The interviews were conducted in several stages and were aimed at two employees at two different recruiting companies. The survey was aimed primarily at individuals who will graduate or have graduated within the year. The selection of participants was made according to the purpose of the study, and on some occasions a convenience sample was used. The result shows that the recruiters spend a lot of time screening candidates and do this manually. The survey shows that future candidates have a neutral stance when it comes to trusting in an AI system performing the screening process. The respondent in the follow-up interview says that automation using AI would facilitate the work and agrees with the survey respondents about the pros and cons of AI, but would at the same time not rely on the results. Further, the respondent believes that automation is the direction in which the recruitment process will continue. The result of the study may be used by recruitment companies that are considering introducing AI into their recruitment processes.
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Johansson, Oskar. "Parafrasidentifiering med maskinklassificerad data : utvärdering av olika metoder". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167039.

Texto completo
Resumen
This thesis investigates how the language model BERT and a MaLSTM architecture perform at identifying paraphrases in the 'Microsoft Paraphrase Research Corpus' (MPRC) when trained on automatically identified paraphrases from the 'Paraphrase Database' (PPDB). The methods are compared to determine which performs best, and the approach of training on machine-classified data for use on human-classified data is evaluated in relation to other classifications of the same dataset. The sentence pairs used to train the models are taken from the highest-ranked paraphrases in PPDB, together with a generation method that creates non-paraphrases from the same dataset. The results show that BERT is capable of identifying some paraphrases in MPRC, whereas the MaLSTM architecture failed to do so despite being able to separate paraphrases from non-paraphrases during training. Both BERT and MaLSTM performed worse at identifying paraphrases in MPRC than models such as StructBERT, which was trained and evaluated on the same dataset. Reasons why MaLSTM fails at the task are discussed; the main explanation offered is that the sentences in the non-paraphrase pairs of the training data differ too much from each other compared with how they look in MPRC. Finally, the importance of further research on how machine-generated paraphrases can be used in paraphrase-related research is discussed.
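As a hedged illustration of the BERT side of this setup, the sketch below treats paraphrase identification as sentence-pair classification with the Hugging Face transformers library. The generic bert-base-uncased checkpoint and the example sentence pair are assumptions (not the thesis's PPDB-trained model), and the output is only meaningful after fine-tuning on paraphrase/non-paraphrase pairs.

```python
# Illustrative sketch of BERT-style paraphrase identification as sentence-pair
# classification; "bert-base-uncased" is a generic checkpoint, not the thesis's model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # labels: 0 = non-paraphrase, 1 = paraphrase

sentence_a = "The company reported higher profits."
sentence_b = "Profits at the company rose."
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print("P(paraphrase) =", probs[0, 1].item())  # meaningful only after fine-tuning
```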
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Lundqvist, Patrik y Michael Enhörning. "Speltestning : Med Fuzzy Logic". Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20260.

Texto completo
Resumen
When designing a computer game, the game designer often tries to create levels and enemies that force the player to use different strategies to survive. Finding these strategies requires playtesting. Playtesting is time consuming and therefore expensive. The simplest way to save time is to use data hooks in the game and then let test subjects play it; data is then collected during all play sessions and stored in log files. Data hooks were used to collect the data for this report. The game analysed was a top-down shooter. This game was chosen because it is an example from Microsoft designed according to the XNA standard, and because the game concept is widely known. Is it then possible to find clear strategies in the collected data using data mining and fuzzy logic? It is definitely possible. The data collected from the play sessions was analysed with data mining, fuzzy logic and the tool G-REX. It turned out that there were clear rules for separating good players from bad players. This shows that it is possible to read out the player's strategy and compare it with what the game designer intended when creating the game. The most interesting result is that G-REX found rules describing how a good player should play, and that some of these rules did not agree with the designer's intentions. One such rule was that in one of the play sessions it was good to lose health, precisely because more enemies then appeared and the player had time to score more points during that session. A comparison of the different fuzzification approaches (the framework versus G-REX) showed that the test results were very similar. This means that all the work on the framework's fuzzification had not been necessary. It also means that game designers with little or no knowledge of fuzzy logic could use G-REX to evaluate their games and log files with data mining and fuzzy logic.
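The sketch below is a hand-rolled stand-in for the fuzzy-logic part of this approach: triangular membership functions turn logged player metrics into fuzzy degrees, and one simple rule combines them into a "good player" score. The metric names, set boundaries and the rule itself are invented for illustration; the thesis used the G-REX tool on real log data.

```python
# Toy fuzzy-logic rule over logged player metrics (illustrative, not G-REX output).
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def good_player_degree(accuracy, health_lost, score):
    # Membership degrees for a few hypothetical fuzzy sets.
    high_accuracy = tri(accuracy, 0.5, 1.0, 1.5)
    low_health_lost = tri(health_lost, -50, 0, 60)
    high_score = tri(score, 500, 1500, 2500)
    # Rule: good player = high accuracy AND (low health loss OR high score),
    # with min as fuzzy AND and max as fuzzy OR.
    return min(high_accuracy, max(low_health_lost, high_score))

print(good_player_degree(accuracy=0.8, health_lost=20, score=1800))
```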
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Vianello, Dario <1987&gt. "Data management and data analysis in the large European projects GEHA (GEnetics of Healthy Aging) and NU-AGE (NUtrition and AGEing): a bioinformatic approach". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amsdottorato.unibo.it/6819/1/phd_thesis_final.pdf.

Texto completo
Resumen
The aging process is characterized by the progressive fitness decline experienced at all levels of physiological organization, from single molecules up to the whole organism. Studies confirmed inflammaging, a chronic low-level inflammation, as a deeply intertwined partner of the aging process, which may provide the “common soil” upon which age-related diseases develop and flourish. Thus, albeit inflammation per se represents a physiological process, it can rapidly become detrimental if it goes out of control, causing an excess of local and systemic inflammatory response, a striking risk factor for the elderly population. Developing interventions to counteract the establishment of this state is thus a top priority. Diet, among other factors, represents a good candidate to regulate inflammation. Building on top of this consideration, the EU project NU-AGE is now trying to assess whether a Mediterranean diet, fortified for the needs of the elderly population, may help in modulating inflammaging. To do so, NU-AGE enrolled a total of 1250 subjects, half of whom followed a 1-year-long diet, and characterized them by means of the most advanced omics and non-omics analyses. The aim of this thesis was the development of a solid data management pipeline able to efficiently cope with the results of these assays, which are now flowing into a centralized database, ready to be used to test the most disparate scientific hypotheses. At the same time, the work described here encompasses the data analysis of the GEHA project, which was focused on identifying the genetic determinants of longevity, with a particular focus on developing and applying a method for detecting epistatic interactions in human mtDNA. Eventually, in an effort to propel the adoption of NGS technologies in everyday pipelines, we developed an NGS variant calling pipeline devoted to solving the sequencing-related issues of the mtDNA.
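As a loose illustration of the epistasis-detection idea (not the GEHA pipeline itself), the sketch below scans pairs of mtDNA variants and tests whether carrying both is associated with a longevity label, using a chi-square test on a 2x2 table. The data are synthetic and the 0/1 carrier coding is an assumption.

```python
# Toy pairwise screen for epistatic interactions between mtDNA variants and a
# longevity phenotype; illustrative only.
from itertools import combinations
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n_subjects, n_variants = 500, 6
genotypes = rng.integers(0, 2, size=(n_subjects, n_variants))  # 0/1 carrier status
long_lived = rng.integers(0, 2, size=n_subjects)               # phenotype label

for i, j in combinations(range(n_variants), 2):
    both = (genotypes[:, i] == 1) & (genotypes[:, j] == 1)
    table = np.array([
        [np.sum(both & (long_lived == 1)), np.sum(both & (long_lived == 0))],
        [np.sum(~both & (long_lived == 1)), np.sum(~both & (long_lived == 0))],
    ])
    if table.min() > 0:                       # skip degenerate tables
        chi2, p, _, _ = chi2_contingency(table)
        if p < 0.01:
            print(f"variants {i} and {j}: p = {p:.4f}")
```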
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Vianello, Dario <1987&gt. "Data management and data analysis in the large European projects GEHA (GEnetics of Healthy Aging) and NU-AGE (NUtrition and AGEing): a bioinformatic approach". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amsdottorato.unibo.it/6819/.

Texto completo
Resumen
The aging process is characterized by the progressive fitness decline experienced at all levels of physiological organization, from single molecules up to the whole organism. Studies confirmed inflammaging, a chronic low-level inflammation, as a deeply intertwined partner of the aging process, which may provide the “common soil” upon which age-related diseases develop and flourish. Thus, albeit inflammation per se represents a physiological process, it can rapidly become detrimental if it goes out of control, causing an excess of local and systemic inflammatory response, a striking risk factor for the elderly population. Developing interventions to counteract the establishment of this state is thus a top priority. Diet, among other factors, represents a good candidate to regulate inflammation. Building on top of this consideration, the EU project NU-AGE is now trying to assess whether a Mediterranean diet, fortified for the needs of the elderly population, may help in modulating inflammaging. To do so, NU-AGE enrolled a total of 1250 subjects, half of whom followed a 1-year-long diet, and characterized them by means of the most advanced omics and non-omics analyses. The aim of this thesis was the development of a solid data management pipeline able to efficiently cope with the results of these assays, which are now flowing into a centralized database, ready to be used to test the most disparate scientific hypotheses. At the same time, the work described here encompasses the data analysis of the GEHA project, which was focused on identifying the genetic determinants of longevity, with a particular focus on developing and applying a method for detecting epistatic interactions in human mtDNA. Eventually, in an effort to propel the adoption of NGS technologies in everyday pipelines, we developed an NGS variant calling pipeline devoted to solving the sequencing-related issues of the mtDNA.
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Aspegren, Villiam y Kim Persson. "Spelfördelar med minnesinjektion". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177857.

Texto completo
Resumen
Spel på internet är populärare än någonsin i dagens samhälle, det är en industri som omsätter stora summor pengar varje år. Fler och fler människor söker sig till dessa spelsajter. Hur kan du som spelare vara säker på att dina motståndare spelar på samma villkor? Arbetet som utförts på KTH undersöker hur programmet på användarens egen dator kan exploateras för att vinna fördelar i spelet. Endast minnesinjektion kommer att undersökas i detta syfte. Olika tekniker för minnesinjektion samt förslag på hur applikationer kan göras säkrare presenteras i denna rapport. PokerStat är ett program som utvecklats i ett demonstrativt syfte för att visa hur minnesinjektion kan användas för att exploatera en pokerklient. PokerStat räknar ut sannolikheten för att få olika pokerhänder under tiden en aktiv hand spelas. Det visar tydligt vikten av att tänka på informationen som lagras i minnet för applikationen. En undersökning som genomförts parallellt med arbetet visade på att majoriteten av undersökningens deltagare inte klassificerade användning av PokerStat som fusk. Trots detta ville en klar majoritet inte att sådan funktionalitet skulle implementeras direkt i pokerklienten. Resultatet av undersökningen visar på att PokerStat ger en fördel som användare inte vill ska finnas lättillgänglig för alla spelare, men fördelen är inte så pass stor att den bör klassificeras som fusk.
Online games are more popular today than ever; it is an industry that turns over large sums of money every year. More and more people join these online gaming sites. How can you as a player be certain that your opponents are playing fair? The project that was done at KTH investigates how programs on the user's computer can be exploited to gain advantages in online games. Only memory injection is examined for this purpose. Different techniques, as well as suggestions on how application security can be improved, are presented in this report. PokerStat is a program developed for demonstrative purposes to show how memory injection can be used to exploit a poker client. PokerStat calculates the probability of receiving different poker hands whilst in the middle of an active hand. It shows the importance of deciding exactly what information should be stored in the memory of an application. A survey conducted during this project showed that the majority of subjects did not think that use of PokerStat should be classified as cheating. Despite that, the survey also shows that a majority of people do not want the functionality of PokerStat available to everyone in the poker client. The result of the survey shows that PokerStat gives the user an advantage that people do not want available to all users, but the advantage is not big enough to be classified as cheating.
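To give a flavour of the kind of probability such a tool reports, the sketch below runs a small Monte Carlo simulation for one illustrative situation (suited hole cards plus two more of the suit on the flop). It is not PokerStat's code, and the example cards are made up.

```python
# Toy Monte Carlo estimate of a poker probability: chance of completing a flush
# by the river, given suited hole cards and a flop with two more of that suit.
import random

SUITS = "shdc"
RANKS = "23456789TJQKA"
DECK = [r + s for r in RANKS for s in SUITS]

hole = ["As", "Ks"]
flop = ["2s", "7s", "Qd"]
seen = set(hole + flop)
remaining = [c for c in DECK if c not in seen]

TRIALS = 100_000
hits = 0
for _ in range(TRIALS):
    turn, river = random.sample(remaining, 2)
    spades = sum(c.endswith("s") for c in hole + flop + [turn, river])
    hits += spades >= 5
print(f"P(flush by river) ~ {hits / TRIALS:.3f}")  # roughly 0.35 for this situation
```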
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Freitag, Kathrine. "”Tanken är god men det är svårt att hitta” : Vad är det som gör att användare är nöjda med sitt system?" Thesis, Faculty of Arts and Sciences, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-17593.

Texto completo
Resumen

At the pharmaceutical company AstraZeneca there is a web-based file-sharing system, eRoom, which is intended to provide a shared, secure workspace on the Internet so that distributed project groups can carry out joint projects. This thesis has its origin in the fact that several users in Sweden were not entirely satisfied with the system, its availability, structure and cognitive aspects. With eRoom as a case study, I want to investigate which factors in general influence how satisfied users are with a CSCW system.

The work is grounded in technomethodology and communication theory, applied to the domain of CSCW systems. The methods used are a larger questionnaire survey, interviews, and the knowledge arising from the author's membership in the group, gained during the course of the work.

One of the biggest problems is that eRoom is not used for communication and collaboration between distributed project groups. Instead, it is used as an archive and document management system by people who are physically close to each other and have other ways of communicating and collaborating. This is because the system cannot convey its purpose and communicate with the users in a good way, which means that the system does not support the user in what is its fundamental function. Systems of this kind must also be better than existing ways of performing similar tasks, which eRoom is when it comes to document management, but not when it comes to communication and collaboration.

Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Evert, Anna-Karin y Alfrida Mattisson. "Rekommendationssystem med begränsad data : Påverkan av gles data och cold start på rekommendationsalgoritmen Slope One". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186734.

Texto completo
Resumen
I dagens överflöd av information och produkter på nätet, har rekommendationssystem blivit viktiga för att kunna presentera sådant som är intressant och relevant för varje enskild användare. Rekommendationssystem kan både förutsäga produktbetyg och ta fram ett urval av rekommenderade produkter för en användare. Ett vanligt problem för rekommendationssystem är begränsad data, vilket kan försämra korrektheten i systemets rekommendationer och förutsägelser avsevärt. De vanligaste typerna av begränsad data är gles data och cold start. Gles data innebär att det finns en liten mängd produktbetyg i förhållande till antalet användare och produkter i systemet. Cold start är i stället då en ny användare eller produkt ska läggas till i systemet, och därmed saknar betyg. Denna rapport har som syfte att studera hur korrektheten i rekommendationsalgoritmen Slope Ones förutsägelser påverkas av begränsad data. De situationer som undersöks är gles data samt cold start-­situationerna ny användare och ny produkt. Rapporten undersöker även om situationen ny användare kan avhjälpas genom att låta nya användare betygsätta ett litet antal produkter direkt när de läggs till i systemet. Sammanfattningsvis, visar rapportens resultat att Slope One är olika känslig för de olika typerna av begränsad data. I enlighet med tidigare forskning, kan slutsatsen dras att Slope One är okänslig för gles data. Vad gäller cold start, blir korrektheten avsevärt sämre, och dessa situationer kan således sägas vara problematiska för Slope One.
In today’s abundance of online information and products, recommender systems have become essential in finding what is interesting and relevant for each user. Recommender systems both predict product ratings and produce a selection of recommended products for a user. Limited data is a common issue for recommender systems and can greatly impair their ability to produce accurate predictions and recommendations. The most prevailing types of limited data are sparse data and cold start. The data in a dataset is said to be sparse if the number of ratings is small compared to the number of users and products. Cold start, on the other hand, is when a new user or product is added to the system and therefore completely lacks ratings. The objective of this report is to study the impact of limited data on the accuracy of predictions produced by the recommendation algorithm Slope One. More specifically, this report examines the impact of sparse data and the two cold start situations new user and new product. The report also investigates whether asking new users to rate a small number of products, instantly after being added to the system, is a successful strategy in order to cope with the problem of making accurate predictions for new users. In summary, it can be deduced from the results of this report that the sensitivity of Slope One varies between the types of limited data. In accordance with previous studies, Slope One appears insensitive to sparse data. On the other hand, for cold start, the accuracy is seriously affected and cold start can thus be said to be problematic for Slope One.
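Since Slope One is central to the comparison, a minimal weighted Slope One sketch is given below: it learns the average rating deviation between item pairs and predicts a missing rating from them. The toy user-item ratings are invented, and sparse-data and cold-start handling are deliberately left out. Weighting each deviation by the number of co-rating users is what keeps the estimate stable when the rating matrix is sparse.

```python
# Minimal weighted Slope One sketch on a toy user-item rating dictionary.
from collections import defaultdict

ratings = {                       # user -> {item: rating}
    "u1": {"a": 5, "b": 3, "c": 2},
    "u2": {"a": 3, "b": 4},
    "u3": {"b": 2, "c": 5},
}

dev = defaultdict(float)          # (i, j) -> average of r_i - r_j
cnt = defaultdict(int)            # (i, j) -> number of users rating both i and j
for user_ratings in ratings.values():
    for i in user_ratings:
        for j in user_ratings:
            if i != j:
                dev[(i, j)] += user_ratings[i] - user_ratings[j]
                cnt[(i, j)] += 1
for key in dev:
    dev[key] /= cnt[key]

def predict(user, item):
    """Predict user's rating for item from the items the user has already rated."""
    num, den = 0.0, 0
    for j, r in ratings[user].items():
        if (item, j) in dev:
            num += (r + dev[(item, j)]) * cnt[(item, j)]
            den += cnt[(item, j)]
    return num / den if den else None   # None if no co-rated items (cold start)

print(predict("u2", "c"))         # Slope One estimate for user u2, item c
```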
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Carlsson, Anders y Linus Lauri. "Modellering av finansiella data med dolda markovmodeller / Analysis of Financial Data with Hidden Markov Models". Thesis, KTH, Matematisk statistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-105526.

Texto completo
Resumen
The prediction and understanding of market fluctuations are of great interest in today’s society. A common tool for analyzing financial data is the use of different statistical models. This report focuses on examining the stability of a financial data sequence using a statistical model. The sequence used in the report is the logarithmic return of the OMXS30 index between the 30th of March 2005 and the 6th of March 2009. The statistical model used is an HMM (Hidden Markov Model). This model essentially consists of two stochastic processes: a non-observable Markov chain in a finite state space, and a state-dependent process with superimposed white noise. The latter of these two processes is generally known. Therefore, the key is to find how the hidden Markov chain behaves. This is solved with the so-called EM algorithm, which is an iterative method to get the model to converge. An optimization of the model with respect to the number of states is made with the BIC (Bayesian Information Criterion). Thereafter, a validation of the model is done by graphically comparing the quantiles of the model distribution function with the given data. This study shows that by employing an HMM it is possible to describe how the return on the index varies, by examining the probability of changes between the Markov chain's volatility states.
I dagens samhälle finns det ett stort intresse i att kunna analysera finansiella data och skapa sig en uppfattning om hur marknaden utvecklas. Olika statistiska modeller är de vanligaste verktygen för att kunna göra denna analys. Den här rapporten fokuserar på att med hjälp av en statistisk modell undersöka stabiliteten på en finansiell datasekvens. Datasekvensen kommer i rapporten vara de logaritmiska dagsavkastningarna på OMXS30-index mellan den 30e mars 2005 och den 6e mars 2009. Den statistiska modellen som kommer användas är en så kallad dold Markovmodell eller HMM (Hidden Markov Model). Modellen består huvudsakligen av två stokastiska processer: en icke observerbar Markovkedja i ett ändligt tillståndsrum, samt en tillståndsberoende process med ett pålagt vitt brus. Den senare av dessa två processer är vanligtvis känd. Problemet blir därför att försöka hitta hur den dolda Markovkedjan uppför sig. Detta löses med den så kallade EM-algoritmen, vilket är en iterativ metod för att få modellen att konvergera. Därefter genomförs en optimering med avseende på antal tillstånd med BIC (Bayesian Information Criterion), varefter en validering av modellen utförs genom att grafiskt jämföra kvantilerna för modellens fördelningsfunktion med den observerade datamängden. Studien visar att det med hjälp av en HMM är möjligt att beskriva hur avkastningen på index varierar. Detta genom att undersöka hur sannolikt det är för förändringar mellan Markovkedjans volatilitetstillstånd.
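A hedged sketch of this workflow is shown below, using the third-party hmmlearn package as a stand-in for the report's own EM implementation: Gaussian HMMs with different numbers of states are fitted to (here synthetic) log returns, and the number of volatility states is chosen by BIC.

```python
# Fit Gaussian HMMs to log returns and pick the number of states by BIC.
# hmmlearn is an assumption; the report's own EM implementation is not shown.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
log_returns = rng.normal(0, 0.01, size=1000).reshape(-1, 1)  # placeholder for OMXS30 data

best = None
for k in range(2, 5):
    model = GaussianHMM(n_components=k, covariance_type="diag",
                        n_iter=200, random_state=0)
    model.fit(log_returns)                      # EM (Baum-Welch) under the hood
    loglik = model.score(log_returns)
    n_params = k * k + 2 * k - 1                # transitions + initial + means + variances
    bic = -2 * loglik + n_params * np.log(len(log_returns))
    if best is None or bic < best[0]:
        best = (bic, k, model)

print("chosen number of volatility states:", best[1])
```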
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Berezkin, Nikita y Ahmed Heidari. "Berika receptdata med innehållshanteringssystem". Thesis, KTH, Hälsoinformatik och logistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252797.

Texto completo
Resumen
The problem today is that people do not eat climate-smart food; this means that, in the future, the food supply will not suffice, and what we eat can have a negative impact on the greenhouse effect. The problem is that people do not have the time or knowledge to cook climate-smart food. A solution is to use a Content Management System (CMS). A Content Management System processes a selected type of data in a specific way, and the result is then stored. This report addresses the basics and the construction of a CMS that is part of a recommendation system for a user. The system should provide more climate-smart food alternatives that meet the individual's personal needs. The result was that, with the help of data from various sources, an ingredient of a recipe could be enriched with additional information such as nutritional values, allergens, and whether it is vegetarian. Through tests such as performance tests of the CMS's execution time, parsing accuracy and product-matching accuracy, a better result was achieved. Most of the ingredients in the recipes became enriched, which leads to more climate-smart food alternatives, which are better for the environment. The accuracy refers to the matching of ingredients in the recipe against the names of products in stores. The next step was to enrich the recipes using the enriched ingredients.
Problemet i dag är att människor inte äter klimatsmart mat, med resultatet att maten inte kommer räcka till i framtiden. Vad vi äter kan ha en negativ påverkan på växthuseffekten. Problemet är att människor inte har tid eller kunskap att tillaga klimatsmart mat. Detta kan lösas med hjälp av ett innehållshanteringssystem. Ett innehållshanteringssystem bearbetar vald typ av data på ett bestämt sätt som sedan lagras. Denna rapport behandlar grunden och uppbyggnaden av ett innehållshanteringssystem som ska ingå i ett rekommendationssystem för en användare. Systemet ska medföra fler alternativ av klimatsmart mat för att uppnå individens personliga behov. Resultatet blev att man med hjälp av data från olika källor kunde koppla samman ingredienser med information som näringsvärde, allergier samt om kosten är vegetarisk. Genom tester som prestandatest av exekveringstid för innehållshanteringssystemet, träffsäkerhet av parsning och förbättring av träffsäkerheten uppnåddes ett bättre resultat. Majoriteten av ingredienserna i receptet blev berikade, vilket bidrar till fler klimatsmarta matalternativ, vilket är bättre för miljön. Träffsäkerheten avser hur ingredienser i receptet matchas mot namn på produkter i affärer. Nästa steg var att med hjälp av de berikade ingredienserna berika recepten.
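A simplified sketch of the enrichment step is given below: free-text recipe ingredients are fuzzily matched against a small, invented product/nutrition table with Python's difflib, and the matched metadata is attached. The table contents and field names are assumptions, not the report's actual data sources.

```python
# Match free-text recipe ingredients against a hypothetical nutrition table
# and attach the extra data (toy version of the enrichment step).
from difflib import get_close_matches

nutrition_db = {                  # product name -> metadata (illustrative values)
    "wheat flour":    {"kcal_per_100g": 364, "vegetarian": True,  "allergens": ["gluten"]},
    "chicken breast": {"kcal_per_100g": 165, "vegetarian": False, "allergens": []},
    "oat milk":       {"kcal_per_100g": 46,  "vegetarian": True,  "allergens": ["oats"]},
}

def enrich(ingredient: str):
    """Return the ingredient together with matched nutrition data, if any."""
    match = get_close_matches(ingredient.lower(), nutrition_db, n=1, cutoff=0.6)
    return {"ingredient": ingredient,
            "matched_product": match[0] if match else None,
            "data": nutrition_db.get(match[0]) if match else None}

recipe = ["Wheat flour", "oatmilk", "chickn breast"]
for row in (enrich(i) for i in recipe):
    print(row)
```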
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Hellsin, Beppe. "Inomhuslokalisering med Bluetooth 5". Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Datateknik och informatik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-39330.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Ciolek, Thomas S. "Meeting the challenges of met data with MySQL X /". [Denver, Colo.] : Regis University, 2006. http://165.236.235.140/lib/TCiolek2006.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Bergsten, Marcus. "Visualisering av multidimensionella data med hjälp av parallella koordinater". Thesis, University of Gävle, Ämnesavdelningen för datavetenskap, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-4859.

Texto completo
Resumen

Problems often consist of several variables that need to be weighed together before we can decide which alternative is best. We do this time and again. This work attempts to create an integrated web solution that retrieves information and visualises it in a way that simplifies decision making. Parallel coordinates were used for this purpose.
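The thesis builds a web-based solution; the sketch below only illustrates the visualisation idea itself with pandas and matplotlib, on invented decision alternatives and criteria.

```python
# Minimal parallel-coordinates plot: each line is one alternative, each axis one criterion.
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

df = pd.DataFrame({
    "alternative": ["A", "B", "C", "D"],   # hypothetical decision alternatives
    "price":       [3, 8, 5, 6],
    "quality":     [7, 4, 6, 8],
    "delivery":    [2, 9, 5, 3],
    "support":     [6, 5, 8, 4],
})

parallel_coordinates(df, class_column="alternative", colormap="viridis")
plt.title("Alternatives across criteria")
plt.show()
```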

Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Herdahl, Mads. "Lineær mikset modell for kompressor data med en applikasjon". Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10432.

Texto completo
Resumen

StatoilHydro is the operator of the Åsgard oil and gas field outside Trøndelag, Norway, where large compressors for injection, recompression and export of natural gas are installed. The facility transports and stores up to 36 million Sm^3 of gas every day. If the compressors are not optimally operated, large values are lost. This paper describes how to use linear mixed models to model the condition of the compressors. The focus has been on the 1- and 2-stage recompression compressors. Reference data from Dresser-Rand have been used to build the model. Head and flow data are modelled, and the explanatory variables used are molweight, rotational speed and an efficiency indicator. The paper also shows how cross validation is used to give an indication of how well future data points will fit the model. A graphical user interface has been developed to do estimation and plotting with various models. Different models are tested and compared by likelihood methods. For a relatively simple model using three explanatory variables, reasonable predictions are obtained. Results are not as good for very high rotational speeds and high molweights.
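A hedged sketch of such a linear mixed model is shown below using statsmodels: fixed effects for the three explanatory variables and a random intercept per compressor unit. The column names, units, grouping and synthetic values are hypothetical, not the Dresser-Rand reference data.

```python
# Linear mixed model sketch: fixed effects for speed, molweight and efficiency,
# random intercept per (hypothetical) compressor unit.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
units = rng.choice(["unit_A", "unit_B", "unit_C", "unit_D"], n)
speed = rng.normal(8000, 500, n)
molweight = rng.normal(20, 2, n)
efficiency = rng.normal(0.8, 0.05, n)
unit_offset = pd.Series(units).map({"unit_A": 0, "unit_B": 5, "unit_C": -3, "unit_D": 2})
head = 0.01 * speed + 2.0 * molweight + 40.0 * efficiency + unit_offset + rng.normal(0, 3, n)

df = pd.DataFrame({"head": head, "speed": speed, "molweight": molweight,
                   "efficiency": efficiency, "unit": units})

model = smf.mixedlm("head ~ speed + molweight + efficiency", data=df, groups=df["unit"])
result = model.fit()
print(result.summary())
```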

Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Abramsson, Evelina y Kajsa Grind. "Skattning av kausala effekter med matchat fall-kontroll data". Thesis, Umeå universitet, Statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-139644.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Schipani, Angela <1994&gt. "Comprehensive characterization of SDH-deficient GIST using NGS data and iPSC models". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10190/1/Schipani_Angela_thesis.pdf.

Texto completo
Resumen
Gastrointestinal stromal tumors (GIST) are the most common mesenchymal tumors of the gastrointestinal tract, arising from the interstitial cells of Cajal (ICCs) or their precursors. The vast majority of GISTs (75-85% of GIST) harbor KIT or PDGFRA mutations. A small percentage of GIST (about 10-15%) do not harbor any of these driver mutations and have historically been called wild-type (WT). Among them, from 20% to 40% show loss of function of the succinate dehydrogenase complex (SDH), and are also defined as SDH-deficient GIST. SDH-deficient GISTs display distinctive clinical and pathological features, and can be sporadic or associated with Carney triad or Carney-Stratakis syndrome. These tumors arise most frequently in the stomach, with a predilection for the distal stomach and antrum, have a multi-nodular growth, display a histological epithelioid phenotype, and present frequent lympho-vascular invasion. Occurrence of lymph node metastases and an indolent course are representative features of SDH-deficient GISTs. This subset of GIST is known for the immunohistochemical loss of succinate dehydrogenase subunit B (SDHB), which signals the loss of function of the entire SDH complex. The overall aim of my PhD project consists of the comprehensive characterization of SDH-deficient GIST. Throughout the project, clinical, molecular and cellular characterizations were performed using next-generation sequencing (NGS) technologies, which have the potential to allow the identification of molecular patterns useful for the diagnosis and development of novel treatments. Moreover, while there are many different cell lines and preclinical models of KIT/PDGFRA-mutant GIST, no reliable cell model of SDH-deficient GIST that could be used for studies on tumor evolution and in vitro assessment of drug response has currently been developed. Therefore, another aim of this project was to develop a pre-clinical model of SDH-deficient GIST using the novel technology of induced pluripotent stem cells (iPSC).
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Holm, Noah. "Möjligheter och utmaningar med öppna geodata". Thesis, KTH, Geoinformatik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188668.

Texto completo
Resumen
Öppna geodata är idag (2016) ett ofta debatterat ämne och många aktörer i samhället får mer och mer intresse för frågan, i synnerhet i offentlig sektor. Även på riksnivå har politiker börjat verka för öppna geodata, där riksdagen nyligen har beslutat kring ett flertal motioner i frågan. Regeringen har dessutom uppdragit om studier om öppna geodata. Sverige ligger efter övriga nordiska länder, men även andra länder har kommit längre i utvecklingen mot öppna geodata. Idag arbetas mycket på området och myndigheten med ansvar för geodata i Sverige, Lantmäteriet, har börjat öppna datamängder, och fortsätter verka för vidare öppnanden. Detta kandidatexamensarbete har genomförts vid Kungliga Tekniska Högskolan, KTH, i samarbete med Agima Management AB. Arbetet syftar till att studera vilka möjligheter öppna geodata medför och vilka utmaningar en organisation ställs inför när organisationen ska öppna geodata. Genom en litteraturstudie kring olika åsikter, främst ifrån offentlig sektor, och intervjuer med personer verksamma inom geodataområdet sammanfattas dessa möjligheter och utmaningar med öppna geodata. Resultaten av arbetet visar att de främsta möjligheterna med öppna geodata är näringslivsutveckling och innovationskraft samt effektiviseringspotential i offentlig sektor. Vidare leder detta till positiva samhällsekonomiska effekter. De utmaningar en offentlig organisation ställs inför vid ett öppnande av geodata är framför allt finansieringen av geodata. Detta eftersom geodata idag delvis finansieras av avgifter från användare. I förlängningen finns utmaningar med att upprätthålla en hög kvalitet på geodata om geodata till exempel skulle skattefinansieras. Detta blir därför en ständig fråga för tjänstemän och politiker. Slutsatsen är att eftersom möjligheterna övervinner många av utmaningarna, då dessa inte är direkta nackdelar utan något som behöver lösas på ett annat sätt än idag, kan öppna geodata antas bli vanligare i Sverige på sikt. En av anledningarna till den relativt låga hastighet Sverige håller på området verkar vara att tjänstemän och politiker inte är på samma nivå i frågan idag.
Open geodata is today (2016) a highly debated topic and interest in the matter is increasing, especially in the public sector. In the parliament, politicians have started to work for open geodata, and the parliament recently decided on several motions on the matter. Recently, there have also been assignments from the government to study the impacts of open geodata. Sweden is behind the other Nordic countries, and several other countries have also come further towards open geodata. Today a lot of work is being done on open geodata questions, and the Swedish mapping, cadastral and land registration authority, Lantmäteriet, has started to open some of its data and is aiming towards opening more. This Bachelor of Science thesis has been conducted at KTH Royal Institute of Technology, in cooperation with Agima Management AB. The study aims to describe the opportunities that open geodata brings, and the challenges that an organization faces when opening data. These opportunities and challenges are summarized through a literature review of different opinions, mostly from the public sector, and personal interviews with people active in the geodata field. The results show that the foremost opportunities with open geodata are business development and innovation, as well as efficiency improvements in the public sector, which lead to economic gains. The challenges a public organization faces when trying to open geodata are mainly financial. The financial issues come from the current model, where fees from users finance the operations. In the long run, there will also be challenges in sustaining a high quality of geodata, which will be a constant question for officials and politicians if geodata is, for example, financed by taxes. The conclusion is that since the opportunities overcome the challenges, as many of the challenges are not direct drawbacks but rather something that has to be solved in a different way than today, open geodata may become more common in Sweden over time. One of the reasons for Sweden's relatively slow pace in the area seems to be that public officials and politicians are at different levels in the matter today.
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

CARIOLI, GRETA. "CANCER MORTALITY DATA ANALYSIS AND PREDICTION". Doctoral thesis, Università degli Studi di Milano, 2019. http://hdl.handle.net/2434/612668.

Texto completo
Resumen
Tradizionalmente, l’epidemiologia descrittiva viene considerata come un semplice strumento esplorativo. Tuttavia, nel corso degli anni, la maggiore disponibilità e il miglioramento della qualità dei dati epidemiologici hanno portato allo sviluppo di nuove tecniche statistiche che caratterizzano l'epidemiologia moderna. Questi metodi non sono solo esplicativi, ma anche predittivi. In ambito di sanità pubblica, le previsioni degli andamenti futuri di morbilità e mortalità sono essenziali per valutare le strategie di prevenzione, la gestione delle malattie e per pianificare l'allocazione delle risorse. Durante il mio dottorato di ricerca in "Epidemiologia, Ambiente e Sanità Pubblica" ho lavorato all'analisi degli andamenti di mortalità per tumore, utilizzando principalmente la banca dati della World Health Organization (WHO), ma anche quella della Pan American Health Organization, dell’Eurostat, della United Nation Population Division, dello United States Census Bureau e la banca dati del Japanese National Institute of Population. Considerando diversi siti neoplastici e diversi paesi nel mondo, ho calcolato i tassi specifici per ogni classe di età quinquennale (da 0-4 a 80+ o 85+ anni), e singolo anno di calendario o quinquennio. Per poter confrontare i tassi fra diversi paesi, ho calcolato, utilizzando il metodo diretto sulla base della popolazione mondiale standard, i tassi di mortalità standardizzati per età per 100.000 anni-persona. Nella maggior parte delle analisi, ho poi applicato il modello di regressione joinpoint ai tassi standardizzati con lo scopo di individuare gli anni in cui erano avvenuti cambiamenti significativi nell’andamento dei tassi; per ogni segmento individuato dalla regressione joinpoint, ho calcolato le variazioni percentuali annue. Inoltre, mi sono concentrata sulle proiezioni degli andamenti futuri. Con l’obiettivo di individuare il segmento più recente dell’andamento di mortalità, ho applicato il modello di regressione joinpoint al numero di morti in ogni gruppo di età quinquennale. Quindi, ho utilizzato i Modelli Lineari Generalizzati (GLM), scegliendo la distribuzione di Poisson e diverse funzioni link, sui dati dell’ultimo segmento individuato dal modello joinpoint. In particolare, ho considerato le funzioni link identità, logaritmica, quinta potenza e radice quadrata. Ho anche implementato un algoritmo che genera una regressione "ibrida"; questo algoritmo seleziona automaticamente, in base al valore della statistica Akaike Information Criterion (AIC), il modello GLM Poisson più performante, tra quelli generati dalle funzioni link di identità, logaritmica, quinta potenza e radice quadrata, da applicare a ciascuna classe di età quinquennale. La regressione risultante, sull’insieme dei singoli gruppi di età, è quindi una combinazione dei modelli considerati. Quindi, applicando i coefficienti ottenuti dalle quattro regressioni GLM Poisson e dalla regressione ibrida sugli anni di previsione, ho ottenuto le stime predette del numero di morti. A seguire, utilizzando il numero di morti predetto e le popolazioni predette, ho stimato i tassi previsti specifici per età e i corrispondenti intervalli di previsione al 95% (PI). Infine, come ulteriore modello di confronto, ho costruito un modello medio, che semplicemente calcola una media delle stime prodotte dai diversi modelli GLM Poisson. 
Al fine di confrontare fra loro i sei diversi metodi di previsione, ho utilizzato i dati relativi a 21 paesi in tutto il mondo e all'Unione Europea nel suo complesso, e ho considerato 25 maggiori cause di morte. Ho selezionato solo i paesi con oltre 5 milioni di abitanti e solo i paesi per i quali erano disponibili dati di buona qualità (ovvero con almeno il 90% di coverage). Ho analizzato i dati del periodo temporale compreso tra il 1980 e il 2011 e, in particolare, ho applicato i vari modelli sui dati dal 1980 al 2001 con l’idea di prevedere i tassi sul periodo 2002-2011, e ho poi utilizzato i dati effettivamente disponibili dal 2002 al 2011 per valutare le stime predette. Quindi, per misurare l'accuratezza predittiva dei diversi metodi, ho calcolato la deviazione relativa assoluta media (AARD). Questa quantità indica la deviazione media percentuale del valore stimato dal valore vero. Ho calcolato gli AARD su un periodo di previsione di 5 anni (i.e. 2002-2006), e anche su un periodo di 10 anni (i.e. 2002-2011). Dalle analisi è emerso che il modello ibrido non sempre forniva le migliori stime di previsione e, anche quando risultava il migliore, i corrispondenti valori di AARD non erano poi molto lontani da quelli degli altri metodi. Tuttavia, le proiezioni ottenute utilizzando il modello ibrido, per qualsiasi combinazione di sito di tumore e sesso, non sono mai risultate le peggiori. Questo modello è una sorta di compromesso tra le quattro funzioni link considerate. Anche il modello medio fornisce stime intermedie rispetto alle altre regressioni: non è mai risultato il miglior metodo di previsione, ma i suoi AARD erano competitivi rispetto agli altri metodi considerati. Complessivamente, il modello che mostra le migliori prestazioni predittive è il GLM Poisson con funzione link identità. Inoltre, questo metodo ha mostrato AARD estremamente bassi rispetto agli altri metodi, in particolare considerando un periodo di proiezione di 10 anni. Infine, bisogna tenere in considerazione che gli andamenti previsti, e i corrispondenti AARD, ottenuti da proiezioni su periodi di 5 anni sono molto più accurati rispetto a quelli su periodi di 10 anni. Le proiezioni ottenute con questi metodi per periodi superiori a 5 anni perdono in affidabilità e la loro utilità in sanità pubblica risulta quindi limitata. Durante l'implementazione della regressione ibrida e durante le analisi sono rimaste aperte alcune questioni: ci sono altri modelli rilevanti che possono essere aggiunti all'algoritmo? In che misura la regressione joinpoint influenza le proiezioni? Come trovare una regola "a priori" che aiuti a scegliere quale metodo predittivo applicare in base alle varie covariate disponibili? Tutte queste domande saranno tenute in considerazione per gli sviluppi futuri del progetto. Prevedere gli andamenti futuri è un processo complesso, le stime risultanti dovrebbero quindi essere considerate con cautela e solo come indicazioni generali in ambito epidemiologico e di pianificazione sanitaria.
Descriptive epidemiology has traditionally only been concerned with the definition of a research problem’s scope. However, the greater availability and improvement of epidemiological data over the years has led to the development of new statistical techniques that have characterized modern epidemiology. These methods are not only explanatory, but also predictive. In public health, predictions of future morbidity and mortality trends are essential to evaluate strategies for disease prevention and management, and to plan the allocation of resources. During my PhD at the school of “Epidemiology, Environment and Public Health” I worked on the analysis of cancer mortality trends, using data from the World Health Organization (WHO) database, available on electronic support (WHOSIS), and from other databases, including the Pan American Health Organization database, the Eurostat database, the United Nation Population Division database, the United States Census Bureau and the Japanese National Institute of Population database. Considering several cancer sites and several countries worldwide, I computed age-specific rates for each 5-year age-group (from 0–4 to 80+ or 85+ years) and calendar year or quinquennium. I then computed age-standardized mortality rates per 100,000 person-years using the direct method on the basis of the world standard population. I fitted joinpoint models in order to identify the years when significant changes in trends occurred, and I calculated the corresponding annual percent changes. Moreover, I focused on projections. I fitted joinpoint models to the numbers of certified deaths in each 5-year age-group in order to identify the most recent trend slope. Then, I applied Generalized Linear Model (GLM) Poisson regressions, considering different link functions, to the data over the time period identified by the joinpoint model. In particular, I considered the identity link, the logarithmic link, the power five link and the square root link. I also implemented an algorithm that generated a “hybrid” regression; this algorithm automatically selects the best fitting GLM Poisson model, among the identity, logarithmic, power five, and square root link functions, to apply for each age-group according to Akaike Information Criterion (AIC) values. The resulting regression is a combination of the considered models. Thus, I computed the predicted age-specific numbers of deaths and rates, and the corresponding 95% prediction intervals (PIs), using the regression coefficients obtained previously from the four GLM Poisson regressions and from the hybrid GLM Poisson regression. Lastly, as a further comparison model, I implemented an average model, which simply computes a mean of the estimates produced by the different GLM Poisson models considered. In order to compare the six different prediction methods, I used data from 21 countries worldwide and for the European Union as a whole, and I considered 25 major causes of death. I selected countries with over 5 million inhabitants and with good quality data (i.e. with at least 90% coverage). I analysed data for the period between 1980 and 2011 and, in particular, I considered data from 1980 to 2001 as a training dataset, and from 2002 to 2011 as a validation set. To measure the predictive accuracy of the different models, I computed the average absolute relative deviations (AARDs). These indicate the average percent deviation from the true value. I calculated AARDs over a 5-year prediction period (i.e. 2002-2006), as well as over a 10-year period (i.e. 2002-2011). The results showed that the hybrid model did not always give the best predictions, and when it was the best, the corresponding AARD estimates were not very far from those of the other methods. However, the hybrid model projections, for any combination of cancer site and sex, were never the worst. It acted as a compromise between the four considered models. The average model also ranked in an intermediate position: it was never the best predictive method, but its AARDs were competitive compared to the other methods considered. Overall, the method that shows the best predictive performance is the Poisson GLM with an identity link function. Furthermore, this method showed extremely low AARDs compared to other methods, particularly when I considered a 10-year projection period. Finally, we must take into account that predicted trends and corresponding AARDs derived from 5-year projections are much more accurate than those done over a 10-year period. Projections beyond five years with these methods lack reliability and become of limited use in public health. During the implementation of the algorithm and the analyses, several questions emerged: Are there other relevant models that can be added to the algorithm? How much does the joinpoint regression influence projections? How to find an “a priori” rule that helps in choosing which predictive method to apply according to the available covariates? All these questions are set aside for the future developments of the project. Prediction of future trends is a complex procedure; the resulting estimates should be taken with caution and considered only as general indications for epidemiology and health planning.
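The sketch below illustrates the core of this comparison with statsmodels (an assumption: the thesis does not state which software was used): Poisson GLMs with different link functions are fitted to synthetic death counts over calendar year, the link is chosen by AIC, and projections are scored with the AARD. The power-five link is omitted here, and older statsmodels versions spell the link classes in lowercase (identity, log, sqrt).

```python
# Fit Poisson GLMs with different links, pick by AIC, score projections by AARD.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
years = np.arange(1980, 2002)
deaths = np.round(500 - 6 * (years - 1980) + rng.normal(0, 10, len(years)))  # toy counts
X = sm.add_constant(years - 1980)

links = sm.families.links
fits = {name: sm.GLM(deaths, X, family=sm.families.Poisson(link=link())).fit()
        for name, link in {"identity": links.Identity,
                           "log": links.Log,
                           "sqrt": links.Sqrt}.items()}

best = min(fits, key=lambda name: fits[name].aic)        # AIC-based model choice
future_years = np.arange(2002, 2012)
predicted = fits[best].predict(sm.add_constant(future_years - 1980))

observed = 500 - 6 * (future_years - 1980)               # stand-in for validation data
aard = np.mean(np.abs((predicted - observed) / observed)) * 100
print(f"best link: {best}, 10-year AARD = {aard:.1f}%")
```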
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Visnes, Snorre. "Skalering av leseoperasjoner med Apache Derby". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10330.

Texto completo
Resumen

The thesis examines how it is possible to build a cluster based on Derby that supports a high volume of read transactions. Writing is not a performance focus, but is possible through Derby's support for XA. The fact that XA is a tool for performing two-phase commit, not replication, means that writing is only possible for an administrator, mainly due to the lack of transaction sequencing and the lack of automatic clean-up after failed transactions. Testing shows that the scalability of such a system is 100%. There is no coupling between server nodes, and therefore no upper limit on the number of nodes. Since the server nodes are not coupled, they can be spread geographically. Together with a fail-over mechanism in the client, the system can achieve high availability for reads.

Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Borak, Kim y Gabriel Vilén. "Datadriven beteendemodellering med genetisk programmering". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177561.

Texto completo
Resumen
Inom Totalförsvarets forskningsinstitut och Försvarsmakten finns behov av att enklare och mer effektivt (med avseende på kostnad och tid) kunna skapa bättre, mer realistiska och objektiva beteenden för de syntetiska aktörer som ingår i Försvarsmaktens simuleringsbaserade beslutsstöds- och träningsapplikationer. Den traditionella metoden för anskaffning av kunskap om ett beteende, vid utveckling av beteendemodeller, är att arbeta med experter inom ämnesområdet. Den här processen är ofta tidskrävande och dyr. Ett annat problem är att komplexiteten är begränsad av expertens kunskap och dess kognitiva förmågor. För att möta problemen undersöker den här studien en alternativ metod till utvecklandet av beteendemodeller med hjälp av maskininlärningsalgoritmer. Det här tillvägagångsättet kallas datadriven beteendemodellering. Datadriven beteendemodellering skiljer sig avsevärt från den traditionella domänexperts-metoden. I den datadrivna bygger datorprogram självständigt beteendemodellerna utifrån data och observationer utan mänsklig inverkan, medan domänexperts-metoden utvecklas för hand av människor. I studien utvecklas datadriven beteendemodellering med genetisk programmering, en maskininlärningsteknik inspirerad av den biologiska evolutionen. Tekniken går ut på att datorn försöker hitta de bästa möjliga programmen för att lösa en användardefinierad uppgift med hjälp utav, bland annat, en lämplighetsfunktion som betygsätter programmen. I den här studien byggs ett system som utför datadriven beteendemodellering med genetisk programmering för att utveckla en mjukvaruagents beteende i en simulator. I projektet användes en vargsimulator för att försöka lära upp en varg att jaga och sluka flyende får. Simulatorgenererad data används av agenten för att lära sig rörelsebeteenden via utforskning (eng. trial-and-error). Systemet är generellt genom dess förmåga att kunna anpassas till andra simuleringar och olika beteendemodeller. Resultatet av experimenten utförda med systemet visar att en upplärd varg lyckades utveckla ett skickligt datadrivet beteende där den jagade och slukade alla 46 får på under tio minuter. I jämförelse lyckades en fördefinierad skriptad centroidalgoritm äta upp fåren på ungefär åtta minuter. Slutsatsen är att lyckade och effektiva datadrivna beteenden kan utvecklas med genetisk programmering, åtminstone för detta problem, om de tillåts tillräckligt lång simuleringstid, bra konfiguration samt en gynnsam och rättvis lämplighetsfunktion.
Within the Swedish Defence Research Agency and the Swedish Armed Forces there is a need to create better, more realistic and objective behaviours more easily and more efficiently (in terms of cost and time) for the synthetic actors involved in the Armed Forces' simulation-based decision support and training applications. The traditional method for the acquisition of knowledge about a behaviour, in the development of behavioural models, is working with experts on the subject. This process is often time consuming and expensive. Another problem is that the complexity is limited by the expert's knowledge and cognitive abilities. To address these problems, this study investigates an alternative approach to the development of behavioural models. The new method develops behavioural models with machine learning algorithms instead of creating the models manually. This approach is called data-driven behaviour modelling. Data-driven behaviour modelling differs significantly from the traditional subject-matter expert method. In the data-driven approach, a computer program builds the behaviour models autonomously from data and observations without human influence, while in the subject-matter expert method the models are developed manually by humans. In this study, data-driven behaviour modelling with genetic programming is developed, a machine learning technique inspired by biological evolution. The technique lets the computer try to find the best possible programs to solve a user-defined task with the help of, among other things, a fitness function that rates the programs. This thesis presents a system that performs data-driven behaviour modelling with genetic programming to develop a software agent's behaviour in a simulator. The study used a wolf simulator to try to teach a wolf to hunt and devour fleeing sheep. Simulator-generated data is used by the agent to learn behaviours through exploration (trial and error). The system is generally applicable through its ability to be adapted to other simulations with simple adjustments in a configuration file. The system is also adaptable to many different behaviour models through its basic execution structure. The results of the experiments performed with the system show that a trained wolf succeeded in developing a clever data-driven behaviour where it hunted and devoured all 46 sheep in less than ten minutes. In comparison, a predefined centroid algorithm managed to eat the sheep in about eight minutes. The conclusion is that successful and efficient data-driven behaviours can be developed, at least for this problem, if allowed sufficiently long simulation time, a good configuration and a favourable and fair fitness function.
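The sketch below is a heavily simplified stand-in for this idea: instead of evolving program trees as in genetic programming, it evolves two numeric behaviour parameters for a predator chasing prey in a toy simulation, with a fitness function that counts prey caught. The simulation, parameters and fitness are invented for illustration and are not the thesis's system.

```python
# Simplified evolutionary search over behaviour parameters (not tree-based GP):
# candidate behaviours steer a predator toward the nearest prey; fitness = prey caught.
import random

def simulate(genome, steps=200):
    """Run a toy chase; genome = (pursuit_gain, jitter). Returns prey caught."""
    pursuit_gain, jitter = genome
    predator = [0.0, 0.0]
    prey = [[random.uniform(-50, 50), random.uniform(-50, 50)] for _ in range(10)]
    caught = 0
    for _ in range(steps):
        if not prey:
            break
        target = min(prey, key=lambda p: (p[0] - predator[0]) ** 2 + (p[1] - predator[1]) ** 2)
        for axis in range(2):
            direction = target[axis] - predator[axis]
            predator[axis] += pursuit_gain * (1 if direction > 0 else -1) \
                              + random.uniform(-jitter, jitter)
        if abs(target[0] - predator[0]) < 2 and abs(target[1] - predator[1]) < 2:
            prey.remove(target)
            caught += 1
    return caught

def evolve(pop_size=30, generations=20):
    population = [(random.uniform(0, 3), random.uniform(0, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=simulate, reverse=True)   # rank by fitness
        parents = scored[:pop_size // 3]
        # Refill the population with mutated copies of the best candidates.
        population = parents + [
            (max(0.0, p[0] + random.gauss(0, 0.3)), max(0.0, p[1] + random.gauss(0, 0.3)))
            for p in random.choices(parents, k=pop_size - len(parents))
        ]
    return max(population, key=simulate)

random.seed(1)
print("best behaviour parameters:", evolve())
```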
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Helenius, Anna. "GDPR och känsliga personuppgifter : En fallstudie om fackförbunds arbete med Dataskyddsförordningen". Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15328.

Texto completo
Resumen
On 25 May 2018, the new data protection regulation, the GDPR, comes into effect. With it, all member states of the European Union get a common law that tightens previous rules and places higher demands on organisations' processing of personal data. The purpose of this study has been to investigate and map how organisations that process sensitive personal data consider themselves affected by the GDPR, and how they work to meet the requirements of the new regulation. Sensitive personal data are data that reveal, for example, a person's sexual orientation, political opinion, religious conviction or trade union membership; to fulfil the purpose, a case study of six trade unions of different sizes was therefore carried out. Data were collected through interviews with one person from each union who has good insight into, and an overview of, the organisation's GDPR work. The results show that the unions find the new regulation complex and hard to interpret, but that it nevertheless brings positive consequences for both the organisation and its members. All personal data that the unions handle fall directly under sensitive personal data, since they can be traced to union membership, which means the unions consider themselves to face higher demands on information security than many other organisations. Among other things, they face major challenges in how to communicate with their members in the future, since the so-called abuse rule (missbruksregeln) disappears and unstructured material is also covered by the new regulation. It is not possible to say in general which measures the unions have taken to prepare for the new requirements, but it is clear that both technical and administrative security measures are needed. For example, many of the unions are upgrading their IT systems or procuring entirely new case management systems, while also introducing routines for data disposal and for handling personal data incidents.
47

Borg, Olivia. "Educational Data Mining : En kvalitativ studie med inriktning på dataanalys för att hitta mönster i närvarostatistik". Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-16987.

Full text
Abstract
The study focuses on finding patterns in attendance statistics for students who are absent from school. The resulting information can be used as a basis for decision-making by schools or by other organisations interested in EDM applied to attendance statistics. The work followed a qualitative approach with a case study consisting of a literature study and an implementation. The literature study was used to gain an understanding of common approaches within EDM, which then formed the basis for an implementation following the CRISP-DM process. The project resulted in five patterns identified through data analysis. The patterns show absence from a time perspective and per subject, and can serve as a basis for future decision-making.
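
As a purely hypothetical illustration of the kind of descriptive pattern mining summarised above (absence per weekday and per subject), the short Python sketch below aggregates toy absence records; the record format and field names are assumptions and do not come from the thesis data set.

    from collections import Counter
    from datetime import date

    # Toy absence records: (student, date, subject). Format is an assumption.
    absences = [
        ('student_1', date(2019, 2, 4), 'Math'),
        ('student_1', date(2019, 2, 11), 'Math'),
        ('student_2', date(2019, 2, 6), 'English'),
        ('student_3', date(2019, 2, 4), 'Math'),
    ]

    # Count absences per weekday (time perspective) and per subject.
    by_weekday = Counter(d.strftime('%A') for _, d, _ in absences)
    by_subject = Counter(subject for _, _, subject in absences)

    print('absences per weekday:', by_weekday.most_common())
    print('absences per subject:', by_subject.most_common())

In a CRISP-DM workflow this roughly corresponds to the data-understanding and modelling phases, repeated until stable patterns emerge.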
48

Muric, Fatmir. "Öppna data och samordning : En utvärderande fallstudie om implementeringen av öppna data och Riksarkivets arbete med samordning". Thesis, Högskolan i Halmstad, Akademin för lärande, humaniora och samhälle, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-37304.

Full text
49

Lager, Joel y Pontus Hermansson. "Marknadsföringsmixens fortsatta betydelse, med hänsyn till digitaliseringen. : En systematisk litteraturstudie". Thesis, Högskolan Dalarna, Företagsekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:du-27860.

Full text
Abstract
Purpose: The purpose of this study is to discuss how the marketing mix has retained its importance over time in the field of marketing, given the changes that digitalization has brought. Method: The study was conducted as a systematic literature review, based on more than 50 scientific articles relevant to the purpose and collected through academic databases. Results: The study shows that the marketing mix remains a current model in marketing thanks to its pedagogical simplicity and its ability to adapt to prevailing conditions. The four Ps still stand for Product, Price, Place and Promotion, but the change lies in what is included in the ever-growing and shifting subcategories. What opponents of the marketing mix regard as its weakness, namely that the criteria on which the different categories rest have never been specified, also appears to be its strength: without that specification, the four Ps can be adapted to the user and to prevailing conditions, which is why the model has retained its importance despite digitalization and the new circumstances it brings.
50

Wikström, Simon. "Spelutveckling med ML : Skapandet av en space shooter med Unity ML Agents toolkit och speldesign koncept". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176096.

Full text
Abstract
AI is growing rapidly across many industries, and the gaming industry is no exception, ranging from reusable AI brains and behaviours to procedurally generated content (PCG) and more. Most often an AI agent has a single specific goal: produce content, meet a winning condition by eliminating a player, or something similar. This thesis examines whether it is possible to design an AI agent with contradictory assignments in a space-shooter environment and, if so, how such an agent should be designed. The idea is that instead of eliminating the player as quickly as possible, the AI agent eliminates the player gradually and extends the play session, simultaneously working with and against the player. This is the thesis's contribution to the field of machine learning and AI agents. To achieve the desired effect, the Unity ML-Agents Toolkit was used to train an agent with proximal policy optimization (PPO). The resulting agent behaves acceptably but would require more work to behave fully as intended. The work lays a foundation for creating an AI agent for use in a space-shooter game and suggests how to develop the agent further using game design concepts.
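
The following is a conceptual Python sketch, not the thesis code and not the Unity ML-Agents C# API, of a shaped reward that balances the two contradictory goals described above: wearing the player down while keeping the session long. All names and weights are illustrative assumptions; in Unity ML-Agents such a reward would typically be added from the agent's C# code (for example via Agent.AddReward).

    # Conceptual reward shaping for an enemy agent that should eliminate the
    # player over time rather than as quickly as possible. All values are
    # illustrative assumptions, not taken from the thesis.

    def shaped_reward(player_hp_before: float,
                      player_hp_after: float,
                      session_seconds: float,
                      target_seconds: float = 600.0) -> float:
        """Return a per-step reward for the enemy agent."""
        damage = max(0.0, player_hp_before - player_hp_after)
        # Small positive reward for chipping away at the player ...
        reward = 0.1 * damage
        # ... but a penalty for finishing the player off too early.
        if player_hp_after <= 0.0 and session_seconds < target_seconds:
            reward -= (target_seconds - session_seconds) / target_seconds
        # Mild time bonus while the player is still alive, encouraging long sessions.
        if player_hp_after > 0.0:
            reward += 0.001
        return reward

    if __name__ == '__main__':
        # Example step: the player drops from 80 to 75 HP two minutes into the session.
        print(shaped_reward(80.0, 75.0, session_seconds=120.0))

The early-kill penalty scales with how far short of the target session length the episode ends, which is one simple way to encode "eliminate the player over time" as a scalar signal for PPO.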
