Theses on the topic "MEG data"
Consult the 50 best theses for your research on the topic "MEG data".
You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Schönherr, Margit. "Development and Evaluation of Data Processing Techniques in Magnetoencephalography". Doctoral thesis, Universitätsbibliothek Leipzig, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-96832.
Zumer, Johanna Margarete. "Probabilistic methods for neural source reconstruction from MEG data". Diss., ProQuest Dissertations & Theses, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3289309.
Source: Dissertation Abstracts International, Volume: 68-11, Section: B, page: 7485. Adviser: Srikantan Nagarajan.
Yu, Lijun. "Sequential Monte Carlo for Estimating Brain Activity from MEG Data". Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1459528441.
Zaremba, Wojciech. "Modeling the variability of EEG/MEG data through statistical machine learning". Habilitation à diriger des recherches, Ecole Polytechnique X, 2012. http://tel.archives-ouvertes.fr/tel-00803958.
Chowdhury, Rasheda. "Localization of the generators of epileptic activity using Magneto-EncephaloGraphy (MEG) data". Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103740.
Understanding the mechanisms underlying the generation of epileptic activity, and localizing the brain regions involved in such discharges, is of major interest for the presurgical planning of patients with pharmacologically resistant epilepsy. Epileptic spikes are abnormal neuronal discharges generated spontaneously; they are associated with no clinical manifestation and are characteristic of each patient's epilepsy. They can be detected with scalp recordings such as electroencephalography (EEG) or magnetoencephalography (MEG), which measure, respectively, the electric potentials and the magnetic fields generated by synchronously activated neuronal populations. Epileptic spikes can be detected in EEG or MEG provided they stand out from the background activity, which requires generators with sufficient spatial extent. Whereas so-called source localization methods mainly aim at localizing the origin of the generators of these epileptic discharges, the objective of this Master's work is to combine the localization of these generators with the estimation of their spatial extent. Within this project, we developed and validated source localization methods able to recover the generators of epileptic activity together with their spatial extent along the cortical surface. Maximum Entropy on the Mean (MEM) is a source localization technique that has demonstrated such performance on EEG data; the objective here was to adapt the MEM to MEG data and validate its behaviour. The MEM introduces realistic prior knowledge to model the generators of epileptic spikes. From such prior models, two new variants of the MEM were proposed and compared with new methods implemented within the hierarchical Bayesian framework (with inference obtained by restricted maximum likelihood, ReML). Our objective was to compare the relevance of the prior models considered within these two statistical regularization frameworks (MEM and ReML). Using realistic simulations of epileptic activity, the new methods were studied and their performance in terms of spatial localization of the sources and of their spatial extent was evaluated. The results showed that the MEM variants provided the best performance in localizing sources together with their spatial extent. We also present preliminary results illustrating the performance of the proposed methods on clinical data, where the new methods were applied to assess their relevance in the context of presurgical planning. Finally, we investigated the possibility of using MEM- and ReML-type regularization to define model-comparison metrics for the analysis of clinical data, and applied these metrics to evaluate the impact of the forward model on the accuracy of the methods. Our preliminary results suggest that the realistic boundary element model is more appropriate than the spherical model for localizing MEG data.
Molins Jiménez, Antonio. "Multimodal integration of EEG and MEG data using minimum ℓ₂-norm estimates". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40528.
Includes bibliographical references (leaves 69-74).
The aim of this thesis was to study the effects of multimodal integration of electroencephalography (EEG) and magnetoencephalography (MEG) data on the minimum ℓ₂-norm estimates of cortical current densities. We investigated analytically the effect of including EEG recordings in MEG studies versus the addition of new MEG channels. To further confirm these results, clinical datasets comprising concurrent MEG/EEG acquisitions were analyzed. Minimum ℓ₂-norm estimates were computed using MEG alone, EEG alone, and the combination of the two modalities. Localization accuracy of responses to median-nerve stimulation was evaluated to study the utility of combining MEG and EEG.
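As background for readers, the kind of minimum ℓ₂-norm estimation on combined MEG and EEG channels described above can be sketched with the MNE-Python package; this is a minimal illustration, not the thesis code, and it uses MNE's bundled sample files in place of the clinical datasets analyzed in the thesis.

```python
# A minimal sketch (not the thesis code): minimum L2-norm estimates from
# combined MEG and EEG channels, using MNE-Python's bundled sample dataset.
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse

data_path = sample.data_path() / "MEG" / "sample"
raw = mne.io.read_raw_fif(data_path / "sample_audvis_raw.fif")
raw.set_eeg_reference("average", projection=True)  # required for EEG inverses
events = mne.find_events(raw)

# Keep both MEG and EEG channels so the inverse uses the combined data.
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5,
                    picks=["meg", "eeg"], baseline=(None, 0))
evoked = epochs.average()

# Precomputed forward model and noise covariance shipped with the dataset.
fwd = mne.read_forward_solution(data_path / "sample_audvis-meg-eeg-oct-6-fwd.fif")
noise_cov = mne.read_cov(data_path / "sample_audvis-cov.fif")

inv = make_inverse_operator(evoked.info, fwd, noise_cov)
# method="MNE" gives the classical minimum L2-norm source estimate.
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")
```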
Papadopoulo, Théodore. "Contributions and perspectives to computer vision, image processing and EEG/MEG data analysis". Habilitation à diriger des recherches, Université Nice Sophia Antipolis, 2011. http://tel.archives-ouvertes.fr/tel-00847782.
Zavala Fernandez, Heriberto. "Evaluation and comparison of the independent components of simultaneously measured MEG and EEG data". Berlin : Univ.-Verl. der TU, 2009. http://www.ub.tu-berlin.de/index.php?id=2260#c9917.
Dubarry, Anne-Sophie. "Linking neurophysiological data to cognitive functions : methodological developments and applications". Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM5017.
A major issue in Cognitive Psychology is to describe human cognitive functions. From the neuroscientific perspective, measurements of brain activity are collected and processed in order to grasp, at their best resolution, the relevant spatio-temporal features of the signal that can be linked with cognitive operations. The work of this thesis consisted in designing and implementing strategies to overcome the spatial and temporal limitations of the signal processing procedures used to address cognitive issues. In a first study, we demonstrated that the distinction between the classical serial and parallel temporal organizations of picture naming should be addressed at the level of single trials and not on averaged signals. We designed and conducted the analysis of SEEG signals from 5 patients to show that the temporal organization of picture naming involves a parallel processing architecture to a limited degree only. In a second study, we combined SEEG, EEG and MEG into a simultaneous trimodal recording session. A patient was presented with a visual stimulation paradigm while the three types of signals were recorded simultaneously. Averaged activities at the sensor level were shown to be consistent across the three techniques. More importantly, a fine-grained coupling between the amplitudes of the three recording techniques was detected at the level of single evoked responses. This thesis proposes several relevant methodological and conceptual developments and opens up perspectives in which neurophysiological signals shall better inform Cognitive Neuroscientific theories.
Abbasi, Omid [Verfasser], Georg [Gutachter] Schmitz and Markus [Gutachter] Butz. "Retrieving neurophysiological information from strongly distorted EEG and MEG data / Omid Abbasi ; Gutachter: Georg Schmitz, Markus Butz". Bochum : Ruhr-Universität Bochum, 2017. http://d-nb.info/1140223119/34.
Bamidis, Panagiotis D. "Spatio-temporal evolution of interictal epileptic activity : a study with unaveraged multichannel MEG data in association with MRIs". Thesis, Open University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318685.
Whinnett, Mark. "Analysis of face specific visual processing in humans by applying independent components analysis (ICA) to magnetoencephalographic (MEG) data". Thesis, Open University, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.607160.
Ablin, Pierre. "Exploration of multivariate EEG/MEG signals using non-stationary models". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT051.
Independent Component Analysis (ICA) models a set of signals as linear combinations of independent sources. This analysis method plays a key role in electroencephalography (EEG) and magnetoencephalography (MEG) signal processing. Applied to such signals, it makes it possible to isolate interesting brain sources, locate them, and separate them from artifacts. ICA belongs to the toolbox of many neuroscientists and is part of the processing pipeline of many research articles. Yet the most widely used algorithms date back to the 90s. They are often quite slow and stick to the standard ICA model, without more advanced features. The goal of this thesis is to develop practical ICA algorithms to help neuroscientists. We follow two axes. The first one is speed. We consider the optimization problems solved by two of the ICA algorithms most widely used by practitioners: Infomax and FastICA. We develop a novel technique based on preconditioning the L-BFGS algorithm with Hessian approximations. The resulting algorithm, Picard, is tailored for real data applications, where the independence assumption is never entirely true. On M/EEG data, it converges faster than the 'historical' implementations. Another possibility to accelerate ICA is to use incremental methods, which process a few samples at a time instead of the whole dataset. Such methods have gained huge interest in recent years due to their ability to scale well to very large datasets. We propose an incremental algorithm for ICA with important descent guarantees. As a consequence, the proposed algorithm is simple to use and does not have a critical and hard-to-tune parameter like a learning rate. In a second axis, we propose to incorporate noise in the ICA model. Such a model is notoriously hard to fit under the standard non-Gaussian hypothesis of ICA, and would render estimation extremely long. Instead, we rely on a spectral diversity assumption, which leads to a practical algorithm, SMICA. The noise model opens the door to new possibilities, like finer estimation of the sources and the use of ICA as a statistically sound dimension reduction technique. Thorough experiments on M/EEG datasets demonstrate the usefulness of this approach. All algorithms developed in this thesis are open-source and available online. The Picard algorithm is included in MNE, the largest M/EEG processing Python library, and in the Matlab library EEGLAB.
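Since the abstract notes that Picard ships with MNE, a short hedged sketch of invoking it through MNE-Python's ICA interface follows; the input file name and the excluded component indices are placeholders, and the python-picard package must be installed.

```python
# A hedged sketch: running the Picard solver through MNE-Python's ICA
# interface. File name and excluded components are illustrative only.
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("subject_raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=1.0, h_freq=None)  # high-pass filtering stabilizes ICA

ica = ICA(n_components=20, method="picard", random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]  # e.g. components judged ocular/cardiac on inspection
raw_clean = ica.apply(raw.copy())
```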
Ewald, Arne [Verfasser], Klaus-Robert [Akademischer Betreuer] Müller, Andreas [Akademischer Betreuer] Daffertshofer and Guido [Akademischer Betreuer] Nolte. "Novel multivariate data analysis techniques to determine functionally connected networks within the brain from EEG or MEG data / Arne Ewald. Gutachter: Klaus-Robert Müller ; Andreas Daffertshofer ; Guido Nolte". Berlin : Technische Universität Berlin, 2014. http://d-nb.info/1067387773/34.
Roux, Frédéric [Verfasser], Peter J. [Akademischer Betreuer] Uhlhaas, Wolf [Akademischer Betreuer] Singer and Christian [Akademischer Betreuer] Fiebach. "Alpha and gamma-band oscillations in MEG-data: networks, function and development / Frédéric Roux. Gutachter: Wolf Singer ; Christian Fiebach. Betreuer: Peter J. Uhlhaas". Frankfurt am Main : Univ.-Bibliothek Frankfurt am Main, 2013. http://d-nb.info/1043978194/34.
Tucciarelli, Raffaele. "Characterizing the spatiotemporal profile and the level of abstractness of action representations: neural decoding of magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) data". Doctoral thesis, University of Trento, 2015. https://hdl.handle.net/11572/368799; http://eprints-phd.biblio.unitn.it/1592/1/PhD_thesis-_Raffaele_Tucciarelli.pdf.
Jas, Mainak. "Contributions pour l'analyse automatique de signaux neuronaux". Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0021.
Electrophysiology experiments have long relied upon small cohorts of subjects to uncover statistically significant effects of interest. However, the low sample size translates into low statistical power, which leads to a high false discovery rate and, hence, a low rate of reproducibility. Addressing this issue means solving two related problems: first, how do we facilitate data sharing and reusability to build large datasets; and second, once big datasets are available, what tools can we build to analyze them? In the first part of the thesis, we introduce a new standard for sharing data known as the Brain Imaging Data Structure (BIDS), and its extension MEG-BIDS. Next, we introduce the reader to a typical electrophysiological pipeline analyzed with the MNE software package. We consider the different choices that users have to deal with at each stage of the pipeline and provide standard recommendations. Next, we focus our attention on tools to automate the analysis of large datasets. We propose an automated tool to remove segments of data corrupted by artifacts, and we develop an outlier detection algorithm based on tuning rejection thresholds. More importantly, we use the HCP data, which is manually annotated, to benchmark our algorithm against existing state-of-the-art methods. Finally, we use convolutional sparse coding to uncover structures in neural time series. We reformulate the existing approach in computer vision as a maximum a posteriori (MAP) inference problem to deal with heavy-tailed distributions and high-amplitude artifacts. Taken together, this thesis represents an attempt to shift from slow and manual methods of analysis to automated, reproducible analysis.
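The automated rejection-threshold tool described here corresponds, to the best of our reading, to the autoreject package associated with this line of work; below is a minimal sketch assuming that package, with a hypothetical epochs file name.

```python
# A minimal sketch, assuming the autoreject package, which implements
# data-driven rejection thresholds of the kind described above.
import mne
from autoreject import AutoReject

epochs = mne.read_epochs("subject-epo.fif")  # hypothetical file
ar = AutoReject(n_interpolate=[1, 4, 8], random_state=0)
# Per-channel thresholds are learned by cross-validation; bad sensors are
# interpolated and epochs with too many bad channels are dropped.
epochs_clean, reject_log = ar.fit_transform(epochs, return_log=True)
```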
Carrara, Igor. "Méthodes avancées de traitement des BCI-EEG pour améliorer la performance et la reproductibilité de la classification". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4033.
Electroencephalography (EEG) non-invasively measures the brain's electrical activity through the electromagnetic fields generated by synchronized neuronal activity. This allows for the collection of multivariate time series capturing a trace of the brain's electrical activity at the level of the scalp. At any given time instant, the measurements recorded by these sensors are linear combinations of the electrical activities of a set of underlying sources located in the cerebral cortex. These sources interact with one another according to a complex biophysical model, which remains poorly understood. In certain applications, such as surgical planning, it is crucial to accurately reconstruct these cortical electrical sources, a task known as solving the inverse problem of source reconstruction. While intellectually satisfying and potentially more precise, this approach requires the development and application of a subject-specific model, which is both expensive and technically demanding.

However, it is often possible to use the EEG measurements directly at the sensor level and extract information about brain activity. This significantly reduces the complexity of the data analysis compared to source-level approaches. These measurements can be used for a variety of applications, including monitoring cognitive states, diagnosing neurological conditions, and developing brain-computer interfaces (BCI). Indeed, even though we do not have a complete understanding of brain signals, it is possible to establish direct communication between the brain and an external device using BCI technology. This work is centered on EEG-based BCIs, which have applications in various medical fields, such as rehabilitation and communication for disabled individuals, as well as in non-medical areas, including gaming and virtual reality.

Despite its vast potential, BCI technology has not yet seen widespread use outside of laboratories. The primary objective of this PhD research is to address some of the current limitations of BCI-EEG technology. Autoregressive models, even though they are not completely justified by biology, offer a versatile framework for analyzing EEG measurements effectively. By leveraging these models, it is possible to create algorithms that combine nonlinear systems theory with the Riemannian-based approach to classify brain activity. The first contribution of this thesis is in this direction, with the creation of the Augmented Covariance Method (ACM). Building upon this foundation, the Block-Toeplitz Augmented Covariance Method (BT-ACM) represents a notable evolution, enhancing computational efficiency while maintaining efficacy and versatility. Finally, the Phase-SPDNet work enables the integration of such methodologies into a Deep Learning approach that is particularly effective with a limited number of electrodes. Additionally, we proposed a pseudo-online framework to better characterize the efficacy of BCI methods, and the largest EEG-based BCI reproducibility study using the Mother of all BCI Benchmarks (MOABB) framework. This research seeks to promote greater reproducibility and trustworthiness in BCI studies. In conclusion, we address two critical challenges in the field of EEG-based brain-computer interfaces: enhancing performance through advanced algorithmic development at the sensor level, and improving reproducibility within the BCI community.
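The thesis' own ACM/BT-ACM code is not reproduced here, but the Riemannian classification baseline such methods build on can be sketched with the pyriemann package; the random arrays below are placeholders standing in for real EEG epochs and labels.

```python
# A hedged sketch of a Riemannian EEG classification baseline with
# pyriemann; random data stand in for trials x channels x samples epochs.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8, 256))  # placeholder EEG trials
y = rng.integers(0, 2, 100)             # placeholder class labels

# Covariance matrices per trial, then minimum-distance-to-mean in the
# Riemannian geometry of symmetric positive definite matrices.
clf = make_pipeline(Covariances(estimator="oas"), MDM(metric="riemann"))
print(cross_val_score(clf, X, y, cv=5).mean())
```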
Högberg, Martin and Victor Olofsson. "Innovera med data : Bidra till en mer hållbar livsmedelsindustri med hjälp av Design thinking". Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-78336.
Texto completoZiehe, Andreas. "Blind source separation based on joint diagonalization of matrices with applications in biomedical signal processing". Phd thesis, [S.l. : s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=976710331.
Texto completoCederberg, Petter. "Kan e-tjänster förenklas och bli mer motiverande med gamification och öppna data? : En kvalitativ studie". Thesis, Karlstads universitet, Handelshögskolan, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-42940.
Texto completoHörberg, Eric. "Förutsäga data för lastbilstrafik med maskininlärning". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205190.
Artificial neural networks are used frequently today to find patterns in large amounts of data. If one can see the patterns, one can to some extent see the future, and how well this works for truck traffic is researched in this report. Historical data about truck traffic is used with a feed-forward artificial neural network to create forecasts for arrivals of trucks at a logistics location. With a program created to test which data structure and which parameters give the best results for the artificial neural network, it is researched which type of forecast gives the best result. The two forecasts tested are the time to the next truck's arrival and the intensity of truck arrivals during the next hour. The best forecasts were created when the intensity of trucks for the next hour was predicted, and the forecasts were shown to be better than those that present statistical methods can give.
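As an illustration only (not the thesis model or data), a feed-forward network predicting the next hour's arrival intensity from a sliding window of past hourly counts might look like this with scikit-learn; the simulated counts and window length are invented.

```python
# Illustrative sketch: forecast next-hour truck-arrival intensity from a
# sliding window of past hourly counts with a feed-forward network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
arrivals = rng.poisson(5, 1000).astype(float)  # placeholder hourly counts
window = 24
X = np.array([arrivals[i:i + window] for i in range(len(arrivals) - window)])
y = arrivals[window:]

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X[:-100], y[:-100])
print("held-out R^2:", model.score(X[-100:], y[-100:]))
```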
Konnskog, Magnus. "Nyttor med öppna data : Sydvästlänken som fallstudie". Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-21700.
Texto completoMellberg, Amanda y Emma Skog. "Artificiell Intelligens inom rekryteringsprocessen : objektivitet med subjektiv data?" Thesis, Högskolan i Borås, Akademin för bibliotek, information, pedagogik och IT, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-15078.
Artificial Intelligence (AI) has several areas of use, such as robotics, facial recognition and decision-making support. Organizations will use AI more to meet challenges within Human Resources (HR) over the next five years, indicating that AI is likely to become a more common occurrence in the recruitment process. One of the most important assets of a company is its employees, and incorrect recruitments can lead to high costs. With machine learning and AI systems as decision makers, it is important to think about what data is provided to these systems, since one of the risks of machine learning within AI is that you do not know what the machines learn as they learn by themselves. A larger amount of data does not necessarily lead to more objective results, and the risk of directly encoding discrimination still exists, because the data the AI system is provided with can contain bias. It has also been found that candidates do not want to be judged on political views, relationships or anything else that can be gained through big data and data mining. The purpose of the study is to provide a deeper understanding of what is needed to automate the recruitment process using AI and machine learning, and to design a list of things for companies to keep in mind during a possible implementation. The study used three methods for empirical gathering, all of them qualitative. Interviews and a survey collected the data, which was analyzed in Excel 2016 and Google Docs. The interviews were conducted in several stages and aimed at two employees working at two different recruiting companies. The survey was aimed primarily at individuals graduating within this year. The selection of participants was made for the purpose of the study, and on some occasions a convenience selection was made. The results show that the recruiters spend a lot of time screening candidates and do this manually. The survey shows that future candidates have a neutral stance when it comes to trusting an AI system to perform the screening. The respondent in the follow-up interview says that automation using AI would facilitate the work and agrees with the survey respondents regarding the pros and cons of AI, but at the same time would not rely on the results. Further, the respondent believes that the recruitment process will continue in the automated direction. The results of the study may be used by recruitment companies that are considering introducing AI into their recruitment processes.
Johansson, Oskar. "Parafrasidentifiering med maskinklassificerad data : utvärdering av olika metoder". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167039.
Texto completoLundqvist, Patrik y Michael Enhörning. "Speltestning : Med Fuzzy Logic". Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20260.
Texto completoVianello, Dario <1987>. "Data management and data analysis in the large European projects GEHA (GEnetics of Healthy Aging) and NU-AGE (NUtrition and AGEing): a bioinformatic approach". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amsdottorato.unibo.it/6819/1/phd_thesis_final.pdf.
Texto completoVianello, Dario <1987>. "Data management and data analysis in the large European projects GEHA (GEnetics of Healthy Aging) and NU-AGE (NUtrition and AGEing): a bioinformatic approach". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amsdottorato.unibo.it/6819/.
Texto completoAspegren, Villiam y Kim Persson. "Spelfördelar med minnesinjektion". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177857.
Online games are more popular today than ever; it is an industry that turns over large sums of money every year, and more and more people join these online gaming sites. How can you as a player be certain that your opponents are playing fair? This project, carried out at KTH, investigates how programs on the user's computer can be exploited to gain advantages in online games. Only memory injection is examined for this purpose. Different techniques, as well as suggestions on how application security can be improved, are presented in this report. PokerStat is a program developed for demonstration purposes to show how memory injection can be used to exploit a poker client. PokerStat calculates the probability of receiving different poker hands in the middle of an active hand. It shows the importance of deciding exactly what information should be stored in the memory of an application. A survey conducted during this project showed that the majority of subjects did not think that use of PokerStat should be classified as cheating. Despite that, the survey also shows that a majority of people do not want the functionality of PokerStat available to everyone in the poker client. The result of the survey thus shows that PokerStat gives the user an advantage that people do not want available to all users, but that the advantage is not considered big enough to be classified as cheating.
Freitag, Kathrine. "”Tanken är god men det är svårt att hitta” : Vad är det som gör att användare är nöjda med sitt system?" Thesis, Faculty of Arts and Sciences, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-17593.
At the pharmaceutical company AstraZeneca there is a web-based file-sharing system, eRoom, intended to provide a shared, secure workspace on the Internet so that distributed project groups can carry out joint projects. This thesis has its origin in the fact that several users in Sweden were not entirely satisfied with the system, its availability, its structure, and its cognitive aspects. With eRoom as a case study, I investigate which factors in general influence how satisfied users are with a CSCW system.
The work is grounded in technomethodology and communication theory, applied to the domain of CSCW systems. The methods used are a larger questionnaire survey, interviews, and the knowledge arising from the author's membership of the group, gained during the course of the work.
One of the biggest problems is that eRoom is not used for communication and collaboration between distributed project groups. Instead, it is used as an archive and document management system by people who are physically close to each other and have other ways to communicate and collaborate. This is because the system cannot convey its function and communicate with its users well, which means that the system does not support the user in what is its fundamental purpose. Systems of this kind must also be better than existing ways of performing similar tasks, which eRoom is when it comes to document management, but not when it comes to communication and collaboration.
Evert, Anna-Karin and Alfrida Mattisson. "Rekommendationssystem med begränsad data : Påverkan av gles data och cold start på rekommendationsalgoritmen Slope One". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186734.
In today's abundance of online information and products, recommender systems have become essential in finding what is interesting and relevant for each user. Recommender systems both predict product ratings and produce a selection of recommended products for a user. Limited data is a common issue for recommender systems and can greatly impair their ability to produce accurate predictions and recommendations. The most prevailing types of limited data are sparse data and cold start. The data in a dataset is said to be sparse if the number of ratings is small compared to the number of users and products. Cold start, on the other hand, is when a new user or product is added to the system and therefore completely lacks ratings. The objective of this report is to study the impact of limited data on the accuracy of predictions produced by the recommendation algorithm Slope One. More specifically, this report examines the impact of sparse data and the two cold start situations new user and new product. The report also investigates whether asking new users to rate a small number of products, instantly after being added to the system, is a successful strategy in order to cope with the problem of making accurate predictions for new users. In summary, it can be deduced from the results of this report that the sensitivity of Slope One varies between the types of limited data. In accordance with previous studies, Slope One appears insensitive to sparse data. On the other hand, for cold start, the accuracy is seriously affected and cold start can thus be said to be problematic for Slope One.
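For readers unfamiliar with the algorithm under test, a compact weighted Slope One sketch follows; the tiny ratings dictionary is invented for the example and is not the thesis data.

```python
# A compact weighted Slope One sketch; ratings are invented placeholders.
from collections import defaultdict

ratings = {
    "ann":  {"item1": 5.0, "item2": 3.0},
    "ben":  {"item1": 4.0, "item2": 2.0, "item3": 4.0},
    "cara": {"item2": 4.0, "item3": 5.0},
}

# Average rating difference between every pair of co-rated items.
diffs, counts = defaultdict(float), defaultdict(int)
for user_ratings in ratings.values():
    for i, ri in user_ratings.items():
        for j, rj in user_ratings.items():
            if i != j:
                diffs[(i, j)] += ri - rj
                counts[(i, j)] += 1

def predict(user, target):
    """Weighted Slope One prediction of `user`'s rating for `target`."""
    num = den = 0.0
    for j, rj in ratings[user].items():
        if (target, j) in counts:
            c = counts[(target, j)]
            num += (diffs[(target, j)] / c + rj) * c
            den += c
    return num / den if den else None  # None illustrates the cold-start case

print(predict("ann", "item3"))
```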
Carlsson, Anders and Linus Lauri. "Modellering av finansiella data med dolda markovmodeller / Analysis of Financial Data with Hidden Markov Models". Thesis, KTH, Matematisk statistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-105526.
In today's society there is great interest in being able to analyse financial data and form a picture of how the market develops, and various statistical models are the most common tools for this analysis. This report focuses on using a statistical model to examine the stability of a financial data sequence. The data sequence in the report is the logarithmic daily returns of the OMXS30 index between 30 March 2005 and 6 March 2009. The statistical model used is a so-called hidden Markov model, or HMM. The model consists mainly of two stochastic processes: a non-observable Markov chain in a finite state space, and a state-dependent process with superimposed white noise. The latter of these two processes is usually known; the problem therefore becomes finding out how the hidden Markov chain behaves. This is solved with the EM algorithm, an iterative method for making the model converge. Thereafter, an optimization with respect to the number of states is performed with BIC (Bayesian Information Criterion), after which the model is validated by graphically comparing the quantiles of the model's distribution function with the observed data. The study shows that an HMM makes it possible to describe how the index returns vary, by examining how likely transitions are between the volatility states of the Markov chain.
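A hedged sketch of this approach with the hmmlearn package follows; the log returns are simulated here, and the thesis' own data and implementation are not reproduced.

```python
# A hedged sketch: fit Gaussian HMMs to (simulated) daily log returns and
# choose the number of hidden states by BIC, as described above.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
X = (rng.standard_t(4, 1000) * 0.01).reshape(-1, 1)  # placeholder returns

best_model, best_bic = None, np.inf
for n in range(1, 5):
    model = GaussianHMM(n_components=n, n_iter=200, random_state=0).fit(X)
    k = n * n + 2 * n - 1          # free parameters of a diagonal-Gaussian HMM
    bic = -2 * model.score(X) + k * np.log(len(X))
    if bic < best_bic:
        best_model, best_bic = model, bic

states = best_model.predict(X)     # most likely volatility regime per day
```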
Berezkin, Nikita and Ahmed Heidari. "Berika receptdata med innehållshanteringssystem". Thesis, KTH, Hälsoinformatik och logistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252797.
The problem today is that people do not eat climate-smart food, with the result that food will not suffice in the future. What we eat can have a negative impact on the greenhouse effect, and people lack the time or knowledge to cook climate-smart food. This can be addressed with the help of a content management system, which processes a chosen type of data in a defined way and then stores it. This report covers the foundation and construction of a content management system intended to be part of a recommendation system for a user. The system should provide more climate-smart food alternatives to meet the individual's personal needs. The result was that, with the help of data from different sources, ingredients could be linked to information such as nutritional value, allergies, and whether the diet is vegetarian. Through tests such as performance testing of the content management system's execution time, measuring parsing accuracy, and improving that accuracy, a better result was achieved. The majority of the ingredients in the recipes were enriched, which leads to more climate-smart food alternatives, which is better for the environment. Accuracy here means ingredients in a recipe being matched against product names in stores. The next step was to use the enriched ingredients to enrich the recipes.
Hellsin, Beppe. "Inomhuslokalisering med Bluetooth 5". Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Datateknik och informatik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-39330.
Texto completoCiolek, Thomas S. "Meeting the challenges of met data with MySQL X /". [Denver, Colo.] : Regis University, 2006. http://165.236.235.140/lib/TCiolek2006.pdf.
Texto completoBergsten, Marcus. "Visualisering av multidimensionella data med hjälp av parallella koordinater". Thesis, University of Gävle, Ämnesavdelningen för datavetenskap, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-4859.
Problems often consist of several variables that need to be weighed together before we can decide which alternative is best, something we do every day. This work attempts to create an integrated web solution that retrieves information and visualizes it in a way that simplifies decision-making. Parallel coordinates were used for this purpose.
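A minimal sketch of the visualization idea, using pandas and matplotlib rather than the integrated web solution built in the thesis; the alternatives and criteria are invented.

```python
# A minimal parallel-coordinates sketch for comparing multivariate
# alternatives; data are invented placeholders.
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

df = pd.DataFrame({
    "alternative": ["A", "B", "C"],
    "price":    [3, 1, 2],
    "quality":  [2, 3, 1],
    "delivery": [1, 2, 3],
})
parallel_coordinates(df, class_column="alternative")
plt.show()
```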
Herdahl, Mads. "Lineær mikset modell for kompressor data med en applikasjon". Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10432.
StatoilHydro is the operator of the Åsgard oil and gas field outside of Trøndelag, Norway, where large compressors for injection, recompression and export of natural gas are installed. The facility transports and stores up to 36 million Sm³ of gas every day; if the compressors are not optimally operated, large values are lost. This paper describes how to use linear mixed models to model the condition of the compressors. The focus has been on the 1- and 2-stage recompression compressors. Reference data from Dresser-Rand have been used to build the model. Head and flow data are modelled, and the explanatory variables used are molweight, rotational speed and an efficiency indicator. The paper also shows how cross-validation is used to give an indication of how well future data points will fit the model. A graphical user interface has been developed for estimation and plotting with various models. Different models are tested and compared by likelihood methods. For a relatively simple model using three explanatory variables, reasonable predictions are obtained. Results are less good for very high rotational speeds and high molweights.
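A hedged sketch of a linear mixed model of this kind with statsmodels; the CSV file and column names are invented stand-ins for the Dresser-Rand reference data described above.

```python
# A hedged sketch of a linear mixed model for compressor maps; the file
# and the column names (head, flow, speed, molweight, efficiency,
# compressor_id) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("compressor_reference.csv")  # hypothetical data file

# Fixed effects for the operating variables, random intercept per machine.
model = smf.mixedlm("head ~ flow + speed + molweight + efficiency",
                    data=df, groups=df["compressor_id"])
result = model.fit()
print(result.summary())
```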
Abramsson, Evelina and Kajsa Grind. "Skattning av kausala effekter med matchat fall-kontroll data". Thesis, Umeå universitet, Statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-139644.
Texto completoSchipani, Angela <1994>. "Comprehensive characterization of SDH-deficient GIST using NGS data and iPSC models". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10190/1/Schipani_Angela_thesis.pdf.
Texto completoHolm, Noah. "Möjligheter och utmaningar med öppna geodata". Thesis, KTH, Geoinformatik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188668.
Open geodata is today (2016) a highly debated topic, and interest in the matter is increasing, especially in the public sector. In the parliament, politicians have started to work for open geodata, and the parliament recently decided on several motions on the matter. Recently, there have also been assignments from the government to study the impacts of open geodata. Sweden is behind the other Nordic countries, and several other countries have evolved further towards open geodata as well. Today there is much work on open geodata questions, and the Swedish mapping, cadastral and land registration authority, Lantmäteriet, has started to open some of its data and is aiming to open more. This Bachelor of Science thesis was conducted at KTH Royal Institute of Technology, in cooperation with Agima Management AB. The study aims to describe the opportunities that open geodata brings and the challenges that an organization faces when opening data. These opportunities and challenges are summarized through a literature review of different opinions, mostly from the public sector, and personal interviews with persons in the geodata field. The results show that the foremost opportunities with open geodata are in developing business and innovation, as well as efficiency improvements in the public sector, which lead to economic gains. The challenges a public organization faces when trying to open geodata are mainly financial. The financial issues come from the current model, where fees from users finance the operations. By extension, there will also be challenges in sustaining a high quality of geodata, which will be a constant question for officials and politicians if geodata is, for example, financed by taxes. The conclusion is that since the opportunities outweigh the challenges, as many of the challenges are not direct drawbacks but rather something that has to be solved in a different way than today, open geodata may become more normal in Sweden eventually. One of the reasons for Sweden's relatively slow progress in the area seems to be that public officials and politicians are at different levels in the matter today.
Carioli, Greta. "Cancer Mortality Data Analysis and Prediction". Doctoral thesis, Università degli Studi di Milano, 2019. http://hdl.handle.net/2434/612668.
Descriptive epidemiology has traditionally only been concerned with the definition of a research problem's scope. However, the greater availability and improvement of epidemiological data over the years has led to the development of new statistical techniques that have characterized modern epidemiology. These methods are not only explanatory, but also predictive. In public health, predictions of future morbidity and mortality trends are essential to evaluate strategies for disease prevention and management, and to plan the allocation of resources. During my PhD at the school of "Epidemiology, Environment and Public Health" I worked on the analysis of cancer mortality trends, using data from the World Health Organization (WHO) database, available on electronic support (WHOSIS), and from other databases, including the Pan American Health Organization database, the Eurostat database, the United Nations Population Division database, the United States Census Bureau and the Japanese National Institute of Population database. Considering several cancer sites and several countries worldwide, I computed age-specific rates for each 5-year age group (from 0–4 to 80+ or 85+ years) and calendar year or quinquennium. I then computed age-standardized mortality rates per 100,000 person-years using the direct method on the basis of the world standard population. I fitted joinpoint models in order to identify the years when significant changes in trends occurred, and I calculated the corresponding annual percent changes. Moreover, I focused on projections. I fitted joinpoint models to the numbers of certified deaths in each 5-year age group in order to identify the most recent trend slope. Then, I applied Generalized Linear Model (GLM) Poisson regressions, considering different link functions, to the data over the time period identified by the joinpoint model. In particular, I considered the identity link, the logarithmic link, the power-five link and the square-root link. I also implemented an algorithm that generates a "hybrid" regression; this algorithm automatically selects the best-fitting GLM Poisson model, among the identity, logarithmic, power-five, and square-root link functions, for each age group according to Akaike Information Criterion (AIC) values. The resulting regression is a combination of the considered models. Thus, I computed the predicted age-specific numbers of deaths and rates, and the corresponding 95% prediction intervals (PIs), using the regression coefficients obtained from the four GLM Poisson regressions and from the hybrid GLM Poisson regression. Lastly, as a further comparison model, I implemented an average model, which simply computes a mean of the estimates produced by the different GLM Poisson models considered. In order to compare the six prediction methods, I used data from 21 countries worldwide and for the European Union as a whole, and I considered 25 major causes of death. I selected countries with over 5 million inhabitants and with good quality data (i.e. with at least 90% coverage). I analysed data for the period between 1980 and 2011; in particular, I considered data from 1980 to 2001 as a training dataset, and from 2002 to 2011 as a validation set. To measure the predictive accuracy of the different models, I computed the average absolute relative deviations (AARDs), which indicate the average percent deviation from the true value. I calculated AARDs over a 5-year prediction period (i.e. 2002-2006), as well as over a 10-year period (i.e. 2002-2011). The results showed that the hybrid model did not always give the best predictions, and when it was the best, the corresponding AARD estimates were not very far from the other methods. However, the hybrid model projections, for any combination of cancer site and sex, were never the worst; it acted as a compromise between the four considered models. The average model also ranked in an intermediate position: it was never the best predictive method, but its AARDs were competitive compared to the other methods considered. Overall, the method that showed the best predictive performance was the Poisson GLM with an identity link function. Furthermore, this method showed extremely low AARDs compared to other methods, particularly when considering a 10-year projection period. Finally, we must take into account that predicted trends and corresponding AARDs derived from 5-year projections are much more accurate than those over a 10-year period. Projections beyond five years with these methods lack reliability and are of limited use in public health. During the implementation of the algorithm and the analyses, several questions emerged: Are there other relevant models that can be added to the algorithm? How much does the joinpoint regression influence projections? How can we find an "a priori" rule that helps in choosing which predictive method to apply according to the available covariates? All these questions are set aside for future developments of the project. Prediction of future trends is a complex procedure; the resulting estimates should be taken with caution and considered only as general indications for epidemiology and health planning.
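A hedged sketch of the projection step described above, fitting GLM Poisson regressions with different link functions to age-specific death counts and choosing by AIC; the input file and columns are invented, and the "power five" link is interpreted here as a Power(5) link, which is an assumption about the thesis' definition.

```python
# A hedged sketch: GLM Poisson fits of age-specific death counts over the
# last joinpoint segment, comparing link functions by AIC. The file and
# its columns (year, deaths) are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("age_group_deaths.csv")   # hypothetical: year, deaths
X = sm.add_constant(df["year"])

links = {
    "identity": sm.families.links.Identity(),
    "log":      sm.families.links.Log(),
    "sqrt":     sm.families.links.Power(power=0.5),
    "power5":   sm.families.links.Power(power=5),  # assumed interpretation
}
fits = {name: sm.GLM(df["deaths"], X,
                     family=sm.families.Poisson(link=link)).fit()
        for name, link in links.items()}

best = min(fits, key=lambda name: fits[name].aic)  # AIC choice, as in the text
pred = fits[best].get_prediction(X).summary_frame(alpha=0.05)  # 95% intervals
```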
Visnes, Snorre. "Skalering av leseoperasjoner med Apache Derby". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10330.
The thesis examines how it is possible to build a cluster based on Derby that supports a high volume of read transactions. Writing is not a performance focus, but is possible through Derby's support for XA. The fact that XA is a tool for performing two-phase commit, not replication, means that writing is only possible for an administrator, mainly due to the lack of transaction sequencing and of automatic cleanup after failed transactions. Testing shows that such a system scales at 100%. There is no interconnection between server nodes, and thus no upper limit on the number of nodes. The absence of interconnection also means that the server nodes can be spread geographically. Together with a fail-over mechanism in the client, the system can achieve high availability for reads.
Borak, Kim and Gabriel Vilén. "Datadriven beteendemodellering med genetisk programmering". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177561.
Within the Swedish Defence Research Agency and the Swedish Armed Forces there is a need to more easily and efficiently (in terms of cost and time) create better, more realistic and objective behaviours for the synthetic actors involved in the Armed Forces' simulation-based decision support and training applications. The traditional method for acquiring knowledge about a behaviour, in the development of behavioural models, is working with subject-matter experts. This process is often time-consuming and expensive. Another problem is that the complexity is limited by the expert's knowledge and cognitive abilities. To address these problems, this study investigates an alternative approach to the development of behavioural models: developing behavioural models with machine learning algorithms instead of creating the models manually. This approach is called data-driven behaviour modelling. Data-driven behaviour modelling differs significantly from the traditional subject-matter-expert method: the data-driven program creates autonomous behaviour models based on data and observations without human influence, while in the expert method the models are developed manually by humans. In this study, data-driven behaviour modelling with genetic programming is developed, a machine learning technique inspired by biological evolution. The technique lets the computer search for the best possible programs to solve a user-defined task with the help of, for instance, a fitness function that rates the programs. This thesis presents a system that performs data-driven behaviour modelling with genetic programming to develop a software agent's behaviour in a simulator. The study used a wolf simulator to try to teach a wolf to hunt and devour fleeing sheep. Simulator-generated data is used by the agent to learn behaviours through exploration (trial and error). The system is generally applicable through its ability to be adapted to other simulations with simple adjustments in a configuration file, and its basic execution structure makes it adaptable to many different behaviour models. The results of the experiments performed with the system show that a trained wolf succeeded in developing a clever data-driven behaviour, hunting and devouring all 46 sheep in less than ten minutes; in comparison, a predefined centroid algorithm managed to devour the sheep in about eight minutes. The conclusion is that successful and efficient data-driven behaviours can be developed, at least for this problem, given a sufficiently long simulation time, good configuration, and a favourable and equitable fitness function.
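A minimal genetic-programming sketch in the spirit of the system described, using the DEAP library; the primitive set, the two inputs, and the stand-in fitness function (which scores evolved expressions against a hand-made target instead of running a wolf simulator) are all invented for illustration.

```python
# A minimal GP sketch with DEAP; everything here is a placeholder for the
# thesis' simulator-driven setup, not a reproduction of it.
import operator
from deap import algorithms, base, creator, gp, tools

pset = gp.PrimitiveSet("MAIN", 2)          # e.g. distance and angle to sheep
pset.addPrimitive(operator.add, 2)
pset.addPrimitive(operator.sub, 2)
pset.addPrimitive(operator.mul, 2)
pset.renameArguments(ARG0="dist", ARG1="angle")

creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=3)
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.expr)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("compile", gp.compile, pset=pset)

def evaluate(ind):
    policy = toolbox.compile(expr=ind)
    # Placeholder fitness: closeness to a hand-made control law. A real
    # setup would run the simulator and score sheep caught per time unit.
    cases = [(d / 10.0, a / 10.0) for d in range(10) for a in range(10)]
    err = sum((policy(d, a) - (2 * d + a)) ** 2 for d, a in cases)
    return (1.0 / (1.0 + err),)

toolbox.register("evaluate", evaluate)
toolbox.register("select", tools.selTournament, tournsize=3)
toolbox.register("mate", gp.cxOnePoint)
toolbox.register("expr_mut", gp.genFull, min_=0, max_=2)
toolbox.register("mutate", gp.mutUniform, expr=toolbox.expr_mut, pset=pset)

pop = toolbox.population(n=50)
algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=20, verbose=False)
```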
Helenius, Anna. "GDPR och känsliga personuppgifter : En fallstudie om fackförbunds arbete med Dataskyddsförordningen". Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15328.
On 25 May 2018, the new data protection regulation, GDPR, will come into effect. With this, all members of the European Union will have a common law that sharpens previous rules and puts higher demands on organisations' personal data processing. The purpose of this study has been to investigate and map how businesses dealing with sensitive personal data consider themselves affected by the GDPR, and how they work to meet the requirements of the new regulation. Sensitive personal data are, for example, data that reveal a person's sexual orientation, political opinion, religious conviction or union affiliation; therefore, to fulfil the purpose, a case study with six trade unions of different sizes was performed. The data collection was made with the help of interviews with one person from each trade union who has good insight into, and an overview of, the organisation's work with the GDPR. The results of the study show that the trade unions find the new data protection regulation complex and hard to interpret, but that it nevertheless has positive consequences for both the organisation and the members. All personal data that the trade unions handle fall directly under sensitive personal data, since they may be linked to union affiliation, and this leads the trade unions to consider themselves to face higher demands on information security than many other businesses. Among other things, they face major challenges in how they will communicate with their members in the future, as even unstructured material is covered by the new data protection regulation. It is not possible to say in general what actions the unions have taken to prepare for the new requirements of the GDPR, but it is clear that both technical and administrative security measures are needed. For example, many of the unions are upgrading their IT systems or purchasing brand-new case management systems, while also introducing new routines for clearing of data and for management of personal data incidents.
Borg, Olivia. "Educational Data Mining : En kvalitativ studie med inriktning på dataanalys för att hitta mönster i närvarostatistik". Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-16987.
The study focuses on finding different patterns in attendance statistics for students who are not present at school. The information provided by the results can thereafter be used as a basis for decision-making for schools or for other organizations interested in EDM within attendance statistics. The work used a qualitative approach with a case study consisting of a literature study and an implementation. The literature study was used to gain an understanding of common approaches within EDM, which subsequently formed the basis for the implementation, which followed the CRISP-DM working method. The project resulted in five different patterns defined by data analysis. The patterns show absence from a time perspective and per subject, and can form the basis for future decision-making.
Muric, Fatmir. "Öppna data och samordning : En utvärderande fallstudie om implementeringen av öppna data och Riksarkivets arbete med samordning". Thesis, Högskolan i Halmstad, Akademin för lärande, humaniora och samhälle, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-37304.
Texto completoLager, Joel y Pontus Hermansson. "Marknadsföringsmixens fortsatta betydelse, med hänsyn till digitaliseringen. : En systematisk litteraturstudie". Thesis, Högskolan Dalarna, Företagsekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:du-27860.
Purpose: The purpose of this study is to discuss how the marketing mix could have retained its importance over time in the field of marketing, considering the changes that digitalization has meant. Method: The study is conducted as a systematic literature study, based on more than 50 scientific articles relevant to the purpose. The articles were collected through academic databases. Result: The articles show that the marketing mix is still an up-to-date marketing tool thanks to its educational simplicity and its ability to adapt to prevailing conditions. The four P:s still stand for Product, Price, Place and Promotion, but the change lies in what is included in the ever-growing and changing subcategories. What the opponents of the marketing mix consider its weakness, namely that the criteria on which the different categories rest have never been specified, also seems to be its greatest strength. Without the requested specification, the four P:s can be adapted to the user and the prevailing conditions, which has allowed the marketing mix to survive in spite of digitalization and the new conditions.
Wikström, Simon. "Spelutveckling med ML : Skapandet av en space shooter med Unity ML Agents toolkit och speldesign koncept". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176096.
A.I. is growing rapidly in various industries, and the games industry is no exception; this includes everything from reusable A.I. brains and behaviours to automatically generated content and more. Usually, A.I. agents have only one specific goal: produce content, reach a winning state by eliminating a player, or the like. This work seeks to investigate whether it is possible to create an A.I. agent with contradictory missions in a space-shooter environment and, if so, how it should be designed. The idea is that instead of eliminating a player as quickly as possible, the agent does so over as long a time as possible in order to prolong the play session, which means the agent tries to help and hinder the player at the same time. This is the work's contribution to the M-L and A.I. field. To achieve this effect, the Unity ML-Agents toolkit was used to train an agent with proximal policy optimization (PPO). The produced A.I. agent performs its task acceptably, but more work is required for it to function fully as intended.