To see the other types of publications on this topic, follow the link: Data harm.

Dissertations on the topic "Data harm"


Browse the top 50 dissertations for research on the topic "Data harm".

Next to every entry in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Browse dissertations from a wide variety of disciplines and compile a correctly formatted bibliography.

1

Buffenbarger, Lauren. „Ethics in Data Science: Implementing a Harm Prevention Framework“. University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623166419961692.

2

Andersson, Erica, und Ida Knutsson. „Immigration - Benefit or harm for native-born workers?“ Thesis, Linnéuniversitetet, Institutionen för nationalekonomi och statistik (NS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-53829.

Abstract:
The aim of our study is to investigate the effect of immigrants on wages for natives with divergent skill levels within one country. Skill level is measured as education level, and the purpose is to focus on the level where, in our view, research is lacking, namely the effect on the wages of high-skilled native-born workers. Our contribution may therefore be considered a complement to the existing studies. Using panel data collected for the period 2000-2008 for the 290 municipalities in Sweden to obtain regional variation, we investigate and interpret the estimated outcome of how wages for native-born workers in the Swedish labor market respond to immigration into Sweden. The main findings, when controlling for age, unemployment, and differences between years and municipalities, are in the short run in line with the theory: the closer substitutes the native-born and foreign-born workers are, the greater the adverse effect on native-born wages, given that we assume immigrants to be low skilled. The short-run effect on the wages of high-skilled native workers, when assuming immigrants and natives are complements, is positive, i.e. the wage for high-skilled natives increases as the share of immigrants increases. The effect on high-skilled native-born wages is positive even in the medium-long run, and adverse for low- and medium-skilled native workers. This is not the expected outcome, since according to theory we predict the wage to be unaffected in the medium-long run. This may be the result of errors in the assumption that immigrants are low skilled, or of five years being too short a time to see the expected long-run effect; the Swedish labor market may need more time to adjust to what we predict the outcome to be.
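For readers who want to see the shape of such an estimation, a minimal sketch of a two-way fixed-effects wage regression of this kind is given below. It is illustrative only: the file name and column names (log_wage, imm_share, age, unemp, year, municipality) are hypothetical placeholders, not the authors' actual variables.

# Illustrative two-way fixed-effects panel regression; all names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("municipality_panel.csv")  # one row per municipality and year

model = smf.ols(
    "log_wage ~ imm_share + age + unemp + C(year) + C(municipality)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["municipality"]})
print(model.summary())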
3

Chang, David C. „A comparison of computed and measured transmission data for the AGM-88 HARM radome“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA274868.

4

McCullagh, Karen. „The social, cultural, epistemological and technical basis of the concept of 'private' data“. Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/the-social-cultural-epistemological-and-technical-basis-of-the-concept-of-private-data(e2ea538a-8e5b-43e3-8dc2-4cdf602a19d3).html.

Abstract:
In July 2008, the UK Information Commissioner launched a review of EU Directive 95/46/EC on the basis that: “European data protection law is increasingly seen as out of date, bureaucratic and excessively prescriptive. It is showing its age and is failing to meet new challenges to privacy, such as the transfer of personal details across international borders and the huge growth in personal information online. It is high time the law is reviewed and updated for the modern world.” Legal practitioners such as Bergkamp have expressed a similar sense of dissatisfaction with the current legislative approach: “Data Protection as currently conceived by the EU is a fallacy. It is a shotgun remedy against an incompletely conceptualised problem. It is an emotional, rather than rational reaction to feelings of discomfort with expanding data flows. The EU regime is not supported by any empirical data on privacy risks and demand…A future EU privacy program should focus on actual harms and apply targeted remedies.” Accordingly, this thesis critiques key concepts of existing data protection legislation, namely ‘personal’ and ‘sensitive’ data, in order to explore whether current data protection laws can simply be amended and supplemented to manage privacy in the information society. The findings from empirical research will demonstrate that a more radical change in EU law and policy is required to effectively address privacy in the digital economy. To this end, proposed definitions of data privacy and private data was developed and tested through semi-structured interviews with privacy and data protection experts. The expert responses indicate that Bergkamp et al have indeed identified a potential future direction for privacy and data protection, but that further research is required in order to develop a coherent definition of privacy protection based on managing risks to personal data, and harm from misuse of such information.
5

Steeg, Sarah. „Estimating effects of self-harm treatment from observational data in England : the use of propensity scores to estimate associations between clinical management in general hospitals and patient outcomes“. Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/estimating-effects-of-selfharm-treatment-from-observational-data-in-england-the-use-of-propensity-scores-to-estimate-associations-between-clinical-management-in-general-hospitals-and-patient-outcomes(ab6f96b1-f326-43ea-9999-0c410e4c517d).html.

Abstract:
Background: The use of health data from sources such as administrative and medical records to examine efficacy of health interventions is becoming increasingly common. Addressing selection bias inherent in these data is important; treatments are allocated according to clinical need and resource availability rather than delivered under experimental conditions. Propensity score (PS) methods are widely used to address selection bias due to observed confounding. This project used PS methods with observational cohort data relating to individuals who had attended an Emergency Department (ED) following self-harm (including self-poisoning and self-injury). This group is at greatly increased risks of further self-harm, suicide and all-cause mortality compared to the general population. However, it is not clear how hospital management affects risks of these adverse outcomes. Methods: A systematic review of PS methods with record-based mental health care data was used to determine the most appropriate methodological approach to estimate treatment effects following presentation to ED following self-harm. Following this review, PS stratification and PS matching methods were used with observational self-harm data to address observed baseline differences between patients receiving different types of clinical management following their hospital presentation (specialist psychosocial assessment, medical admission, referral to outpatient mental health services and psychiatric admission). Effects on repeat attendance for self-harm, suicide and all-cause mortality within 12 months were estimated. Advice on the interpretation and dissemination of results was sought from service users. Results: The systematic review resulted in 32 studies. The quality of the implementation and reporting of methods was mixed. Sensitivity analysis of the potential impacts of unobserved confounding was largely absent from the studies. Results from analysis of the self-harm cohorts showed that, broadly, prior to PS adjustment, individuals receiving each of the four categories of hospital management had higher risks of repeat attendance for self-harm, suicide and all-cause mortality than those not receiving that management. The use of PS methods resulted in attenuation of most of these increased risks. Psychosocial assessment appeared to be associated with reduced risk of repeat attendance for self-harm (risk ratio 0.87, 95% CI 0.80 to 0.95). Three advisors attended a group meeting and a further two provided responses by email. As a result of advisors' recommendations, an information sheet is being developed containing information about what patients can expect when attending hospital following self-harm and how treatment might influence future risk. Conclusions: Propensity score methods are a promising development in evaluating routine care for individuals who have self-harmed. There is now more robust evidence that specialist psychosocial assessment is beneficial in reducing risk of further attendances for self-harm. Advisors offered different perspectives to the researchers, leading to novel suggestions for dissemination.
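As a rough illustration of the matching step (not the thesis's actual analysis), a propensity-score match followed by a risk-ratio comparison might be sketched as follows; the file name, the treatment flag and the covariate names are hypothetical.

# Illustrative propensity-score matching sketch; data and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("self_harm_cohort.csv")
covariates = ["age", "female", "prior_attendances"]   # observed confounders (hypothetical)

# 1. Estimate the propensity score: probability of receiving psychosocial assessment.
ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
df["ps"] = ps.predict_proba(df[covariates])[:, 1]

# 2. Match each treated patient to the untreated patient with the nearest score.
treated, control = df[df["treated"] == 1], df[df["treated"] == 0]
_, idx = NearestNeighbors(n_neighbors=1).fit(control[["ps"]]).kneighbors(treated[["ps"]])
matched = control.iloc[idx.ravel()]

# 3. Compare 12-month repeat self-harm risk in the matched sample.
rr = treated["repeat_12m"].mean() / matched["repeat_12m"].mean()
print(f"risk ratio, treated vs matched controls: {rr:.2f}")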
6

Mpame, Mario Egbe [Verfasser]. „The General Data Protection Regulation and the effective protection of data subjects’ rights in the online environment : To what extent are these rights enforced during mass harm situations? / Mario Egbe Mpame“. Baden-Baden : Nomos Verlagsgesellschaft mbH & Co. KG, 2021. http://d-nb.info/1237168708/34.

7

Gkaravella, Antigoni. „A study of patients referred following an episode of self-harm, a suicide attempt, or in a suicidal crisis using routinely collected data“. Thesis, University of East London, 2014. http://roar.uel.ac.uk/4593/.

Abstract:
Self-harm and suicide prevention remain a priority of public health policy in the UK. Clinicians conducting psychosocial assessments in Accident and Emergency Departments are confronted with a complex and demanding task. There is a paucity of research into the content of psychosocial assessments and the experiences of clinicians conducting psychosocial assessments in Accident and Emergency Departments. This study examines the experiences of people who presented in an Accident and Emergency Department following self-harm or with suicidal ideation, as those are documented in the psychosocial assessments. Furthermore, the study explores the attitudes, feelings and experiences of clinicians working in a Psychiatric Liaison Team, as well as the process of making decisions about aftercare plans. In order to achieve this, qualitative methods were employed. A sample of sixty-one psychosocial assessments was collected and analysed using thematic analysis. The coding of the data was done inductively and deductively with the use of the categories of the Orbach and Mikulincer Mental Pain Scale. Two focus groups with clinicians were conducted and analysed with a grounded theory oriented approach. Stevens’ framework was applied in order to analyse the interactional data in the focus groups. Key themes emerging from the focus groups were shared with service users who offered their own interpretation of the data and findings. The study draws on psychodynamic theories to explore the experiences of clinicians assessing and treating patients with self-harm and suicidal ideation in an Accident and Emergency Department and to make sense of the needs of the patients. The findings are that suicidal ideation and self-harm were assessed and treated in similar ways. Difficulties in relationships and experiences of loss or trauma in childhood and/or adulthood were the two most common themes emerging in the psychosocial assessments. Decisions about aftercare plans were guided by patients’ presentation and needs in conjunction with available resources. Clinicians were found to have various emotional responses to patients’ painful experiences with limited space to reflect upon these at work. Clinicians and service users commented upon the therapeutic aspect of psychosocial assessments, which in light of the painful experiences reported in the psychosocial assessments could be used to generate more sensitive and meaningful approaches to the care of this population. Providing support and a space for clinicians to be able to think of their task and their responses seems important.
8

Wojda, Magdalena A. „A focus on the risk of harm : applying a risk-centered purposive approach to the interpretation of "personal information" under Canadian data protection laws“. Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/55133.

Abstract:
We now live in a world where the Internet is in its second generation, big data is king, and a “Digital Earth” has emerged alongside advancements in 3S technologies, where cyber-attacks and cybercrime are the new trend in criminal activity. The ease with which we can now find, collect, store, transfer, mine and potentially misuse large amounts of personal information is unprecedented. The pressure on data protection regulators continues to mount against this backdrop of frenetic change and increased vulnerability. Law and policy makers around the world tasked with protecting information privacy in the face of these advances are simply struggling to keep pace. One important difficulty they encounter is defining the term “personal information” under data protection laws (DPLs) in order to delineate precisely what type of information enjoys the protection of these legislative instruments. As a result, the meaning and scope of this term have emerged as a pressing issue in scholarly debates in the field of privacy and data protection law. This thesis contributes to these discussions by critically appraising the approaches taken by Canadian courts, privacy commissioners and arbitrators to interpreting the statutory definitions of “personal information” under Canadian private sector DPLs, and showing that a different approach is justified in light of rapidly evolving technologies. The second part of my thesis recommends a purposive risk of harm focused framework advanced by Canadian privacy scholar Éloïse Gratton as a desirable substitute for existing expansionist approaches to interpreting the definition of “personal information” under Canada’s private sector DPLs. I support my recommendation by discussing the ways in which the proposed risk of harm framework can overcome the shortcomings of existing approaches, and demonstrate this by applying it to previously issued decisions in which Canadian arbitrators and privacy commissioners or their delegates have applied expansionist approaches to new data types and data gathered by new technologies. In so doing, I demonstrate that the proposed framework better reflects the fundamental purpose of Canadian private sector DPLs: to protect only data that raises a risk of harm to individuals impacted by its collection, use or disclosure.
9

Berto, Hedda. „Sharing is Caring : An Examination of the Essential Facilities Doctrine and its Applicability to Big Data“. Thesis, Uppsala universitet, Juridiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-411945.

Abstract:
Since the internet revolution, and with the ever-growing presence of the internet in our everyday lives, being able to control as much data as possible has become an indispensable part of any business looking to succeed on digital markets. This is where Big Data has become crucial. Being able to gather, but more importantly process and understand data, has allowed companies to tailor their services according to the unspoken wants of the consumer as well as optimize ad sales according to consumers’ online patterns. Considering the significant power over digital markets possessed by certain companies, it becomes critical to examine such companies from a competition law perspective. Refusal to supply, which is an abuse of a dominant position according to Article 102 TFEU, can be used to compel abusive undertakings to share a product or service, which they alone possess, and which is indispensable input in another product, with competitors. This is otherwise known as the Essential Facilities Doctrine. If the Big Data used by attention platforms such as Facebook or Google were to be considered such an indispensable product, these undertakings would be required to share Big Data with competitors. While Big Data enables the dominant positions held by powerful attention platforms today, there are certain aspects of it and its particular uses by such platforms that do not allow for the application of the Essential Facilities Doctrine. Considering the significance of Big Data for these undertakings, however, there may be need for a reform of the Essential Facilities Doctrine. From a purely competition standpoint, allowing the application of the Essential Facilities Doctrine to Big Data would be beneficial, particularly considering the doctrine’s effect on innovation. However, enforcing an obligation to share Big Data with competitors would be in breach of privacy policies within the EU. While competition decisions made by the Commission do not directly concern rules set forth in such policies, the Commission is still obligated to respect the right to privacy set forth in the EU Charter of Fundamental Rights. Thus, while the significance of Big Data demands a change in how it is approached by competition law, the Essential Facilities Doctrine is not the appropriate remedy.
10

Lee, Amra. „Why do some civilian lives matter more than others? Exploring how the quality, timeliness and consistency of data on civilian harm affects the conduct of hostilities for civilians caught in conflict“. Thesis, Uppsala universitet, Institutionen för freds- och konfliktforskning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-387653.

Abstract:
Normatively, protecting civilians from the conduct of hostilities is grounded in the Geneva Conventions and the UN Security Council protection of civilians agenda, which celebrate their 70th and 20th anniversaries respectively in 2019. Previous research focusses heavily on protection of civilians through peacekeeping, whereas this research focuses on ‘non-armed’ approaches to enhancing civilian protection in conflict. Prior research and experience reveal a high level of missingness and variation in the level of available data on civilian harm in conflict. Where civilian harm is considered in the peace and conflict literature, it is predominantly through a securitized lens of understanding insurgent recruitment strategies and more recent counter-insurgency strategies aimed at winning ‘hearts and minds’. Through a structured focused comparison of four case studies, the correlation between the level of quality, timely and consistent data on civilian harm and its effect on the conduct of hostilities is reviewed and potential confounders identified. Following this, the hypothesized causal mechanism is process-traced through the pathway case of Afghanistan. The findings and analysis from both methods identify support for the theory and its refinement, with important nuances in the factors conducive to quality, timely and consistent data collection on civilian harm in armed conflict.
11

Gratton, Eloïse. „Redéfinir la notion de donnée personnelle dans le contexte des nouvelles technologies de l'Internet“. Thesis, Paris 2, 2012. http://www.theses.fr/2012PA020061.

Abstract:
In the late sixties, with the growing use of computers by organizations, a very broad definition of personal information as “information about an identifiable individual” was elaborated and has been incorporated in data protection laws (“DPLs”). More recently, with the Internet and the circulation of new types of information (IP addresses, location information, etc.), the efficiency of this definition may be challenged. This thesis aims at proposing a new way of interpreting personal information. Instead of using a literal interpretation, an interpretation which takes into account the purpose behind DPLs will be proposed, in order to ensure that DPLs do what they are supposed to do: address or avoid the risk of harm to individuals triggered by organizations handling their personal information. While the collection or disclosure of information may trigger a more subjective kind of harm (the collection, a feeling of being observed, and the disclosure, embarrassment and humiliation), the use of information will trigger a more objective kind of harm (financial, physical, discrimination, etc.). Various criteria useful in order to evaluate this risk of harm will be proposed. The thesis aims at providing a guide that may be used in order to determine whether certain information should qualify as personal information. It will provide a useful framework under which DPLs remain efficient in light of modern technologies and the Internet.
12

Ramanayaka, Mudiyanselage Asanga. „Data Engineering and Failure Prediction for Hard Drive S.M.A.R.T. Data“. Bowling Green State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1594957948648404.

13

Acuna, Stamp Annabelen. „Design Study for Variable Data Printing“. University of Cincinnati / OhioLINK, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=ucin962378632.

14

Troska, Jan Kevin. „Radiation-hard optoelectronic data transfer for the CMS tracker“. Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313621.

15

Zhang, Shuang Nan. „Instrumentation and data analysis for hard X-ray astronomy“. Thesis, University of Southampton, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252689.

16

Schmedding, Anna. „Epidemic Spread Modeling For Covid-19 Using Hard Data“. W&M ScholarWorks, 2021. https://scholarworks.wm.edu/etd/1627047844.

Abstract:
We present an individual-centric model for COVID-19 spread in an urban setting. We first analyze patient and route data of infected patients from January 20, 2020, to May 31, 2020, collected by the Korean Center for Disease Control & Prevention (KCDC) and illustrate how infection clusters develop as a function of time. This analysis offers a statistical characterization of mobility habits and patterns of individuals. We use this characterization to parameterize agent-based simulations that capture the spread of the disease, we evaluate simulation predictions with ground truth, and we evaluate different what-if counter-measure scenarios. Although the presented agent-based model is not a definitive model of how COVID-19 spreads in a population, its usefulness, limitations, and flexibility are illustrated and validated using hard data.
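A toy individual-centric sketch in the spirit of such an agent-based simulation is shown below; every parameter (population size, contacts per day, transmission probability, infectious period, number of seed cases) is a made-up illustrative value rather than anything calibrated on the KCDC data.

# Toy agent-based spread model; all parameters are illustrative, not calibrated.
import random

N, DAYS = 10_000, 120
CONTACTS_PER_DAY, P_TRANSMIT, INFECTIOUS_DAYS = 8, 0.03, 7

state = [0] * N          # 0 = susceptible, >0 = infectious days left, -1 = recovered
for seed in random.sample(range(N), 10):
    state[seed] = INFECTIOUS_DAYS

for day in range(DAYS):
    newly_infected = []
    for person, days_left in enumerate(state):
        if days_left > 0:            # each infectious agent meets random contacts
            for contact in random.choices(range(N), k=CONTACTS_PER_DAY):
                if state[contact] == 0 and random.random() < P_TRANSMIT:
                    newly_infected.append(contact)
            state[person] = days_left - 1 if days_left > 1 else -1
    for person in newly_infected:
        if state[person] == 0:
            state[person] = INFECTIOUS_DAYS
    print(day, sum(s > 0 for s in state), sum(s == -1 for s in state))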
17

Yip, Yuk-Lap Kevin, und 葉旭立. „HARP: a practical projected clustering algorithm for mining gene expression data“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29634568.

18

Craig, David W. (David William) Carleton University Dissertation Engineering Electrical. „Light traffic loss of random hard real-time tasks in a network“. Ottawa, 1988.

19

Laclau, Charlotte. „Hard and fuzzy block clustering algorithms for high dimensional data“. Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB014.

Abstract:
With the increasing number of data available, unsupervised learning has become an important tool used to discover underlying patterns without the need to label instances manually. Among different approaches proposed to tackle this problem, clustering is arguably the most popular one. Clustering is usually based on the assumption that each group, also called cluster, is distributed around a center defined in terms of all features while in some real-world applications dealing with high-dimensional data, this assumption may be false. To this end, co-clustering algorithms were proposed to describe clusters by subsets of features that are the most relevant to them. The obtained latent structure of data is composed of blocks usually called co-clusters. In the first two chapters, we describe two co-clustering methods that proceed by differentiating the relevance of features calculated with respect to their capability of revealing the latent structure of the data in both probabilistic and distance-based frameworks. The probabilistic approach uses the mixture model framework where the irrelevant features are assumed to have a different probability distribution that is independent of the co-clustering structure. On the other hand, the distance-based (also called metric-based) approach relies on an adaptive metric where each variable is assigned a weight that defines its contribution to the resulting co-clustering. From the theoretical point of view, we show the global convergence of the proposed algorithms using Zangwill's convergence theorem. In the last two chapters, we consider a special case of co-clustering where, contrary to the original setting, each subset of instances is described by a unique subset of features, resulting in a diagonal structure of the initial data matrix. As for the two first contributions, we consider both probabilistic and metric-based approaches. The main idea of the proposed contributions is to impose two different kinds of constraints: (1) we fix the number of row clusters to the number of column clusters; (2) we seek a structure of the original data matrix that has the maximum values on its diagonal (for instance for binary data, we look for diagonal blocks composed of ones with zeros outside the main diagonal). The proposed approaches enjoy the convergence guarantees derived from the results of the previous chapters. Finally, we present both hard and fuzzy versions of the proposed algorithms. We evaluate our contributions on a wide variety of synthetic and real-world benchmark binary and continuous data sets related to text mining applications and analyze the advantages and drawbacks of each approach. To conclude, we believe that this thesis covers explicitly a vast majority of possible scenarios arising in hard and fuzzy co-clustering and can be seen as a generalization of some popular biclustering approaches.
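The diagonal block structure targeted in the last two chapters can be visualised quickly with scikit-learn's SpectralCoclustering. This is a different algorithm from the ones developed in the thesis, but on synthetic data it recovers the same kind of structure:

# Not the thesis's algorithms: a spectral co-clustering baseline on synthetic block data.
import numpy as np
from sklearn.datasets import make_biclusters
from sklearn.cluster import SpectralCoclustering

data, rows, cols = make_biclusters((200, 80), n_clusters=4, noise=5, random_state=0)
model = SpectralCoclustering(n_clusters=4, random_state=0).fit(data)

# Reordering rows and columns by their labels exposes the diagonal blocks.
reordered = data[np.argsort(model.row_labels_)][:, np.argsort(model.column_labels_)]
print(model.row_labels_[:10], model.column_labels_[:10])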
20

Joyce, Robert. „Dynamic optimisation of NP-hard combinatorial problems of engineering data sets“. Thesis, Coventry University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261170.

21

Darragh, Neil. „An adaptive partial response data channel for hard disk magnetic recording“. Thesis, University of Plymouth, 1994. http://hdl.handle.net/10026.1/2594.

Abstract:
An adaptive data channel is proposed which is better able to deal with the variations in performance typically found in the recording components of a hard disk drive. Three such categories of variation were investigated in order to gain an understanding of their relative and absolute significance; variations over radius, along the track length, and between different head / media pairs. The variations were characterised in terms of their effects on the step-response pulse width and signal-to-noise ratio. It was found that in each of the categories investigated, significant variations could be found in both longitudinal and perpendicular recording systems which, with the exception of radial variations, were nondeterministic over different head / media pairs but were deterministic for any particular head / media pair characterised. Conventional data channel design assumes such variations are non-deterministic and is therefore designed to provide the minimum error rate performance for the worst case expected recording performance within the range of accepted manufacturing tolerance. The proposed adaptive channel works on the principle that once a particular set of recording components are assembled into the disk drive, such variations become deterministic if they are able to be characterised. Such ability is facilitated by the recent introduction of partial response signalling to hard disk magnetic recording which brings with it the discrete-time sampler and the ability of the microprocessor to analyse signals digitally much more easily than analogue domain alternatives. Simple methods of measuring the step-response pulse width and signal to noise ratio with the partial response channel's electronic components are presented. The expected error rate as a function of recording density and signal to noise ratio is derived experimentally for the PR4 and EPR4 classes of partial response. On the basis of this information and the recording performance it has measured, the adaptive channel is able to implement either PR4 or EPR4 signalling and at any data rate. The capacity advantage over the non-adaptive approach is investigated for the variables previously identified. It is concluded on the basis of this investigation that the proposed adaptive channel could provide significant manufacturing yield and capacity advantages over the non-adaptive approach for a modest increase in electronic complexity.
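For orientation only, the two signalling targets such an adaptive channel chooses between are PR4 (1 - D^2) and EPR4 ((1 - D)(1 + D)^2). The sketch below merely computes their noiseless target samples for an arbitrary bit sequence; it does not reproduce the thesis's measurement or selection logic.

# PR4 and EPR4 target responses applied to an NRZ write sequence (illustrative only).
import numpy as np

PR4_TAPS = np.array([1, 0, -1])        # 1 - D^2
EPR4_TAPS = np.array([1, 1, -1, -1])   # 1 + D - D^2 - D^3  =  (1 - D)(1 + D)^2

bits = np.array([0, 1, 1, 0, 1, 0, 0, 1])
nrz = 2 * bits - 1                      # map {0, 1} -> {-1, +1}

print("PR4 target samples: ", np.convolve(nrz, PR4_TAPS))
print("EPR4 target samples:", np.convolve(nrz, EPR4_TAPS))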
22

Yasuda, Takeo. „Circuit Technologies for High Performance Hard Disk Drive Data Channel LSI“. 京都大学 (Kyoto University), 2001. http://hdl.handle.net/2433/150621.

23

Puchol, Carlos Miguel. „An automation-based design methodolgy [sic] for distributed, hard real-time systems /“. Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

24

Svensson, Karin. „Har kvinnor förändrade pendlingsmönster? : En kvantitativ studie om kvinnors pendlingsmönster har påverkats av att deras utbildningsnivå ökat“. Thesis, Uppsala universitet, Kulturgeografiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413694.

Abstract:
There is a wage gap between women and men that has previously been explained in part by commuting, since men commute to a greater extent than women do. Today, women have a higher level of education than men. Because earlier studies show a relationship between education and commuting, the present study examines the assumption that women's commuting patterns have been affected by their rising level of education. To investigate this, differences between men's and women's commuting patterns are analysed over both time and space, together with the relationship between education level and commuting. This is done using a quantitative method based on data covering the Swedish population. The results show that women's commuting patterns have changed, in many respects in different ways than men's. Above all, the propensity to commute has increased more for women than for men, which could be explained by women's increased level of education and is thus in line with the assumption on which the study is based. Whether the change really is due to the increased level of education or to other reasons remains unclear, however, since education level and commuting do not appear to have the clear relationship previously reported.
25

Li, Guijun, und 李桂君. „Development of recording technology with FePt recording media and magnetic tunnel junction sensors with conetic alloy“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B50899776.

Abstract:
With the highly demanding requirements of current emerging cloud storage and personal computers, hard disk drive recording with high stability and high volume has attracted much attention in industry and academia. Recording media and recording heads feasible for future high-density recording are both crucial to realising magnetic recording with a 1 Tbit/in2 recording density. Recording media with FePt for high density and high stability were investigated in this thesis using FePt polymers with imprinting methods and FePt thin films with ion-beam bombardment technologies. The FePt polymers can be patterned using imprinting at micro- and nano-scales. The micro- and nano-patterns could be retained on substrates after sintering at high temperatures. The high magnetic coercivity was proved with line and dot patterns at different scales. Recording heads with Al2O3-based magnetic tunneling junction sensors were also studied in this thesis. The magnetic tunneling junction sensors were proved to work stably at different temperatures varying from -30°C to 100°C. The long-running test of up to 100 hours also proved the stability of the magnetic tunneling junction sensors working at extreme temperatures. With state-of-the-art patterning and deposition technologies, new ideas about using FePt polymer as magnetic recording media and using ion beam bombardment to tune the FePt magnetic properties were verified. The feasibility of using Al2O3-based magnetic tunneling junction sensors as a recording head was also discussed.
26

Chen, Tao. „Development and simulation of hard real-time switched-ethernet avionics data network“. Thesis, Cranfield University, 2011. http://dspace.lib.cranfield.ac.uk/handle/1826/6995.

Abstract:
Computer and microelectronics technologies are developing very quickly, and modern integrated avionics systems are growing with them. The integrated modular architecture increasingly requires a low-latency, reliable, high-bandwidth communication databus. Traditional avionics databus technology such as ARINC 429 cannot provide sufficient speed and capacity for data communication, making it difficult to complete transmissions between advanced avionic devices with sufficient bandwidth. AFDX (Avionics Full Duplex Switched Ethernet), a high-speed full-duplex switched avionics databus based on Ethernet technology, is a good solution to this problem. AFDX not only avoids Ethernet conflicts and collisions but also increases the transmission rate while lowering the weight of the databus, and it has been adopted successfully in the A380 and B787 aircraft. Avionics data must be delivered punctually and reliably, so it is essential to validate the real-time performance of AFDX during the design process. Simulation is a good way to obtain network performance, but it only covers a given set of scenarios and cannot consider every case, so a rigorous network performance method for the worst-case scenario with a pessimistic upper bound needs to be derived. Avionics design engineers have carried out much research on AFDX simulation and analysis methods, and that is the goal this thesis aims for. The development of this project is planned in two steps. In the first step, a communication platform is implemented to simulate the AFDX network in two versions: an RTAI real-time framework and a Linux user-space framework. Ultimately, these frameworks are integrated into net-ASS, an integrated simulation and assessment platform in Cranfield's lab. The second step derives an effective method to evaluate network performance, including three bounds (delay, backlog and output flow), based on Network Calculus (NC), a deterministic theory of networked systems that is also used in communication queue management. This mathematical method is verified against simulation results from the AFDX communication platform to assure its validity and applicability. All in all, the project aims to assess the performance of different network topologies in different avionics architectures through simulation and mathematical assessment. The techniques used in this thesis help to find problems and faults at the early stage of avionics architecture design in an industrial project, especially in terms of guaranteeing lossless service on the avionics databus.
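For concreteness, the three Network Calculus bounds mentioned above have simple closed forms in the textbook case of a token-bucket arrival curve (burst b, rate r) crossing a rate-latency server (rate R, latency T): delay <= T + b/R, backlog <= b + r*T, and the output flow is again token-bucket with burst b + r*T. The sketch below only evaluates these standard formulas on made-up example numbers; it is not the thesis's tool.

# Textbook network-calculus bounds for a token-bucket flow through a rate-latency server.
def nc_bounds(b, r, R, T):
    assert r <= R, "stability requires the arrival rate not to exceed the service rate"
    delay = T + b / R          # worst-case delay bound
    backlog = b + r * T        # worst-case backlog (buffer) bound
    out_burst = b + r * T      # burst of the output arrival curve (rate stays r)
    return delay, backlog, out_burst

# Example: 4000-bit bursts at 1 Mbit/s through a 100 Mbit/s port with 40 microseconds latency.
print(nc_bounds(b=4000, r=1e6, R=100e6, T=40e-6))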
27

Meister, Eric. „Using hard cost data on resource consumption to measure green building performance“. [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0010531.

28

Deng, Jiantao. „Adaptation of A TruckSim Model to Experimental Heavy Truck Hard Braking Data“. The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1259633762.

29

Tunstall, Glen Alan. „Dynamic characterisation of the head-media interface in hard disk drives using novel sensor systems“. Thesis, University of Plymouth, 2002. http://hdl.handle.net/10026.1/1643.

Abstract:
Hard disk drives function perfectly satisfactorily when used in a stable environment, but in certain applications they are subjected to shock and vibration. During the work reported in this thesis it has been found that when typical hard disk drives are subjected to vibration, data transfer failure is found to be significant at frequencies between 440Hz and 700Hz, at an extreme, failing at only 1g of sinusoidal vibration. These failures can largely be attributed to two key components: the suspension arm and the hard disk. At non-critical frequencies of vibration the typical hard disk drive can reliably transfer data whilst subjected to as much as 45g. When transferring data to the drive controller, the drive's operations are controlled and monitored using BIOS commands. Examining the embedded error signals proved that the drive predominantly failed due to tracking errors. Novel piezo-electric sensors have been developed to measure unobtrusively suspension arm and disk motion, the results from which show the disk to be the most significant failure mechanism, with its first mode of resonance at around 440Hz. The suspension arm movement has been found to be greatest at 1kHz. Extensive modelling of the flexure of the disk, clamped and unclamped, has been undertaken using finite element analysis. The theoretical modelling strongly reinforces the empirical results presented in this thesis. If suspension arm movement is not directly coupled with disk movement then a flying height variation is created. This, together with tracking variations, leads to data transfer corruption. This has been found to occur at 1kHz and 2kHz. An optical system has been developed and characterised for a novel and inexpensive flying height measurement system using compact disc player technology.
30

Harrison, Christopher Bernard. „Feasibility of rock characterization for mineral exploration using seismic data“. Curtin University of Technology, Western Australia School of Mines, Department of Exploration Geophysics, 2009. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=129417.

Abstract:
The use of seismic methods in hard rock environments in Western Australia for mineral exploration is a new and burgeoning technology. Traditionally, mineral exploration has relied upon potential field methods and surface prospecting to reveal shallow targets for economic exploitation. These methods have been and will continue to be effective but lack lateral and depth resolution needed to image deeper mineral deposits for targeted mining. With global need for minerals, and gold in particular, increasing in demand, and with shallower targets harder to find, new methods to uncover deeper mineral reserves are needed. Seismic reflection imaging, hard rock borehole data analysis, seismic inversion and seismic attribute analysis all give the spatial and volumetric exploration techniques the mineral industry can use to reveal high value deeper mineral targets.
In 2002, two high resolution seismic lines, the East Victory and Intrepid, were acquired along with sonic logging, to assess the feasibility of seismic imaging and rock characterisation at the St. Ives gold camp in Western Australia. An innovative research project was undertaken combining seismic processing, rock characterization, reflection calibration, seismic inversion and seismic attribute analysis to show that volumetric predictions of rock type and gold-content may be viable in hard rock environments. Accurate seismic imaging and reflection identification proved to be challenging but achievable tasks in the all-out hard rock environment of the Yilgarn craton. Accurate results were confounded by crooked seismic line acquisition, low signal-to-noise ratio, regolith distortions, small elastic property variations in the rock, and a limited volume of sonic logging. Each of these challenges, however, did have a systematic solution which allowed for accurate results to be achieved.
Seismic imaging was successfully completed on both the East Victory and Intrepid data sets revealing complex structures in the Earth as shallow as 100 metres to as deep as 3000 metres. The successful imaging required homogenization of the regolith to eliminate regolith travel-time distortions and accurate constant velocity analysis for reflection focusing using migration. Verification of the high amplitude reflections within each image was achieved through integration of surface geological and underground mine data as well as calibration with log derived synthetic seismograms. The most accurate imaging results were ultimately achieved on the East Victory line which had good signal-to-noise ratio and close-to-straight data acquisition direction compared to the more crooked Intrepid seismic line.
The sonic logs from both the East Victory and Intrepid seismic lines were comprehensively analysed by re-sampling and separating the data based on rock type, structure type, alteration type, and Au assay. Cross plotting of the log data revealed that statistically accurate separations between harder and softer rocks, as well as between sheared and un-sheared rock, were possible based solely on compressional-wave, shear-wave, density, acoustic and elastic impedance. These results were used successfully to derive empirical relationships between seismic attributes and geology. Calibrations of the logs and seismic data provided proof that reflections, especially high-amplitude reflections, correlated well with certain rock properties as expected from the sonic data, including high gold content sheared zones. The correlation value, however, varied with signal-to-noise ratio and crookedness of the seismic line. Subsequent numerical modelling confirmed that separating soft from hard rocks can be based on both general reflectivity pattern and impedance contrasts.
Indeed, impedance inversions on the calibrated seismic and sonic data produced reliable volumetric separations between harder rocks (basalt and dolerite) and softer rock (intermediate intrusive, mafic, and volcaniclastic). Acoustic impedance inversions produced the most statistically valid volumetric predictions, with the simultaneous use of acoustic and elastic inversions producing stable separation of softer and harder rock zones. Similarly, Lambda-Mu-Rho inversions showed good separations between softer and harder rock zones. With high gold content rock associated more with “softer” hard rocks and sheared zones, these volumetric inversions provide valuable information for targeted mining. The geostatistical method applied to attribute analysis, however, was highly ambiguous due to low correlations and thus produced overly generalized predictions. Overall reliability of the seismic inversion results was based on the quality and quantity of sonic data, leaving the East Victory data set, again, with superior results compared to the Intrepid data set.
In general, detailed processing and analysis of the 2D seismic data and the study of the relationship between the recorded wave-field and rock properties measured from borehole logs, core samples and open cut mining, revealed that positive correlations can be developed between the two. The results of rigorous research show that rock characterization using seismic methodology will greatly benefit the mineral industry.
31

Li, Hai. „Storage Physics and Noise Mechanism in Heat-Assisted Magnetic Recording“. Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/706.

Abstract:
As cloud computing and massive-data machine learning are applied pervasively, ultra-high volume data storage serves as the foundation block. Every day, nearly 2.5 quintillion bytes (50000 GB/second in 2018) of data is created and stored. Hard Disk Drive (HDD) takes major part of this heavy duty. However, despite the amazing evolution of HDD technology during the past 50 years, the conventional Perpendicular Magnetic Recording (PMR), the state-of-the-art HDD technique, starts to have less momentum in increasing storage density because of the recording trilemma. To overcome this, Heat-Assisted Magnetic Recording (HAMR) was initially proposed in 1990s. With years of advancement, recent industrial demos have shown the potential of HAMR to actually break the theoretical limit of PMR. However, to fully take advantage of HAMR and realize the commercialization, there are still quite a few technical challenges, which motivated this thesis work. Via thermal coupled micromagnetic simulation based upon Landau-Lifshitz-Bloch (LLB) equation, the entire dynamic recording process has been studied systematically. The very fundamental recording physics theorem is established, which manages to elegantly interpret the previously conflicting experimental observations. The thermal induced field dependence of performance, due to incomplete switching and erase-after-write, is proposed for the first time and validated in industrial lab. The combinational effects form the ultimate physical limit of this technology. Meanwhile, this theorem predicts the novel noise origins, examples being Curie temperature distribution and temperature distribution, which are the key properties but ignored previously. To enhance performance, utilizations of higher thermal gradient, magnetically stiffer medium, optimal field etc. have been suggested based upon the theorem. Furthermore, a novel concept, Recording Time Window (RTW), has been proposed. It tightly correlates with performance and serves as a unified optimization standard, summarizing almost all primary parameters. After being validated via spin stand testing, the theorem has been applied to provide solutions for guiding medium design and relaxing the field and heating requirement. This helps solve the issues around writer limit and thermal reliability. Additionally, crosstrack varying field has been proposed to solve the well-known transition curvature issue, which may increase the storage density by 50%.
32

Asseburg, Christian. „A Bayesian approach to modelling field data on multi-species predator prey-interactions“. Thesis, St Andrews, 2006. https://research-repository.st-andrews.ac.uk/handle/10023/174.

33

Ryman, Jonatan, und Felicia Torbjörnsson. „Hur har digitaliseringen och Big data påverkat revisionsbranschen? : - Hur ser framtiden ut?“ Thesis, Högskolan i Halmstad, Akademin för företagande, innovation och hållbarhet, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-45310.

Abstract:
Information and data are something that is of value in most organizations. Today, there is the opportunity to collect and analyze large amounts of data, which usually goes by the term Big data. If the essential material is handled correctly, it may result in improved decision making and increased insights about clients and the market. The use and processing of Big data has been made possible in recent years through new technology, but it is associated with high costs, which has resulted in small and medium-sized companies not having the resources to implement Big data technologies. This means that larger and more established companies have an advantage. Big data can also be widely applied to auditing firms. The study's problem discussion shows that despite the many benefits that Big data technologies and digitization create for the auditing industry, the development is lagging behind other industries. Therefore, the purpose of the study was designed to see how digitalization and Big data technologies have affected the Swedish auditing industry. The purpose of this study is to investigate, analyze and describe how the implementation of Big data has affected and will affect the auditors' work process. The purpose is also to investigate which advantages and disadvantages the implementation has brought auditing firms, and this is examined through an empirical study. The empirical study is a case study that consists of seven interviews with representatives from various auditing firms in Western Sweden. By using interviews, the respondents have the opportunity to describe their own experiences. The study is based on a qualitative method and has a deductive approach, as the starting point comes from previous research and theory that already exists about digitization, Big data and the auditing industry. The results of the study show that the respondents gave somewhat similar answers to the questions, mostly about their auditing process and how it is designed today, but also about their digital development and Big data. Three of the interviewed firms use Big data today, but only to a limited extent, and one of the respondents had no idea what the concept Big data represented. However, all respondents believe that digitalization in the auditing industry will become even more important and play a greater role in the near future. Automation and standardization are processes that the respondents believe will be more extensive in the future within the auditing firms. The study's conclusion shows that digitalization in the auditing industry has been somewhat slow compared with other industries, but that there has been great development in recent years. The conclusions that are presented, and which are based on the study's questions, are that the work process for the auditors has not been affected by Big data yet, as it is still a relatively unknown concept. However, auditing in the future will not look like it does today; there will be some major changes in the industry in the upcoming years. The work processes will be more efficient than today and the audit will be of a higher quality.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Smeding, Gideon. „Verification of Weakly-Hard Requirements on Quasi-Synchronous Systems“. Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM073/document.

Der volle Inhalt der Quelle
Annotation:
The synchronous approach to reactive systems, where time evolves by globally synchronized discrete steps, has proven successful for the design of safety-critical embedded systems. Synchronous systems are often distributed over asynchronous architectures for reasons of performance or physical constraints of the application. Such distributions typically require communication and synchronization protocols to preserve the synchronous semantics. In practice, protocols often have a significant overhead that may conflict with design constraints such as maximum available buffer space, minimum reaction time, and robustness. The quasi-synchronous approach considers independently clocked, synchronous components that interact via communication-by-sampling or FIFO channels. In such systems we can move from total synchrony, where all clocks tick simultaneously, to global asynchrony by relaxing constraints on the clocks, without additional protocols. Relaxing the constraints adds different behaviors depending on the interleavings of clock ticks. In the case of data-flow systems, one behavior differs from another when the values and timing of items in a flow of one behavior differ from the values and timing of items in the same flow of the other behavior. In many systems, such as distributed control systems, the occasional difference is acceptable as long as the frequency of such differences is bounded. We impose hard bounds on the frequency of deviating items in a flow with what we call weakly-hard requirements, e.g., the maximum number of deviations out of a given number of consecutive items. We define relative drift bounds on pairs of recurring events such as clock ticks, the occurrence of a difference, or the arrival of a message. Drift bounds express constraints on the stability of clocks, e.g., at least two ticks of one clock per three consecutive ticks of the other. Drift bounds also describe weakly-hard requirements. This thesis presents analyses to verify weakly-hard requirements and infer weakly-hard properties of basic synchronous data-flow programs with asynchronous communication-by-sampling when executed with clocks described by drift bounds. Moreover, we use drift bounds as an abstraction in a performance analysis of stream processing systems based on FIFO channels.
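To make the two central notions concrete, the following sketch checks an (m, k)-style weakly-hard requirement on a recorded flow and a relative drift bound between two clock traces. It is only an illustration under assumed trace representations (boolean deviation flags, sorted lists of tick times); it is not the thesis's verification method, which operates on synchronous data-flow programs rather than recorded traces.

# Illustrative sketch, not the thesis's algorithm: weakly-hard and drift-bound
# checks on recorded traces. Trace formats here are assumptions.

def satisfies_weakly_hard(deviations, m, k):
    """True if every window of k consecutive items contains at most m deviations.

    `deviations` is a boolean sequence: True where an item of the flow deviates
    from the reference (synchronous) behaviour.
    """
    window = list(deviations[:k])
    if sum(window) > m:
        return False
    for flag in deviations[k:]:
        window.pop(0)
        window.append(flag)
        if sum(window) > m:
            return False
    return True

def satisfies_drift_bound(ticks_a, ticks_b, min_a, per_b):
    """True if every span of `per_b` consecutive ticks of clock B contains
    at least `min_a` ticks of clock A (tick times given as sorted lists)."""
    for i in range(len(ticks_b) - per_b + 1):
        start, end = ticks_b[i], ticks_b[i + per_b - 1]
        count_a = sum(1 for t in ticks_a if start <= t <= end)
        if count_a < min_a:
            return False
    return True

# Example: at most 1 deviating item out of any 5 consecutive items,
# and at least 2 ticks of A per 3 consecutive ticks of B.
print(satisfies_weakly_hard([False, True, False, False, False, True], m=1, k=5))
print(satisfies_drift_bound([0, 10, 20, 30, 40], [1, 12, 26, 41], min_a=2, per_b=3))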
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Byrnes, Denise Dianne. „Static scheduling of hard real-time control software using an asynchronous data-driven execution model /“. The Ohio State University, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=osu14877799148243.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Hore, Prodip. „Scalable frameworks and algorithms for cluster ensembles and clustering data streams“. [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0002135.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Hsieh, Jane W. „Asking questions is easy, asking great questions is hard: Constructing Effective Stack Overflow Questions“. Oberlin College Honors Theses / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1589722602631253.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Pukitis, Furhoff Hampus. „Efficient search of an underwater area based on probability“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254568.

Der volle Inhalt der Quelle
Annotation:
Today, more and more types of autonomous robots and vehicles are being developed. Most of them rely on the global positioning system and/or communication with other robots and vehicles to determine their global position. However, these are not viable options for today's autonomous underwater vehicles (AUVs), since radio waves do not travel well in water. Instead, various techniques for determining the AUV's position are used, each of which comes with a margin of error. This thesis examines the problem of efficiently performing a local search within this margin of error with the objective of finding a docking station or a buoy. To solve this problem, research was made on the subject of search theory and how it has previously been applied in this context. What was found was that classical Bayesian search theory had not been used very often in this context, since it would require too much processing power to be a viable option in the embedded systems of AUVs. Instead, different heuristics were used to obtain solutions that were still viable for the situations in which they were used, even though they may not have been optimal. Based on this, the search strategies Spiral, Greedy, Look-ahead and Quadtree were developed and evaluated in a simulator. Their mean time to detection (MTTD) was compared, as well as the average time it took for the strategies to process a search. Look-ahead was the best of the four strategies with respect to the MTTD, and based on this it is suggested that it should be implemented and evaluated in a real AUV.
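As a rough illustration of a probability-guided search and of how MTTD is computed from a visit order, consider the following sketch. The grid prior, the greedy rule (highest remaining probability, nearest cell on ties) and the uniform per-cell time are assumptions made for the example; they are not the strategies evaluated in the thesis.

# Hypothetical illustration: greedy probability-guided search over a grid and
# an MTTD estimate. Not the thesis's implementation.

def greedy_search(prob_grid, start):
    """Visit all cells, always choosing the unvisited cell with the highest
    probability; ties are broken by Manhattan distance from the current cell.
    `prob_grid` maps (x, y) cells to the prior probability of the target."""
    remaining = dict(prob_grid)
    pos, order = start, []
    while remaining:
        cell = max(remaining,
                   key=lambda c: (remaining[c],
                                  -(abs(c[0] - pos[0]) + abs(c[1] - pos[1]))))
        order.append(cell)
        del remaining[cell]
        pos = cell
    return order

def estimate_mttd(prob_grid, order, time_per_cell=1.0):
    """Expected time until detection if the target is placed according to the prior."""
    t, mttd = 0.0, 0.0
    for cell in order:
        t += time_per_cell          # simplified: constant time to reach/scan a cell
        mttd += prob_grid[cell] * t
    return mttd

# Toy example: a 5x5 grid with a prior peaked near (2, 2).
grid = {(x, y): 1.0 / (1 + (x - 2) ** 2 + (y - 2) ** 2)
        for x in range(5) for y in range(5)}
total = sum(grid.values())
grid = {c: p / total for c, p in grid.items()}

visit_order = greedy_search(grid, start=(0, 0))
print(f"MTTD estimate: {estimate_mttd(grid, visit_order):.2f} time units")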
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Canzonieri, Massimiliano. „Dati digitali effimeri e permanenti“. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3904/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Wistrand, Henrik. „Vad har svenska företag för syn på sovande data och hur hanterar svenska företag sovande data med avseende på identifiering och lagring?“ Thesis, University of Skövde, School of Humanities and Informatics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-853.

Der volle Inhalt der Quelle
Annotation:

Far too many organizations have data warehouses containing large amounts of dormant data, that is, data that is rarely or never used. Dormant data affects an organization's data warehouse negatively because it degrades the warehouse's performance, costs money unnecessarily and has a negative impact on the warehouse's infrastructure. According to Inmon, Glassey and Welch (1997), cleaning dormant data out of a data warehouse is a very difficult and complex process: the administrator must know which tables in the warehouse are used, and which rows of data are used, in order to be able to remove data from the warehouse. According to Inmon et al. (1997), it is necessary to use some form of method to identify which data in the warehouse can be classified as dormant. The aim of this work is to investigate how Swedish companies handle dormant data, in order to find out which methods they use to identify dormant data and what they do with the data that is classified as dormant.
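One simple way to approach the identification step mentioned above is to scan access logs and flag tables that have not been read within some threshold. The sketch below is purely illustrative; the log format, table names and 180-day threshold are assumptions, not taken from the thesis or from any specific data warehouse product.

# Hypothetical sketch: flag tables with no recorded access within a threshold.

from datetime import datetime, timedelta

def find_dormant_tables(access_log, all_tables, now, max_idle_days=180):
    """Return tables whose last recorded access is older than `max_idle_days`.
    `access_log` is an iterable of (table_name, access_time) tuples."""
    last_access = {}
    for table, ts in access_log:
        if table not in last_access or ts > last_access[table]:
            last_access[table] = ts
    cutoff = now - timedelta(days=max_idle_days)
    return sorted(t for t in all_tables
                  if last_access.get(t, datetime.min) < cutoff)

log = [("sales_2023", datetime(2024, 5, 1)), ("customers", datetime(2021, 2, 3))]
print(find_dormant_tables(log, {"sales_2023", "customers", "legacy_orders"},
                          now=datetime(2024, 6, 1)))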

APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Erlandsson, Emil. „Har sociala medier förkortat vår koncentrationsförmåga? : Upplever KTH-studenter att deras koncentrationsförmåga har försämrats och finns det något samband med deras användande av sociala medier?“ Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237252.

Der volle Inhalt der Quelle
Annotation:
In this study, we examine how students at KTH perceive that their social media habits have affected their attention span related to deep reading and their ability to focus on reading longer texts. We begin by exploring the background to the commonly held assumption that social media is ruining our ability to concentrate. The working assumption and hypothesis of the study was that students who were active social media users would not have a worse attention span than those who were not active users. The study was performed using a survey. After analysing the data from the 70 participants, it was clear that the data supported the hypothesis. However, there are a number of sources of error, which are discussed at the end of the report, and the study concludes with possible improvements and considerations for future research.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Eriksson, Claes. „PERMADEATH MEDPERMANENTA OCHTRANSIENTA MÅL : Effekten permanenta och transienta mål har påpermadeath“. Thesis, Högskolan i Skövde, Institutionen för kommunikation och information, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11099.

Der volle Inhalt der Quelle
Annotation:
This work investigates what effect a transient versus a permanent goal has on the player experience in a game with permadeath, and how the player reacts to the mechanic when the game's conditions change. The work starts from the hypothesis that a game presented with a transient goal is better suited to permadeath and makes death more acceptable to the player. The work first gives an introduction to permadeath as a game mechanic, to what loss means in a game context, and to the two goal variations, followed by the research question and how it is to be measured. For this work, a game with permadeath was created in two variants, one presented with a permanent goal and one with a transient goal. The tests were then carried out with two groups, whose play experience was measured through a questionnaire. The results showed no noticeable difference between the game versions and their goals, which was largely due to how the test games themselves were designed. However, the results reveal other interesting aspects of permadeath, such as the effects that other mechanics in the game design can have on permadeath, and areas that must be taken into account if it is to be meaningful to claim that a game contains permadeath.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Copete, Antonio Julio. „BAT Slew Survey (BATSS): Slew Data Analysis for the Swift-BAT Coded Aperture Imaging Telescope“. Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10681.

Der volle Inhalt der Quelle
Annotation:
The BAT Slew Survey (BATSS) is the first wide-field survey of the hard X-ray sky (15–150 keV) with a slewing coded aperture imaging telescope. Its fine time resolution, high sensitivity and large sky coverage make it particularly well-suited for detections of transient sources with variability timescales in the ~1 sec–1 hour range, such as Gamma-Ray Bursts (GRBs), flaring stars and Blazars. As implemented, BATSS observations are found to be consistently more sensitive than their BAT pointing-mode counterparts, by an average of 20% over the 10 sec–3 ksec exposure range, due to intrinsic systematic differences between them. The survey’s motivation, development and implementation are presented, including a description of the software and hardware infrastructure that made this effort possible. The analysis of BATSS science data concentrates on the results of the 4.8-year BATSS GRB survey, beginning with the discovery of GRB 070326 during its preliminary testing phase. A total of nineteen (19) GRBs were detected exclusively in BATSS slews over this period, making it the largest contribution to the Swift GRB catalog from all ground-based analysis. The timing and spectral properties of prompt emission from BATSS GRBs reveal their consistency with Swift long GRBs (L-GRBs), though with instances of GRBs with unusually soft spectra or X-Ray Flashes (XRFs), GRBs near the faint end of the fluence distribution accessible to Swift-BAT, and a probable short GRB with extended emission, all uncommon traits within the general Swift GRB population. In addition, the BATSS overall detection rate of 0.49 GRBs/day of instrument time is a significant increase (45%) above the BAT pointing detection rate. This result was confirmed by a GRB detection simulation model, which further showed the increased sky coverage of slews to be the dominant effect in enhancing GRB detection probabilities. A review of lessons learned is included, with specific proposals to broaden both the number and range of astrophysical sources found in future enhancements. The BATSS survey results provide solid empirical evidence in support of an all-slewing hard X-ray survey mission, a prospect that may be realized with the launch of the proposed MIRAX-HXI mission in 2017.
Physics
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Zábojník, Jakub. „Využití knihovny HAM-Tools pro simulaci tepelného chování rodinného domu“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231126.

Der volle Inhalt der Quelle
Annotation:
As part of this master's thesis, the HAM-Tools library for MATLAB/Simulink was modified for use in simulations of houses in the Czech Republic. The modified library and its parts are described in detail and tested through simulations of one-zone and two-zone models of a house. Simulations of models with the same parameters were also carried out in the TRNSYS program, and the corresponding results obtained in the two simulation tools were compared with each other. The one-zone model created with the HAM-Tools library is tested by simulating ventilation, heating, cooling, and sources of moisture. A demonstration of the practical use of the simulation is also carried out, namely by examining the influence of insulation thickness on the thermal performance of the house (i.e. its heat loss) under real atmospheric conditions. Among other things, available sources of meteorological data are reviewed and compared with each other. A function for processing the meteorological data into a file compatible with the HAM-Tools library was created, as was a material data file containing materials commonly used in building structures in the Czech Republic and their parameters.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Poluri, Kaushik. „Bounding the Worst-Case Response Times of Hard-Real-Time Tasks under the Priority Ceiling Protocol in Cache-Based Architectures“. OpenSIUC, 2013. https://opensiuc.lib.siu.edu/theses/1213.

Der volle Inhalt der Quelle
Annotation:
Schedulability analysis of hard-real-time systems requires a priori knowledge of the worst-case execution times (WCET) of all tasks. Static timing analysis is a safe technique for calculating WCET that attempts to model program complexity, architectural complexity, and the complexity introduced by interference from other tasks. Modern architectural features such as caches make static timing analysis of a single task challenging, due to the unpredictability introduced by their reliance on the history of memory accesses, and make the analysis of a set of tasks even more challenging due to cache-related interference among tasks. Researchers have proposed several static timing analysis techniques that explicitly consider cache-eviction delays for independent hard-real-time tasks executing on cache-based architectures. However, there is little research in this area for resource-sharing tasks. Recently, an analysis technique was proposed for systems using the Priority Inheritance Protocol (PIP) to manage resource arbitration among tasks. The Priority Ceiling Protocol (PCP) is a resource-arbitration protocol that offers distinct advantages over the PIP, including deadlock avoidance. However, to the best of our knowledge, there is currently no technique to bound the WCET of resource-sharing tasks under the PCP with explicit consideration of cache-eviction delays. This thesis presents a technique to bound the WCETs, and hence the worst-case response times (WCRTs), of resource-sharing hard-real-time tasks executing on cache-based uniprocessor systems, with a specific focus on data cache analysis.
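For orientation, the sketch below iterates the textbook fixed-priority response-time recurrence with a blocking term, which is where a PCP blocking bound and a cache-related preemption delay (CRPD) would enter such an analysis. It is the standard recurrence, shown only as context; it is not the specific technique developed in the thesis, and the CRPD-per-preemption charge is an assumption of the example.

# Standard response-time iteration with blocking; illustrative only.
import math

def worst_case_response_time(C, B, higher_prio, crpd=0.0, deadline=float("inf")):
    """C: WCET of the task under analysis; B: worst-case blocking (under the PCP,
    at most one outermost critical section of a lower-priority task);
    higher_prio: list of (C_j, T_j) for higher-priority tasks;
    crpd: assumed cache-related preemption delay charged per preemption."""
    R = C + B
    while True:
        interference = sum(math.ceil(R / T_j) * (C_j + crpd)
                           for C_j, T_j in higher_prio)
        R_next = C + B + interference
        if R_next > deadline:
            return None              # no fixed point within the deadline
        if R_next == R:
            return R                 # fixed point reached: the WCRT bound
        R = R_next

# Toy task: WCET 4, blocking 2, two higher-priority tasks, CRPD 0.5 per preemption.
print(worst_case_response_time(C=4, B=2, higher_prio=[(1, 10), (2, 15)], crpd=0.5))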
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Imbert, Julie. „Fine-tuning of Fully Convolutional Networks for Vehicle Detection in Satellite Images: Data Augmentation and Hard Examples Mining“. Thesis, KTH, Geoinformatik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254946.

Der volle Inhalt der Quelle
Annotation:
Earth observation satellites, from both private companies and governmental agencies, allow us to take a deep look at the Earth from above. Images are acquired with shorter revisit times and spatial resolutions offering new details. Nevertheless, collecting the data is only half of the work that has to be done. With an increase in the amount and quality of available satellite data, developing processing methods to exploit it efficiently and rapidly is essential. In images at around 30 cm resolution, vehicles can be detected well but are still considered small objects, and even the best models can still produce false alarms and missed detections. The aim of this master thesis is to improve vehicle detection with already high-performing networks, using only their initial training dataset. Worldview-3 satellite images are used for this work, since they have a resolution close to 30 cm; areas all over the world are considered. A custom U-Net convolutional network is trained for vehicle detection on two different datasets. When such an architecture is designed carefully with state-of-the-art methods and then trained with the right parameters on a relevant and sufficiently large dataset, very high scores can be reached. For this reason, improving such a network once it has been trained on all available data, in order to gain the last few points of score, is a real challenge. The performance of the U-Net is analysed both on a test set and on its own training dataset. From the performance on the test set, specific data augmentations are chosen to improve the network with a short fine-tuning training. This method allows the network to be improved on its own specific weaknesses; it is an efficient way to avoid directly using numerous data augmentations that would not all be necessary and would increase training times. From the performance on the training dataset, examples the network failed to learn are identified. Missed vehicles and false alarms are then used to design new datasets on which the network is fine-tuned, in order to improve it and reduce these types of mistakes on the test set. These fine-tuning trainings are performed with adapted parameters to avoid catastrophic forgetting. The aim is to focus the network's fine-tuning on false positives or false negatives, in order to allow it to learn features that it might have missed during the first training. Using data augmentation as a fine-tuning method increased the performance of a model: a gain close to 2.57 points in F1-score was obtained with a specific augmentation. The hard example mining strategy yielded more variable results; in the best case an improvement of 1.4 points in F1-score was observed. The method allowed the network to be steered towards improving either recall or precision, while a deterioration of the other metric was observed. An improvement of both metrics simultaneously was not reached.
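The hard-example mining step described above can be illustrated with a short sketch: run the trained detector over its own training tiles, keep the tiles that contain missed vehicles or false alarms, and mix them with easy tiles into a fine-tuning set. The model interface, tile format and 70/30 mixing ratio are assumptions made for the example, not the thesis's actual code.

# Illustrative hard-example mining; interfaces are assumed.
import random

def box_area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = box_area(a) + box_area(b) - inter
    return inter / union if union else 0.0

def mine_hard_tiles(predict, tiles, iou_threshold=0.3):
    """`predict(image)` returns predicted boxes; each tile is (image, gt_boxes).
    A tile is 'hard' if it contains a missed vehicle or a false alarm."""
    hard = []
    for image, gt_boxes in tiles:
        preds = predict(image)
        missed = [g for g in gt_boxes
                  if all(iou(g, p) < iou_threshold for p in preds)]
        false_alarms = [p for p in preds
                        if all(iou(p, g) < iou_threshold for g in gt_boxes)]
        if missed or false_alarms:
            hard.append((image, gt_boxes))
    return hard

def build_finetuning_set(hard_tiles, easy_tiles, hard_fraction=0.7, size=2000):
    """Mix hard and easy tiles so fine-tuning does not forget the easy cases."""
    n_hard = min(int(size * hard_fraction), len(hard_tiles))
    n_easy = min(size - n_hard, len(easy_tiles))
    return random.sample(hard_tiles, n_hard) + random.sample(easy_tiles, n_easy)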
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Gächter, Sundbäck Dominic. „Analysis of the Hard Spectrum BL Lac Source 1H 1914-194 with Fermi-LAT Data and Multiwavelength Modelling“. Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-76510.

Der volle Inhalt der Quelle
Annotation:
The very-high-energy gamma-ray emission of the hard-spectrum BL Lac source 1H 1914-194 has been studied with Fermi-LAT data covering a nearly ten-year period between August 2008 and March 2018, in the energy range 300 MeV to 870 GeV. The mean flux has been determined as 8.4 × 10⁻⁹ ± 3.5 × 10⁻¹⁰ photons cm⁻² s⁻¹. The data processing was done with the Enrico software using the Fermi Science Tools (v10r0p5) and the Pass 8 version of the data, performing a binned analysis in order to handle the long integration time. The light curve shows that the source has to be considered variable in the given time period for a three-month binning. It furthermore gives evidence for at least one quiet and one active period, each lasting slightly over 1.5 years, and even these shorter periods show weak variability. The significance of the source has been determined as σ = 57.5 for a one-year period. The spectra of three different time periods have been fitted with PowerLaw2, LogParabola and PLExpCutoff functions, with LogParabola being slightly favored in most cases; however, the test statistics do not show enough significance to establish an unambiguous preference. The results of the analysis have been placed in a multiwavelength view of the source, showing that they are in agreement with the data from the Fermi catalogs. The overall emission of 1H 1914-194 has been modelled with theoretical frameworks based on a one-zone synchrotron self-Compton (SSC) model, providing an acceptable description of the SED.
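For reference, the three spectral shapes named above are conventionally parametrised as sketched below; the exact parameter and normalisation conventions used in the Fermi Science Tools may differ in detail, so treat this as an orientation rather than the tool's definition. Here N denotes the integral flux between E_min and E_max, N_0 a prefactor, and E_b, E_0, E_c the break, reference and cutoff energies.

% Sketch of the standard parametrisations (conventions may differ in detail).
\begin{align}
  \text{PowerLaw2:}   &\quad \frac{dN}{dE} =
      \frac{N\,(\gamma+1)\,E^{\gamma}}{E_{\max}^{\gamma+1}-E_{\min}^{\gamma+1}},
      \qquad N = \int_{E_{\min}}^{E_{\max}} \frac{dN}{dE}\,dE,\\
  \text{LogParabola:} &\quad \frac{dN}{dE} = N_0
      \left(\frac{E}{E_b}\right)^{-\left(\alpha+\beta\,\ln(E/E_b)\right)},\\
  \text{PLExpCutoff:} &\quad \frac{dN}{dE} = N_0
      \left(\frac{E}{E_0}\right)^{\gamma}\exp\!\left(-\frac{E}{E_c}\right).
\end{align}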
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Amos, Nissim. „Media fabrication and characterization systems for three dimensional-multilevel magnetic recording“. Diss., [Riverside, Calif.] : University of California, Riverside, 2008. http://proquest.umi.com/pqdweb?index=0&did=1663077871&SrchMode=2&sid=2&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268244498&clientId=48051.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph. D.)--University of California, Riverside, 2008.
Includes abstract. Available via ProQuest Digital Dissertations. Title from first page of PDF file (viewed March 10, 2010). Includes bibliographical references (p. 96-104). Also issued in print.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Lannergård, Joakim, und Mikael Larsson. „Hur stor betydelse har bakåtblickande respektive framåtblickande förväntningar i Phillipskurvan? - en empirisk studie på svenska data“. Thesis, Uppsala University, Department of Economics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-5957.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Andersson, Emanuel. „Hur företag inom dagligvaruhandeln kan uppnå ett framgångsrikt arbetssätt med CRM-system : Vilken roll har insamlade data?“ Thesis, Karlstads universitet, Handelshögskolan (from 2013), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-78886.

Der volle Inhalt der Quelle
Annotation:
In the grocery retail sector, it has been a problem that data is collected but companies do not make full use of it. It has also been a problem that companies target new customers even though it has been shown to be more profitable to target existing customers in their marketing. Since a CRM system collects a great deal of data, questions have also arisen about what is ethically acceptable to do with the collected data. The purpose of this thesis is to explain how companies in the grocery retail sector can achieve a successful way of working with CRM systems. CRM (Customer Relationship Management) is important for companies in order to understand their customers; it is used wherever the company interacts with its customers, and for collecting customer information that can then be used as support for selecting the most profitable customers. This study has examined how previous research describes a successful way of working with CRM systems in order to target profitable customers and achieve economic benefits, what role data plays in the CRM work as a whole, and how various ethical aspects affect the work with CRM. Semi-structured interviews were conducted with four respondents working in the grocery retail sector in order to gain additional perspectives on the CRM work. The study shows that data plays a central role for the whole organization: for decision making, for the design of the entire loyalty program, and for the daily operational work in stores to keep inventory at good levels and reduce waste. The study also shows that complete data is a success factor for the entire CRM work, which enables a way of working that creates economic benefits. The study further shows that companies in the grocery retail sector can target their most profitable customers by favoring those who have frequent purchasing behavior, make regular purchases and are high-margin customers. The study also shows that the GDPR and ethical aspects have a major impact on the organization's CRM strategy and on the relationship between customer and company. The study contributes to previous research by explaining what role data plays in CRM work, so that one then knows how it can be used in different parts of the organization, and by showing how complete data plays a major role in a successful way of working with CRM throughout the organization. The study also contributes insight into how aspects of preserving the customer's personal integrity affect the work with CRM in the collection and use of customer data.
APA, Harvard, Vancouver, ISO und andere Zitierweisen