
Dissertations / Theses on the topic 'Sensitive information'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Sensitive information.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

De Cristofaro, E. "Sharing sensitive information with privacy." Thesis, University College London (University of London), 2011. http://discovery.ucl.ac.uk/1450712/.

2

Ahlqvist, Ola. "Context Sensitive Transformation of Geographic Information." Doctoral thesis, Stockholm : Univ, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-200.

3

Forti, Cristiano Augusto Borges. "Bank dividends and signaling to information-sensitive depositors." Repositório Institucional do FGV, 2012. http://hdl.handle.net/10438/10518.

Abstract:
This study investigates whether the composition of bank debt affects payout policy. I identify that information-sensitive depositors (institutional investors) are targets of dividend signaling by banks. I use a unique database of Brazilian banks, for which I am able to identify several types of debtholders, namely institutional investors, nonfinancial firms, and individuals, which are potential targets of dividend signaling. I also exploit features of the Brazilian banking system, such as the existence of several closely held banks, owned and managed by a small group of shareholders, for which shareholder-targeted signaling is implausible, and find that banks that rely more on information-sensitive (institutional) depositors for funding pay larger dividends, controlling for other features. During the financial crisis, this behavior was even more pronounced. This relationship reinforces the role of dividends as a costly and credible signal of the quality of bank assets. The hypothesis that dividends are a means for shareholders to expropriate depositors is refuted, since in that case larger dividends would be observed at banks whose depositors are less information-sensitive. I also find that payout is negatively related to the banks' cost of funding (interest rates paid on certificates of deposit), that dividends have a positive relationship with size and past profitability, and that closely held banks pay more dividends than publicly traded banks, a finding that is also in line with the idea that depositors are targets of dividend signaling. Finally, I find a negative relationship between dividends and the capital adequacy ratio, which indicates that regulatory pressure may induce banks to pay lower dividends, and that payouts are negatively related to the growth of the loan portfolio, consistent with the idea that banks retain earnings to increase equity and thus their lending capacity.
4

Kacem, Sahraoui Ameni. "Personalized information retrieval based on time-sensitive user profile." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30111/document.

Abstract:
Recently, search engines have become the main source of information for many users and are widely used in different fields. However, Information Retrieval Systems (IRS) face new challenges due to the growth and diversity of available data. An IRS analyses the query submitted by the user and explores collections of data of an unstructured or semi-structured nature (e.g. text, images, video, Web pages) in order to deliver items that best match the user's intent and interests. To achieve this goal, IRSs have moved beyond pure query-document matching to also consider the user's context. Indeed, the user profile has been considered in the literature as the most important contextual element for improving the accuracy of search; it is integrated into the information retrieval process to improve the user experience when searching for specific information. As the time factor has gained importance in recent years, temporal dynamics have been introduced to study the evolution of the user profile, which consists mainly in capturing changes in the user's behavior, interests, and preferences, and updating the profile accordingly. Prior work distinguishes short-term from long-term profiles. The former is limited to interests related to the user's current activities, while the latter represents persistent interests extracted from prior activities, excluding the current ones. However, for users who are not very active, whose activities are few and spread out over time, a short-term profile can eliminate relevant results that relate to their personal interests. For very active users, aggregating recent activities without ignoring older interests is attractive, because this kind of profile usually changes over time. Unlike those approaches, we propose in this thesis a generic time-sensitive user profile that is implicitly constructed as a vector of weighted terms, finding a trade-off by unifying both current and recurrent interests. User profile information can be extracted from multiple sources; among the most promising, we propose to use, on the one hand, search history, and on the other hand, social media. Data from search history can be extracted implicitly, without any effort from the user, and includes issued queries, their corresponding results, reformulated queries, and click-through data that has relevance-feedback potential, while the popularity of social media makes it an invaluable source of data that users employ to express, share, and mark as favorite the content that interests them.
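
The time-sensitive profile described in this abstract can be illustrated concretely. The following Python sketch is a hypothetical rendering, not the thesis's actual model: each timestamped activity contributes its terms to a weighted vector, with an exponential decay that favours recent interests while still retaining older, recurrent ones. The half-life constant and the normalisation are illustrative assumptions.

```python
import math
import time
from collections import Counter

def build_time_sensitive_profile(activities, half_life_days=30.0, now=None):
    """Aggregate (timestamp, terms) activities into a weighted term vector.

    Recent activities weigh more via exponential decay, but old interests
    still contribute, unifying short- and long-term profiles.
    """
    now = now or time.time()
    decay = math.log(2) / (half_life_days * 86400.0)  # per-second decay rate
    profile = Counter()
    for timestamp, terms in activities:
        weight = math.exp(-decay * (now - timestamp))  # in (0, 1]
        for term in terms:
            profile[term] += weight
    total = sum(profile.values()) or 1.0
    return {term: w / total for term, w in profile.items()}  # normalised vector

# Example: two old searches and one recent one
activities = [
    (time.time() - 90 * 86400, ["privacy", "encryption"]),
    (time.time() - 60 * 86400, ["privacy", "gdpr"]),
    (time.time() - 1 * 86400, ["holiday", "flights"]),
]
print(build_time_sensitive_profile(activities))
```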
5

Ema, Ismat. "Sensitive Data Migration to the Cloud." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-64736.

6

Forde, Edward Steven. "Security Strategies for Hosting Sensitive Information in the Commercial Cloud." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/3604.

Abstract:
IT experts often struggle to find strategies to secure data in the cloud. Although current security standards might provide cloud compliance, they fail to offer guarantees of security assurance. The purpose of this qualitative case study was to explore the strategies used by IT security managers to host sensitive information in the commercial cloud. The study's population consisted of information security managers from a government agency in the eastern region of the United States. Routine activity theory, developed by Cohen and Felson, was used as the conceptual framework for the study. The data collection process included IT security manager interviews (n = 7), organizational documents and procedures (n = 14), and direct observation of a training meeting (n = 35). Organizational and observational data were summarized. Coding from the interviews and member checking were triangulated with organizational documents and observational data/field notes to produce major and minor themes. Through methodological triangulation, 5 major themes emerged from the data analysis: avoiding social engineering vulnerabilities, avoiding weak encryption, maintaining customer trust, training to create a cloud security culture, and developing sufficient policies. The findings of this study may benefit information security managers by enhancing their information security practices to better protect their organization's information stored in the commercial cloud. Improved information security practices may contribute to social change by exposing customers to a lower risk of having their identity or data stolen by internal and external thieves.
7

Träutlein, Sarah Anna Elisabeth [Verfasser], Peter [Akademischer Betreuer] Buxmann, and Alexander [Akademischer Betreuer] Benlian. "Employees' sensitive information disclosure behavior in enterprise information systems / Sarah Träutlein ; Peter Buxmann, Alexander Benlian." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2017. http://d-nb.info/1149252448/34.

8

Li, Xinfeng. "Time-sensitive Information Communication, Sensing, and Computing in Cyber-Physical Systems." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1397731767.

9

Liu, Yin. "Methodologies, Techniques, and Tools for Understanding and Managing Sensitive Program Information." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103421.

Abstract:
Exfiltrating or tampering with certain business logic, algorithms, and data can harm the security and privacy of both organizations and end users. Collectively referred to as sensitive program information (SPI), these building blocks are part and parcel of modern software systems in domains ranging from enterprise applications to cyberphysical setups. Hence, protecting SPI has become one of the most salient challenges of modern software development. However, several fundamental obstacles stand in the way of effective SPI protection: (1) understanding and locating the SPI for any realistically sized codebase by hand is hard; (2) manually isolating SPI to protect it is burdensome and error-prone; (3) if SPI is passed across distributed components within and across devices, it becomes vulnerable to security and privacy attacks. To address these problems, this dissertation research innovates in the realm of automated program analysis, code transformation, and novel programming abstractions to improve the state of the art in SPI protection. Specifically, this dissertation comprises three interrelated research thrusts that: (1) design and develop program analysis and programming support for inferring the usage semantics of program constructs, with the goal of helping developers understand and identify SPI; (2) provide powerful programming abstractions and tools that transform code automatically, with the goal of helping developers effectively isolate SPI from the rest of the codebase; (3) provide a programming mechanism for distributed managed execution environments that hides SPI, with the goal of enabling components to exchange SPI safely and securely. The novel methodologies, techniques, and software tools of this dissertation research, supported by programming abstractions, automated program analysis, and code transformation, lay the groundwork for a secure, understandable, and efficient foundation for protecting SPI. The dissertation is based on 4 conference papers, presented at TrustCom'20, GPCE'20, GPCE'18, and ManLang'17, as well as 1 journal paper, published in the Journal of Computer Languages (COLA).
In lay terms: some portions of a computer program are sensitive, referred to as sensitive program information (SPI). By compromising SPI, attackers can harm user security and privacy. It is hard for developers to identify and protect SPI, particularly in large programs. This dissertation introduces novel methodologies, techniques, and software tools that facilitate software development tasks concerned with locating and protecting SPI.
10

Boggs, Teresa. "Sharing Sensitive Information with Parents: A Guide for Early Childhood Educators." Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/etsu-works/1511.

11

Rubaiy, Hussein Nori. "Allosteric information transfer through inter-subunit contacts in ATP-sensitive potassium channels." Thesis, University of Leicester, 2012. http://hdl.handle.net/2381/11003.

Abstract:
KATP channels are ubiquitously expressed and link metabolic state to electrical excitability. In the heart they play a protective role in response to ischaemic stress, and in vascular smooth muscle they regulate vascular tone (vasorelaxation). Functional KATP channels are hetero-octamers composed of two subunits: a pore-forming Kir6, a member of the inwardly rectifying potassium channel family, and a regulatory sulphonylurea receptor (SUR). In response to nucleotides and pharmacological agents, SUR allosterically regulates KATP channel gating. Multidisciplinary techniques (molecular biology, biochemistry, electrophysiology, pharmacology) were used to study the allosteric regulation between these two heterologous subunits in KATP channels. This project was divided into three major sub-projects: 1) application of site-directed mutagenesis and biochemical techniques to identify the cognate interaction domain on Kir6.2 for SUR2A-NBD2 (nucleotide binding domain 2); 2) electrophysiological techniques to investigate the allosteric information transfer between the heterologous subunits Kir6 and SUR2A; 3) recombinant fusion protein work to express and purify the cytoplasmic domains of Kir6.2 for structural analysis of the interaction between the two subunits. This study reports the identification of three cytoplasmic electrostatic interfaces between Kir6 and SUR2A involved in transmitting the sensitivity to the KATP channel agonist pinacidil and the antagonist glibenclamide from SUR2A to the Kir6 channel pore. For structural study of the cytoplasmic domains of Kir6.2, bacterial TM1070 was used as a fusion partner with Kir6.2. A TM1070-Kir6.2 NC (CT-His6 tag) fusion construct expressed in Arctic Express competent cells permitted successful expression of the folded cytoplasmic domains of Kir6.2 in near-native form. Immobilized metal ion affinity chromatography, IMAC (Ni2+), followed by gel filtration chromatography (GFC) as a second purification step, was performed to purify this recombinant protein. The purification was confirmed by CBS and Western blot analysis. This new information on channel structure-function relationships may contribute to the design of novel and more effective drugs.
12

Yoon, Janghyun. "A network-aware semantics-sensitive image retrieval system." Diss., Georgia Institute of Technology, 2003. Available online: http://etd.gatech.edu/theses/available/etd-04082004-180459/unrestricted/yoon%5fjanghyun%5f200312%5fphd.pdf.

13

Johnson, Kenneth Tyrone. "The Training Deficiency in Corporate America: Training Security Professionals to Protect Sensitive Information." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/4145.

Abstract:
Senior leaders need to understand internal and external training approaches before creating a training plan for security professionals to protect sensitive information. The purpose of this qualitative case study was to explore the training strategies telecommunication industry leaders use to ensure security professionals can protect sensitive information. The population consisted of 3 senior leaders in a large telecommunication company located in Dallas, Texas, that has a large footprint in securing sensitive information. The conceptual framework on which this study was based was the security risk planning model. Semistructured interviews and document reviews helped to support the findings of this study. Using the thematic approach, 3 major themes emerged: security training is required for all professionals, different approaches to training are beneficial, and internal and external training should complement each other. The findings revealed that senior leaders used different variations of training programs to train security professionals on how to protect sensitive information. The senior leaders' highest priority was ensuring that all personnel accessing the network received the proper training. The findings may contribute to social change by enhancing area schools' technology programs with evolving cyber security technology, helping students detect and eradicate threats before any loss of sensitive information occurs.
14

Belkacem, Thiziri. "Neural models for information retrieval : towards asymmetry sensitive approaches based on attention models." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30167.

Abstract:
This work is situated in the context of information retrieval (IR) using machine learning (ML) and deep learning (DL) techniques. It concerns different tasks requiring text matching, such as ad-hoc retrieval, question answering, and paraphrase identification. The objective of this thesis is to propose new approaches, using DL methods, to construct semantic-based models for text matching and to overcome the vocabulary-mismatch problems related to the classical bag-of-words (BoW) representations used in traditional IR models. Indeed, traditional text matching methods are based on the BoW representation, which considers a given text as a set of independent words; the process of matching two sequences of text then relies on exact matching between words. The main limitation of this approach is the vocabulary mismatch. This problem occurs when the text sequences to be matched do not use the same vocabulary, even if their subjects are related. For example, the query may contain several words that are not necessarily used in the documents of the collection, including the relevant documents. BoW representations also ignore several aspects of a text sequence, such as its structure and the context of words. These characteristics are important and make it possible to differentiate between two texts that use the same words but express different information. Another problem in text matching is related to the length of documents. The relevant parts can be distributed in different ways across the documents of a collection. This is especially true of long documents, which tend to cover a large number of topics and include variable vocabulary; a long document may thus contain several relevant passages that a matching model must capture. Unlike long documents, short documents are likely to concern a specific subject and tend to contain a more restricted vocabulary, so assessing their relevance is in principle simpler than assessing that of longer documents. In this thesis, we have proposed different contributions, each addressing one of the above-mentioned issues. First, in order to solve the vocabulary-mismatch problem, we used distributed representations of words (word embeddings) to allow semantic matching between different words. These representations have been used in IR applications where document/query similarity is computed by comparing all the term vectors of the query with all the term vectors of the document indiscriminately. Unlike the models proposed in the state of the art, we studied the impact of query terms with regard to their presence in or absence from a document, and adopted different document/query matching strategies. The intuition is that the absence of query terms from relevant documents is in itself a useful signal to take into account in the matching process: these terms do not appear in documents of the collection for two possible reasons, either their synonyms have been used, or they are not part of the context of the documents in question. The methods we propose make it possible, on the one hand, to perform inexact matching between document and query and, on the other hand, to evaluate the impact of the different terms of a query in the matching process. Although the use of word embeddings allows semantic matching between different text sequences, these representations combined with classical matching models still consider a text as a list of independent elements (a bag of vectors instead of a bag of words). However, the structure of the text and the order of its words matter: any change in the structure of the text and/or the order of words alters the information expressed. To address this problem, neural models are used for text matching.
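
As a rough illustration of the presence/absence-aware matching strategies sketched above, the hypothetical Python snippet below scores a document against a query by granting full credit to query terms that occur in the document and discounted embedding similarity to those that do not. The toy vectors and the 0.5 discount are assumptions made for the example, not the models proposed in the thesis.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def presence_aware_score(query_terms, doc_terms, embeddings, absent_weight=0.5):
    """Score a document, treating present and absent query terms differently."""
    doc_set = set(doc_terms)
    score = 0.0
    for q in query_terms:
        if q in doc_set:
            score += 1.0                      # exact match: full credit
        elif q in embeddings:                 # absent term: semantic fallback
            sims = [cosine(embeddings[q], embeddings[d])
                    for d in doc_set if d in embeddings]
            if sims:
                score += absent_weight * max(sims)
    return score / max(len(query_terms), 1)

# Toy 3-d vectors standing in for trained word embeddings
emb = {"car": np.array([1.0, 0.1, 0.0]), "vehicle": np.array([0.9, 0.2, 0.1]),
       "price": np.array([0.0, 1.0, 0.2]), "cost": np.array([0.1, 0.9, 0.3])}
print(presence_aware_score(["car", "cost"], ["vehicle", "price"], emb))
```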
15

Boggs, Teresa. "Sharing Sensitive Information with Parents: A Guide for Discussing Speech and Language Concerns." Digital Commons @ East Tennessee State University, 2009. https://dc.etsu.edu/etsu-works/1512.

16

Egbert, Matthew. "Adaptation from interactions between metabolism and behaviour : self-sensitive behaviour in protocells." Thesis, University of Sussex, 2012. http://sro.sussex.ac.uk/id/eprint/39564/.

Abstract:
This thesis considers the relationship between adaptive behaviour and metabolism, using theoretical arguments supported by computational models to demonstrate mechanisms of adaptation that are uniquely available to systems based upon the metabolic organisation of self-production. It is argued that, by being sensitive to its metabolic viability, an organism can respond to the quality of its environment with respect to its metabolic well-being. This makes possible simple but powerful ‘self-sensitive' adaptive behaviours such as “If I am healthy now, keep doing the same as I have been doing – otherwise do something else.” This strategy provides several adaptive benefits, including the ability to respond appropriately to phenomena never previously experienced by the organism nor by any of its ancestors; the ability to integrate different environmental influences to produce an appropriate response; and sensitivity to the organism's present context and history of experience. Computational models are used to demonstrate these capabilities, as well as the possibility that self-sensitive adaptive behaviour can facilitate the adaptive evolution of populations of self-sensitive organisms through (i) processes similar to the Baldwin effect, (ii) increased likelihood of speciation events, and (iii) automatic behavioural adaptation to changes in the organism itself (such as genetic changes). In addition to these theoretical contributions, a computational model of self-sensitive behaviour is presented that recreates chemotaxis patterns observed in bacteria such as Azospirillum brasilense and Campylobacter jejuni. The models also suggest new explanations for previously unexplained asymmetric distributions of bacteria performing aerotaxis. More broadly, the work advocates further research into the relationship between behaviour and the metabolic organisation of self-production, an organisational property shared by all life. It also serves as an example of how abstract models that target theoretical concepts rather than natural phenomena can play a valuable role in the scientific endeavour.
17

Strötgen, Jannik [Verfasser], and Michael [Akademischer Betreuer] Gertz. "Domain-sensitive Temporal Tagging for Event-centric Information Retrieval / Jannik Strötgen ; Betreuer: Michael Gertz." Heidelberg : Universitätsbibliothek Heidelberg, 2015. http://d-nb.info/1180395689/34.

18

Boggs, Teresa. "Sharing Sensitive Information with Parents: A Guide for Discussing Speech, Language, and Developmental Concerns." Digital Commons @ East Tennessee State University, 2000. https://dc.etsu.edu/etsu-works/1517.

19

Nilsson, Håkan. "Reliable Communication of Time- and Security-Sensitive Information over a Single Combat Vehicle Network." Thesis, Linköpings universitet, Kommunikationssystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162444.

Abstract:
A common trend, in general as well as in the field of combat vehicles, is the rapidly increasing demand for data network capacity and, even more, for transferred data. To handle this increased demand, armed forces and equipment manufacturers in different countries are evaluating methods to increase the data transmission capacity in combat vehicles. The different types of transmitted data are of different criticality and have different security demands. An easy solution is to have a separate network for each type of traffic, but that is quite expensive and uses a lot of hardware. This thesis focuses on a different solution, with a shared network for all types of data transmission. This is done by evaluating different types of data networks and add-on protocols and then testing the networks practically at varying transmission rates. All the practical testing in the thesis is done with data networks conforming to the Ethernet standard, the standard evaluated as having a throughput high enough for the use case. Ethernet as a standard is not suitable for critical data traffic, and therefore add-on protocols for Ethernet that optimize the system for critical data traffic are tested. With these optimizations made, Ethernet can be considered more suitable for critical traffic, although this depends entirely on the system requirements.
20

Weatherford, Mark T. "Interpretive analysis of the Joint Maritime Command Information System (JMCIS) Sensitive Compartmented Information (SCI) Local Area Network (LAN) security requirements." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA285529.

Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, September 1994. Thesis advisor(s): Carl F. Jones, Cynthia E. Irvine. Bibliography: p. 108-112. Also available online.
21

Subbiah, Arun. "Efficient Proactive Security for Sensitive Data Storage." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19719.

Abstract:
Fault-tolerant and secure distributed data storage systems typically require that only up to a threshold of storage nodes can ever be compromised or fail. In proactively-secure systems, this requirement is modified to hold only within a time interval (also called an epoch), resulting in increased security. An attacker or adversary could compromise distinct sets of nodes in any two time intervals. This attack model is also called the mobile adversary model. Proactively-secure systems require all nodes to "refresh" themselves periodically to a clean state to maintain the availability, integrity, and confidentiality properties of the data storage service. This dissertation investigates the design of a proactively-secure distributed data storage system. Data can be stored at storage servers using encoding schemes called secret sharing, or encryption-with-replication. The primary challenge is that the protocols that the servers run periodically to maintain integrity and confidentiality must scale with large amounts of stored data. Determining how much data can be proactively secured in practical settings is an important objective of this dissertation. The protocol for maintaining the confidentiality of stored data is developed in the context of data storage using secret sharing. We propose a new technique called the GridSharing framework that uses a combination of XOR secret sharing and replication for storing data efficiently. We show experimentally that the algorithm can secure several hundred GBs of data. We give distributed protocols, run periodically by the servers, for maintaining the integrity of replicated data under the mobile adversary model. These protocols are integrated into a document repository to make it proactively secure. The proactively-secure document repository is implemented and evaluated on the Emulab cluster (http://www.emulab.net). The experimental evaluation shows that several hundred GBs of data can be proactively secured. This dissertation also includes work on fault and intrusion detection, a necessary component in any secure system. We give a novel Byzantine-fault detection algorithm for quorum systems and experimentally evaluate its performance using simulations and by deploying it in the AgileFS distributed file system.
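
The XOR secret sharing that GridSharing builds on is simple enough to sketch. The minimal Python example below is a generic n-of-n XOR scheme given as an illustration, not the dissertation's GridSharing code: all n shares are required to reconstruct the secret, and any subset of n - 1 shares is statistically indistinguishable from random bytes.

```python
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(secret: bytes, n: int):
    """Split secret into n shares; XORing all n together restores the secret."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]  # n-1 random pads
    last = secret
    for share in shares:
        last = xor_bytes(last, share)  # fold each pad into the final share
    return shares + [last]

def reconstruct(shares):
    out = shares[0]
    for share in shares[1:]:
        out = xor_bytes(out, share)
    return out

shares = make_shares(b"sensitive record", 4)
assert reconstruct(shares) == b"sensitive record"  # any 3 shares alone look random
```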
22

Sousa, Rita Cristina Pinto de. "Parameter estimation in the presence of auxiliary information." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/11295.

Abstract:
Dissertation submitted in fulfilment of the requirements for the degree of Doctor in Statistics and Risk Management, speciality in Statistics.
In survey research, there are many situations where the primary variable of interest is sensitive. The sensitivity of some queries can give rise to refusals to answer or to intentionally false answers. Surveys can be conducted in a variety of settings, in part dictated by the mode of data collection, and these settings can differ in how much privacy they offer the respondent. Estimates obtained from a direct survey on sensitive questions would be subject to high bias. A variety of techniques have been used to improve reporting by increasing the privacy of the respondents. The Randomized Response Technique (RRT), introduced by Warner in 1965, establishes a random relation between the individual's response and the question. This technique provides confidentiality to respondents and still allows the interviewer to estimate the characteristic of interest at an aggregate level. In this thesis we propose estimators that improve the mean estimation of a sensitive variable based on an RRT by making use of available non-sensitive auxiliary information. In the first part of the thesis we present the ratio and regression estimators, as well as some generalizations, in order to study the gain in estimation over the ordinary RRT mean estimator. In chapters 4 and 5 we study the performance of some exponential-type estimators, also based on an RRT. The final part of the thesis illustrates an approach to mean estimation in stratified sampling, confirming some previous results for a different sample design. An extensive simulation study and an application to a real dataset are carried out to evaluate the performance of all the estimators studied. The last chapter presents a general discussion of the main results and conclusions, together with an application to a real dataset comparing the performance of the estimators studied.
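
Warner's 1965 design, the starting point for the estimators developed in the thesis, is easy to demonstrate. The Python sketch below simulates the classic scheme, in which each respondent truthfully answers the sensitive question with probability p and its complement otherwise, and then recovers the population proportion at the aggregate level. It illustrates only the basic RRT, not the ratio, regression, or exponential-type estimators proposed here.

```python
import random

def warner_survey(true_status, p=0.7, rng=random):
    """Simulate Warner's RRT: each respondent truthfully answers either the
    sensitive question (prob p) or its complement (prob 1 - p)."""
    answers = []
    for has_trait in true_status:
        asked_sensitive = rng.random() < p
        answers.append(has_trait if asked_sensitive else not has_trait)
    return answers

def warner_estimate(answers, p=0.7):
    lam = sum(answers) / len(answers)     # observed "yes" proportion
    return (lam - (1 - p)) / (2 * p - 1)  # unbiased for the true proportion

population = [random.random() < 0.2 for _ in range(100_000)]  # 20% carry the trait
answers = warner_survey(population)
print(warner_estimate(answers))  # close to 0.2, yet no single answer is revealing
```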
23

Mathew, John. "Disclosure apprehension: the influence of media and survey technique on the disclosure of sensitive information." Online access for everyone, 2008. http://www.dissertations.wsu.edu/Dissertations/Summer2008/j_mathew_043008.pdf.

24

Avranas, Apostolos. "Resource allocation for latency sensitive wireless systems." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT021.

Abstract:
The new generation of wireless systems, 5G, aims not only to exceed the data rate of its predecessor (LTE) convincingly, but also to improve the system along other dimensions. For instance, more user classes were introduced, associated with different operating points on the trade-off between data rate, latency, and reliability. New applications, including augmented reality, autonomous driving, industry automation, and tele-surgery, push the need for reliable communications carried out under extremely stringent latency constraints. How to manage the physical layer in order to meet those service guarantees without wasting valuable and expensive resources is a hard question. Moreover, as the permissible communication latencies shrink, allowing a retransmission protocol within this limited time interval is questionable. In this thesis, we first pursue answers to those two questions. Concentrating on the physical layer, and specifically on a point-to-point communication system, we ask whether there is any resource allocation of power and blocklength that renders a Hybrid Automatic Repeat reQuest (HARQ) protocol with any number of retransmissions beneficial. Unfortunately, the short latency requirements allow only a limited number of symbols to be transmitted, which in turn renders the traditional Shannon theory inaccurate; the more involved expressions of finite-blocklength theory must be employed, making the problem substantially more complicated. We manage to solve the problem first for the additive white Gaussian noise (AWGN) case, after appropriate mathematical manipulations and the introduction of an algorithm based on dynamic programming. We then move on to the more general case where the signal is distorted by Ricean channel fading, and investigate how the scheduling decisions are affected in the two opposite cases of Channel State Information (CSI): one where only the statistical properties of the channel are known (statistical CSI), and one where the exact value of the channel is provided to the transmitter (full CSI). Finally, we ask the same question one layer above, at the Medium Access Control (MAC) layer. The resource allocation must now be performed across multiple users. The setup for each user remains the same: a specific amount of information must be delivered successfully under strict latency constraints within which retransmissions are allowed. As 5G categorizes users into different user classes according to their needs, we model the traffic under the same concept, so each user belongs to a different class defining its latency and data needs. We develop a deep reinforcement learning algorithm that trains a neural network model competing with conventional approaches based on optimization or combinatorial algorithms. In our simulations, the neural network model actually manages to outperform them in both the statistical and full CSI cases.
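
The finite-blocklength analysis mentioned in this abstract typically rests on the normal approximation of Polyanskiy, Poor, and Verdú for the maximal rate at blocklength n and error probability ε. The Python sketch below evaluates that approximation for an AWGN channel as a generic illustration of the tool; it is not the thesis's resource-allocation algorithm, and the chosen SNR and ε values are arbitrary.

```python
import math
from statistics import NormalDist

def awgn_max_rate(snr, n, eps):
    """Normal approximation R ~ C - sqrt(V/n) * Qinv(eps), bits per channel use."""
    C = 0.5 * math.log2(1 + snr)                               # Shannon capacity
    V = (snr * (snr + 2)) / (2 * (snr + 1) ** 2) * math.log2(math.e) ** 2
    q_inv = NormalDist().inv_cdf(1 - eps)                      # Q^{-1}(eps)
    return C - math.sqrt(V / n) * q_inv

# At short blocklengths the achievable rate sits well below capacity
for n in (100, 500, 2000):
    print(n, round(awgn_max_rate(snr=1.0, n=n, eps=1e-5), 4))
```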
25

Wimmer, Raphael [Verfasser], and Heinrich [Akademischer Betreuer] Hußmann. "Grasp-sensitive surfaces : utilizing grasp information for human-computer interaction / Raphael Wimmer. Betreuer: Heinrich Hußmann." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2015. http://d-nb.info/1070762814/34.

26

Barboa, Elvia. "The use of a culturally sensitive video in presenting AIDS information to a Hispanic population." Scholarly Commons, 1998. https://scholarlycommons.pacific.edu/uop_etds/2719.

Abstract:
There is a growing number of Hispanics contracting the AIDS virus, yet very little comprehensive, culturally sensitive information is available to less acculturated Hispanics. Research has supported electronic media as the most effective channel of AIDS information, while pamphlets and other print media appear to be an effective source of information for more acculturated, literate Hispanics. The present study compared the effectiveness of two videos differing in cultural sensitivity, against a control group, in teaching AIDS awareness to less acculturated Hispanics. Ninety Spanish-speaking Hispanics (44 males and 46 females) were randomly assigned to the three groups. It was predicted that the more culturally sensitive video would be more informative and would reduce erroneous beliefs more than the standard factual, less culturally sensitive video. No significant differences were found between the two video groups as measured by the AIDS knowledge questionnaire. Significant differences were found when the video groups were compared to the control group: the video groups scored higher on the AIDS knowledge questionnaire. Implications of the study are discussed.
27

Harding, Genevieve E. "Developing methods to access sensitive industrial wastewater information in South Africa (with treatment in mind)." Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29446.

Abstract:
South Africa is a water-stressed country, so it is important to understand water use and wastewater generation. Previous research and workshops have identified gaps in the characterisation and remediation of wastewaters in South Africa. Wastewater management can take advantage of wastewater as a valuable resource; however, treatment is required to recover this value, and characterisation is required to develop treatments. Yet wastewater characterisation information is often poorly reported. The nature of industrial wastewaters (in terms of volume, location, and composition) and the norms of wastewater characterisation reporting (in terms of quality and accessibility) formed the basis for two research questions. A major component of this research was developing methods to access sensitive wastewater information. Relational approaches were based on building relationships through phone calls, emails, meetings, and site visits. Formal legal requests were made in terms of the Promotion of Access to Information Act (PAIA). Even though wastewater information is not confidential, it is not readily accessible. 87 people from 42 companies or institutions were contacted; 14% of interactions led to shared data or a meeting, and 12% to shared resources. Key industries of interest were pulp and paper, fish processing, power generation, mining, and petroleum. Previous estimates of South African industrial wastewater volumes ranged from 70 to 350 Mm3/annum. The pulp and paper industry contributed between 28 and 43% of this volume; petroleum contributed 9 to 26%. Both industries were located inland and in coastal regions of South Africa, and were most concerned with COD. Mining and power generation contributed 10 – 15% and 7 – 14% respectively; these industries were located inland and were concerned with total dissolved solids, specifically sulphate, sodium, and chlorides. The fish processing industry contributed between 0 and 23% of volumes, depending on whether wastewaters released to a marine environment were included. Seven parameters were reported for over half of the 65 streams considered: pH, volume, electrical conductivity, nitrogen, sulphate, sodium, and COD. Sulphate and sodium were the dominant ions. Calcium was not measured, even though discharge limits for it were listed in environmental licenses. Characterisation information was reported for compliance, not for treatability; the parameters measured should be expanded to include parameters important for treatability. Industry, research institutions, and governmental bodies can work together to identify such parameters and develop locally relevant treatments. It is recommended that possible synergies between these groupings be enhanced to improve wastewater management, but an atmosphere of trust and transparency is required to facilitate synergistic relationships. The legal framework in South Africa can be used to motivate for transparency with respect to wastewaters.
28

Thompson, Dale. "Sensitive information: an inquiry into the interpretation of information in the workplace from an individual's perspective using qualitative methods." Related electronic resource: Current Research at SU: database of SU dissertations, recent titles available full text, 2008. http://wwwlib.umi.com/cr/syr/main.

29

Sledz, Larysa. "A GIS model for environmentally sensitive areas in Delaware County, Indiana." Virtual Press, 2004. http://liblink.bsu.edu/uhtbin/catkey/1294899.

Abstract:
This study created a GIS model and a comprehensive analysis of environmentally sensitive areas in Delaware County, Indiana. Values were assigned to environmentally sensitive areas in four categories: woodlands, wetlands, floodplains, and threatened and endangered species. There was an inverse relationship between the size of an area and its environmental sensitivity. These areas occupy twenty-three percent of the total county area. Their distribution is almost equal throughout the county; however, a large portion is located along the banks of the White River and other water bodies. Forty-two soil types were identified within environmentally sensitive areas. Poorly drained soils are slightly over-represented in the environmentally sensitive areas, and somewhat poorly drained soils are under-represented compared with soils in other drainage classes.
30

Francis, Anthony G. Jr. "Context-sensitive asynchronous memory : a general experience-based method for managing information access in cognitive agents." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/9177.

31

Yoong, Ho Liang. "IP services design and implementation in a prototype device for transient tactical access to sensitive information." Thesis, Monterey, California. Naval Postgraduate School, 2010. http://hdl.handle.net/10945/4982.

Abstract:
In network-centric warfare, access to critical information can result in a strategic advantage. During critical situations, a soldier using tactical devices may need transient access to information beyond their normal clearances. The Least Privilege Separation Kernel (LPSK), being developed at the Naval Postgraduate School, can be the basis of an extended multilevel security (MLS) system that can support and control such access. A Trusted Services Layer (TSL), which depends on the LPSK, provides support for various multilevel security services. Currently, the LPSK lacks a software network stack for network communications. Without networking functionality, tactical devices cannot share vital situational updates, and information superiority is unattainable. An Internet Protocol (IP) stack was therefore proposed for the LPSK-based system, to be implemented in the context of the LPSK architecture, which uses modularity and layering to organize its software. Open-source implementations of the IP stack were evaluated to leverage the common functionality required by all IP stacks. Lightweight Internet Protocol (LWIP) was selected as a starting point for use with the LPSK, and required modifications for that purpose. The IP stack and a proof-of-concept networking demonstration were successfully implemented in this project.
32

Vega, Laurian. "Security in Practice: Examining the Collaborative Management of Sensitive Information in Childcare Centers and Physicians' Offices." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/37552.

Abstract:
Traditionally, security has been conceptualized as rules, locks, and passwords. More recently, security research has explored how people interact in secure (or insecure) ways as part of a larger socio-technical system. Socio-technical systems are comprised of people, technology, relationships, and interactions that work together to create safe praxis. Because information systems are not just technical but also social, the scope of privacy and security concerns must include social as well as technical factors. Clearly, computer security is enhanced by developments in the technical arena, where researchers are building ever more secure and robust systems to guard the privacy and confidentiality of information. However, when the definition of security is broadened to encompass both human and technical mechanisms, how security is managed with and through day-to-day social work practices becomes increasingly important. In this dissertation I focus on how sensitive information is collaboratively managed in socio-technical systems by examining two domains: childcare centers and physicians' offices. In childcare centers, workers manage the enrolled children and also each enrolled child's personal information. In physicians' offices, workers manage the patients' health along with the patients' health information. My dissertation presents results from interviews and observations of these locations. The data collected consist of observation notes, interview transcriptions, pictures, and forms. The researchers identified breakdowns related to security and privacy. Using Activity Theory to first structure, categorize, and analyze the observed breakdowns, I used phenomenological methods to understand the context and experience of security and privacy. The outcomes of this work are three themes, along with corresponding future scenarios. The themes discussed are security embodiment, communities of security, and zones of ambiguity. These themes extend the literature in the areas of usable security, human-computer interaction, and trust. The presentation uses future scenarios to examine the complexity of developing secure systems for the real world.
APA, Harvard, Vancouver, ISO, and other styles
34

Murandu, Ngoni. "The extent to which sensitive information is secured by institutions that donate computers to educational settings." Morgantown, W. Va. : [West Virginia University Libraries], 2003. http://etd.wvu.edu/templates/showETD.cfm?recnum=3295.

Full text
Abstract:
Thesis (M.A.)--West Virginia University, 2003.
Title from document title page. Document formatted into pages; contains ix, 67 p. Vita. Includes abstract. Includes bibliographical references (p. 53-57).
APA, Harvard, Vancouver, ISO, and other styles
35

Ehlin, Max. "An overview of Product Service System through Integrated Vehicle Health Management in an information sensitive industry." Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-75861.

Full text
Abstract:
Purpose – The research purpose is to enhance knowledge of how organizations can form a PSS through an IVHM system when information is sensitive. Method – A single case study design with an abductive approach was used, with data collection through six semi-structured interviews. Findings – A system combining IVHM and PSS has many potential benefits; however, several challenges need to be overcome in order to implement a successful model. Theoretical implications – This study treads a new area not previously explored in the literature by combining PSS and IVHM, which rely heavily on information flow to succeed, with a case of information sensitivity. The study hence explores a problematic area for both PSS and IVHM, expanding the current literature and providing initial suggestions for how to navigate it. Practical implications – Firstly, it shows managers the challenges that come with implementing PSS-IVHM and increasing involvement in the customers' processes. Secondly, it presents the theoretical and general challenges of PSS-IVHM and applies the case study's perspective of information management, granting managers a larger foundation of knowledge before starting their own PSS-IVHM initiatives. Limitations and future research – This study provides a limited amount of empirical data; future research should therefore focus on increasing and widening data collection. The study suggests there is a considerable challenge in conservatism within the defence industry, and future research is therefore suggested to explore how change management can address this challenge.
APA, Harvard, Vancouver, ISO, and other styles
36

Ording, Marcus. "Context-Sensitive Code Completion : Improving Predictions with Genetic Algorithms." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205334.

Full text
Abstract:
Within the area of context-sensitive code completion there is a need for accurate predictive models in order to provide useful code completion predictions. The traditional method for optimizing the performance of code completion systems is to empirically evaluate the effect of each system parameter individually and fine-tune the parameters. This thesis presents a genetic algorithm that can optimize the system parameters with a degree of freedom equal to the number of parameters to optimize. The study evaluates the effect of the optimized parameters on the prediction quality of the studied code completion system. Previous evaluation of the reference code completion system is also extended to include model size and inference speed. The results of the study show that the genetic algorithm is able to improve the prediction quality of the studied code completion system. Compared with the reference system, the enhanced system is able to recognize 1 in 10 additional previously unseen code patterns. This increase in prediction quality does not significantly impact system performance, as the inference speed remains below 1 ms for both systems.
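The thesis does not publish its implementation. As a minimal sketch of the general technique it describes, optimizing all system parameters simultaneously with a genetic algorithm against a black-box quality metric, something like the following could serve; the fitness function is an assumption and would, for example, score a parameter vector by prediction accuracy on a validation set.

```python
import random

def genetic_optimize(fitness, n_params, pop_size=30, generations=50,
                     mutation_rate=0.1, bounds=(0.0, 1.0)):
    """Minimal genetic algorithm: evolves real-valued parameter vectors
    to maximize a black-box fitness function."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_params):               # per-gene mutation
                if random.random() < mutation_rate:
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```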
APA, Harvard, Vancouver, ISO, and other styles
37

Hayden, Angela. "THE DEVELOPMENT OF EXPERT FACE PROCESSING: ARE INFANTS SENSITIVE TO NORMAL DIFFERENCES IN SECOND-ORDER RELATIONAL INFORMATION?" Lexington, Ky. : [University of Kentucky Libraries], 2006. http://lib.uky.edu/ETD/ukypeps2006t00518/Masters.pdf.

Full text
Abstract:
Thesis (M.A.)--University of Kentucky, 2006.
Title from document title page (viewed on January 29, 2007). Document formatted into pages; contains: viii, 50 p. : ill. (some col.). Includes abstract and vita. Includes bibliographical references (p. 44-48).
APA, Harvard, Vancouver, ISO, and other styles
38

Gjære, Erlend Andreas. "Sensitive Information on Display : Using flexible de-identification for protecting patient privacy in (semi-) public hospital environments." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-14089.

Full text
Abstract:
In recent years, health care work in hospitals has become increasingly fragmented, in the sense that different people and professions are involved in the treatment of every single patient. As a consequence, personnel should be assisted towards greater awareness of what is happening, so that they can better plan where to put in their efforts. Making information about ongoing activities more accessible to its users is hence important, but this in turn requires increased distribution of sensitive data inside the hospital. The concept of flexible de-identification has been proposed as a solution for the privacy issues this raises, but new issues then emerge concerning how useful the de-identified data are, in practice, to their authorized end users. A series of six rapid field tests was executed along with a literature review on de-identification. The purpose was to explore ideas for how de-identification could be implemented for information screens located in public and semi-public hospital environments, such as hallways, where personnel are likely to see them. The appropriateness of several de-identification techniques was hence evaluated for use in real-time visualizations, in contrast to previously known applications of the concept. This input was in turn used to design a high-fidelity prototype for a series of four experiments in a usability laboratory. The experiments involved role-play sessions, where nurses from a university hospital used the prototype in a simulation of realistic ward work. In a focused interview directly afterwards, they each assessed the usefulness of having such a system available in these locations, given that the information was de-identified. Moreover, the nurses evaluated six alternative approaches to de-identification of the sensitive information, and ranked them with respect to which, if any, would be best suited for use in their regular work environment. The experiments indicate that users appreciate being notified via large screens when new information is available, but disagree on the preferred level of de-identification. Some would emphasize the legislative requirements and privacy issues raised, while others would put their own utility needs first. In response, an interactive prototype was designed to demonstrate how users can be given interactive control over how identifiable the displayed information is. This idea of giving users flexible control over what is seen on a screen, depending on how they assess the context for access, is grounded in an evaluation framework that considers the quality requirements of identification utility, legislation and usability. Useful applications of non-interactive de-identification on screens in public environments are effectively disqualified by the legislative requirements regulating how personal health information can be disclosed. De-identification can, however, enable an intermediate security level, which can be accessed as long as an authorized user is present. Appropriate techniques for achieving such de-identification are found to be suppression of variables, coding, masking and generalization. With this overall approach, users may gradually authorize themselves until the required utility is reached, and hence access useful information in public places. The information depth available must also be limited accordingly, so that the increased risk of abuse is mitigated. The result is potentially a security mechanism that is legal to implement, serves the utility needs of personnel, and is more usable in practice than existing time-demanding login routines. Finally, these ideas have been included in the design of an interactive prototype, which remains to be tested in practice.
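The four techniques found appropriate here (suppression of variables, coding, masking and generalization) are easy to illustrate. The sketch below is not the thesis prototype; the record fields and the salting scheme are invented for the example.

```python
import hashlib

def suppress(record, fields):
    """Suppression: drop identifying variables entirely."""
    return {k: v for k, v in record.items() if k not in fields}

def code(value, salt="ward-secret"):
    """Coding: replace an identifier with a stable pseudonym."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:8]

def mask(name):
    """Masking: keep only the initials of a name."""
    return ".".join(part[0] for part in name.split()) + "."

def generalize_age(age):
    """Generalization: report a 10-year age band, not the exact age."""
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

patient = {"name": "Kari Nordmann", "age": 47, "room": "302", "id": "NO-123"}
view = suppress(patient, {"id"})       # or keep a pseudonym: code(patient["id"])
view["name"] = mask(view["name"])
view["age"] = generalize_age(view["age"])
print(view)  # {'name': 'K.N.', 'age': '40-49', 'room': '302'}
```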
APA, Harvard, Vancouver, ISO, and other styles
39

Engin, Melih. "Text Classificaton In Turkish Marketing Domain And Context-sensitive Ad Distribution." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610457/index.pdf.

Full text
Abstract:
Online advertising is continuously increasing in popularity, and the target audience of this new advertising method is huge. Additionally, there is another rapidly growing and crowded group related to internet advertising: web publishers. Contextual advertising systems make it easier for publishers to present online ads on their web sites, since these online marketing systems automatically divert ads to web sites with related content. Web publishers join ad networks and gain revenue by enabling ads to be displayed on their sites. Therefore, the accuracy of automated ad systems in determining ad-context relevance is crucial. In this thesis we construct a method for semantic classification of web site contexts in the Turkish language and develop an ad-serving system to display context-related ads on web documents. The classification method uses both semantic and statistical techniques. The method is supervised and therefore needs processed sample data for learning classification rules, so we generate a Turkish marketing dataset and use it in our classification approaches. We form successful classification methods using different feature spaces and support vector machine configurations, and our results present a good comparison between these methods.
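The thesis combines semantic techniques with statistical ones, and its dataset is Turkish; neither is reproduced here. As a rough sketch of the statistical half, a TF-IDF plus support vector machine pipeline in scikit-learn might look as follows, with an invented toy corpus standing in for the marketing dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in for the labelled marketing dataset.
pages = ["cheap flights and hotel deals",
         "new smartphone review and specs",
         "credit card interest rates",
         "laptop benchmark comparison"]
labels = ["travel", "electronics", "finance", "electronics"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(pages, labels)

# An ad network would divert ads matching the predicted category.
print(clf.predict(["best hotel prices this summer"])[0])  # expected: 'travel'
```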
APA, Harvard, Vancouver, ISO, and other styles
40

Thinyane, Mamello P. "A knowledge-oriented, context-sensitive architectural framework for service deployment in marginalized rural communities." Thesis, Rhodes University, 2009. http://hdl.handle.net/10962/d1004843.

Full text
Abstract:
The notion of a global knowledge society is somewhat of a misnomer, because large portions of the global community are not participants in this global knowledge society, which is driven, shaped by and socio-technically biased towards a small fraction of the global population. Information and Communication Technology (ICT) is culture-sensitive, and this is a dynamic that is largely ignored in the majority of ICT for Development (ICT4D) interventions, leading to the technological determinism flaw and ultimately to failure of the undertaken projects. The deployment of ICT solutions, in particular in the context of ICT4D, must be informed by the cultural and socio-technical profile of the deployment environments, and the solutions themselves must be developed with a focus on context-sensitivity and ethnocentricity. In this thesis, we investigate the viability of a software architectural framework for the development of ICT solutions that are context-sensitive and ethnocentric, and so aligned with the cultural and social dynamics of the environment of deployment. The conceptual framework, named PIASK, defines five tiers (presentation, interaction, access, social networking, and knowledge base) which allow for: behavioural completeness of the layer components; a modular and functionally decoupled architecture; and the flexibility to situate and contextualize the developed applications along the dimensions of the User Interface (UI), interaction modalities, usage metaphors, underlying Indigenous Knowledge (IK), and access protocols. We have developed a proof-of-concept service platform, called KnowNet, based on the PIASK architecture. KnowNet is built around the knowledge base layer, which consists of domain ontologies that encapsulate the knowledge in the platform, with intrinsic flexibility to access secondary knowledge repositories. The domain ontologies constructed (as examples) are for the provisioning of eServices to support societal activities (e.g. commerce, health, agriculture, medicine) within the rural and marginalized area of Dwesa, in the Eastern Cape province of South Africa. The social networking layer allows for situating the platform within the local social systems. Heterogeneity of user profiles and the multiplicity of end-user devices are handled through the access and presentation components, and the service logic is implemented by the interaction components. This service platform validates the PIASK architecture for end-to-end provisioning of multi-modal, heterogeneous, ontology-based services. The development of KnowNet was informed on the one hand by the latest trends within service architectures, semantic web technologies and social applications, and on the other hand by context considerations based on the profile (IK systems dynamics, infrastructure, usability requirements) of the Dwesa community. The realization of the service platform is based on the JADE Multi-Agent System (MAS), and this shows the applicability and adequacy of MASs for service deployment in a rural context, at the same time providing key advantages such as platform fault-tolerance, robustness and flexibility. While the context of conceptualization of PIASK and the implementation of KnowNet is that of rurality and of ICT4D, the applicability of the architecture extends to other similarly heterogeneous and context-sensitive domains. KnowNet has been validated for functional and technical adequacy, and we have also undertaken an initial pre-validation for social context sensitivity.
We observe that the five-tier PIASK architecture provides an adequate framework for developing context-sensitive and ethnocentric software, by functionally separating and making explicit the social networking and access tier components, while still maintaining the traditional separation of presentation, business logic and data components.
APA, Harvard, Vancouver, ISO, and other styles
41

Lang, Martin. "Secure Automotive Ethernet : Balancing Security and Safety in Time Sensitive Systems." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18235.

Full text
Abstract:
Background. As a result of the digital era, vehicles are being digitalised at a rapid pace. Autonomous vehicles and powerful infotainment systems are just parts of what is evolving within vehicles. These systems require more information to be transferred within the vehicle networks. As a solution for this, Ethernet was suggested. However, Ethernet is a 'best effort' protocol which cannot be considered reliable. To solve that issue, specific implementations were made to create Automotive Ethernet. However, the out-of-the-box vulnerabilities of Ethernet persist and need to be mitigated in a way that is suitable to the automotive domain. Objectives. This thesis investigates the vulnerabilities of Ethernet out-of-the-box and identifies which vulnerabilities pose the largest threat with regard to the safety of human lives and property. When such vulnerabilities are identified, possible mitigation methods using security measures are investigated. Once two security measures are selected, an experiment is conducted to see if they can meet the latency requirements. Methods. To achieve the goals of this thesis, literature studies were conducted to learn of any vulnerabilities and possible mitigations. Those results were then used in an OMNeT++ experiment, making it possible to record latency in a simple automotive topology and then add the selected security measures to obtain a total latency. This latency must be less than 10 ms to be considered safe in cars. Results. In the simulation, the baseline communication is found to take 1.14957 ± 0.02053 ms. Adding the latency of a security measure gives the total duration. For Hash-based Message Authentication Code (HMAC)-Secure Hash Algorithm (SHA)-512 the total duration is 1.192274 ms using the upper confidence interval. Elliptic Curve Digital Signature Algorithm (ECDSA)-ED25519 has a total latency of 3.108424 ms using the upper confidence interval. Conclusions. According to the results, both HMAC-SHA-512 and ECDSA-ED25519 are valid choices to implement as an integrity and authenticity security measure. However, these results are based on a simulation and should be verified using physical hardware to ensure that these measures are valid.
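The latencies above come from an OMNeT++ simulation; the sketch below is not that setup. It only illustrates, using Python's standard hmac module, how a per-frame HMAC-SHA-512 integrity tag can be computed and its average cost timed; the key size, frame size and iteration count are arbitrary choices.

```python
import hashlib
import hmac
import os
import time

KEY = os.urandom(64)
FRAME = os.urandom(1500)  # one Ethernet-sized payload

def tag(frame):
    """Integrity/authenticity tag, as in the HMAC-SHA-512 variant."""
    return hmac.new(KEY, frame, hashlib.sha512).digest()

n = 10_000
start = time.perf_counter()
for _ in range(n):
    tag(FRAME)
mean_ms = (time.perf_counter() - start) / n * 1000
print(f"mean HMAC-SHA-512 latency: {mean_ms:.4f} ms per frame")

# Receiver side: constant-time comparison resists timing attacks.
assert hmac.compare_digest(tag(FRAME), tag(FRAME))
```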
APA, Harvard, Vancouver, ISO, and other styles
42

Brox, Elin Anette. "Information Security in Distribued Health Information Systems in Scandinavia : A Comparative Study of External Conditions and Solutions for Exchange and Sharing of Sensitive Health Information in Denmark, Norway and Sweden." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9469.

Full text
Abstract:

Exchange and sharing of sensitive health information have to happen according to prevailing external conditions established by laws, regulations and liable authorities. These external conditions create various limitations, creating a demand for adaptive health information systems. Especially when developing new solutions, defining the balance between protection of personal privacy and availability of information is a great challenge. Several projects are working on possible solutions to the problem of sharing health information in a distributed way. Based on two different pilot projects in each of the countries Denmark, Norway and Sweden, and seen from an information security perspective, this thesis compares external conditions and various approaches to these conditions. The main focus is on Scandinavian health legislation, but the organisation of health services is also considered briefly. The objective is to acquire new knowledge about, and to contribute to the debate concerning, exchange and sharing of health information. The results of this project are founded on an inductive multiple case study, and empirical data have been collected through semi-structured interviews. Through this thesis, it has become evident that health care in the Scandinavian countries is on the whole organised similarly and struggles with many of the same technological challenges. All three countries' health legislation promotes personal integrity, with Sweden's being the most explicit. Nevertheless, there is a tendency towards enhancement of the patient's autonomy and a request for more unified health care processes, leading to needs for new types of technological tools to ensure information security. In order to meet these requests, common national technological standards, concepts and infrastructure have become more important. In addition, the systems made have to be in accordance with acts and regulations. Parts of the prevailing legislation are a hindrance to exchange and sharing of information across organisational borders. The technological solutions chosen within the scope of the limiting external conditions are generally well-defined, high-quality systems with information security in focus. Still, it has become evident that some weak points exist, and there is room for improvement. In order to make health care of higher quality and ensure information security to an even larger degree, legal amendments and more extensive national co-operation will make it possible to develop better information security solutions.

APA, Harvard, Vancouver, ISO, and other styles
43

Ljungberg, Lucas. "Using unsupervised classification with multiple LDA derived models for text generation based on noisy and sensitive data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-255010.

Full text
Abstract:
Creating models to generate contextual responses to input queries is a difficult problem. It is even more difficult when the available data contains noise and sensitive information. Finding models or methods to handle such issues is important in order to use data for productive means. This thesis proposes a model based on a cooperating pair of topic models with differing tasks (LDA and GSDMM) in order to alleviate the problematic properties of the data. The model is tested on a real-world dataset with these difficulties as well as a dataset without them. The goal is to 1) look at the behaviour of the different topic models to see if their topical representation of the data is of use as input or output to other models and 2) find out which properties can be alleviated as a result. The results show that topic modeling can represent the semantic information of documents well enough to produce well-behaved input data for other models, and that it deals well with large vocabularies and noisy data. The topical clustering of the response data is sufficient for a classification model to predict the context of a response, from which valid responses can be created.
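The pairing itself is specific to the thesis; as a minimal sketch of the LDA half in gensim, with an invented toy query set standing in for the noisy real-world data:

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy stand-in for the noisy query data.
queries = [["reset", "password", "account", "locked"],
           ["invoice", "payment", "overdue", "billing"],
           ["password", "login", "failed", "account"],
           ["billing", "refund", "payment", "invoice"]]

dictionary = corpora.Dictionary(queries)
bow = [dictionary.doc2bow(q) for q in queries]

# LDA assigns each query a topic mixture; the thesis pairs this with a
# GSDMM model, which is better suited to clustering short responses.
lda = LdaModel(bow, num_topics=2, id2word=dictionary,
               passes=20, random_state=0)
for q, vec in zip(queries, bow):
    print(q, lda.get_document_topics(vec))
```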
APA, Harvard, Vancouver, ISO, and other styles
44

Skoglund, Caroline. "Risk-aware Autonomous Driving Using POMDPs and Responsibility-Sensitive Safety." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300909.

Full text
Abstract:
Autonomous vehicles promise to play an important role in increasing the efficiency and safety of road transportation. Although we have seen several examples of autonomous vehicles out on the road over the past years, how to ensure the safety of an autonomous vehicle in an uncertain and dynamic environment is still a challenging problem. This thesis studies this problem by developing a risk-aware decision making framework. The system that integrates the dynamics of an autonomous vehicle and the uncertain environment is modelled as a Partially Observable Markov Decision Process (POMDP). A risk measure is proposed based on the Responsibility-Sensitive Safety (RSS) distance, which quantifies the minimum distance to other vehicles required to ensure safety. This risk measure is incorporated into the reward function of the POMDP to achieve risk-aware decision making. The proposed risk-aware POMDP framework is evaluated in two case studies. In a single-lane car following scenario, the ego vehicle successfully avoids a collision in an emergency event where the vehicle in front of it makes a full stop. In the merge scenario, the ego vehicle successfully enters the main road from a ramp with a satisfactory distance to other vehicles. In conclusion, the risk-aware POMDP framework is able to realize a trade-off between safety and usability by keeping a reasonable distance and adapting to other vehicles' behaviours.
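The RSS safe longitudinal distance underlying the risk measure is commonly stated as the gap needed if the rear vehicle accelerates for its response time and then brakes gently while the front vehicle brakes at full force. A sketch follows; the parameter values are illustrative assumptions, not those used in the thesis.

```python
def rss_min_distance(v_rear, v_front, rho=1.0,
                     a_max_accel=3.0, b_min_brake=4.0, b_max_brake=8.0):
    """Minimum safe longitudinal gap (m) in the RSS model.
    rho: response time (s); the remaining parameters are in m/s^2."""
    v_resp = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_resp ** 2 / (2 * b_min_brake)
         - v_front ** 2 / (2 * b_max_brake))
    return max(d, 0.0)

# Ego vehicle at 20 m/s behind a car doing 15 m/s:
print(f"{rss_min_distance(20.0, 15.0):.1f} m")  # -> 73.6 m
```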
APA, Harvard, Vancouver, ISO, and other styles
45

Nightingale, Sarah. "Culturally sensitive and community-based HIV/AIDS prevention messages for African American women." Thesis, Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Sadler, Pauline Barbara. "Balancing the public interest: The D-Notice system and the suppression of sensitive government information relating to national security." Thesis, Sadler, Pauline Barbara (1999) Balancing the public interest: The D-Notice system and the suppression of sensitive government information relating to national security. PhD thesis, Murdoch University, 1999. https://researchrepository.murdoch.edu.au/id/eprint/51263/.

Full text
Abstract:
The D-Notice system is a voluntary arrangement between the government and the media where the media agree not to publish certain information in the interests of national security. The D-Notice system operates only in Australia and the U.K. and is now known as the DA-Notice system in the U.K. It represents voluntary censorship on the part of the media. The alternative to the D-Notice system is legal action by the government to suppress allegedly sensitive material, for example through the civil action of breach of confidence, or to punish the media after publication by use of the criminal law. The thesis identifies a major deficiency in both the D-Notice system and the legal alternatives to it. The public interest, that is, the interests of the general public, is insufficiently represented in the operation of the former and is often not sufficiently recognised by the judiciary in the latter. The result is that material may be suppressed which has no connection with national security but instead exposes the government to embarrassment. First, the study examines the history and present operation of the D-Notice system in the U.K. and Australia. This reveals that there is not, and never has been, any independent evaluation of the interests of the general public when decisions are made. Next, the legal alternatives to the system in both countries are analysed. It is shown that the reluctance of some judges to actually examine the information in question, and the background to the information, is a flaw in the various procedures which in turn favours the government interest in suppression. Finally, the study explores some related issues that clarify a number of the concepts discussed earlier. It is shown that while both the government and the media purport to represent the public interest, in reality both primarily represent their own interests, which are not always synonymous with the interests of the general public.
APA, Harvard, Vancouver, ISO, and other styles
47

Kini, Ananth Ullal. "On the effect of INQUERY term-weighting scheme on query-sensitive similarity measures." Texas A&M University, 2005. http://hdl.handle.net/1969.1/3116.

Full text
Abstract:
Cluster-based information retrieval systems often use a similarity measure to compute the association among text documents. In this thesis, we focus on a class of similarity measures named Query-Sensitive Similarity (QSS) measures. Recent studies have shown QSS measures to positively influence the outcome of a clustering procedure. These studies have used QSS measures in conjunction with the ltc term-weighting scheme. Several term-weighting schemes have superseded the ltc term-weighting scheme and demonstrated better retrieval performance relative to the latter. We test whether introducing one of these schemes, INQUERY, will offer any benefit over the ltc scheme when used in the context of QSS measures. The testing procedure uses the Nearest Neighbor (NN) test to quantify the clustering effectiveness of QSS measures and the corresponding term-weighting scheme. The NN tests are applied to certain standard test document collections and the results are tested for statistical significance. On analyzing the results of the NN test relative to those obtained for the ltc scheme, we find several instances where the INQUERY scheme improves the clustering effectiveness of QSS measures. To be able to apply the NN test, we designed a software test framework, Ferret, by complementing the features provided by dtSearch, a search engine. The test framework automates the generation of NN coefficients by processing standard test document collection data. We provide an insight into the construction and working of the Ferret test framework.
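As background to the comparison, the ltc scheme and the query-sensitive idea can be sketched compactly. The following is one simple illustrative variant; the actual QSS measures studied, and the INQUERY weighting itself, differ in their details.

```python
import math
from collections import Counter

def ltc_weights(doc_tokens, df, n_docs):
    """Classic ltc scheme: logarithmic tf, idf, cosine normalization.
    df maps each term to its document frequency in the collection."""
    tf = Counter(doc_tokens)
    w = {t: (1 + math.log(c)) * math.log(n_docs / df[t])
         for t, c in tf.items()}
    norm = math.sqrt(sum(v * v for v in w.values())) or 1.0
    return {t: v / norm for t, v in w.items()}

def qss(d1, d2, query):
    """A toy query-sensitive similarity: the dot product over terms the
    documents share with the query, so the same document pair scores
    differently under different queries."""
    shared = set(d1) & set(d2) & set(query)
    return sum(d1[t] * d2[t] for t in shared)
```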
APA, Harvard, Vancouver, ISO, and other styles
48

Lindberg, Susanne. "Involving Children in the Design of Online Peer Support for Children with Cancer." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4528.

Full text
Abstract:
Information Technology (IT) in health services has become increasingly important for people's wellbeing. The design of these technologies is complex by nature, even more so when the context is of a sensitive nature, such as the user's health. Furthermore, when the users are children, several additional difficulties surface. Apart from the design context being sensitive, children have cognitive and communication limitations that require any design method employed to be adapted. This thesis is conducted within the research project Child Health Interactive Peer Support (CHIPS) at Halmstad University. The project aims at developing Online Peer Support (OPS) for advancing the wellbeing of children who have or have had cancer. The project thus presents a unique design situation, and the aim of this thesis is to answer the question: How can children participate in the design of Online Peer Support for children with cancer? In order to answer this question, a literature review was performed to identify common properties of design methods with children, children were involved in the design of OPS for children with cancer, and the lessons learned from the empirical case were discussed. The properties of design methods with children were organised into three categories and later supplemented with properties of methods for performing research in a sensitive context. The empirical material consisted of six design workshops with two groups of children who were, or had previously been, treated for cancer. From the design workshops and the subsequent discussions, several lessons were learned, in addition to the results from the literature review, about how children can be involved in the design of OPS for children with cancer. Based on this, seven suggestions were made for adapting methods to suit design with children in a sensitive context.
APA, Harvard, Vancouver, ISO, and other styles
49

Renshaw, K. J. "Semi-natural vegetation characteristics and the prediction of hydrological and hydrochemical information within moorland, acid-sensitive catchments in upland Wales." Thesis, Swansea University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.638645.

Full text
Abstract:
This study investigates the potential of utilizing semi-natural vegetation characteristics to predict hydrological and hydrochemical source areas in upland moorland catchments. It centres upon the intensive vegetational, hydrological and hydrochemical investigation of the Nant Gruffydd catchment, a tributary of the Camddwr at Llyn Brianne in upland mid-Wales. A nested catchment approach was adopted, and intensive monitoring during baseflows and stormflows was used to establish hydrological and hydrochemical source areas at stages through the storm hydrograph. The hydrological points raised include: 1) plateau peatlands ranked as the most important hydrological source during typical storm events, whilst the lower valley riparian peat areas ranked most important during very large storms and/or under very wet antecedent conditions; 2) isotopic investigations, although suggesting that 'old' pre-event waters are dominant in the stream hydrograph in typical storm events, also demonstrate the invalidity of assumptions involved in conventional isotopic separation techniques; 3) hypotheses linking rapid runoff mechanisms with the delivery of pre-storm waters can be envisaged and are proposed. The hydrochemical points raised include: 1) the Nant Gruffydd catchment exhibits spatially variable levels of acidity and aluminium; specifically, plateau peatlands are characterised by low pH levels (4.2-4.4 pH units) and ranked as the most important source of hydrogen, while the catchment slopes with mineral soils were identified as important sources of aluminium (7 mmol/l) and supplied water of higher pH, helping to account for the pH of 5.5 at the catchment outlet; 2) aluminium levels in the mid to upper catchment were as high as those recorded in acidified afforested catchments in the UK (Hornung et al. 1987); 3) changes in within-catchment sources of runoff, as opposed to at-a-point chemical changes, exert a prime influence upon the hydrochemical dynamics of streamwater; 4) within-channel chemical reactions appear to influence runoff hydrochemistry more than has hitherto been recognised. The data gathered enabled associations between vegetation patterns and different hydrological/hydrochemical parameters to be explored at three scales. Whilst this demonstrated potentially useful associations, multiple-regression analysis failed to establish strong relationships.
APA, Harvard, Vancouver, ISO, and other styles
50

BELIAN, Rosalie Barreto. "A context-based name resolution approach for semantic schema integration." Universidade Federal de Pernambuco, 2008. https://repositorio.ufpe.br/handle/123456789/1512.

Full text
Abstract:
One of the goals of the Semantic Web is to provide a wide diversity of services from different domains on the Web. These services are mostly collaborative, with tasks based on decision-making processes. These decisions, in turn, are better grounded if they take into account as much information as possible related to the tasks being executed. This scenario thus encourages the development of techniques and tools oriented towards information integration, seeking solutions to the heterogeneity of data sources. The mediation-based architecture used in the development of information integration systems aims to isolate the user from the distributed data sources by means of an intermediate software layer called the mediator. The mediator in an information integration system uses a global schema for executing user queries, which are reformulated into sub-queries according to the local schemas of the data sources. In this case, a schema integration process generates the global schema (mediation schema) as the result of integrating the individual schemas of the data sources. The greatest problem in schema integration is the heterogeneity of the local data sources, and in this respect semantic resolution is paramount. The use of purely structural and syntactic methods in schema integration is ineffective unless the real meaning of the schema elements is identified first. A schema integration process results in an integrated global schema and a set of inter-schema mappings, and usually comprises some basic steps such as pre-integration, schema comparison, mapping and unification, and generation of the mediation schema. In schema integration, name resolution is the process that determines which real-world entity a given schema element refers to, taking into account a set of available semantic information. The semantic information needed for name resolution is generally obtained from generic vocabularies and/or vocabularies specific to a given knowledge domain. Element names can have different meanings depending on the semantic context to which they are related. Thus, the use of contextual information, in addition to domain information, can bring greater precision to the interpretation of elements, allowing their meaning to be adjusted according to a given context. This work proposes a context-based name resolution approach for schema integration. One of its strengths is the use and modelling of the contextual information needed for name resolution at different stages of the schema integration process. The contextual information is modelled using an ontology, which favours the use of inference mechanisms and the sharing and reuse of information. In addition, this work proposes a simple and extensible schema integration process, so that its development could concentrate mainly on the requirements related to name resolution. This process was developed for a mediation-based information integration system that adopts the GAV approach and uses XML as the common model for data interchange and the integration of data sources on the Web.
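As a minimal sketch of the core idea, context-sensitive resolution of a schema element name can be reduced to a lookup keyed on both the element and its context; the mini "ontology" below is invented for illustration and is far simpler than the ontology-based modelling the thesis proposes.

```python
# Hypothetical context-indexed senses for schema element names.
CONTEXT_SENSES = {
    ("title", "publishing"): "name of a written work",
    ("title", "real_estate"): "legal ownership document",
    ("name", "person"): "personal name of an individual",
}

def resolve_name(element, context, default="unresolved"):
    """Context-based name resolution: the same element label maps to
    different real-world entities depending on the semantic context."""
    return CONTEXT_SENSES.get((element, context), default)

print(resolve_name("title", "publishing"))   # name of a written work
print(resolve_name("title", "real_estate"))  # legal ownership document
```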
APA, Harvard, Vancouver, ISO, and other styles
