To view other types of publications on this topic, follow the link: Informational entities.

Dissertations on the topic "Informational entities"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a type of source:

Browse the top 50 dissertations for research on the topic "Informational entities".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile a correctly formatted bibliography.

1

Tarbouriech, Cédric. „Avoir une partie 2 × 2 = 4 fois : vers une méréologie des slots“. Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30316.

Full text of the source
Annotation:
Mereology is the discipline concerned with the relationships between a part and its whole, and between parts within a whole. According to the most commonly used theory, "classical extensional mereology", an entity can only be part of another entity once. For example, your heart is a part of your body only once. Some earlier works have challenged this principle. Indeed, it is impossible to describe the mereological structure of certain entities, such as structural universals or word types, within the framework of classical extensional mereology. These entities may have the same part several times over. For example, the universal water molecule (H2O) has the universal hydrogen atom (H) as a part twice, while a particular water molecule has two distinct hydrogen atoms as parts. In this work, we follow the track opened by Karen Bennett in 2013. Bennett sketched out a new mereology to represent the mereological structure of such entities. In her theory, to be a part of an entity is to fill a "slot" of that entity. Thus, in the word "potato", the letter "o" is part of the word twice because it occupies two "slots" of that word: the second and the sixth. Bennett's proposal is innovative in offering a general framework that is not restricted to one type of entity. However, the theory has several problems. Firstly, it is limited: many notions of classical mereology, such as mereological sum or extensionality, have no equivalent in it. Secondly, the theory's axiomatics give rise to counting problems. For example, the electron universal is part of the methane universal only seven times instead of the expected ten. We have proposed a solution based on the principle that slots must be duplicated as often as necessary to obtain a correct count. This duplication is achieved through a mechanism called "contextualisation", which allows slots to be copied by adding context.
In this way, we have established a theory for representing entities that may have the same part multiple times while avoiding counting problems. We have developed a mereology of slots on this basis, that is, a theory representing mereological relationships between slots. This allowed us to develop the various notions present in classical mereology, such as supplementation, extensionality, and mereological sum and fusion. The proposal provides a highly expressive and logically well-founded mereology that will enable future work to explore complex issues raised in the scientific literature. Indeed, some entities cannot be differentiated by their mereological structures alone, but require the representation of additional relationships between their parts. Our mereological theory offers tools and avenues for exploring such questions.
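The slot-based counting idea can be illustrated with a small sketch (the class and names below are hypothetical illustrations, not Bennett's or the thesis's formalism): an entity is a part of a whole n times when it fills n of the whole's slots.

```python
# Illustrative sketch of slot-based parthood; the class and method
# names are invented here, not taken from Bennett's formalism.
class Entity:
    def __init__(self, name, slots=()):
        self.name = name
        # Each slot is a (slot_label, filler) pair; the same filler may
        # occupy several slots, i.e. be a part several times.
        self.slots = list(slots)

    def part_count(self, part):
        """Number of slots of this entity filled by `part`."""
        return sum(1 for _, filler in self.slots if filler is part)

# The structural universal "water molecule" has the hydrogen-atom
# universal as a part twice and the oxygen-atom universal once.
H = Entity("hydrogen-universal")
O = Entity("oxygen-universal")
water = Entity("water-universal", [("s1", H), ("s2", H), ("s3", O)])

print(water.part_count(H))  # 2
print(water.part_count(O))  # 1
```

Classical extensional mereology, by contrast, could only record that H is or is not a part of water, never how many times.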
APA, Harvard, Vancouver, ISO, and other citation styles
2

Fisichella, Marco [Verfasser]. „Clustering information entities based on statistical methods / Marco Fisichella“. Hannover : Technische Informationsbibliothek und Universitätsbibliothek Hannover (TIB), 2013. http://d-nb.info/1032803355/34.

3

Nomura, Shigueo. „Novel advanced treatments of morphological entities in spatial information processing“. 京都大学 (Kyoto University), 2006. http://hdl.handle.net/2433/143899.

Annotation:
Kyoto University (京都大学)
0048
New degree system, doctoral program (新制・課程博士)
Doctor of Informatics (博士(情報学))
Degree no. 甲第12451号
Report no. 情博第205号
Call no. 新制||情||44 (University Library)
24287
UT51-2006-J442
Kyoto University, Graduate School of Informatics, Department of Systems Science (京都大学大学院情報学研究科システム科学専攻)
Examining committee: (Chief examiner) Prof. 片井 修, Prof. 松田 哲也, Assoc. Prof. 杉本 直三
Qualified under Article 4, Paragraph 1 of the Degree Regulations (学位規則第4条第1項該当)
4

Wang, Wei. „Unsupervised Information Extraction From Text - Extraction and Clustering of Relations between Entities“. Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00998390.

Annotation:
Unsupervised information extraction in open domains has recently gained importance by loosening the constraints on the strict definition of the extracted information, allowing the design of more open information extraction systems. Within this new domain of unsupervised information extraction, this thesis focuses on the large-scale extraction and clustering of relations between entities. The objective of relation extraction is to discover unknown relations from texts. A relation prototype is first defined, with which candidate relation instances are initially extracted under a minimal criterion. To guarantee the validity of the extracted relation instances, a two-step filtering procedure is applied: the first step uses filtering heuristics to efficiently remove a large number of false relations, and the second step uses statistical models to refine the selection of relation candidates. The objective of relation clustering is to organize the extracted relation instances into clusters so that their relation types can be characterized by the formed clusters and a synthetic view can be offered to end users. A multi-level clustering procedure is designed, which takes into account both the massive data and diverse linguistic phenomena. First, the basic clustering groups relation instances by their linguistic expressions, using only simple similarity measures on a bag-of-words representation of relation instances to form highly homogeneous basic clusters. Second, the semantic clustering groups basic clusters whose relation instances share the same semantic meaning, dealing in particular with phenomena such as synonymy or more complex paraphrase. Different similarity measures, based on resources such as WordNet or a distributional thesaurus, are analyzed at the level of words, relation instances, and basic clusters.
Moreover, topic-based relation clustering is proposed to take thematic information into account, so that more precise semantic clusters can be formed. Finally, the thesis also tackles the problem of clustering evaluation in the context of unsupervised information extraction, using both internal and external measures. For the evaluations with external measures, an interactive and efficient way of building a reference of relation clusters is proposed. The application of this method to a newspaper corpus results in a large reference, against which different clustering methods are evaluated.
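The "basic clustering" step described above, grouping relation instances by simple similarity over bag-of-words representations, can be sketched as follows (the greedy single-pass scheme and the threshold are illustrative assumptions, not the thesis's exact algorithm):

```python
# Minimal sketch of bag-of-words basic clustering of relation
# instances; the greedy scheme and threshold are illustrative choices.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def basic_clusters(instances, threshold=0.7):
    """Greedy single-pass clustering of relation instances (strings)."""
    clusters = []  # list of (centroid Counter, member list)
    for text in instances:
        bow = Counter(text.lower().split())
        for centroid, members in clusters:
            if cosine(bow, centroid) >= threshold:
                members.append(text)
                centroid.update(bow)  # absorb instance into centroid
                break
        else:
            clusters.append((bow, [text]))
    return [members for _, members in clusters]

rels = ["X founded Y", "X founded Y in 1998", "X is located in Y"]
print(basic_clusters(rels))
```

Near-paraphrases of the same relation expression land in one highly homogeneous cluster, while a lexically different relation starts a new one; the semantic clustering stage would then merge basic clusters using richer resources such as WordNet.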
5

Hryhorevska, O. O. „Information and analytical provision of management of diversification of activities of business entities“. Thesis, SSPG Publish, 2020. https://er.knutd.edu.ua/handle/123456789/17370.

Annotation:
There is no doubt that high-quality information support will increase the validity and efficiency of the analytical information used to develop a strategy for diversifying the activities of economic entities in accordance with modern management requirements, strengthen the responsibility of performers, and minimize risks. Thus, thanks to the information support system for diversification, business entities are able to adapt to the external business environment and societal risks, strengthen their competitive position, and maximize and effectively use development opportunities.
6

Perera, Pathirage Dinindu Sujan Udayanga. „Knowledge-driven Implicit Information Extraction“. Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1472474558.

7

Ngwobia, Sunday C. „Capturing Knowledge of Emerging Entities from the Extended Search Snippets“. University of Dayton / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=dayton157309507473671.

8

Alsarem, Mazen. „Semantic snippets via query-biased ranking of linked data entities“. Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI044/document.

Annotation:
In this thesis, we introduce a new interactive artifact for the SERP: the "Semantic Snippet". Semantic snippets rely on the coexistence of the two webs to facilitate the transfer of knowledge to the user through a semantic contextualization of the user's information need. They make apparent the relationships between the information need and the most relevant entities present in the web page.
9

Rouse, L. Jesse. „Data points or cultural entities a GIS-based archaeological predictive model in a post-positivist framework /“. Morgantown, W. Va. : [West Virginia University Libraries], 2000. http://etd.wvu.edu/templates/showETD.cfm?recnum=1756.

Annotation:
Thesis (M.A.)--West Virginia University, 2000.
Title from document title page. Document formatted into pages; contains v, 95 p. : ill. (some col.), maps (some col.). Vita. Includes abstract. Includes bibliographical references (p. 83-89).
10

Feiser, Craig D. „Privatization and freedom of information an analysis of public access to private entities in the United States /“. : State University System of Florida, 1998. http://purl.fcla.edu/fcla/etd/amd0038.

11

Ruan, Wei. „Topic Segmentation and Medical Named Entities Recognition for Pictorially Visualizing Health Record Summary System“. Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39023.

Annotation:
Medical information visualization makes optimized use of the digitized data in medical records, e.g. the Electronic Medical Record. This thesis extends the Pictorial Information Visualization System (PIVS) developed by Yongji Jin (Jin, 2016) and Jiaren Suo (Suo, 2017), a graphical visualization system that picturizes a patient's medical history summary, depicting the patient's medical information to help patients and doctors easily grasp past and present conditions. Previously, the summary information had to be entered into the interface manually, taken from clinical notes. This study proposes a methodology for automatically extracting medical information from patients' clinical notes using Natural Language Processing techniques, in order to produce a medical history summarization from past medical records. We develop a Named Entity Recognition system to extract information about medical imaging procedures (performance date, body location, imaging results, and so on) and medications (medication names, frequencies, and quantities) by applying a conditional random fields model with three main groups of features, among others: word-based, part-of-speech, and MetaMap semantic features. Adding MetaMap semantic features is a novel idea that raised the accuracy compared to previous studies. Our evaluation shows that our model achieves higher accuracy than others on medication extraction as a case study. To further enhance the accuracy of entity extraction, we also propose a Topic Segmentation methodology for clinical notes that detects boundaries from differences in the classification probabilities of successive subsequences, which differs from traditional Topic Segmentation approaches such as TextTiling, TopicTiling, and the Beeferman statistical model. With Topic Segmentation combined with Named Entity Extraction, we observed higher accuracy for medication extraction than without segmentation.
Finally, we also present a prototype that integrates our information extraction system with PIVS by building a database of interface coordinates and human body part terms.
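Token-level features of the kind described above (word-based and part-of-speech; MetaMap semantic features require UMLS access and are only stubbed) might be assembled like the following sketch before being fed to a CRF implementation such as sklearn-crfsuite; all names here are illustrative, not the thesis's exact feature set:

```python
# Sketch of word-based and part-of-speech features for a CRF tagger.
# The "semtype" entry is a hypothetical stand-in for a MetaMap
# semantic-type lookup, which would require UMLS access.
def token_features(tokens, pos_tags, i):
    word = tokens[i]
    return {
        "word.lower": word.lower(),
        "word.isdigit": word.isdigit(),
        "word.istitle": word.istitle(),
        "pos": pos_tags[i],
        "prev.word": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next.word": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
        "semtype": "unknown",  # stub for the MetaMap semantic feature
    }

tokens = ["Take", "aspirin", "twice", "daily"]
pos = ["VB", "NN", "RB", "RB"]
print(token_features(tokens, pos, 1)["word.lower"])  # aspirin
```

A CRF then learns tag sequences (e.g. B-MEDICATION, B-FREQUENCY) over these per-token feature dictionaries.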
12

Matthew, Gordon Derrac. „Benoemde-entiteitherkenning vir Afrikaans / G.D. Matthew“. Thesis, North-West University, 2013. http://hdl.handle.net/10394/10170.

Annotation:
According to the Constitution of South Africa, the government is required to make all information in the ten indigenous languages of South Africa (excluding English) available to the public. For this reason, the government made the information that already existed for these ten languages available to the public, and an effort is also being made to increase the number of resources available in these languages (Groenewald & Du Plooy, 2010). This release of information further helps to implement Krauwer's (2003) idea of an inventory of the minimal number of language-related resources required for a language to be competitive at the level of research and teaching. This inventory is known as the "Basic Language Resource Kit" (BLARK). Since most of the languages in South Africa are resource-scarce, it is in the best interest of the cultural growth of the country that each of the indigenous South African languages develops its own BLARK. In Chapter 1, the need for the development of an implementable named entity recogniser (NER) for Afrikaans is discussed, first with reference to the language policy of the Constitution of South Africa (Republic of South Africa, 2003). Secondly, the guidelines of BLARK (Krauwer, 2003) are discussed, followed by a discussion of an audit focusing on the number of resources and the distribution of human language technology for all eleven South African languages (Sharma Grover, Van Huyssteen & Pretorius, 2010). The audit conducted by Sharma Grover et al. (2010) established that there is a shortage of text-based tools for Afrikaans. This study addresses that need by focusing on the development of a NER for Afrikaans. Chapter 2 describes what an entity and a named entity are.
Later in the chapter, the process of technology recycling is explained with reference to other studies where the idea has been applied successfully (Rayner et al., 1997). Lastly, the differences that may occur between Afrikaans and Dutch named entities are analysed. These differences are divided into three categories: identical cognates, non-identical cognates, and unrelated entities. Chapter 3 begins with a description of Frog (van den Bosch et al., 2007), the Dutch NER used in this study, and the functions and operation of its NER component. This is followed by a description of the Afrikaans-to-Dutch converter (A2DC) (Van Huyssteen & Pilon, 2009), and finally the various experiments that were conducted are explained. The study consists of six experiments, the first of which determined the results of Frog on Dutch data. The second experiment evaluated the effectiveness of Frog on unchanged (raw) Afrikaans data. The following two experiments evaluated the results of Frog on "Dutched" Afrikaans data. The last two experiments evaluated the effectiveness of Frog on raw and "Dutched" Afrikaans data with the addition of gazetteers as part of the pre-processing step. In conclusion, a summary is given of the comparisons between the NER for Afrikaans developed in this study and the NER component that Puttkammer (2006) used in his tokeniser. Finally, a few suggestions for future research are proposed.
MA (Applied Language and Literary Studies), North-West University, Vaal Triangle Campus, 2013
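The three categories of Afrikaans-Dutch entity pairs mentioned in the abstract can be sketched as a trivial classifier (the string-similarity measure and cut-off are invented for illustration, not the thesis's criterion):

```python
# Toy classifier for Afrikaans/Dutch named-entity pairs into the three
# categories from the abstract; the similarity cut-off is illustrative.
from difflib import SequenceMatcher

def cognate_category(afr: str, nld: str, threshold: float = 0.5) -> str:
    """Classify a pair as identical cognate, non-identical cognate,
    or unrelated, based on surface-string similarity."""
    if afr.lower() == nld.lower():
        return "identical cognate"
    ratio = SequenceMatcher(None, afr.lower(), nld.lower()).ratio()
    return "non-identical cognate" if ratio >= threshold else "unrelated"

print(cognate_category("Kaapstad", "Kaapstad"))
print(cognate_category("Suid-Afrika", "Zuid-Afrika"))
```

Identical cognates are what make recycling a Dutch NER for raw Afrikaans viable at all; the "Dutching" step in the A2DC experiments targets the non-identical cognates.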
13

AGLIETTA, BERNARD. „Entite(s) identite(s) : quand information et communication devoilent les limites qui interrogent“. Grenoble 3, 1988. http://www.theses.fr/1988GRE39017.

Annotation:
This thesis deals with the foundations of communication phenomena from the angle of the question: what is the meaning of the development of communication phenomena? Starting from professional experience as an independent copywriter, it identifies, examines, and debates various questions revealed and raised by communication practices for the socio-economic entities of the Savoie department from 1984 to 1988. Several essential reference points are identified as limits to communication ambitions. These reference points are the stages of the communication process, which comprises information, reflection, design and creation, diffusion... and then a return to information. Each of these stages is analysed in its own right. The processual and cyclical nature of communication work is strongly emphasised. Two exemplary field applications are then discussed: on the one hand, the bid for and preparation of the 1992 Winter Olympic Games in Savoie; on the other, the development of the Savoie technology park "Savoie Technolac". The conclusions confirm, within information and communication practices, the presence and relevance of the divide between the digital and the analogue, as so many fundamental potential "micro-passages" making up the phenomena described as evolution, or even revolution, towards the gradual emergence of a nebula of paradigmatic changes.
Between entity and identity, acts of information and communication reveal interrogative limits
14

Rafferty, Kevin. „An investigation of the response of entities in the South African JSE ICT sector in 2005 to environmental sustainability reporting“. Thesis, Rhodes University, 2007. http://hdl.handle.net/10962/d1003874.

Annotation:
Pressure is on organisations the world over to report to their stakeholders not only on their economic performance, but also on their environmental and social performance. In South Africa, the King II code of corporate governance provides the guidance and impetus for this integrated "triple bottom line" sustainability reporting. The ICT sector in South Africa has been cited as lagging behind other sectors with regard to sustainability reporting, particularly environmental sustainability reporting. Many ICT organisations appear to use their office- and service-based existence as a reason for claiming little or no impact on the environment. The study of the impacts of ICT on environmental sustainability in this research suggests that this is not necessarily the case. An assessment tool based on the internationally recognised Global Reporting Initiative guidelines was developed in this research to investigate the level of environmental sustainability reporting in the South African ICT sector. The investigation showed the level of environmental sustainability reporting in the sector's 2005 annual reports to be low. To get an indication of the level of reporting in more developed countries, a small sample of international ICT and service organisations was assessed using the tool, which showed significantly higher levels of environmental sustainability reporting. A set of ICT-specific environmental sustainability performance indicators is proposed to enhance and encourage more substantial environmental sustainability reporting in South Africa.
15

Marinone, Emilio. „Evaluation of New Features for Extractive Summarization of Meeting Transcripts : Improvement of meeting summarization based on functional segmentation, introducing topic model, named entities and domain specific frequency measure“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249560.

Annotation:
Automatic summarization of meeting transcripts has been widely studied in the last two decades, achieving continuous improvements in terms of the standard summarization metric (ROUGE). A user study has shown that people noticeably prefer abstractive summarization over the extractive approach. However, a fluent and informative abstract depends heavily on the performance of the information extraction method(s) applied. In this work, basic concepts useful for understanding meeting summarization methods, such as part-of-speech (POS) tagging, Named Entity Recognition (NER), frequency and similarity measures, and topic models, are introduced together with a broad literature analysis. The proposed method takes inspiration from the current unsupervised extractive state of the art and introduces new features that improve on the baseline. It is based on functional segmentation, meaning that it first aims to divide the preprocessed source transcript into monologues and dialogues. Then, two different approaches are used to extract the most important sentences from each segment, whose concatenation, together with redundancy reduction, creates the final summary. Results show that a topic model trained on an extended corpus, some variations in the proposed parameters, and the consideration of word tags improve performance in terms of ROUGE precision, recall, and F-measure. The method outperforms the currently best-performing unsupervised extractive summarization method in terms of ROUGE-1 precision and F-measure. A subjective evaluation of the generated summaries demonstrates that the current unsupervised framework is not yet accurate enough for commercial use, but the newly introduced features can help supervised methods achieve acceptable performance. A much larger, non-artificially constructed meeting dataset with reference summaries is also needed for training supervised methods, as well as a more accurate algorithm evaluation.
The source code is available on GitHub: https://github.com/marinone94/ThesisMeetingSummarization
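As a rough illustration of the extractive step, a common frequency-based baseline scores each sentence by the average corpus frequency of its words and keeps the top-ranked ones (this is a generic baseline, not the thesis's exact per-segment method):

```python
# Generic frequency-based extractive baseline: rank sentences by the
# mean corpus frequency of their words and keep the top k, in
# document order. Not the thesis's exact algorithm.
from collections import Counter

def summarize(sentences, k=1):
    words = [s.lower().split() for s in sentences]
    freq = Counter(w for ws in words for w in ws)
    # Score a sentence by the mean frequency of its words.
    scores = [sum(freq[w] for w in ws) / len(ws) for ws in words]
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])
    return [sentences[i] for i in sorted(ranked[:k])]

transcript = [
    "the project deadline moved to friday",
    "the deadline discussion took most of the meeting",
    "someone mentioned lunch options",
]
print(summarize(transcript, k=1))
```

The thesis's contribution replaces this naive scoring with functional segmentation, topic-model features, word tags, and redundancy reduction, then evaluates against reference summaries with ROUGE.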
16

Lima, Rinaldo José de, und Frederico Luiz Gonçalves de Freitas. „Ontoilper: an ontology- and inductive logic programming-based method to extract instances of entities and relations from texts“. Universidade Federal de Pernambuco, 2014. https://repositorio.ufpe.br/handle/123456789/12425.

Annotation:
CNPq, CAPES.
Information Extraction (IE) consists in the task of discovering and structuring information found in a semi-structured or unstructured textual corpus. Named Entity Recognition (NER) and Relation Extraction (RE) are two important subtasks in IE. The former aims at finding named entities, including the name of people, locations, among others, whereas the latter consists in detecting and characterizing relations involving such named entities in text. Since the approach of manually creating extraction rules for performing NER and RE is an intensive and time-consuming task, researchers have turned their attention to how machine learning techniques can be applied to IE in order to make IE systems more adaptive to domain changes. As a result, a myriad of state-of-the-art methods for NER and RE relying on statistical machine learning techniques have been proposed in the literature. Such systems typically use a propositional hypothesis space for representing examples, i.e., an attribute-value representation. In machine learning, the propositional representation of examples presents some limitations, particularly in the extraction of binary relations, which mainly demands not only contextual and relational information about the involving instances, but also more expressive semantic resources as background knowledge. This thesis attempts to mitigate the aforementioned limitations based on the hypothesis that, to be efficient and more adaptable to domain changes, an IE system should exploit ontologies and semantic resources in a framework for IE that enables the automatic induction of extraction rules by employing machine learning techniques. In this context, this thesis proposes a supervised method to extract both entity and relation instances from textual corpora based on Inductive Logic Programming, a symbolic machine learning technique. 
The proposed method, called OntoILPER, benefits not only from ontologies and semantic resources, but also relies on a highly expressive relational hypothesis space, in the form of logical predicates, for representing examples whose structure is relevant to the information extraction task. OntoILPER automatically induces symbolic extraction rules that subsume examples of entity and relation instances from a tailored graph-based model of sentence representation, another contribution of this thesis. Moreover, this graph-based model for representing sentences also enables the exploitation of domain ontologies and additional background knowledge in the form of a condensed set of features, including lexical, syntactic, semantic, and relational ones. Unlike most IE methods (a comprehensive survey is presented in this thesis, including methods that also apply ILP), OntoILPER takes advantage of a rich text preprocessing stage encompassing various shallow and deep natural language processing subtasks, including dependency parsing, coreference resolution, word sense disambiguation, and semantic role labeling. Mappings of nouns and verbs to (formal) semantic resources are also considered. OntoILPER Framework, the OntoILPER implementation, was experimentally evaluated on both NER and RE tasks. This thesis reports the results of several assessments conducted using six standard evaluation corpora from two distinct domains: news and biomedical. The obtained results demonstrate the effectiveness of OntoILPER on both tasks; indeed, the proposed framework outperforms several of the state-of-the-art IE systems it is compared against.
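The predicate-based, relational example representation described above can be pictured with a small sketch. This is not OntoILPER's actual encoding: the predicate names (`token`, `dep`, `entity_type`) and the toy sentence are invented for illustration of how a parsed sentence becomes Prolog-style facts an ILP learner can induce rules over.

```python
# Hypothetical sketch (not OntoILPER's real format) of encoding a parsed
# sentence as relational facts: one fact per token, per dependency edge,
# and per recognized entity type.

def sentence_to_facts(tokens, heads, deps, entity_types):
    """Emit Prolog-style facts for tokens, dependency edges, and entity types."""
    facts = []
    for i, tok in enumerate(tokens):
        facts.append(f"token(t{i}, '{tok.lower()}')")
        if i in entity_types:
            facts.append(f"entity_type(t{i}, {entity_types[i]})")
    for i, (head, rel) in enumerate(zip(heads, deps)):
        if head is not None:
            facts.append(f"dep(t{head}, t{i}, {rel})")  # head -> dependent
    return facts

# "Ada works in Recife": heads give each token's syntactic head index.
facts = sentence_to_facts(
    ["Ada", "works", "in", "Recife"],
    [1, None, 1, 2],
    ["nsubj", None, "prep", "pobj"],
    {0: "person", 3: "location"},
)
print("\n".join(facts))
```

An induced rule could then subsume such facts, e.g. matching any `person` subject of a verb linked by a preposition to a `location`.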
APA, Harvard, Vancouver, ISO und andere Zitierweisen
17

Madubela, Albert Dingalethu. „What shareholder information on the shareholder spread is disclosed in the financial statements of JSE listed entities in accordance with listing requirements of the JSE?“ Thesis, Stellenbosch : University of Stellenbosch, 2011. http://hdl.handle.net/10019.1/8518.

Der volle Inhalt der Quelle
Annotation:
Thesis (MBA)--University of Stellenbosch, 2011.
The study was undertaken to determine whether companies listed on the Johannesburg Stock Exchange disclose their shareholder spread in line with the applicable statutes, such as the JSE Listing Requirements. Further, the study examined the closing balances for group, company, trusts, subsidiaries, and treasuries of all 50 companies studied to ascertain whether there were differences with the ex WDH share program. Various sources were used to answer the question, including the Internet, McGregor BFA, the companies' annual reports, and material from the University of Stellenbosch Business School (USB). Findings varied: some companies showed differences in group, company, trusts, subsidiaries, and treasuries, most of which were due to company errors, non-consolidation of trusts, and the use of different methods. Certain companies tend to omit information when disclosing the shareholder spread, which has left many companies with differences in their closing balances for the company, group, trusts, subsidiaries, and treasuries. In addition, certain companies disclosed major shareholders holding less than the prescribed five percent, which proved very helpful in this study.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
18

Fotsoh, Tawaofaing Armel. „Recherche d’entités nommées complexes sur le web : propositions pour l’extraction et pour le calcul de similarité“. Thesis, Pau, 2018. http://www.theses.fr/2018PAUU3003/document.

Der volle Inhalt der Quelle
Annotation:
Recent developments in information technologies have made the web an important data source. However, web content is largely unstructured, so automatically processing it to extract relevant information is a difficult task. This is why research related to Information Extraction (IE) on the web is growing very quickly. Another widely explored research area, Information Retrieval (IR), concerns querying the information extracted from the web to answer an information need. Our research work is at the crossroads of both areas. The main goal of our work is to develop strategies and techniques for crawling the web in order to extract complex Named Entities (NEs), i.e., NEs with several properties that may be text or other NEs. We then propose to index them and query them in order to answer information needs. This work was carried out within the T2I team of the LIUPPA laboratory, in collaboration with Cogniteev, a company whose core business is focused on the analysis of web content. The issues we had to deal with were the extraction of complex NEs on the web and the development of IR services fed by the extracted data. Our first contribution is related to complex NE extraction from text content. For this contribution, we take into consideration several problems, in particular the noisy context characterizing some properties (the web page describing an event, for example, may contain more than one date: the date of the event and the date ticket sales open). For this particular problem, we introduce a block detection module that focuses property extraction on relevant text blocks. Our experiments show a clear improvement in performance due to this approach. We also focused on address extraction, where the main issue arises from the fact that there is no standard way of writing addresses in general, and on the web in particular.
We therefore propose a pattern-based approach that uses freely available lexicons to extract addresses from text, without relying on proprietary resources. Our second contribution deals with similarity computation between complex NEs. In the state of the art, this computation is generally performed in two steps: (i) similarities between properties are calculated; (ii) the obtained scores are then aggregated to compute the overall similarity. Our main proposals concern the second step. We propose three techniques for aggregating property similarities: the first two are based on a weighted sum of these similarities (simple linear combination and logistic regression), while the third uses decision trees for the aggregation. We also propose a final approach based on clustering and Salton's vector model, which evaluates similarity at the level of the complex NE without computing property similarities. Finally, we propose a similarity function between spatial NEs, one represented by a point and the other by a polygon, which complements those in the state of the art.
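The two-step similarity computation described above (per-property scores, then aggregation) can be sketched minimally with the simplest aggregation technique, a weighted sum. The property names, scores, and weights below are invented for the example; the thesis's learned variants (logistic regression, decision trees) would replace the fixed weights.

```python
# Toy two-step similarity between "complex" named entities:
# (i) assume per-property similarity scores in [0, 1] are already computed,
# (ii) aggregate them with a normalized weighted sum.

def weighted_similarity(prop_scores: dict, weights: dict) -> float:
    """Aggregate per-property similarity scores into one global score."""
    total_weight = sum(weights[p] for p in prop_scores)
    if total_weight == 0:
        return 0.0
    return sum(weights[p] * s for p, s in prop_scores.items()) / total_weight

# Two event entities compared property by property.
scores = {"title": 0.9, "date": 1.0, "venue": 0.4}
weights = {"title": 0.5, "date": 0.3, "venue": 0.2}
print(round(weighted_similarity(scores, weights), 2))  # 0.83
```

A learned aggregator would fit the weights (or a tree over the scores) from labeled matching/non-matching entity pairs instead of fixing them by hand.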
APA, Harvard, Vancouver, ISO und andere Zitierweisen
19

Ben, Abacha Asma. „Recherche de réponses précises à des questions médicales : le système de questions-réponses MEANS“. Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00735612.

Der volle Inhalt der Quelle
Annotation:
Finding precise answers to questions phrased in natural language is renewing the field of information retrieval. A large body of work has addressed answering factual questions in the open domain. Less work has focused on question answering in specialized domains, in particular the medical and biomedical domain. Specialized domains present several distinctive conditions, such as specialized lexicons and terminologies, particular question types, domain-specific entities and relations, and the characteristics of the targeted documents. In a first part, we study methods for semantically analyzing both the questions asked by the user and the texts used to find the answers. To do so, we use hybrid methods for two main tasks: (i) medical entity recognition and (ii) semantic relation extraction. These methods combine manually built rules and patterns, domain knowledge, and statistical machine-learning techniques using various classifiers. Experimented on several corpora, these hybrid methods mitigate the drawbacks of both families of information-extraction methods, namely the potentially limited coverage of rule-based methods and the dependence of statistical methods on annotated data. In a second part, we study the contribution of Semantic Web technologies to the portability and expressiveness of question-answering systems. In our approach, we exploit Semantic Web technologies first to annotate the extracted information and then to query these annotations semantically.
Finally, we present our question-answering system, called MEANS, which combines NLP techniques, domain knowledge, and Semantic Web technologies to answer medical questions automatically.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
20

Bernard, Jocelyn. „Gérer et analyser les grands graphes des entités nommées“. Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1067/document.

Der volle Inhalt der Quelle
Annotation:
In this thesis we study graph problems: two theoretical studies on the detection and enumeration of dense subgraphs, such as cliques and quasi-cliques, and an applied study on the propagation of information in a named-entity graph. First, we study the detection of cliques in compressed graphs. MCE and MCP are problems encountered in the analysis of data graphs. These problems are hard to solve (NP-hard for MCE and NP-complete for MCP), and adapted solutions must be found for large graphs. We propose to solve them by working on a compressed version of the initial graph, and show the good results obtained by our method for enumerating maximal cliques on compressed graphs. Secondly, we study the enumeration of maximal quasi-cliques. We propose a distributed algorithm that enumerates the set of maximal quasi-cliques of the graph, and show that it lists all of them. We also propose a heuristic that lists a set of quasi-cliques more quickly. We show the interest of enumerating these quasi-cliques through an evaluation of relations based on the co-occurrence of nodes in the set of enumerated quasi-cliques. Finally, we work on event diffusion in a named-entity graph. Many models exist to simulate the diffusion of rumors or diseases in social networks, or of bankruptcies in banking networks. We address the diffusion of significant events in heterogeneous networks representing a global economic environment. We propose a diffusion problem, called the infection classification problem, which consists in determining which entities are affected by an event. To solve this problem we propose two models inspired by the linear threshold model, to which we add different features.
Finally, we test and validate our models on a set of events
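The linear threshold model that the diffusion models above build on can be sketched in a few lines. The graph, edge weights, and node thresholds below are toy values, not the thesis's actual economic network or models.

```python
# Minimal linear threshold diffusion: a node becomes active when the summed
# weight of its active in-neighbors reaches its threshold; iterate to a
# fixed point starting from a seed set.

def linear_threshold(in_neighbors, weights, thresholds, seeds):
    """Return the set of nodes active at the fixed point."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in thresholds:
            if node in active:
                continue
            influence = sum(weights[(u, node)]
                            for u in in_neighbors.get(node, ())
                            if u in active)
            if influence >= thresholds[node]:
                active.add(node)
                changed = True
    return active

# A -> B -> C chain: the event at A infects B, whose activation infects C.
in_neighbors = {"B": ["A"], "C": ["B"]}
weights = {("A", "B"): 0.6, ("B", "C"): 0.7}
thresholds = {"A": 0.5, "B": 0.5, "C": 0.5}
print(sorted(linear_threshold(in_neighbors, weights, thresholds, {"A"})))
# ['A', 'B', 'C']
```

An infection-classification variant would read the fixed point as the set of entities concerned by the seeded event.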
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Engel, Hugues. „Dislocation et référence aux entités en français L2 : Développement, interaction, variation“. Doctoral thesis, Stockholms universitet, Institutionen för franska, italienska och klassiska språk, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-38716.

Der volle Inhalt der Quelle
Annotation:
This thesis investigates the use and development of dislocations in oral productions by Swedish users of French as a second language (L2). Dislocations are highly frequent in French oral speech and play an essential role in building utterances. L2 users of French must therefore acquire the grammatical means necessary to build this structure as well as the pragmatic principles underlying its use. The study is empirical and based on a corpus of oral productions from a wide range of non-native speakers (NNS), from beginners studying at university to L2 users who have spent many years in France. The analysis also includes oral productions from a control group of native speakers (NS). The aim is to identify a path of development by which the different forms and functions of dislocations are acquired. Furthermore, the study examines the influence of tasks on the use of dislocations, by analysing two tasks which place very different demands on the informants in terms of cognitive effort, namely interviews and retellings. The analysis focuses on two main kinds of dislocations: on the one hand, [moi je VP] (and its syntactical variants); on the other hand, dislocations referring to third entities (such as [NP il VP] and [NP c'est X]). The results show that both kinds go through a process of development in French L2. However, French learners seem to master the lexical dislocations referring to third entities, as well as their pragmatic rules of use, from the first stages of acquisition, albeit with deviations in some cases. On the other hand, the frequency of use of [moi je VP] and its syntactical variants correlates highly with the level of development of the NNS. Moreover, there is a significantly greater frequency of dislocations in the NNS retelling tasks than in their interviews. In the NS group, the frequency of use remains comparable in both tasks.
This difference between NS and NNS is probably due to the additional cognitive load that retellings demand compared with interviews—e.g., recalling the succession of events, solving the lexical problems posed by the story that is to be retold. It is proposed that this additional load may trigger, as a compensation strategy, an increase in the frequency of use of dislocations in the NNS speech.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Dejean, Philippe. „Un formalisme pour les entités du traitement et de l'analyse des images“. Toulouse 3, 1996. http://www.theses.fr/1996TOU30242.

Der volle Inhalt der Quelle
Annotation:
In the context of image analysis and interpretation, we propose a new approach for modeling image-processing entities. This approach differs from those implemented in classical systems: our data model is unique, whatever the dimension or nature of the entity being manipulated. It breaks down into five characteristic elements whose expressions are developed in a rigorous language, LEO. The operator model accounts for the information transformations operators perform, and precisely describes the entities they produce, which serve as visual cues during the interpretation process. A sequence of operators is thus considered a concept constructor, providing a description of those concepts in the proposed language. The formalism we have developed, consisting of the data and operator models together with the language, thus makes it possible to describe image processing rigorously, both as data manipulation and as the manipulation of concepts from a specific domain. In this way, our models can be used by a system that resolves image-analysis goals. Through the formalism used, they provide useful elements for understanding the processing chain produced, or for assisting in the configuration and tuning of such chains.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Lerner, Paul. „Répondre aux questions visuelles à propos d'entités nommées“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG074.

Der volle Inhalt der Quelle
Annotation:
This thesis is positioned at the intersection of several research fields, Natural Language Processing, Information Retrieval (IR) and Computer Vision, which have unified around representation learning and pre-training methods. In this context, we have defined and studied a new multimodal task: Knowledge-based Visual Question Answering about Named Entities (KVQAE). Within this task, we were particularly interested in cross-modal interactions and the different ways of representing named entities. We also focused on the data used to train and, more importantly, evaluate Question Answering systems through different metrics. More specifically, we proposed a dataset for this purpose, the first in KVQAE to comprise various types of entities. We also defined an experimental framework for dealing with KVQAE in two stages through an unstructured knowledge base, and identified IR as the main bottleneck of KVQAE, especially for questions about non-person entities. To improve the IR stage, we studied different multimodal fusion methods, which are pre-trained through an original task: the Multimodal Inverse Cloze Task. We found that these models leveraged a cross-modal interaction that we had not originally considered, and which may address the heterogeneity of visual representations of named entities. These results were strengthened by a study of the CLIP model, which allows this cross-modal interaction to be modeled directly. These experiments were carried out while staying aware of biases present in the dataset or evaluation metrics, especially textual biases, which affect any multimodal task.
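The two-stage setup above, with IR as the bottleneck, can be pictured with a toy retrieval step: rank knowledge-base entries by cosine similarity between a multimodal query embedding and each entry's embedding. The vectors below are made up; a real system would obtain them from a pre-trained multimodal encoder such as CLIP or a fusion model trained with the Multimodal Inverse Cloze Task.

```python
import math

# Toy sketch of the retrieval stage of a two-stage KVQA pipeline: score each
# knowledge-base entry against a (hypothetical) fused image+question embedding.

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

query = [0.9, 0.1, 0.3]                  # fused image+text query embedding
kb = {"entry_A": [0.8, 0.2, 0.3],        # passage about the depicted entity
      "entry_B": [0.1, 0.9, 0.0]}        # passage about an unrelated entity
ranked = sorted(kb, key=lambda k: cosine(query, kb[k]), reverse=True)
print(ranked[0])  # entry_A
```

The second stage (reading the top-ranked passage to extract the answer) is omitted; the point is only that retrieval quality bounds the whole pipeline.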
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Berrios-Ayala, Mark. „Brave New World Reloaded: Advocating for Basic Constitutional Search Protections to Apply to Cell Phones from Eavesdropping and Tracking by Government and Corporate Entities“. Honors in the Major Thesis, University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1547.

Der volle Inhalt der Quelle
Annotation:
Imagine a world where someone’s personal information is constantly compromised, where federal government entities AKA Big Brother always know what anyone is Googling, who an individual is texting, and their emoticons on Twitter. Government entities have been doing this for years; they never cared if they were breaking the law or their moral compass of human dignity. Every day the Federal government blatantly siphons data with programs from the original ECHELON to the new series like PRISM and Xkeyscore so they can keep their tabs on issues that are none of their business; namely, the personal lives of millions. Our allies are taking note; some are learning our bad habits, from Government Communications Headquarters’ (GCHQ) mass shadowing sharing plan to America’s Russian inspiration, SORM. Some countries are following the United States’ poster-child pose of a Brave New World-like order of global events. Others, like Germany, are showing their resolve in their disdain for the rise of tyranny. Soon, these newfound surveillance troubles will test the resolve of the American Constitution and its nation’s strong love and tradition of liberty. Courts are currently at work to resolve how current concepts of liberty and privacy apply to the conditions facing the privacy of society. It remains to be determined how liberty will be affected as well: liberty for the United States of America, for the European Union, the Russian Federation, and for the people of the world, in regard to the extent of privacy amid today’s blurred privacy expectations.
B.S.
Bachelors
Health and Public Affairs
Legal Studies
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Гаутам, Аджит Пратап Сингх. „Информационная технология экстракции бизнес знаний из текстового контента интегрированной корпоративной системы“. Thesis, НТУ "ХПИ", 2016. http://repository.kpi.kharkov.ua/handle/KhPI-Press/23555.

Der volle Inhalt der Quelle
Annotation:
Thesis for a candidate degree in technical science, speciality 05.13.06 – Information Technologies. – National Technical University "Kharkiv Polytechnic Institute". – Kharkiv, 2016. The aim of the thesis is to develop an information technology for extracting business knowledge of an integrated corporate system (ICS) based on information-logic models and methods of text content sense processing. The main results are as follows: a logic-linguistic model of fact generation from ICS text streams has been developed; based on surface grammatical characteristics of entities, predicates, and attributes, it makes it possible to effectively extract industry-specific knowledge about the subjects of monitoring from text content. The thesis further develops the method of comparator identification, used to structure the relationships of ICS business-knowledge facts. The method classifies the attributes of entities according to relationship classes on the basis of the sense identity of fact triplets, which the comparator determines objectively. The thesis improves the method for determining the actual set of classified entities of a subject domain, distinguished by the integrated use of linguistic, statistical, and sense characteristics in a naïve Bayes classifier; the method classifies extracted entities according to a priori defined types. The thesis also improves the information technology for forming a common information space of a corporation's business activity, which enables the generation of complex knowledge through the explicit generalization of information hidden in a collection of partial facts, using algebra-logic transformations.
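The fact triplets mentioned above lend themselves to a simple relational reading. A minimal sketch of storing and grouping such triplets per entity (the helper function and the sample facts are invented for illustration, not taken from the thesis):

```python
from collections import defaultdict

# A fact triplet links an entity (subject), a predicate (action), and an
# attribute. Names here are illustrative, not the thesis's data structures.
def group_facts_by_entity(triplets):
    """Index (entity, predicate, attribute) triplets by entity."""
    index = defaultdict(list)
    for entity, predicate, attribute in triplets:
        index[entity].append((predicate, attribute))
    return dict(index)

facts = [
    ("AcmeCorp", "acquired", "SmallCo"),
    ("AcmeCorp", "hired", "200 engineers"),
    ("SmallCo", "develops", "NLP software"),
]
grouped = group_facts_by_entity(facts)
```

Grouping by subject like this is the first step toward the kind of generalization over collections of partial facts that the abstract describes.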
APA, Harvard, Vancouver, ISO, and other citation styles
26

Nouvel, Damien. „Reconnaissance des entités nommées par exploration de règles d'annotation - Interpréter les marqueurs d'annotation comme instructions de structuration locale“. Phd thesis, Université François Rabelais - Tours, 2012. http://tel.archives-ouvertes.fr/tel-00788630.

Full text of the source
Annotation:
In recent decades, the enormous development of information and communication technologies has profoundly changed the way we access knowledge. Faced with the influx and diversity of data, efficient and robust technologies are needed to search them for information. Named entities (persons, places, organisations, dates, numerical expressions, brands, functions, etc.) are used to categorise, index, or more generally manipulate content. Our work concerns their recognition and annotation in transcripts of radio and television broadcasts, within the Ester2 and Etape evaluation campaigns. In the first part, we address the problem of automatic named-entity recognition. We describe the analyses generally conducted in natural language processing, discuss various considerations about named entities (a retrospective of the notions covered, typologies, evaluation, and annotation), and survey the state of the art of automatic recognition approaches. Through a characterisation of their linguistic nature and an interpretation of annotation as local structuring, we propose an instruction-based approach, founded on annotation markers (tags), whose originality is to consider these elements in isolation (the beginning or the end of an annotation). In the second part, we review the data-mining work that inspires us and present a formal framework for exploring the data. Utterances are represented as sequences of enriched items (morphosyntax, lexicons), while preserving ambiguities at this stage. We propose an alternative segment-based formulation, which limits combinatorial explosion during exploration. Patterns correlated with one or more annotation markers are extracted as annotation rules. These rules can then be used by models to annotate texts. The last part describes the experimental setting, some specifics of the system's implementation (mXS), and the results obtained. We show the benefit of extracting annotation rules broadly, including those with lower confidence. We experiment with segment patterns, which perform well when the data must be structured in depth. More generally, we provide quantitative results on the system's performance from various points of view and in various configurations. They show that the proposed approach is competitive and opens perspectives for the observation of natural languages and for automatic annotation using data-mining techniques.
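The core idea of extracting patterns correlated with annotation markers as rules can be caricatured in a few lines. In this toy version, each begin marker in annotated training sequences is treated as an event, and the token pattern immediately preceding it is scored by its confidence; the marker name, window size, and threshold are invented, and this is not the mXS implementation:

```python
from collections import Counter

def mine_begin_rules(sequences, marker="<pers>", window=1, min_conf=0.5):
    """Return {pattern: confidence} for patterns that precede the marker."""
    pattern_total = Counter()  # occurrences of each pattern anywhere
    pattern_hits = Counter()   # occurrences followed by the begin marker
    for seq in sequences:
        for i in range(len(seq) - window):
            ctx = tuple(seq[i:i + window])
            if marker in ctx:
                continue
            pattern_total[ctx] += 1
            if seq[i + window] == marker:
                pattern_hits[ctx] += 1
    return {p: pattern_hits[p] / pattern_total[p]
            for p in pattern_hits
            if pattern_hits[p] / pattern_total[p] >= min_conf}

train = [
    ["mr", "<pers>", "dupont", "arrived"],
    ["mr", "<pers>", "martin", "spoke"],
    ["the", "mr", "section", "ended"],  # "mr" without a following marker
]
rules = mine_begin_rules(train)  # only ("mr",) survives, with confidence 2/3
```

Keeping even lower-confidence rules, as the abstract advocates, would correspond to lowering `min_conf`.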
APA, Harvard, Vancouver, ISO, and other citation styles
27

Гаутам, Аджіт Пратап Сінгх. „Інформаційна технологія екстракції бізнес знань з текстового контенту інтегрованої корпоративної системи“. Thesis, НТУ "ХПІ", 2016. http://repository.kpi.kharkov.ua/handle/KhPI-Press/23554.

Full text of the source
Annotation:
Dissertation for the candidate of technical sciences degree, speciality 05.13.06 – Information Technologies. – National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2016. The aim of the research is to create an information technology for extracting business knowledge of an integrated corporate system (ICS) based on information-logical models and methods of sense processing of text content. Main results: a logical-linguistic model for generating facts from ICS text streams was developed for the first time; it relies on the surface grammatical characteristics of entities, predicates, and attributes, which makes it possible to effectively extract domain-specific knowledge about the subjects of monitoring from text content. The method of comparator identification was further developed and used to structure the relations of ICS business-knowledge facts; it makes it possible to classify entity attributes by relation classes on the basis of the sense identity of fact triplets, which the comparator determines objectively. The method for identifying the current set of classified domain entities was improved; it is distinguished by the combined use of linguistic, statistical, and sense characteristics in a naïve Bayes classifier, and classifies extracted entities by a priori defined types. The information technology for forming a common information space of the corporation's business activity was improved; through algebraic-logical transformations, it generates complex knowledge by explicitly generalising information hidden in a collection of partial facts.
Thesis for a candidate degree in technical science, speciality 05.13.06 – Information Technologies. – National Technical University "Kharkiv Polytechnic Institute". – Kharkiv, 2016. The aim of the thesis is to develop an information technology for extracting business knowledge of an integrated corporate system (ICS) based on information-logic models and methods of text content sense processing. The main results are as follows: a logic-linguistic model of fact generation from ICS text streams has been developed; based on surface grammatical characteristics of entities, predicates, and attributes, it makes it possible to effectively extract industry-specific knowledge about the subjects of monitoring from text content. The thesis further develops the method of comparator identification, used to structure the relationships of ICS business-knowledge facts. The method classifies the attributes of entities according to relationship classes on the basis of the sense identity of fact triplets, which the comparator determines objectively. The thesis improves the method for determining the actual set of classified entities of a subject domain, distinguished by the integrated use of linguistic, statistical, and sense characteristics in a naïve Bayes classifier; the method classifies extracted entities according to a priori defined types. The thesis also improves the information technology for forming a common information space of a corporation's business activity, which enables the generation of complex knowledge through the explicit generalization of information hidden in a collection of partial facts, using algebra-logic transformations.
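The naïve Bayes classification of extracted entities by a priori defined types, mentioned above, can be sketched in a self-contained toy form. The features, labels, and training examples below are invented for illustration and are not the thesis's model or feature set:

```python
import math
from collections import Counter, defaultdict

# Multinomial naive Bayes with Laplace smoothing over a set of symbolic
# features per entity (e.g. capitalization, suffixes, titles).
class NaiveBayesEntityClassifier:
    def __init__(self):
        self.class_counts = Counter()
        self.feature_counts = defaultdict(Counter)
        self.vocab = set()

    def fit(self, examples):
        # examples: list of (feature_list, entity_type)
        for features, label in examples:
            self.class_counts[label] += 1
            for f in features:
                self.feature_counts[label][f] += 1
                self.vocab.add(f)

    def predict(self, features):
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for label, count in self.class_counts.items():
            lp = math.log(count / total)  # log prior
            denom = sum(self.feature_counts[label].values()) + len(self.vocab)
            for f in features:  # smoothed log likelihoods
                lp += math.log((self.feature_counts[label][f] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = NaiveBayesEntityClassifier()
clf.fit([
    (["capitalized", "suffix:-corp"], "ORG"),
    (["capitalized", "title:mr"], "PERSON"),
    (["capitalized", "suffix:-corp", "has-digits"], "ORG"),
])
pred = clf.predict(["capitalized", "suffix:-corp"])  # → "ORG"
```

In the thesis's setting, the feature list for an entity would mix linguistic, statistical, and sense characteristics rather than these toy flags.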
APA, Harvard, Vancouver, ISO, and other citation styles
28

Cazau, Pierre-Antoine. „La transparence des personnes morales en droit administratif“. Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0436.

Full text of the source
Annotation:
In French administrative law, the transparency of legal persons appears as an argument aimed at letting the reality of a legal person's situation prevail over its form. The transparency argument changes the relation of alterity between two legal persons, one of which is entirely controlled by the other: although they are distinct, the judge assimilates the body that lacks autonomy to a service of the public entity. Transparency, however, has no stable and coherent legal regime. The operation of qualification is incidental to its implementation, so the legal relations between the legal persons vary. A legal person is regarded as "transparent" only in the course of a dispute, for the resolution of a specific legal problem; it may again be regarded as distinct from the public entity in a new trial. With this technique, the administrative judge defeats attempts to circumvent the rules of administrative law without creating new rules or new case-law exceptions. Alongside the administrative mandate, the transparency argument completes the arsenal protecting the administrative judge's jurisdiction and compliance with the rules specific to the administration, with effects and scope that can be measured and adapted to each situation. It also lets claimants contemplate a legal strategy capable of removing the obstacle of the legal personality of a body wholly controlled by the administration.
Piercing the veil of corporate entities in French administrative law appears as an argument which aims at letting the reality of the situation of a corporate entity prevail over its form. This argument of transparency modifies the relation of alterity between two corporate entities in which one is completely controlled by the other: while they are distinct from each other, the judge assimilates the organization devoid of autonomy to a service belonging to the public entity. However, transparency does not have a stable and coherent legal regime. The operation of qualification is incidental to its implementation, so that legal relations between corporate entities vary. The corporate entity is regarded as “transparent” only in the course of litigation concerning the resolution of a precise legal problem; it can again be considered distinct from the public entity at a new trial. With this process, the administrative judge defeats the bypassing of the rules of administrative law without creating any new rule or jurisprudential exception. Together with administrative mandates, the argument of transparency completes the arsenal protecting the administrative judge’s authority and the enforcement of administrative rules, whose effects and reach can be measured and adapted to situations. It also allows petitioners to consider a legal strategy that may overcome the obstacle posed by the corporate personality of an organization completely controlled by the administration.
APA, Harvard, Vancouver, ISO, and other citation styles
29

Kortanová, Nikola. „Účetní zobrazení rizik u účetních jednotek veřejného sektoru“. Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-194009.

Full text of the source
Annotation:
The diploma thesis is devoted to the presentation of risks in the financial statements of public sector entities. Its main objective is to evaluate the degree to which risks are presented in the financial statements and to assess the current legislation and its possible amendments. The first chapter deals with the definition of the public sector and defines the term "selected accounting entity". The following chapter describes the general concept of risks, focusing on the public sector and on generally accepted accounting principles and guidelines, in particular conservatism. The third chapter discusses the accounting legislation for selected accounting entities and selected instruments for the accounting presentation of risks, including a comparison with IAS/IFRS, IPSAS, and US GAAP. The last chapter is divided into two practical parts: the first is based on data analysis of the Central System of Accounting Information of the State (CSUIS), and the second on the evaluation of a questionnaire survey of the accounting entities addressed.
APA, Harvard, Vancouver, ISO, and other citation styles
30

Blouin, Baptiste. „Event extraction from facsimiles of ancient documents for history studies“. Electronic Thesis or Diss., Aix-Marseille, 2022. http://www.theses.fr/2022AIXM0453.

Full text of the source
Annotation:
In today's era of massive digitisation of historical sources, automatic event extraction is a crucial step in processing historical texts. Event processing is an active research area in the natural language processing community, but resources and systems are mainly developed for contemporary texts. In this context, this thesis aims to extract events automatically from historical documents. It proposes multidisciplinary exchanges in order to adapt recent ontologies to the purposes of historical research. Beyond the specific needs of the digital humanities, OCRed historical documents more than a century old are far from what contemporary approaches are used to handling. Whether in terms of diachrony, quality, or domain adaptation, processing this type of document raises major problems for NLP. We therefore propose domain-adaptation techniques that combine recent specialised architectures with preprocessing steps, reducing the impact of these difficulties while taking advantage of contemporary resources. Finally, building on a recent paradigm that recasts tasks as question-answering problems, we propose an event-extraction pipeline suited to historical documents. From extracting the word that triggers an event in a sentence to representing more than a century of events as graphs, we propose a targeted exploration of a large quantity of historical sources.
In the current era of massive digitization of historical sources, the automatic extraction of events is a crucial step when dealing with historical texts. Event processing is an active area of research in the Natural Language Processing community, but resources and systems are mainly developed for processing contemporary texts. In this context, this thesis aims at automatically extracting events from historical documents. It proposes multidisciplinary exchanges in order to adapt recent ontologies to historical research purposes. Beyond the specific needs of the digital humanities, OCRized historical documents dating back more than a century are far from what contemporary approaches usually deal with. Whether in terms of diachrony, quality, or domain adaptation, the processing of this type of document poses major problems in NLP. We therefore suggest domain adaptation techniques combining the use of recent specialized architectures and pre-processing steps, reducing the impact of these difficulties while taking advantage of contemporary resources. Finally, based on a recent paradigm consisting of recasting tasks as question-answering problems, we propose an event extraction pipeline suitable for processing historical documents. From the extraction of a word triggering an event in a sentence to the representation of more than a century of events in the form of graphs, we propose a targeted exploration of a large quantity of historical sources.
APA, Harvard, Vancouver, ISO, and other citation styles
31

Esplund, Emil, und Elin Gosch. „Digitalisering av individuell studieplan : Från PDF till en grund för ett digitalt system“. Thesis, Uppsala universitet, Institutionen för informatik och media, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413553.

Full text of the source
Annotation:
This thesis aims to shed light on whether the individual study plan at a department at Uppsala University can be digitised. Digitalisation is considered one of the biggest trends affecting today's society and has been shown to increase accessibility and efficiency in public administration. At Uppsala University there is currently a need for a digital system for handling the individual study plan. The study plan is currently handled as a paper document in the form of a PDF and, according to previous studies, is perceived as a poor planning and follow-up tool. The research was carried out using the Design & Creation research strategy. The result of the research process is an information model, a process model at several levels, and a database. These models and the database are based on previous research, existing documents, and empirical material from interviews. Previous research covers a tool for digitalisation, problems with identifiers, and research on modelling. The existing documents consist of a previous study of the individual study plan as well as legislation, guidelines, and general information about it. The interviews were conducted with nine informants who use the study plan in their roles at Uppsala University. The models and the database were evaluated in a criteria-based interview with a subject-matter expert and in a theory-based evaluation. The results indicate that it is possible to digitise the study plan using the presented models and database, which can, with some modification, be used to build a digital interface and a complete system for the study plan.
This thesis aims to illustrate whether it is possible to digitize the individual study plan (ISP) at an institution at Uppsala University. Digitalization is considered one of the biggest trends affecting today’s society and has been shown to contribute to increased accessibility and efficiency in public administration. Uppsala University has a need for a digital system for the individual study plan. It is currently handled as a paper document in the form of a PDF and is perceived as an inferior planning and monitoring tool, according to previous studies. The research work has been carried out based on the Design & Creation research strategy. The result of the research process is an information model, a process model at different levels, and a database. These models and database are based on previous research, existing documents, and empirical material from interviews. Previous research includes a tool for digitalization, problems regarding identifiers, and research regarding modeling. The existing documents comprise a previous study of the ISP, legislation, guidelines, and general information regarding the ISP. The interviews were conducted with nine informants who use the study plan within their role at Uppsala University. The models and database have been evaluated in a criteria-based interview with a subject matter expert and in a theory-based evaluation. The research results indicate that it is possible to digitize the study plan using the presented models and database. These models and database can be used with slight modifications to build a digital interface and a complete system for the ISP.
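To make the step from information model to database concrete, here is a deliberately tiny, hypothetical slice of what such a relational schema could look like; the table and column names are invented and are not the thesis's actual information model:

```python
import sqlite3

# Minimal two-table sketch: a student and the study-plan records tied to
# that student, with a foreign key linking them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student (
        student_id INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE study_plan (
        plan_id INTEGER PRIMARY KEY,
        student_id INTEGER NOT NULL REFERENCES student(student_id),
        revised_on TEXT NOT NULL  -- date of the latest revision
    );
""")
conn.execute("INSERT INTO student VALUES (1, 'Doctoral Student A')")
conn.execute("INSERT INTO study_plan VALUES (10, 1, '2020-06-01')")
row = conn.execute(
    "SELECT s.name FROM study_plan p JOIN student s USING (student_id)"
).fetchone()
```

A real model would add the supervisors, courses, and follow-up records that the thesis's interviews surfaced; the point here is only the shape of moving from PDF fields to linked tables.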
APA, Harvard, Vancouver, ISO, and other citation styles
32

Arman, Molood. „Machine Learning Approaches for Sub-surface Geological Heterogeneous Sources“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG014.

Full text of the source
Annotation:
In oil and gas exploration and production, understanding subsurface geological structures, such as well logs and rock samples, is essential for providing prediction and decision-support tools. Exploiting data from different sources, structured or unstructured, such as relational databases and scanned reports on subsurface geology, is paramount. For structured data, the main challenge is the absence of a global schema for cross-referencing all the attributes from different sources. The challenges are different for unstructured data: most subsurface geological reports are scanned versions of documents. The goal of this thesis is to provide a structured representation of the various data sources and to build domain-specific language models for learning named entities related to subsurface geology.
In oil and gas exploration and production, understanding subsurface geological structures, such as well logs and rock samples, is essential to provide predictive and decision-support tools. Gathering and using data from a variety of sources, both structured and unstructured, such as relational databases and digitized reports on subsurface geology, is critical. The main challenge for structured data is the lack of a global schema to cross-reference all attributes from different sources. The challenges are different for unstructured data: most subsurface geological reports are scanned versions of documents. Our dissertation aims to provide a structured representation of the different data sources and to build domain-specific language models for learning named entities related to subsurface geology.
APA, Harvard, Vancouver, ISO, and other citation styles
33

Chau, Michael, Jennifer J. Xu und Hsinchun Chen. „Extracting Meaningful Entities from Police Narrative Reports“. 2002. http://hdl.handle.net/10150/105786.

Full text of the source
Annotation:
Artificial Intelligence Lab, Department of MIS, University of Arizona
Valuable criminal-justice data in free texts such as police narrative reports are currently difficult for intelligence investigators to access and use in crime analyses. It would be desirable to automatically identify meaningful entities from text reports, such as person names, addresses, narcotic drugs, or vehicle names, to facilitate crime investigation. In this paper, we report our work on a neural-network-based entity extractor, which applies named-entity extraction techniques to identify useful entities in police narrative reports. Preliminary evaluation results demonstrated that our approach is feasible and has potential value for real-life applications. Our system achieved encouraging precision and recall rates for person names and narcotic drugs, but did not perform well for addresses and personal properties. Our future work includes conducting larger-scale evaluation studies and enhancing the system to capture human knowledge interactively.
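The precision and recall rates reported for such extractors are computed by comparing the extracted entity set against a gold standard. A minimal sketch, with invented entities:

```python
def precision_recall(extracted, gold):
    """Set-based precision and recall for extracted entities."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)  # true positives
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

extracted = {"John Smith", "cocaine", "Main St"}
gold = {"John Smith", "cocaine", "heroin", "Elm St"}
p, r = precision_recall(extracted, gold)  # p = 2/3, r = 0.5
```

Per-type scores (person names vs. addresses, as in the paper's results) come from running the same computation restricted to entities of one type.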
APA, Harvard, Vancouver, ISO, and other citation styles
34

Chen, Weifeng. „An efficient and privacy -preserving framework for information dissemination among independent entities“. 2006. https://scholarworks.umass.edu/dissertations/AAI3242342.

Full text of the source
Annotation:
Information dissemination is the very reason for the existence of the Internet. Within the community of independent entities that make up the Internet, the quality of openness that has contributed to scalability and connectivity has also introduced numerous security and privacy challenges. This is particularly the case when sensitive information is distributed among entities that do not have pre-existing trust relationships. In this thesis, we concentrate on several important problems that arise in constructing a framework for information transmission within an open environment while providing privacy. We first consider the procedure of establishing mutual trust by exchanging digital credentials, a process referred to as trust negotiation. Different from other existing work that focuses on how to establish trust safely and completely, we investigate the problem of minimizing the amount of credential information that is exchanged during a trust-negotiation process. We prove the NP-hardness of this minimization problem, and propose and evaluate efficient heuristic algorithms that are still safe and complete. We next investigate how to distribute information with a minimum cost among entities that have established trust relationships. Specifically, we study this minimization problem in a so-called publish/subscribe system. Publish/subscribe (pub/sub) is an emerging paradigm for information dissemination in which information published by publishers and interests submitted by subscribers are sent to the pub/sub system. The pub/sub system then matches events and interests and delivers to each user those events that satisfy that user's declared interests. We consider cases where information dissemination is restricted by policy constraints (e.g., due to security or confidentiality concerns), and where information can be combined at so-called brokers in the network, a process known as composition. Unsurprisingly, the minimization problem is shown to be NP-complete. 
We then propose and compare different approximation approaches, showing that the proposed heuristics find good solutions over a range of problem configurations, especially in a policy-constrained system. We then examine the problem of protecting private information in a stream processing system. We propose a Multi-Set Attribute (MSA) model to address the need for formal evaluation and verification of the privacy and policy constraints that must be met by the system. The MSA model is designed to protect personally identifiable information under real-time requirements and in the presence of untrustworthy processing elements. Under an MSA model, data requests not compliant with privacy policy constraints are denied. This binary-decision (i.e., either allowance or denial) model can be too rigid in practice and fail to balance data privacy and utility. To quantify the trade-off between privacy and utility, we propose a privacy model based on identifiability risk computation that estimates the risk of data access and allows those requesting access to decide whether the risk is justified. We present the definition and calculation of the identifiability risk and further illustrate our approach using data published by the U.S. Census Bureau.
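Since the credential-minimization problem above is NP-hard, heuristics are used in practice. A classic greedy set-cover heuristic conveys the general flavor of such approaches; the credentials and requirements below are invented, and this is an illustration, not the thesis's actual algorithm:

```python
def greedy_cover(requirements, credentials):
    """Pick credentials greedily until every requirement is satisfied.

    credentials: dict mapping credential name -> set of requirements
    that disclosing the credential satisfies.
    """
    uncovered = set(requirements)
    chosen = []
    while uncovered:
        # Pick the credential covering the most still-unsatisfied needs.
        best = max(credentials, key=lambda c: len(credentials[c] & uncovered))
        if not credentials[best] & uncovered:
            raise ValueError("requirements cannot be satisfied")
        chosen.append(best)
        uncovered -= credentials[best]
    return chosen

creds = {
    "employee_id": {"identity", "affiliation"},
    "passport": {"identity"},
    "security_clearance": {"clearance"},
}
plan = greedy_cover({"identity", "affiliation", "clearance"}, creds)
# → ["employee_id", "security_clearance"]: two disclosures instead of three
```

Like all greedy set-cover variants, this gives a logarithmic approximation guarantee rather than an optimum, which is the usual trade-off for NP-hard disclosure-minimization problems.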
APA, Harvard, Vancouver, ISO, and other citation styles
35

Nwebonyi, Francis Nwebonyi. „Establishing Trust and Confidence Among Entities in Distributed Networks“. Doctoral thesis, 2020. https://hdl.handle.net/10216/127994.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
36

Nwebonyi, Francis Nwebonyi. „Establishing Trust and Confidence Among Entities in Distributed Networks“. Tese, 2020. https://hdl.handle.net/10216/127994.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
37

Chang, Yuan-Jui, und 張原睿. „Related Factors Influencing Information Sharing Within Corporate Entities and Their Effects – A Case Study of Company A“. Thesis, 2011. http://ndltd.ncl.edu.tw/handle/15293790727235507064.

Full text of the source
Annotation:
Master's
National Yunlin University of Science and Technology
Master's Program, Department of Information Management
99
This research investigated the processes of constructing a knowledge base within corporations and how information sharing relates to and is affected by those processes. The findings provide an understanding for corporations interested in incorporating information sharing into their information management, and insight into the structural interfaces with the most significant influence on information sharing between corporate workers. A questionnaire survey was conducted among department workers who have contributed significantly to the company. A total of 139 questionnaires were distributed and 108 retrieved, yielding 104 effective samples and 4 invalid ones. Data were analyzed with SPSS 12 and AMOS 7 for descriptive statistics, factor analysis, item-total correlation, internal consistency tests, Pearson's product-moment correlation, and regression analysis. The dimensions of "information technology infrastructure", "collaborative culture", "externalizing integration", and "internalizing society" were tested using structural equation models. The results revealed relationships with factors of information sharing such as "participatory interaction" and "technological communication". This research found that "externalizing integration" and "internalizing society" have a significant influence on the "participatory interaction" and "technological communication" factors of information sharing. The effects of "information technology infrastructure" and "collaborative culture" are less significant, though still positive. The findings allow companies to understand that, in the process of establishing a knowledge base, "externalizing integration" and "internalizing society" have the main influence on information sharing. Keywords: information sharing, establishing a corporate knowledge base, creating information
APA, Harvard, Vancouver, ISO, and other citation styles
38

„Understanding the Importance of Entities and Roles in Natural Language Inference : A Model and Datasets“. Master's thesis, 2019. http://hdl.handle.net/2286/R.I.54921.

Full text of the source
Annotation:
In this thesis, I present two new datasets and a modification to the existing models in the form of a novel attention mechanism for Natural Language Inference (NLI). The new datasets have been carefully synthesized from various existing corpora released for different tasks. The task of NLI is to determine the possibility of a sentence referred to as the “Hypothesis” being true given that another sentence referred to as the “Premise” is true. In other words, the task is to identify whether the “Premise” entails, contradicts, or remains neutral with regard to the “Hypothesis”. NLI is a precursor to solving many Natural Language Processing (NLP) tasks such as Question Answering and Semantic Search. For example, in Question Answering systems, the question is paraphrased to form a declarative statement which is treated as the hypothesis. The options are treated as the premise. The option with the maximum entailment score is considered the answer. Considering the applications of NLI, the importance of having a strong NLI system can’t be stressed enough. Many large-scale datasets and models have been released in order to advance the field of NLI. While all of these models do get good accuracy on the test sets of the datasets they were trained on, they fail to capture the basic understanding of “Entities” and “Roles”. They often make the mistake of inferring “John went to the market.” from “Peter went to the market.”, failing to capture the notion of “Entities”. In other cases, these models don’t understand the difference in the “Roles” played by the same entities in the “Premise” and “Hypothesis” sentences, and end up wrongly inferring “Peter drove John to the stadium.” from “John drove Peter to the stadium.” The lack of understanding of “Roles” can be attributed to the lack of such examples in the various existing datasets.
The existing models’ failure to capture the notion of “Entities” is not due solely to the lack of such examples in existing NLI datasets; it can also be attributed to the strict use of vector similarity in the “word-to-word” attention mechanism of existing architectures. To overcome these issues, I present two new datasets to help NLI systems capture the notions of “Entities” and “Roles”. The “NER Changed” (NC) dataset and the “Role-Switched” (RS) dataset contain examples of Premise-Hypothesis pairs that require an understanding of “Entities” and “Roles”, respectively, in order to make correct inferences. This work shows how the existing architectures perform poorly on the “NER Changed” (NC) dataset even after being trained on the new datasets. To help existing architectures understand the notion of “Entities”, this work proposes a modification to the “word-to-word” attention mechanism: instead of relying on vector similarity alone, the modified architectures learn to incorporate “Symbolic Similarity” as well, using the Named-Entity features of the Premise and Hypothesis sentences. The modified architectures not only perform significantly better than the unmodified ones on the “NER Changed” (NC) dataset but also perform as well on the existing datasets.
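As a purely illustrative sketch of the kind of mechanism the abstract describes (the function name, the ±1 scoring scheme and the weight `alpha` are invented here, not taken from the thesis), a word-to-word attention score can combine cosine similarity with a symbolic named-entity match signal:

```python
import numpy as np

def attention_scores(premise_vecs, hyp_vecs, premise_ents, hyp_ents, alpha=1.0):
    """Word-to-word attention combining vector similarity with a
    symbolic named-entity match signal (hypothetical illustration)."""
    # Cosine similarity between every premise/hypothesis token pair.
    p = premise_vecs / np.linalg.norm(premise_vecs, axis=1, keepdims=True)
    h = hyp_vecs / np.linalg.norm(hyp_vecs, axis=1, keepdims=True)
    sim = p @ h.T
    # Symbolic similarity: +1 when both tokens are named entities with
    # the same surface form, -1 when both are entities but differ.
    sym = np.zeros_like(sim)
    for i, pe in enumerate(premise_ents):
        for j, he in enumerate(hyp_ents):
            if pe is not None and he is not None:
                sym[i, j] = 1.0 if pe == he else -1.0
    return sim + alpha * sym
```

Under this scheme, aligned tokens such as “John”/“Peter” are penalized despite similar embeddings, while “John”/“John” are boosted, which is the behaviour the NC dataset is meant to test.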
Dissertation/Thesis
Masters Thesis Computer Science 2019
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Rafferty, Kevin. „An investigation of the response of entities in the South African JSE ICT sector in 2005 to environmental sustainability report /“. 2006. http://eprints.ru.ac.za/922/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Singh, Shamila. „The determinants of board decision quality in South Africa : a case of public entities“. Thesis, 2014. http://hdl.handle.net/10500/18233.

Der volle Inhalt der Quelle
Annotation:
Effective corporate governance of boards can become a sustainable competitive advantage for organisations. In the extant literature a number of reasons are cited for dysfunctional boards. Some of the reasons attributed to board failure relate to poor corporate governance practice and oversight; others pertain to micromanaging of the organisation, an ineffective nominating committee, the size of the board, a non-functioning committee structure, the absence of a strategic plan, the absence of an orientation/induction plan and the absence of a rotation plan. Poor governance practices across all sectors have negatively tainted economic investment in South Africa, consequently affecting economic growth. South Africa’s competitiveness rating slipped from 52nd in 2012-2013 to 53rd in 2013-2014, showing that marked improvement is needed in corporate governance. South Africa’s position in the Corruption Perceptions Index was 43 in 2012 and slipped to 69 among 176 countries in 2013. The trend analysis report of the Public Service Commission reported that in 2006/7 there were 1 042 cases of corruption, amounting to R130.6 million; in 2007/8, 868 cases, amounting to R21.7 million; in 2008/9, 1 204 cases, amounting to R100.1 million; in 2009/10, 1 135 cases, amounting to R346.5 million; in 2010/11, 1 035 cases, amounting to R932.3 million; and in 2011/12, 1 243 cases, amounting to R229.9 million. Good governance frameworks, policies, procedures, processes and practices attract foreign direct investment. Better governance practices are critical for improved economic growth and development, which will result in an improvement in South Africa’s competitiveness and corruption perception index ratings. South Africa’s continued economic growth and development depend on attracting foreign direct investment. From 1994 onwards, corporate governance regimes were promulgated.
Although a collection of corporate governance codes and guidelines has been published, few specifically cover governance practices in public entities. Moreover, with better governance practices, state-owned enterprises can significantly contribute to economic transformation and development in South Africa. The purpose of the study is to establish that improved governance is a function of board structure and board process variables. In the presence of these structural and process variables, board activism will improve, resulting in better board decision quality. Board decision quality is influenced by independent directors with no conflict of interest, the requisite industry expertise and intelligence (functional area knowledge), information that is adequate, accurate and timely (information quality), directors who exert the needed effort (effort norms), directors who robustly explore all dimensions and options (cognitive conflict), and a board that functions optimally (cohesiveness). Optimally configured boards are able to execute their fiduciary responsibility effectively. In 2012 a budget of R845.5 billion was provisioned for infrastructural development to boost economic development. This budget allocation must be prudently and frugally managed in accordance with good governance practices to achieve economic development. In particular, South Africa has to improve its competitiveness rating and corruption perception index to attract investment and continual growth. In terms of the research design, a mixed research approach was selected to address the research questions. The phenomenological (qualitative) and positivist (quantitative) philosophical paradigms were adopted in order to obtain a greater understanding of board decision quality in public entities in South Africa. The data collection instruments used in the study were in-depth interviews, focus group interviews and the administration of a survey.
The qualitative research comprised 19 in-depth interviews and two focus group interviews. For the quantitative study, a population of 215 public entity board members was selected, of whom 104 completed the survey. For the qualitative study, Tesch’s coding and thematic analysis were used to analyse the in-depth and focus group interviews. For the quantitative study, SPSS was used to analyse the survey responses, and the hypotheses were tested using inferential statistics, namely factor analysis and multiple regression. The findings from the first phase, the qualitative study, provided support for the positive relationship between board structure, board process variables and board decision quality. The following five variables are incorporated in a model that seeks to identify the strongest predictor of board decision quality: (1) board independence, (2) effort norms, (3) functional area knowledge and skill, (4) cognitive conflict and (5) information quality. The findings show that information quality is the strongest predictor of board decision quality, followed by expert knowledge and skill. As expected, expert knowledge does not only increase the cognitive capacity of the board; it also positively affects company competitiveness. The findings also show that cognitive conflict has a negative association with decision quality. The study argues that political influence exerted by board political appointees may explain this negative relationship. The major contribution of this study is a 28-item instrument that public entity boards can use in a reflective process to improve board decision quality. The study concludes by offering avenues for further research.
The model suggests that board decision quality is a product of board structure (board independence) and board process (functional area knowledge, information quality, cognitive conflict and effort norms).
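The regression step underlying the model can be sketched as follows, on synthetic data only: the predictors, coefficients and noise below are invented to mimic the reported pattern (information quality strongest, cognitive conflict negative) and are not the study’s actual data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 104  # matches the study's survey sample size

# Synthetic standardized predictors (invented, for illustration only):
# board independence, effort norms, functional knowledge,
# cognitive conflict, information quality.
X = rng.standard_normal((n, 5))

# Simulated coefficients reproducing the reported pattern.
beta_true = np.array([0.2, 0.25, 0.35, -0.15, 0.45])
y = X @ beta_true + 0.3 * rng.standard_normal(n)

# Ordinary least-squares fit with an intercept column.
Xd = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
coefs = beta_hat[1:]  # slope estimates for the five predictors
```

On data like this, the largest estimated coefficient falls on information quality and the cognitive-conflict coefficient comes out negative, mirroring the study’s findings.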
Business Management
D.B.L.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Hsieh, Shang-Wei, und 謝尚偉. „An Information Retrieval System for Fast Exploration of Proprietary Experimental Data via Searching and Mining the Biomedical Entities in Related Public Literatures“. Thesis, 2014. http://ndltd.ncl.edu.tw/handle/48936134422468587956.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Taiwan University
Graduate Institute of Engineering Science and Ocean Engineering
Academic year 102 (2013)
At the beginning of biomedical research work, mapping researchers’ proprietary experimental data to public research literature is an important task. In this paper, a search engine is proposed to efficiently retrieve large-scale biomedical literature collected from PubMed. Moreover, we apply a named entity recognition tool, a text-mining technique, to extract protein names from the biomedical literature. The protein names are then normalized to IDs which can be linked to the researchers’ proprietary experiment databases, and web techniques automatically plot charts for the relevant proprietary data. Through these processes, researchers can efficiently see the relevance between their proprietary data and the public papers, which also helps them find more related research.
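The normalization-and-linking step described above can be sketched as a simple dictionary lookup; the protein names, IDs and table layout here are illustrative stand-ins for the thesis’s actual NER tool and proprietary databases.

```python
# Hypothetical mapping from protein mentions to canonical IDs
# (UniProt-style accessions, used here purely for illustration).
PROTEIN_IDS = {
    "p53": "P04637",
    "tp53": "P04637",
    "brca1": "P38398",
}

def normalize_mentions(mentions):
    """Map raw protein mentions to canonical IDs, dropping unknowns."""
    ids = set()
    for m in mentions:
        pid = PROTEIN_IDS.get(m.lower())
        if pid is not None:
            ids.add(pid)
    return sorted(ids)

def link_to_experiments(ids, experiment_table):
    """Look up each recognized ID in a proprietary experiment table."""
    return {pid: experiment_table[pid] for pid in ids if pid in experiment_table}
```

The normalized IDs act as the join key between the public literature and the researcher’s own measurements, which is what makes the automatic chart plotting possible.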
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Martins, Teresa Maria da Cunha Soares. „Good governance practices and information disclosure in portuguese public enterprise entity hospitals“. Master's thesis, 2014. http://hdl.handle.net/1822/28007.

Der volle Inhalt der Quelle
Annotation:
Master's dissertation in Accounting
Health rendering entities are fundamental in every country and encompass an important share of every state’s economy. The theoretical revolution brought about by New Public Management and Public Governance studies has led governments to endow publicly owned entities with mechanisms of accountability, among others through mandatory information disclosure. In Portugal, in keeping with international trends, the movement towards better governance followed a path of institutional pressure originating in legal provisions of mandatory abidance. Over the last 30 years, successive Portuguese governments have implemented changes in State-owned entities in general, and in public enterprise entity hospitals in particular, aiming to pursue best practices of good governance. This study traces the evolution of New Public Management and Public Governance in order to frame the Portuguese adoption of good governance principles in State-owned entities, and lays out the legislation issued by Portuguese governments regarding health rendering services and their governance practices. Through multiple case studies, ten hospitals’ annual reports were analysed regarding the disclosure of good governance principles over a timeline of six years (2006-2011), with the aim of understanding the drivers of change in information disclosure behaviour in the National Health Service in the light of institutional theory combined with Oliver’s (1991) model of strategic responses to institutional pressures. The study demonstrates that the adoption of the disclosure requirements was progressive and that most of the entities seem to have adopted an avoidance strategy, feigning compliance with the legal requirements, in the light of Oliver’s model, rather than fully complying.
The strategic response adopted allows us to conclude that entities appear to be more concerned with appearing to fulfil legal demands than with actually meeting them, in what can be described as ceremonial compliance.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Vellucci, Sherry L. „Bibliographic relationships among musical bibliographic entities: a conceptual analysis of music represented in a library catalog with a taxonomy of the relationships discovered“. 1995. http://catalog.hathitrust.org/api/volumes/oclc/35775014.html.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph. D.)--Columbia University, 1995.
"95-22916." Description based on print version record. Includes bibliographical references (leaves 328-332).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Obert, Radim. „Povinné subjekty podle zákona o svobodném přístupu k informacím“. Master's thesis, 2011. http://www.nusl.cz/ntk/nusl-313524.

Der volle Inhalt der Quelle
Annotation:
The presented thesis, "Obliged Entities according to the Act on Free Access to Information", deals with obliged entities under the current legislation, with the legislative development and with their current specification in the Act on Free Access to Information. The thesis offers a comprehensive view of current problems arising from practice, primarily from the point of view of legal science and specialized literature. The field of obliged entities has recently been a frequent subject of the decision-making practice of the constitutional and administrative courts, which have expanded the number of obliged entities through their case law. The author attempts to present his own solutions to the problems related to the current legal regulation. Obliged entities are those which have an obligation to provide information related to their activities in compliance with the Act on Free Access to Information. The Act enumerates four circles of obliged entities: state authorities, communal authorities and their bodies, public institutions, and subjects entrusted by law with deciding on the legal matters, legally protected interests or duties of natural persons or legal entities in the area of public administration,...
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Engel, Hugues. „Dislocation et référence aux entités en français L2: Développement, interaction, variation“. PhD thesis, 2010. http://tel.archives-ouvertes.fr/tel-00495686.

Der volle Inhalt der Quelle
Annotation:
This thesis investigates the use and development of dislocations in oral productions by Swedish users of French as a second language (L2). Dislocations are highly frequent in French oral speech and play an essential role in building utterances. L2 users of French must therefore acquire the grammatical means necessary to build this structure as well as the pragmatic principles underlying its use. The study is empirical, and based on a corpus of oral productions from a wide range of non-native speakers (NNS), from beginners studying at university to L2 users who have spent many years in France. The analysis also includes oral productions from a control group of native speakers (NS). The aim is to identify a path of development by which the different forms and functions of dislocations are acquired. Furthermore, the study examines the influence of tasks on the use of dislocations, by analysing two tasks which place very different demands on the informants in terms of cognitive effort, namely interviews and retellings. The analysis focuses on two main kinds of dislocations: on the one hand, [moi je VP] (and its syntactical variants); on the other hand, dislocations referring to third entities (such as [NP il VP] and [NP c'est X]). The results show that both kinds go through a process of development in French L2. However, French learners seem to master the lexical dislocations referring to third entities as well as their pragmatic rules of use from the first stages of acquisition, yet with deviances in some cases. On the other hand, the frequency of use of [moi je VP] and its syntactical variants correlates highly with the level of development of the NNS. Moreover, there is a significantly greater frequency of dislocations in the NNS retelling tasks than in their interviews. In the NS group, the frequency of use remains comparable in both tasks. 
This difference between NS and NNS is probably due to the additional cognitive load that retellings demand compared with interviews, e.g. recalling the succession of events and solving the lexical problems posed by the story to be retold. It is proposed that this additional load may trigger, as a compensation strategy, an increase in the frequency of dislocations in NNS speech.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Chang, Shu-Na, und 張淑那. „The impacts of information transmission and information formation on role clearing, employee’s satisfaction and organizational commitment – An evidence from life insurance companies in Taiwan“. Thesis, 2011. http://ndltd.ncl.edu.tw/handle/68505640064283191163.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
Chaoyang University of Technology
Master's Program, Department of Insurance and Financial Management
Academic year 99 (2010)
This study examines organizational commitment, employee satisfaction and role clarity in domestic life insurance companies in Taiwan. Drawing on Parasuraman et al. (1988), it measures organizational commitment through affective commitment, continuance commitment, normative commitment, empathy and tangibles. The analysis used the SPSS 18.0 statistical package, applying descriptive statistics, reliability analysis, analysis of variance and multiple regression. The main results are as follows. First, information formation and knowledge transfer in life insurance companies affect organizational commitment, satisfaction and role clarity. Second, internal information formation and knowledge transfer differ significantly across life insurance companies. Third, organizational commitment, satisfaction and role clarity also differ significantly across life insurance companies. Keywords: organizational commitment, employee satisfaction, role clarity, information formation, knowledge transfer
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Pires, Carla Guilhermina. „A Importância da Informação Contabilística, em Contexto de Pandemia, no Processo de Tomada de Decisão: Um estudo sobre Micro e Pequenas Entidades“. Master's thesis, 2020. http://hdl.handle.net/10316/94636.

Der volle Inhalt der Quelle
Annotation:
Internship report for the Master's in Accounting and Finance presented to the Faculdade de Economia
Accounting information is considered an indispensable tool both for the exercise of the activities of owners/managers and for decision-making, since the quality of decisions often depends on the quality of the information provided, regardless of the size of the company. We therefore analyse the importance attributed to accounting information by the owners/managers of micro and small companies in strategic as well as operational decision-making. The importance of accounting information in supporting the management of a company, particularly in the decision-making process, has become increasingly evident. However, the literature presents differing opinions regarding the importance and use of this information in that process. This investigation also intends to identify possible factors which, according to the literature, can influence the importance and use of accounting information in decision-making. Additionally, it assesses the role of the accountant and of accounting services in a pandemic context, namely in applications for the extraordinary support for the maintenance of employment contracts (simplified lay-off). The methodology used in this study was based on a questionnaire survey aimed at owners/managers of Portuguese micro and small entities. The results indicate that accounting information is used by most owners/managers of micro and small entities, who consider it an extremely important resource for decision-making. It is also concluded that the level of education of the owner/manager has a statistically significant relationship with the use of accounting information. Finally, the accountant plays an extremely important role in a pandemic context with regard to applications for the simplified lay-off.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Зайцев, Дмитро Сергійович. „Інформаційна безпека як складова національної безпеки України в умовах глобалізаційних правових процесів“. Master's thesis, 2020. https://dspace.znu.edu.ua/jspui/handle/12345/2821.

Der volle Inhalt der Quelle
Annotation:
Zaitsev D. S. Information security as a component of the national security of Ukraine in the context of globalizing legal processes: master's qualification work, speciality 081 "Law" / supervisor H. S. Zhuravlova. Zaporizhzhia: ZNU, 2020. 121 pp.
The qualification work consists of 121 pages; the list of references contains 98 sources. In the context of Ukraine's European integration, and taking into account the construction of the information society, the issues of the legal provision of information security, that is, the prevention and elimination, by various means and methods, of threats to the person, society and the state in the information sphere, become ever more urgent. However, in today's multifaceted and dynamic world, information security problems are taking on fundamentally new features; they now go far beyond the prevention of wars and armed conflicts. Today they have become their foundation, primary source, main resource and main weapon. In Ukrainian realities, these problems have become apparent. In such circumstances, understanding the totality of information processes in the context of the legal security of Ukraine becomes a priority. In view of this, it is natural to search for the fundamental values, goals, interests and assets in the information sphere which will serve as guides for Ukraine's European perspective and the future development of Ukrainian society based on information technologies, and which will lay the foundation for the further development of information law in Ukraine. In this context, the role of legal mechanisms in ensuring the information security of Ukraine naturally grows, as does the need for effective legislative regulation of the public relations arising in this sphere, for defining the legal principles of organizing and coordinating the actions of the subjects ensuring the information security of Ukraine, and for developing priority directions of state policy in the field of information security. From a legal point of view, the study of the information security of Ukraine is connected with the formation of a high-quality information security system that will meet the modern requirements and urgent needs of Ukraine as a full member and reliable partner of the European community.
Having chosen a European integration course and identified NATO accession as its strategic priority, Ukraine should focus primarily on the development strategy of the EU and NATO member states in the information field. For our country, the implementation of European standards of legal support for the information security of the state is a priority means of integration into the European legal space. The object of the qualification work is the public relations that arise in the field of information security. The subject of research is information security as a component of the national security of Ukraine in the context of globalizing legal processes. The purpose of the work is to determine, on the basis of a comprehensive analysis of available scientific and regulatory sources, the content of information security as a component of the national security of Ukraine.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Zaghouani, Wajdi. „Le repérage automatique des entités nommées dans la langue arabe : vers la création d'un système à base de règles“. Thèse, 2009. http://hdl.handle.net/1866/7933.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Fernandes, Ana Rita da Silva. „Os crimes contra o mercado de valores mobiliários: O abuso de informação privilegiada e a manipulação de mercado“. Master's thesis, 2020. http://hdl.handle.net/10071/21474.

Der volle Inhalt der Quelle
Annotation:
Nowadays, market abuse consists of two conducts: insider trading and market manipulation. On the one hand, we speak of insider trading when one or more market agents carry out transactions based on information they have obtained that is not generally known to investors, which puts them in a privileged position relative to the others. On the other hand, market manipulation occurs whenever false or misleading information is disclosed with the intention of misleading investors and thereby altering the normal functioning of the market. Both conducts can be extremely damaging to the market. For this reason, market participants are required to disclose specific information. In order to control that information, as well as to detect signs of crime, supervisory entities were created, whose main objective is to make the market transparent and trustworthy for investors.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
