Dissertations / Theses on the topic 'Domain specific knowledge graph'

Consult the top 50 dissertations / theses for your research on the topic 'Domain specific knowledge graph.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Lalithsena, Sarasi. "Domain-specific Knowledge Extraction from the Web of Data." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1527202092744638.

2

PORRINI, RICCARDO. "Construction and Maintenance of Domain Specific Knowledge Graphs for Web Data Integration." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2016. http://hdl.handle.net/10281/126789.

Abstract:
A Knowledge Graph (KG) is a semantically organized, machine-readable collection of types, entities, and relations holding between them. A KG helps mitigate semantic heterogeneity in scenarios that require the integration of data from independent sources into a so-called dataspace, realized through the establishment of mappings between the sources and the KG. Applications built on top of a dataspace provide advanced data access features to end-users based on the representation provided by the KG, obtained through the enrichment of the KG with domain-specific facets. A facet is a specialized type of relation that models a salient characteristic of entities of particular domains (e.g., the vintage of wines) from an end-user perspective. In order to enrich a KG with a salient and meaningful representation of data, domain experts in charge of maintaining the dataspace must possess extensive knowledge about disparate domains (e.g., from wines to football players). From an end-user perspective, the difficulties in defining domain-specific facets for dataspaces significantly reduce the quality of data access features and thus the ability to fulfill the information needs of end-users. Remarkably, this problem has not been adequately studied in the literature, which mostly focuses on enriching the KG with a generalist, coverage-oriented, and not domain-specific representation of the data occurring in the dataspace. Motivated by this challenge, this dissertation introduces automatic techniques to support domain experts in enriching a KG with facets that provide a domain-specific representation of data. Since facets are a specialized type of relation, the techniques proposed in this dissertation aim at extracting salient domain-specific relations. The fundamental components of a dataspace, namely the KG and the mappings between sources and KG elements, are leveraged to elicit such a domain-specific representation from specialized data sources of the dataspace, and to support domain experts with valuable information for the supervision of the process. Facets are extracted by leveraging already established mappings between specialized sources and the KG. After extraction, a domain-specific interpretation of facets is provided by re-using relations already defined in the KG, to ensure tight integration of data. This dissertation also introduces a framework to profile the status of the KG, to support the supervision of domain experts in the above tasks. Altogether, the contributions presented in this dissertation provide a set of automatic techniques to support domain experts in the evolution of the KG of a dataspace towards a domain-specific, end-user oriented representation. Such techniques analyze and exploit the fundamental components of a dataspace (KG, mappings, and source data) with an effectiveness not achievable with state-of-the-art approaches, as shown by extensive evaluations conducted in both synthetic and real-world scenarios.
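The facet-extraction idea above, proposing domain-specific facet candidates from the mappings already established between specialized sources and the KG, can be sketched in a few lines. Everything here (the mapping layout, the toy wine data, the coverage-times-diversity score) is a hypothetical illustration of the general approach, not the dissertation's actual algorithm.

```python
from collections import defaultdict

# Hypothetical mini-dataspace: mappings say which source attribute maps to
# which KG relation; each source row describes one entity mapped to a KG type.
mappings = {("wine_shop", "vintage"): "hasVintage",
            ("wine_shop", "grape"):   "madeFromGrape"}

source_rows = [
    {"vintage": "2015", "grape": "Merlot"},
    {"vintage": "2016", "grape": "Syrah"},
    {"vintage": "2015", "grape": None},
]

def candidate_facets(source, rows, mappings):
    """Rank KG relations as facet candidates for one source.

    Score = coverage (share of entities with a value) times value diversity,
    a made-up heuristic standing in for the dissertation's salience measures.
    """
    scores = {}
    for (src, attr), relation in mappings.items():
        if src != source:
            continue
        values = [r[attr] for r in rows if r.get(attr) is not None]
        coverage = len(values) / len(rows)
        diversity = len(set(values)) / max(len(values), 1)
        scores[relation] = coverage * diversity
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(candidate_facets("wine_shop", source_rows, mappings))
```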
3

Jen, Chun-Heng. "Exploring Construction of a Company Domain-Specific Knowledge Graph from Financial Texts Using Hybrid Information Extraction." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291107.

Abstract:
Companies do not exist in isolation. They are embedded in structural relationships with each other. Mapping a given company's relationships with other companies in terms of competitors, subsidiaries, suppliers, and customers is key to understanding a company's major risk factors and opportunities. Conventionally, obtaining and staying up to date with this key knowledge required highly skilled professionals, such as financial analysts, to read financial news and reports. However, with the development of Natural Language Processing (NLP) and graph databases, it is now possible to systematically extract and store structured information from unstructured data sources. The current go-to method for effectively extracting information uses supervised machine learning models, which require a large amount of labeled training data. The data labeling process is usually time-consuming, and labeled data are hard to obtain in a domain-specific area. This project explores an approach to constructing a company domain-specific Knowledge Graph (KG) that contains company-related entities and relationships from U.S. Securities and Exchange Commission (SEC) 10-K filings by combining a pre-trained general NLP model with rule-based patterns in Named Entity Recognition (NER) and Relation Extraction (RE). This approach eliminates the time-consuming data-labeling task of the statistical approach, and, evaluated on ten 10-K filings, the model achieves an overall recall of 53.6%, precision of 75.7%, and F1-score of 62.8%. The result shows it is possible to extract company information using the hybrid method, which does not require a large amount of labeled training data. However, the project requires a time-consuming process of finding lexical patterns in sentences to extract company-related entities and relationships.
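As an illustration of the hybrid NER/RE approach described above, the sketch below combines a pre-trained spaCy pipeline with one hand-written lexical pattern. The pattern and the sample sentence are invented for illustration; the thesis derives its patterns from actual 10-K sentences, and the small English model (en_core_web_sm) is assumed to be installed.

```python
import spacy
from spacy.matcher import Matcher

# Pre-trained general pipeline for NER, plus a rule-based relation pattern.
nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
matcher = Matcher(nlp.vocab)

# "<ORG> , a subsidiary of <ORG>" -- one lexical pattern for a SUBSIDIARY_OF relation
matcher.add("SUBSIDIARY_OF", [[
    {"ENT_TYPE": "ORG", "OP": "+"},
    {"TEXT": ","}, {"LOWER": "a"}, {"LOWER": "subsidiary"}, {"LOWER": "of"},
    {"ENT_TYPE": "ORG", "OP": "+"},
]])

def extract_triples(text):
    doc = nlp(text)
    triples = []
    for match_id, start, end in matcher(doc):
        span = doc[start:end]
        orgs = [e.text for e in span.ents if e.label_ == "ORG"]
        if len(orgs) >= 2:
            triples.append((orgs[0], "subsidiary_of", orgs[1]))
    return triples

# Invented example sentence; output depends on the model tagging both ORGs.
print(extract_triples("Instagram, a subsidiary of Meta Platforms, reported growth."))
```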
4

Kerzhner, Aleksandr A. "Using domain specific languages to capture design knowledge for model-based systems engineering." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28249.

5

Dubé, Denis 1981. "Graph layout for domain-specific modeling." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97943.

Abstract:
The aim of this thesis is to investigate automatic graph layout in the context of domain-specific modeling. Inherent in the nature of domain-specific modeling is the creation of new formalisms to solve the current problem as well as the combined use of multiple formalisms. Unfortunately, graph layout algorithms tend to be formalism-specific, thus limiting their applicability.
As a starting point, all major graph drawing techniques and many of their variants are summarized from the literature. Thereafter, several of these graph drawing techniques are chosen and implemented in AToM3, A Tool for Multi-formalism and Meta-Modeling.
A new means of specifying formalism-specific user-interface behaviour is then described. By fully modeling the reactive behaviour of a formalism-specific modeling environment, including layout, existing graph drawing algorithms can be re-used without modification. The DCharts formalism is modeled to demonstrate the effectiveness of this approach.
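The point about re-using existing, formalism-agnostic drawing algorithms can be illustrated with a small sketch: the modeling environment picks a generic layout algorithm per formalism. The formalism-to-layout mapping below is a hypothetical stand-in (using networkx, not AToM3).

```python
import networkx as nx

# Sketch: layout algorithms are generic, so a modeling tool can select one
# per formalism without modifying the algorithms themselves. The mapping
# below is an invented illustration.
LAYOUTS = {
    "statechart":   nx.kamada_kawai_layout,  # distance-based placement
    "causal_block": nx.spring_layout,        # force-directed
    "petri_net":    nx.circular_layout,
}

def layout_model(graph, formalism):
    algorithm = LAYOUTS.get(formalism, nx.spring_layout)
    return algorithm(graph)  # node -> (x, y) positions

g = nx.cycle_graph(6)
print(layout_model(g, "petri_net"))
```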
6

Yoon, Changwoo. "Domain-specific knowledge-based informational retrieval model using knowledge reduction." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011560.

7

Eryarsoy, Enes. "Using domain-specific knowledge in support vector machines." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011358.

8

Kelemen, Deborah Ann 1967. "The effects of domain-specific knowledge on similarity judgements." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/278269.

Abstract:
The study contrasts natural kinds versus artifacts in order to assess the impact of domain-specific knowledge on adult subjects' strategies in a perceptual classification task. Subjects' classifications show differential weighting of perceptual dimensions as a consequence of background context. In addition, subjects display a tendency to reject identity within a specific dimension when such a non-identity-based strategy permits the creation of a theoretically cohesive category. This provides evidence against the view that identity possesses an inherent value in classification and supports the alternative view that background knowledge determines the degree to which identity is valued and the manner in which categories are constructed.
9

Yang, Mengjiao. "Cache and NUMA optimizations in a domain-specific language for graph processing." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119915.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 63-67).
High-performance graph processing is challenging because the sizes and structures of real-world graphs can vary widely. Graph algorithms also have distinct performance characteristics that lead to different performance bottlenecks. Even though memory technologies such as CPU cache and non-uniform memory access (NUMA) have been designed to improve software performance, the existing graph processing frameworks either do not take advantage of these hardware features or over-complicate the original graph algorithms. In addition, these frameworks do not provide an interface for easily composing and fine-tuning performance optimizations from various levels of the software stack. As a result, they achieve suboptimal performance. The work described in this thesis builds on recent research in developing a domain-specific language (DSL) for graph processing. GraphIt is a DSL designed to provide a comprehensive set of performance optimizations and an interface to combine the best optimization schedules. This work extends the GraphIt DSL to support locality optimizations on modern multisocket multicore machines, while preserving the simplicity of graph algorithms. To our knowledge, this is the first work to support cache and NUMA optimizations in a graph DSL. We show that cache and NUMA optimizations together are able to improve the performance of GraphIt by up to a factor of 3. Combined with all of the optimizations in GraphIt, our performance is up to 4.8x faster than the next fastest existing framework. In addition, algorithms implemented in GraphIt use fewer lines of code than existing frameworks. The work in this thesis supports the design choice of a compiler approach to constructing graph processing systems. The high performance and simplicity of GraphIt justify the separation of concerns (modularity) design principle in computer science, and contribute to the larger effort of agile software systems development.
by Mengjiao Yang.
M. Eng.
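A rough sketch of the cache optimization described above, in the spirit of processing edges in segments of destination vertices small enough to stay cache-resident. This is an illustrative NumPy rendition of the idea only; GraphIt itself generates optimized C++ and exposes such choices through its scheduling language.

```python
import numpy as np

# Process edges in segments of destination vertices so that random writes
# hit a bounded, cache-friendly range of the rank array.
def segmented_pagerank_step(src, dst, rank, out_deg, num_segments=4):
    n = len(rank)
    contrib = rank / np.maximum(out_deg, 1)
    new_rank = np.zeros(n)
    bounds = np.linspace(0, n, num_segments + 1).astype(int)
    for s in range(num_segments):
        lo, hi = bounds[s], bounds[s + 1]
        mask = (dst >= lo) & (dst < hi)          # edges landing in this segment
        np.add.at(new_rank, dst[mask], contrib[src[mask]])
    return 0.15 / n + 0.85 * new_rank

# Tiny invented 4-vertex ring graph as edge arrays (src -> dst).
src = np.array([0, 1, 2, 3]); dst = np.array([1, 2, 3, 0])
rank = np.full(4, 0.25); out_deg = np.ones(4)
print(segmented_pagerank_step(src, dst, rank, out_deg, num_segments=2))
```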
10

Lewenhaupt, Adam, and Emil Brismar. "The impact of corpus choice in domain specific knowledge representation." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-220679.

Abstract:
Recent advances in the machine learning community, driven by larger datasets and novel algorithmic approaches to deep reinforcement learning, reward the use of large datasets. In this thesis, we examine whether dataset size has a significant impact on recall quality in a very specific knowledge domain. We compare a large corpus extracted from Wikipedia to smaller ones from Stack Overflow and evaluate their representational quality for niche computer science knowledge. We show that a smaller dataset with high-quality data points greatly outperforms a larger one, even though the smaller one is a subset of the latter. This implies that corpus choice is highly relevant for NLP applications aimed at complex and specific knowledge representations.
11

Mallede, Wondimagegn Yalew. "Mapping relational databases to semantic web using domain-specific knowledge." Thesis, London Metropolitan University, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.603305.

12

Zhang, Xiaodan Hu Xiaohua. "Exploiting external/domain knowledge to enhance traditional text mining using graph-based methods /." Philadelphia, Pa. : Drexel University, 2009. http://hdl.handle.net/1860/3076.

13

Gee, Eric J. "The Interactive and Combined Effects of Domain-Specific Knowledge and Strategic Knowledge on Reading Comprehension." DigitalCommons@USU, 1997. https://digitalcommons.usu.edu/etd/6099.

Abstract:
The literature on reading comprehension has demonstrated that both domain-specific knowledge and strategic knowledge are vital to good comprehension. However, few studies have actually compared the effects of the two types of knowledge on reading comprehension. Fewer still have examined the effects of combining the two, even though cognitive theories indicate that true comprehension occurs when certain procedures act upon knowledge constructed from the text being read and "link" that knowledge with knowledge in long-term memory. This study compared subjects receiving both strategic knowledge and content knowledge to subjects receiving strategic knowledge only, subjects receiving content knowledge only, and a control group. Subjects were 9- and 10-year-old students in four fourth-grade classrooms. The study used a pretest-posttest quasi-experimental design. Subjects were given the comprehension and verbal subtests of the Stanford Achievement Test. Based on these tests, subjects were identified as high- or low-ability readers. In addition, they were given a comprehension pretest designed by the instructor before intervention began. The intervention took place over a 4-week period and consisted of a different series of lessons presented by an independent instructor. After the intervention, subjects took the posttest. SAT subtest scores and pretest scores were used as covariates in the final analysis. Results showed a decrease in the posttest means and no differences among the four experimental groups. The lack of findings was attributed to several factors, including lack of interest in the reading material on the comprehension tests and the brevity of the intervention.
14

Liu, Haishan. "A Graph-based Approach for Semantic Data Mining." Thesis, University of Oregon, 2012. http://hdl.handle.net/1794/12567.

Abstract:
Data mining is the nontrivial extraction of implicit, previously unknown, and potentially useful information from data. It is widely acknowledged that the role of domain knowledge in the discovery process is essential. However, the synergy between domain knowledge and data mining is still at a rudimentary level. This motivates me to develop a framework for the explicit incorporation of domain knowledge in a data mining system, so that insights can be drawn from both data and domain knowledge. I call such technology "semantic data mining." Recent research in knowledge representation has led to mature standards such as the Web Ontology Language (OWL) by the W3C's Semantic Web initiative. Semantic Web ontologies have become a key technology for knowledge representation and processing. The OWL ontology language is built on the W3C's Resource Description Framework (RDF), which provides a simple model to describe information resources as a graph. On the other hand, there has been a surge of interest in tackling data mining problems where the objects of interest can best be described as a graph of interrelated nodes. I notice that the interface between domain knowledge and data mining can be achieved by using graph representations. Therefore I explore a graph-based approach for modeling both knowledge and data and for analyzing the combined information source from which insight can be drawn systematically. In summary, I make three main contributions in this dissertation to achieve semantic data mining. First, I develop an information integration solution based on metaheuristic optimization for when the data mining task requires accessing heterogeneous data sources. Second, I describe how a graph interface for both domain knowledge and data can be structured by employing the RDF model and its graph representations. Finally, I describe several graph-theoretic analysis approaches for mining the combined information source. I showcase the utility of the proposed methods on finding semantically associated itemsets, a particular case of frequent pattern mining. I believe these contributions in semantic data mining can provide a novel and useful way to incorporate domain knowledge. This dissertation includes published and unpublished coauthored material.
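The graph interface between domain knowledge and data can be made concrete with a small rdflib sketch: ontology and instance data share one RDF graph, and items generalize to ontology classes before association mining. The tiny wine ontology is invented for illustration and is not the dissertation's data.

```python
from rdflib import Graph, Namespace, RDF, RDFS

# One RDF graph holds both the domain knowledge (class hierarchy) and the data
# (instance typing), so "semantic" associations emerge by walking type edges.
EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Merlot, RDF.type, EX.RedWine))
g.add((EX.Syrah, RDF.type, EX.RedWine))
g.add((EX.RedWine, RDFS.subClassOf, EX.Wine))

def generalize(item):
    """Map a transaction item to its ontology classes (one level up)."""
    return set(g.objects(item, RDF.type))

transactions = [[EX.Merlot], [EX.Syrah]]
# At the item level the two transactions share nothing; at the class level
# both contain EX.RedWine -- a semantically associated itemset candidate.
generalized = [set().union(*(generalize(i) for i in t)) for t in transactions]
print(generalized)
```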
15

Hänig, Christian. "Unsupervised Natural Language Processing for Knowledge Extraction from Domain-specific Textual Resources." Doctoral thesis, Universitätsbibliothek Leipzig, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-112706.

Abstract:
This thesis aims to develop a Relation Extraction algorithm to extract knowledge from automotive data. While most approaches to Relation Extraction are evaluated only on newspaper data dealing with general relations from the business world, their applicability to other data sets is not well studied. Part I of this thesis deals with the theoretical foundations of Information Extraction algorithms. Text mining cannot be seen as the simple application of data mining methods to textual data. Instead, sophisticated methods have to be employed to accurately extract knowledge from text, which can then be mined using statistical methods from the field of data mining. Information Extraction itself can be divided into two subtasks: Entity Detection and Relation Extraction. The detection of entities is very domain-dependent due to terminology, abbreviations and general language use within the given domain. Thus, this task has to be solved for each domain employing thesauri or another type of lexicon. Supervised approaches to Named Entity Recognition will not achieve reasonable results unless they have been trained for the given type of data. The task of Relation Extraction can basically be approached by pattern-based and kernel-based algorithms. The latter achieve state-of-the-art results on newspaper data and point out the importance of linguistic features. In order to analyze relations contained in textual data, syntactic features like part-of-speech tags and syntactic parses are essential. Chapter 4 presents machine learning approaches and linguistic foundations essential for syntactic annotation of textual data and Relation Extraction. Chapter 6 analyzes the performance of state-of-the-art algorithms for POS tagging, syntactic parsing and Relation Extraction on automotive data. The findings are: supervised methods trained on newspaper corpora do not achieve accurate results when applied to automotive data. There are various reasons for this. Besides low-quality text, the nature of automotive relations poses the main challenge. Automotive relation types of interest (e.g. component – symptom) are rather arbitrary compared to well-studied relation types like is-a or is-head-of. In order to achieve acceptable results, algorithms have to be trained directly on this kind of data. As the manual annotation of data for each language and data type is too costly and inflexible, unsupervised methods are the ones to rely on. Part II deals with the development of dedicated algorithms for all three essential tasks. Unsupervised POS tagging (Chapter 7) is a well-studied task and algorithms achieving accurate tagging exist. None of them, however, disambiguates high-frequency words; only out-of-lexicon words are disambiguated. Most high-frequency words bear syntactic information and thus it is very important to differentiate between their different functions. Domain languages in particular contain ambiguous and highly frequent words bearing semantic information (e.g. pump). In order to improve POS tagging, an algorithm for disambiguation is developed and used to enhance an existing state-of-the-art tagger. This approach is based on context clustering, which is used to detect a word type's different syntactic functions. Evaluation shows that tagging accuracy is raised significantly. An approach to unsupervised syntactic parsing (Chapter 8) is developed in order to satisfy the requirements of Relation Extraction.
These requirements include high-precision results on nominal and prepositional phrases, as they contain the entities relevant for Relation Extraction. Furthermore, accurate shallow parsing is more desirable than deep binary parsing, as it facilitates Relation Extraction better. Endocentric and exocentric constructions can be distinguished, which improves proper phrase labeling. unsuParse is based on preferred positions of word types within phrases to detect phrase candidates. Iterating the detection of simple phrases successively induces deeper structures. The proposed algorithm fulfills all demanded criteria and achieves competitive results on standard evaluation setups. Syntactic Relation Extraction (Chapter 9) is an approach exploiting syntactic statistics and text characteristics to extract relations between previously annotated entities. The approach is based on entity distributions given in a corpus and thus provides a possibility to extend text mining processes to new data in an unsupervised manner. Evaluation on two different languages and two different text types of the automotive domain shows that it achieves accurate results on repair order data. Results are less accurate on internet data, but the task of sentiment analysis and extraction of the opinion target can be mastered. Thus, the incorporation of internet data is possible and important, as it provides useful insight into the customer's thoughts. To conclude, this thesis presents a complete unsupervised workflow for Relation Extraction – except for the highly domain-dependent Entity Detection task – improving the performance of each of the involved subtasks compared to state-of-the-art approaches. Furthermore, this work applies Natural Language Processing methods and Relation Extraction approaches to real-world data, unveiling challenges that do not occur in high-quality newspaper corpora.
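The context-clustering idea used above for POS disambiguation can be sketched as follows: each occurrence of an ambiguous high-frequency word is represented by its neighbouring words, and occurrences are clustered so that each cluster approximates one syntactic function. The features, the toy data, and the choice of k-means are illustrative assumptions, not the thesis's actual setup.

```python
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

# Each occurrence of "pump" is described by its immediate neighbours; clusters
# then stand in for its different syntactic functions (noun vs. verb use).
sentences = [
    "replace the fuel pump now", "the pump is leaking oil",
    "please pump the brake pedal", "they pump air into the tire",
]
occurrences, vocab = [], {}
for sent in sentences:
    words = sent.split()
    i = words.index("pump")
    ctx = Counter(words[max(0, i - 1):i] + words[i + 1:i + 2])
    occurrences.append(ctx)
    for w in ctx:
        vocab.setdefault(w, len(vocab))

X = np.zeros((len(occurrences), len(vocab)))
for row, ctx in enumerate(occurrences):
    for w, c in ctx.items():
        X[row, vocab[w]] = c

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # ideally separates noun-like from verb-like uses of "pump"
```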
16

Chen, Qin 1962. "Comprehension of science texts : effects of domain-specific knowledge and language proficiency." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=28710.

Abstract:
This study focused on the comprehension and cognitive processing of texts in biology by 36 graduate science students for whom Chinese was their first language (L1) and English their second language (L2). The students were from two disciplines: one group in biology, the other in engineering. These groups were subdivided into a less proficient L2 group (i.e., low-intermediate to intermediate) and a more proficient L2 group (i.e., high-intermediate to high). From the perspective of a stratified model, the study examined L1 and L2 comprehension of general biology texts. Specifically, it investigated the effects of readers' domain-specific knowledge and language proficiency on various levels of discourse processing. It also examined two methodological issues: the effects of the language of recall on processing of semantic and syntactic information from the L2 texts, and the validity of using self-ratings of text difficulty or content familiarity to index background knowledge.
Domain-specific knowledge was found to affect every aspect of comprehension of semantic information that was assessed in the study for both the L1 and the L2 texts. It also affected efficiency of processing for the L2 texts. Language proficiency, on the other hand, consistently affected lower-level processing. However, it appeared to have few concomitant effects on processing of semantic information. These results were consistent with predictions from stratified models of discourse comprehension in which processing of syntactic and semantic information are viewed as being both multilevel and modular. The results of the study also suggest the importance of investigating background knowledge in content-specific terms. Although the science students generally were comparable both in their knowledge of science text structures and in their patterns of comprehension of different types of semantic information, this comparability did not result in comparable comprehension. Rather, comprehension depended heavily on domain-specific knowledge.
With reference to linguistic distance, the results of this study suggest that caution is needed in applying conclusions drawn from studies of speakers of languages of the same Indo-European family to speakers of languages of greater linguistic distance, such as Chinese and English. The lack of production effects observed in this study may be due to differential processing of syntactic information as well as the differential processing strategies that many readers reported having used under different language conditions. Finally, the general discrepancy between perceived text difficulty on the one hand and comprehension and efficiency of processing as assessed by the objective measures on the other suggests caution in using self-ratings of text difficulty or content familiarity to index background knowledge.
17

Winkler, Peter Karsten. "Semantic XML tagging of domain-specific text archives: a knowledge discovery approach." München: Verl. Dr. Hut, 2009. http://d-nb.info/993260276/04.

18

Adindla, Suma. "Navigating the knowledge graph : automatically acquiring and utilising a domain model for intranet search." Thesis, University of Essex, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.653061.

Abstract:
Search is becoming more interactive, and query logs are the most commonly used resource for proposing search suggestions. An alternative to exploiting query logs is to extract a domain model from the actual documents. This is particularly promising when restricting search to an intranet or a Web site, where the size of the collection allows the application of full natural language parsing and where the documents can be expected to be virtually spam-free. Using a university Web site as an exemplar, we can automatically extract predicate-argument structures from documents to acquire a domain model. This domain model is a term association graph which can be employed to guide users in information finding. This can be done by locating a user query in the model and suggesting directly connected terms as query suggestions. Alternatively, one could apply various graph-based algorithms to the initial model with the purpose of identifying the best possible suggestions.
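A minimal sketch of the term association graph idea: edges come from extracted predicate-argument pairs, and suggestions for a query term are its direct neighbours. The example pairs and the degree-based ranking are invented placeholders for the thesis's actual extraction and weighting.

```python
import networkx as nx

# Domain model as a term association graph; suggestions are direct neighbours.
# The (predicate, argument) pairs below are invented example extractions.
pairs = [("apply", "scholarship"), ("apply", "course"),
         ("pay", "tuition fees"), ("extend", "deadline"),
         ("scholarship", "tuition fees")]

model = nx.Graph()
model.add_edges_from(pairs)

def suggest(query_term, k=3):
    if query_term not in model:
        return []
    # rank neighbours by degree as a stand-in for a real salience weighting
    neighbours = sorted(model[query_term], key=lambda t: -model.degree[t])
    return neighbours[:k]

print(suggest("scholarship"))
```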
19

Nusca, Virginia. "The role of domain-specific knowledge in the reading comprehension of adult readers." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ51217.pdf.

20

Carroll, Erin Brianne. "Domain-specific secrecy in middle childhood associations with parental knowledge and child well-being /." Pullman, Wash. : Washington State University, 2009. http://www.dissertations.wsu.edu/Thesis/Summer2009/e_carroll_051909.pdf.

Abstract:
Thesis (M.A. in human development)--Washington State University, August 2009.
Title from PDF title page (viewed on July 15, 2009). "Department of Human Development." Includes bibliographical references (p. 37-43).
21

Etezadi, Ali Reza. "Semantic desktop focusing on harvesting domain specific information in planning aid documents." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12542.

Abstract:
Planning is a highly regulated procedure at the operational level, for example in military activities, where the staff may benefit from documents such as guidelines that regulate the work process, responsibilities, and results of such planning activities.
This thesis proposes a method for analyzing the office documents that make up an operational order according to a document ontology. With semantic desktops aiming to combine semantic annotations and intelligent reasoning on desktop computers, the product of this project adds a plug-in to such environments, for example the IRIS semantic desktop, which enables the application to interpret documents whether they are created or changed within the application.
The result of our work helps the end user extract data using his or her favourite patterns, such as goals, targets, or milestones that make up decisive points. This information eventually forms semantic objects, which ultimately reside in the knowledge base of the semantic desktop for further reasoning in future uses of the application, whether automatically or upon the user's request.
22

Tian, Hao. "A methodology for domain-specific conceptual data modeling and querying." restricted, 2007. http://etd.gsu.edu/theses/available/etd-02272007-140033/.

Abstract:
Thesis (Ph. D.)--Georgia State University, 2007.
Rajshekhar Sunderraman, committee chair; Paul S. Katz, Yanqing Zhang, Ying Zhu, committee members. Electronic text (128 p. : ill.) : digital, PDF file. Description based on contents viewed Oct. 15, 2007; title from file title page. Includes bibliographical references (p. 124-128).
23

Hochstein, Lorin Michael. "Development of an empirical approach to building domain-specific knowledge applied to high-end computing." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3797.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
24

Kilgour, A. Mark. "The Creative Process: The Effects of Domain Specific Knowledge and Creative Thinking Techniques on Creativity." The University of Waikato, 2007. http://hdl.handle.net/10289/2566.

Abstract:
As we move further into the 21st century, there are few processes more important for us to understand than the creative process. The aim of this thesis is to assist in deepening that understanding. To achieve this, a review of the literature is first undertaken. Combining the many different streams of research from the literature results in the development of a four-stage model of the creative thinking process. The four stages are problem definition, idea generation, internal evaluation, and idea expression. While a large range of factors influence the various stages in this model, two factors are identified for further analysis, as their effect on creativity is unclear. These two factors are domain-specific knowledge and creative thinking techniques. The first factor relates to the first stage of the creative thinking process (problem definition), specifically the extent to which informational cues prime domain-specific knowledge that then sets the starting point for the creative combination process. The second factor relates to stage two of the model (idea generation), and the proposition by some researchers and practitioners that creative output can be significantly improved through the use of techniques. While the semantics of these techniques differ, fundamentally all of them encourage the use of divergent thinking by providing remote associative cues as the basis for idea generation. These creative thinking techniques appear to result in the opening of unusual memory categories to be used in the creative combination process. These two potential influences on the creative outcomes of individuals, (1) domain-specific knowledge and (2) creative thinking techniques, form the basis for an experimental design. Qualitative and quantitative research is undertaken at two of the world's leading advertising agencies, and with two student samples, to identify how creative thinking techniques and domain-specific knowledge, when primed, influence creative outcomes. In order to measure these effects, a creative thinking measurement instrument is developed. Results found that both domain-specific knowledge and creative thinking techniques are key influences on creative outcomes. More importantly, results also revealed interaction effects that significantly extend our current understanding of the effects of both primed domain-specific knowledge and creativity techniques on different sample populations. Importantly, it is found that there is no 'one size fits all' for the use of creative thinking techniques; to be effectively applied, creative thinking techniques must be developed based upon the respondent's current domain and technique expertise. Moreover, the influence of existing domain-specific knowledge on individual creativity is also dependent upon how that information is primed and the respondent's knowledge of cognitive thinking strategies.
25

Ishaq, Ali Javid. "How does a general-purpose neural network with no domain knowledge operate as opposed to a domain-specific adapted chess engine?" Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281964.

Abstract:
This report examines how a general-purpose neural network (LC0) operates compared to the domain-specific, adapted chess engine (Stockfish). Specifically, it examines the depth and the total number of simulations per move, and investigates how moves are selected. The conclusion was that Stockfish searches and evaluates a significantly larger number of positions than LC0. Moreover, Stockfish analyses every possible move at rather great depth; LC0, on the contrary, determines its moves sensibly and explores a few moves at greater depth. Consequently, the argument can be made that a general-purpose neural network can conserve resources and calculation time, which could serve us towards sustainability. However, training the neural network is not very environmentally friendly. Therefore, stakeholders should seek collaboration and pursue a general-purpose approach that could solve problems in many fields.
This report is about how a general-purpose neural network (LC0) that plays chess compares with the domain-specific, adapted chess engine (Stockfish), specifically reviewing the depth and the total number of simulations per move to understand how moves are chosen and evaluated. The conclusion was that Stockfish searches and evaluates significantly more positions than LC0. Furthermore, Stockfish consumed more resources, roughly seven times more electricity. An argument was made that a general-purpose neural network has the potential to save resources and help us towards a sustainable society. However, training neural networks costs considerable resources, so we should collaborate to avoid unnecessary training runs and learn from others' mistakes. Finally, we must strive for a general-purpose neural network that can solve many problems in several fields.
26

Grahn, Fredrik, and Kristian Nilsson. "Object Detection in Domain Specific Stereo-Analysed Satellite Images." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159917.

Abstract:
Given satellite images with accompanying pixel classifications and elevation data, we propose different solutions to object detection. The first method uses hierarchical clustering for segmentation and then employs different methods of classification. One of these classification methods used domain knowledge to classify objects while the other used Support Vector Machines. Additionally, a combination of three Support Vector Machines was used in a hierarchical structure, which outperformed the regular Support Vector Machine method on most of the evaluation metrics. The second approach is more conventional, with different types of Convolutional Neural Networks. A segmentation network was used as well as a few detection networks and different fusions between these. The Convolutional Neural Network approach proved to be the better of the two in terms of precision and recall, but the clustering approach was not far behind. This work was done using a relatively small amount of data, which could potentially have impacted the results of the Machine Learning models in a negative way.
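The hierarchical arrangement of three Support Vector Machines can be sketched with scikit-learn as below. The two-level class split (building vs. vegetation subtypes) and the random features are invented stand-ins; the thesis works on real satellite-image features.

```python
import numpy as np
from sklearn.svm import SVC

# Three SVMs in a hierarchy: one separates the two coarse groups, and one
# per branch refines the prediction. Classes and features are invented.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 4, size=200)      # 0/1 building types, 2/3 vegetation types

top = SVC().fit(X, (y <= 1).astype(int))           # building vs. vegetation
left = SVC().fit(X[y <= 1], y[y <= 1])             # building subtype
right = SVC().fit(X[y >= 2], y[y >= 2])            # vegetation subtype

def predict(x):
    x = x.reshape(1, -1)
    return (left if top.predict(x)[0] == 1 else right).predict(x)[0]

print(predict(X[0]))
```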
27

Vaderna, Renata. "Algoritmi i jezik za podršku automatskom raspoređivanju elemenata dijagrama." PhD thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2018. https://www.cris.uns.ac.rs/record.jsf?recordId=107524&source=NDLTD&language=en.

Abstract:
This thesis presents research aimed at the problem of automatically laying out the elements of a diagram. The analysis of existing solutions showed that there is room for improvement, especially regarding the variety of available algorithms and the help offered to users in selecting the most suitable one; none of the existing solutions offers the possibility of automatically choosing an appropriate graph layout algorithm. Within the research, a large number of different algorithms for graph drawing and analysis were studied, implemented, and, in some cases, enhanced. A method for automatically choosing the best available layout algorithm based on the properties of a graph was defined. Additionally, a domain-specific language was designed that helps users of graphical editors select a layout algorithm and allows programmers to write the code invoking the desired algorithm more quickly.
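The core idea, automatically choosing a layout algorithm from measurable properties of the graph, can be sketched as a simple rule chain. The specific rules and thresholds below are illustrative assumptions, not the rules defined in the thesis.

```python
import networkx as nx

# Pick a layout algorithm from graph properties; rules here are invented.
def choose_layout(g):
    if nx.is_tree(g):
        return "tree layout"
    if nx.check_planarity(g)[0]:
        return "planar / orthogonal layout"
    if nx.density(g) > 0.5:
        return "circular layout"
    return "force-directed layout"

print(choose_layout(nx.balanced_tree(2, 3)))   # -> tree layout
print(choose_layout(nx.complete_graph(6)))     # -> circular layout
```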
28

Hänig, Christian [Verfasser], Gerhard [Gutachter] Heyer, and Jonas [Gutachter] Kuhn. "Unsupervised Natural Language Processing for Knowledge Extraction from Domain-specific Textual Resources / Christian Hänig ; Gutachter: Gerhard Heyer, Jonas Kuhn." Leipzig : Universitätsbibliothek Leipzig, 2013. http://d-nb.info/1238366481/34.

29

Adibelli, Elif. "Investigating Pre-service Science Teachers." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611624/index.pdf.

Abstract:
The main purpose of this study was to determine preservice science teachers' (PSTs) epistemological beliefs regarding the nature of knowledge and learning in the domain of environment, compared with the domains of biology, physics, chemistry, and mathematics. A total of 12 PSTs voluntarily participated in the study. The sample consisted of senior elementary PSTs who registered for an elective course titled "Laboratory Applications in Science and Environmental Education" in the fall semester of 2008-2009 at a public university in Ankara. The major data of this study were collected using a semi-structured interview protocol developed by Schommer-Aikins (2008). The data were analyzed through descriptive statistics and the Miles and Huberman (1994) approach. The analyses are presented along five dimensions of epistemological beliefs. The analysis of omniscient authority indicated that the PSTs place less trust in environmental experts' opinions, give more importance to informal education in the acquisition of environmental knowledge, and believe that environmental knowledge is justified more on the basis of direct observation. The analysis of stability of knowledge revealed that the PSTs conceived of environmental knowledge as more uncertain. The analysis of structure of knowledge pointed out that the PSTs consider environmental knowledge more complex. The analysis of control of learning revealed that the PSTs believe that a large percentage of the ability to learn can be acquired after birth, more so in the domain of environment. The analysis of speed of learning indicated that the PSTs believe that much of learning takes less time in the domain of environment. This study provided evidence that epistemological beliefs are multidimensional and domain-specific. Moreover, it highlighted that the nature of environmental knowledge and learning is an important issue to be addressed in environmental education.
30

Sharara, Harold. "How Structural Assessment of Knowledge Can Be Used for the Identification of Specific Alternative Conceptions and for Assessing Domain Competence in Physics." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20039.

Abstract:
The purpose of this study is to investigate the viability of Structural Assessment of Knowledge (SAK) as a tool for identifying alternative conceptions and for predicting domain performance in Physics. The process begins by eliciting and then representing students' knowledge. One of these types of knowledge is conceptual knowledge, which is important for performing procedural tasks. This thesis employs a cognitively based theoretical framework to uncover students' knowledge, and then represents that knowledge for analytical purposes using SAK. SAK uses the Pathfinder algorithm to empirically derive the semantic networks of the students' and experts' cognitive structures, by asking them both to rate the relatedness of pairs of physics terms. Comparing students' and experts' knowledge structures provided some support for the structural assessment theory. In particular, supporting evidence that Pathfinder networks help in predicting a student's problem solving capabilities was attained.
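The Pathfinder pruning that SAK relies on can be sketched directly: with parameters r = infinity and q = n - 1, an edge survives only if no indirect path connects its endpoints with a smaller maximum step (the minimax distance). The relatedness ratings below are invented.

```python
import numpy as np

# Pathfinder network, PFNET(r = inf, q = n - 1): keep an edge only if its
# weight does not exceed the minimax distance between its endpoints.
def pfnet(w):
    n = w.shape[0]
    d = w.copy()
    for k in range(n):                       # Floyd-Warshall, minimax flavour
        for i in range(n):
            for j in range(n):
                d[i, j] = min(d[i, j], max(d[i, k], d[k, j]))
    return (w <= d) & np.isfinite(w)         # surviving edges

inf = np.inf
# pairwise "relatedness distances" among 4 physics terms (symmetric, invented)
w = np.array([[inf, 1.0, 4.0, 6.0],
              [1.0, inf, 2.0, 5.0],
              [4.0, 2.0, inf, 3.0],
              [6.0, 5.0, 3.0, inf]])
print(pfnet(w).astype(int))  # only the chain 0-1, 1-2, 2-3 survives
```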
31

Dillabough, Jo-Anne. "The domain specific nature of children's self-perceptions of competence : an exploratory paradigm for understanding the social construction of self-knowledge in children." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29598.

Abstract:
In recent years we have witnessed a burgeoning interest in the role socializing agents play in the development of children's self-perceptions of competence. Outlined extensively by Harter (1981, 1982, 1985), the basic assumption underlying this work is that the self-concept is a multidimensional construct reflecting cognitive representations of individuals' socialization experiences across achievement contexts. These multiple dimensions are subsumed under the guise of self-perceptions and are thought to reflect distinct cognitive structures within the phenomenological world of the child. To date, however, the majority of research stemming from Harter's original theoretical conceptualizations has been limited to examining the impact of socializing agents' expectations on children's self-perceptions of academic competence. The differential contributions made by socializing agents to the prediction of children's self-perceptions of competence across achievement domains, however, have not been assessed. In the present study, an attempt was made to fill this research gap. In accordance with the recognition of the multidimensional nature of perceived competence, the purposes of this study were: (1) to compare the contributions made by different socializing agents' expectations to the prediction of children's self-perceived academic, social, behavioral and athletic competence; (2) to assess the extent to which socializers' expectations contribute differentially to children's perceived competence when examined in conjunction with additional variables instrumental in the development of self-concept in children; (3) to extend Harter's (1981) original conceptualization of the self by testing a uniform perceived competence model across achievement domains; and (4) to identify the primary references children utilize to define themselves. Data were collected from 87 fourth and fifth grade children. The children completed questionnaires that assessed their self-perceived academic, social, behavioral and athletic competence. Teachers' and parents' actual expectations, children's perceptions of these expectations and children's academic and social performance were also measured. Four stepwise hierarchical regression analyses were conducted (i.e., one each for self-perceived academic, social, behavioral and athletic competence) to identify those variables which best predict children's domain-specific self-perceptions. Results revealed that: (a) the relative contributions made by socializers' expectations to the prediction of children's perceived competence across achievement contexts vary as a function of the domain assessed; (b) children's perceptions of significant others' expectations and performance factors also play a significant role in the prediction of domain-specific perceived competence; and (c) the social references children utilize when making self-evaluations can be conceptualized within a domain- and context-specific framework. Issues related to the development of self-concept theory, empirical research and counselling practices are discussed in relation to the acquisition of self-knowledge in children.
Education, Faculty of
Educational and Counselling Psychology, and Special Education (ECPS), Department of
Graduate
32

Carnaz, Gonçalo José Freitas. "A graph-based framework for data retrieved from criminal-related documents." Doctoral thesis, Universidade de Évora, 2021. http://hdl.handle.net/10174/29954.

Abstract:
The digitalization of companies' processes has enhanced the treatment and analysis of a growing volume of data from heterogeneous sources, with emerging challenges, namely those related to knowledge representation. The Criminal Police faces similar challenges, considering the amount of unstructured data from police reports manually analyzed by criminal investigators, with the corresponding time and resources. There is a need to automatically extract and represent the unstructured data existing in criminal-related documents and reduce the manual analysis by criminal investigators. Computer science faces the challenge of applying emergent computational models that can be an alternative for extracting and representing the data using new or existing methods. A broad set of computational methods has been applied to the criminal domain, such as the identification and classification of named entities (NEs) or the extraction of relations between entities that are relevant for the criminal investigation, like narcotics. However, these methods have mainly been used in the English language. In Portugal, research on this domain applying computational methods lacks related works, making its application in criminal investigation unfeasible. This thesis proposes an integrated solution for the representation of unstructured data retrieved from documents, using a set of computational methods: a Preprocessing Criminal-Related Documents module, supported by Extraction, Transformation, and Loading tasks and followed by a Natural Language Processing pipeline applied to the Portuguese language for syntactic and semantic analysis of textual data; the 5W1H Information Extraction Method, which combines Named-Entity Recognition, Semantic Role Labelling, and Criminal Terms Extraction tasks; and, finally, Graph Database Population and Enrichment, which allows the representation of the retrieved data in a Neo4j graph database. Globally, the framework presents promising results that were validated using prototypes developed for this purpose. In addition, the feasibility of extracting unstructured data, its syntactic and semantic interpretation, and its graph database representation has also been demonstrated.
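The final graph-population step can be sketched with the official Neo4j Python driver (5.x): extracted entities and a 5W1H-style relation are merged into the graph. Connection details, labels, and property keys are hypothetical; the thesis defines its own schema for criminal-domain entities.

```python
from neo4j import GraphDatabase

# Write one extracted fact into Neo4j. URI, credentials, and the node/relation
# schema below are placeholders, not the thesis's actual configuration.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_fact(tx, who, what, where):
    tx.run(
        "MERGE (p:Person {name: $who}) "
        "MERGE (l:Location {name: $where}) "
        "MERGE (p)-[r:INVOLVED_IN {what: $what}]->(l)",
        who=who, what=what, where=where,
    )

with driver.session() as session:
    session.execute_write(add_fact, "John Doe", "seizure of narcotics", "Lisbon")
driver.close()
```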
33

Ceglarek, Petra. "Der Wortassoziationsversuch als wissensdiagnostisches Instrument im arbeitspsychologischen Kontext : eine Befundintegration zur Verfahrensvalidierung." PhD thesis, Universität Potsdam, 2008. http://opus.kobv.de/ubp/volltexte/2009/3049/.

Abstract:
Providing methods and instruments for eliciting domain-specific knowledge from working persons is of major relevance for occupational psychology, since competent work performance presupposes a secure base of knowledge. In occupational practice, knowledge elicitation methods are used, for example, in organisational knowledge management processes, for evaluating training programmes, or for developing knowledge-based systems. Free term entry (FTE), a method for verbalising domain-specific knowledge, can contribute greatly in this context. The method involves presenting subjects with stimuli from a specific domain and asking them to list, in note form, all associations that come to mind. Assuming a network-analog representation of knowledge, the more a subject associates, the greater his or her knowledge. Since the psychometric quality of the FTE method had not yet been established, its primary and secondary quality criteria were determined from a total of 17 field studies. The results show that FTE is able to elicit explicit, declarative domain-specific knowledge and is thus a useful knowledge-diagnostic instrument. Its reliability, an important precondition for assessing validity and sensitivity to change, was demonstrated. Validation against two external criteria (an appraisal of the subject's vocational expertise by the managing director as a performance measure, and the subject's exam performance as a measure of individual domain-specific knowledge) yielded satisfactory coefficients. Discriminant validation showed that the FTE method captures the construct of domain-specific knowledge rather than general word fluency. Overall, the mean frequency of associations is a sensitive measure of the extent of individual domain-specific knowledge and of vocational expertise; the FTE method is a valid, reliable, objective, change-sensitive, economical instrument accepted by subjects, and therefore useful for the practice of occupational psychology.
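The FTE scoring rule summarised above (the mean frequency of associations as the knowledge indicator) is straightforward to operationalise. A minimal sketch with hypothetical response data; the studies described here used further criteria and analyses:

```python
# Hypothetical free-term-entry responses: stimulus word -> associations named.
responses = {
    "subject_1": {"turbine": ["rotor", "blade", "steam"], "valve": ["seal", "pressure"]},
    "subject_2": {"turbine": ["rotor"], "valve": ["seal"]},
}

def knowledge_score(subject_responses):
    # Mean number of associations per stimulus, the FTE knowledge indicator.
    counts = [len(assocs) for assocs in subject_responses.values()]
    return sum(counts) / len(counts)

for subject, resp in responses.items():
    print(subject, knowledge_score(resp))  # subject_1 scores higher (2.5 vs 1.0)
```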
APA, Harvard, Vancouver, ISO, and other styles
34

Degenne, Pascal. "Une approche générique de modélisation spatiale et temporelle : application à la modélisation de la dynamique des paysages." Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1071/document.

Full text
Abstract:
Sciences dealing with reality, whether related to nature, society, or life, use models. Some of these models describe the relations that exist between measurable properties of that reality without detailing the interactions between its components. Other models describe those interactions from the point of view of the individuals that form the system, in which case the overall behaviour is not defined a priori but observed a posteriori. In both cases, the scientist is often limited in their capacity to describe the structures, especially spatial ones, that support the interactions. We propose a modelling approach that can be considered intermediate between the two, in which a system is studied through the nature of its interactions and the graph structures that can support them. By placing spatial, functional, social, and hierarchical relationships on the same level, we also attempt to lift the constraints induced by a form of spatial representation that is often chosen a priori. The basic concepts of this approach have been formalised and used to define a domain-specific language, called Ocelet. The tools required to implement the language have been developed and assembled into an integrated modelling and simulation environment. We were then able to experiment with this new modelling approach and the Ocelet language by developing models for a variety of landscape dynamics situations.
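The central idea, interactions carried by explicit graphs in which spatial, functional, and social relations sit on the same level, can be illustrated briefly. This sketch uses Python and networkx, not Ocelet's own syntax, and all entities and relation kinds are hypothetical:

```python
import networkx as nx

# Entities of a toy landscape model; relations of any nature become edges
# of one graph, instead of being fixed by a spatial representation.
g = nx.DiGraph()
g.add_edge("parcel_A", "parcel_B", kind="adjacent")    # spatial relation
g.add_edge("farmer_1", "parcel_A", kind="cultivates")  # functional relation
g.add_edge("coop_1", "farmer_1", kind="advises")       # social relation

def step(graph):
    # One simulation step: apply an interaction function along each edge.
    for u, v, data in graph.edges(data=True):
        if data["kind"] == "cultivates":
            print(f"{u} updates the land use of {v}")

step(g)  # prints: farmer_1 updates the land use of parcel_A
```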
APA, Harvard, Vancouver, ISO, and other styles
35

Ozhan, Gurkan. "Transforming Mission Space Models To Executable Simulation Models." Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613826/index.pdf.

Full text
Abstract:
This thesis presents a two-step automatic transformation: Field Artillery Mission Space Conceptual Models (ACMs) are first transformed into High Level Architecture (HLA) Federation Architecture Models (FAMs), which are then translated into executable distributed simulation code. The approach adheres to the Model-Driven Engineering (MDE) philosophy. Both ACMs and FAMs are formally defined, conforming to their metamodels, ACMM and FAMM, respectively. ACMM comprises a behavioral component, based on Live Sequence Charts (LSCs), and a data component based on UML class diagrams. Using ACMM, the Adjustment Followed by Fire For Effect (AdjFFE) mission, which serves as the source model for the model transformation case study, is constructed. The ACM-to-FAM transformation, defined over metamodel-level graph patterns, is carried out with the Graph Rewriting and Transformation (GReAT) tool. Code generation from a FAM is accomplished by a model interpreter that produces Java/AspectJ code, which can then be executed on an HLA Run-Time Infrastructure (RTI). Bringing a fully fledged transformation approach to conceptual modeling is a distinguishing feature of this thesis, which also aims to bring chart notations to the attention of the mission space modeling community for describing military tasks, particularly their communication aspect. With the experience gained, a set of guidelines for a domain-independent transformer from any metamodel-based conceptual model to FAM is offered.
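A transformation defined over metamodel-level graph patterns can be approximated with generic graph tooling. The following sketch uses networkx rather than GReAT, with toy node and edge types standing in for the actual ACMM/FAMM metamodels:

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Toy conceptual model: an LSC-style message between two lifelines.
acm = nx.DiGraph()
acm.add_node("observer", type="Lifeline")
acm.add_node("battery", type="Lifeline")
acm.add_edge("observer", "battery", type="Message", name="CallForFire")

# Metamodel-level pattern: any Message between two Lifelines.
pattern = nx.DiGraph()
pattern.add_node("a", type="Lifeline")
pattern.add_node("b", type="Lifeline")
pattern.add_edge("a", "b", type="Message")

fam = nx.DiGraph()
matcher = isomorphism.DiGraphMatcher(
    acm, pattern,
    node_match=lambda n, p: n["type"] == p["type"],
    edge_match=lambda e, p: e["type"] == p["type"],
)
for mapping in matcher.subgraph_isomorphisms_iter():
    inv = {v: k for k, v in mapping.items()}  # pattern node -> model node
    # Rewrite: every matched message becomes an interaction class in the FAM.
    name = acm.edges[inv["a"], inv["b"]]["name"]
    fam.add_node(f"{name}Interaction", type="InteractionClass")

print(list(fam.nodes(data=True)))  # [('CallForFireInteraction', ...)]
```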
APA, Harvard, Vancouver, ISO, and other styles
36

Canan, Ozgen. "Generating Motion-economical Plans For Manual Operations." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606524/index.pdf.

Full text
Abstract:
This thesis discusses applying AI planning tools to generate plans for manual operations, using motion economy expertise to select good plans among the feasible ones. Motion economy is a field of industrial engineering that deals with observing, reporting, and improving manual operations; its knowledge is organized in principles regarding the sequences and characteristics of motions, the arrangement of the workspace, the design of tools, etc. A representation scheme is developed for the products, workspace, and hand motions of manual operations. Operation plans are generated using a forward-chaining planner (TLPLAN). The planner and the domain representation extend a standard forward-chaining planner to support concurrency, actions with resources, and actions with durations. The principles of motion economy are formulated as search-control temporal formulas; in addition, rules are developed for simulating human common sense, along with goal-related rules for preventing absurd action sequences in the plans. Search-control rules constrain the problem and reduce search complexity: plans are evaluated during search, and paths that do not conform to the principles of motion economy are pruned. Sample problems are represented and solved; the diversity of these problems shows the generality of the representation scheme. In experimental runs, the effects of the motion economy principles on plan generation are observed and analyzed.
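The pruning mechanism, evaluating plans during search and cutting paths that violate motion-economy rules, can be sketched with a toy forward-chaining planner. The actions and the control rule below are hypothetical stand-ins for the thesis's TLPLAN formulation:

```python
from collections import deque

# Toy forward-chaining planner: plans whose action sequence violates a
# search-control rule are pruned during search, in the spirit of TLPLAN.
actions = {
    "reach_left": lambda s: s | {"left_at_part"},
    "reach_right": lambda s: s | {"right_at_tool"},
    "grasp": lambda s: s | {"holding"} if {"left_at_part", "right_at_tool"} <= s else None,
}

def violates_motion_economy(plan):
    # Example control rule: repeating the same motion is a wasted movement.
    return any(a == b for a, b in zip(plan, plan[1:]))

def plan(start, goal):
    queue = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, effect in actions.items():
            nxt = effect(state)
            if nxt is None or violates_motion_economy(steps + [name]):
                continue  # inapplicable action, or pruned by the control rule
            nxt = frozenset(nxt)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + [name]))

print(plan({"start"}, {"holding"}))  # ['reach_left', 'reach_right', 'grasp']
```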
APA, Harvard, Vancouver, ISO, and other styles
37

Owaied, H. H. "A computer assisted learning system for reliability engineering : A PROLOG-oriented model devised for the acquisition of domain specific knowledge using a subset of English language dialogue and cognitive psychology principles." Thesis, University of Bradford, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.381436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Kwon, Ky-Sang. "Multi-layer syntactical model transformation for model based systems engineering." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42835.

Full text
Abstract:
This dissertation develops a new model transformation approach that supports engineering model integration, which is essential to support contemporary interdisciplinary system design processes. We extend traditional model transformation, which has been primarily used for software engineering, to enable model-based systems engineering (MBSE) so that the model transformation can handle more general engineering models. We identify two issues that arise when applying the traditional model transformation to general engineering modeling domains. The first is instance data integration: the traditional model transformation theory does not deal with instance data, which is essential for executing engineering models in engineering tools. The second is syntactical inconsistency: various engineering tools represent engineering models in a proprietary syntax. However, the traditional model transformation cannot handle this syntactic diversity. In order to address these two issues, we propose a new multi-layer syntactical model transformation approach. For the instance integration issue, this approach generates model transformation rules for instance data from the result of a model transformation that is developed for user model integration, which is the normal purpose of traditional model transformation. For the syntactical inconsistency issue, we introduce the concept of the complete meta-model for defining how to represent a model syntactically as well as semantically. Our approach addresses the syntactical inconsistency issue by generating necessary complete meta-models using a special type of model transformation.
APA, Harvard, Vancouver, ISO, and other styles
39

Hamadi, Riad. "Méthodes de décompositions de domaines pour la résolution des CSP : application au système OSIRIS." Université Joseph Fourier (Grenoble ; 1971-2015), 1997. http://www.theses.fr/1997GRE10203.

Full text
Abstract:
The first part of this work presents a domain decomposition approach for solving discrete and linear continuous constraint satisfaction problems (CSPs). The approach relies on: (1) the representation of a CSP by a graph called the micro-structure of the CSP; (2) the decomposition of the CSP into sub-CSPs defined from maximal cliques of the micro-structure. The first part presents the domain decomposition method developed by Jégou in 1993 for solving binary discrete CSPs, then proposes an extension of this method to solve n-ary discrete CSPs and linear continuous CSPs. The second part presents the use of the domain decomposition method to define and solve discrete and linear continuous CSPs in a knowledge representation system, OSIRIS, as well as the use of micro-structures for object classification.
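The micro-structure construction and the clique-based decomposition can be reproduced with standard graph libraries. A minimal sketch for a toy binary CSP (in the micro-structure, every clique spanning all variables is a solution):

```python
import networkx as nx
from itertools import combinations

# Toy binary CSP: X != Y and Y != Z over two-value domains.
domains = {"X": [1, 2], "Y": [1, 2], "Z": [1, 2]}
constraints = {("X", "Y"): lambda a, b: a != b, ("Y", "Z"): lambda a, b: a != b}

# Micro-structure: one node per (variable, value); an edge joins every
# compatible pair of assignments to two different variables.
g = nx.Graph()
for u, v in combinations(domains, 2):
    check = constraints.get((u, v)) or constraints.get((v, u)) or (lambda a, b: True)
    for a in domains[u]:
        for b in domains[v]:
            if check(a, b):
                g.add_edge((u, a), (v, b))

# Every maximal clique covering all variables is a solution of the CSP;
# the cliques also delimit the sub-CSPs used by the decomposition.
for clique in nx.find_cliques(g):
    if len(clique) == len(domains):
        print(dict(clique))  # e.g. {'X': 1, 'Y': 2, 'Z': 1}
```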
APA, Harvard, Vancouver, ISO, and other styles
40

Michel, David. "All Negative on the Western Front: Analyzing the Sentiment of the Russian News Coverage of Sweden with Generic and Domain-Specific Multinomial Naive Bayes and Support Vector Machines Classifiers." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447398.

Full text
Abstract:
This thesis explores to what extent Multinomial Naive Bayes (MNB) and Support Vector Machines (SVM) classifiers can be used to determine the polarity of news, specifically the news coverage of Sweden by the Russian state-funded news outlets RT and Sputnik. Three experiments are conducted.  In the first experiment, an MNB and an SVM classifier are trained with the Large Movie Review Dataset (Maas et al., 2011) with a varying number of samples to determine how training data size affects classifier performance.  In the second experiment, the classifiers are trained with 300 positive, negative, and neutral news articles (Agarwal et al., 2019) and tested on 95 RT and Sputnik news articles about Sweden (Bengtsson, 2019) to determine if the domain specificity of the training data outweighs its limited size.  In the third experiment, the movie-trained classifiers are put up against the domain-specific classifiers to determine if well-trained classifiers from another domain perform better than relatively untrained, domain-specific classifiers.  Four different types of feature sets (unigrams, unigrams without stop words removal, bigrams, trigrams) were used in the experiments. Some of the model parameters (TF-IDF vs. feature count and SVM’s C parameter) were optimized with 10-fold cross-validation.  Other than the superior performance of SVM, the results highlight the need for comprehensive and domain-specific training data when conducting machine learning tasks, as well as the benefits of feature engineering, and to a limited extent, the removal of stop words. Interestingly, the classifiers performed the best on the negative news articles, which made up most of the test set (and possibly of Russian news coverage of Sweden in general).
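The experimental setup described here maps directly onto standard tooling. A minimal sketch with scikit-learn, using toy data in place of the movie-review and news corpora; MultinomialNB can be swapped in for LinearSVC:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for the labelled training articles.
texts = ["great cooperation praised warmly", "attack condemned very harshly"] * 10
labels = ["positive", "negative"] * 10

pipe = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # unigram + bigram features
    ("clf", LinearSVC()),  # MultinomialNB() can be swapped in here
])

# Optimise the SVM's C parameter with cross-validation (the thesis used 10-fold).
search = GridSearchCV(pipe, {"clf__C": [0.1, 1, 10]}, cv=5)
search.fit(texts, labels)
print(search.best_params_, search.predict(["cooperation condemned"]))
```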
APA, Harvard, Vancouver, ISO, and other styles
41

Suarez, John Freddy Garavito. "Ontologias e DSLs na geração de sistemas de apoio à decisão, caso de estudo SustenAgro." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-26072017-113829/.

Full text
Abstract:
Decision Support Systems (DSSs) organize and process data and information to generate results that support decision making in a specific domain. They integrate knowledge from domain experts in each of their components: models, data, mathematical operations (that process the data), and analysis results. In traditional development methodologies, this knowledge must be interpreted and used by software developers to implement DSSs, because domain experts cannot formalize it in a computable model that can be integrated into DSSs. In practice, the knowledge modeling process is carried out by the developers, biasing the domain knowledge and hindering the agile development of DSSs (since experts cannot modify the code directly). To solve this problem, a method and web tool are proposed that use ontologies, in the Web Ontology Language (OWL), to represent expert knowledge, and a Domain Specific Language (DSL) to model DSS behavior. Ontologies in OWL are a computable knowledge representation that allows DSSs to be defined in a format understandable and accessible to both humans and machines. This method was used to create the Decisioner Framework for the instantiation of DSSs. Decisioner automatically generates a DSS from an ontology and a description in its DSL, including the DSS interface (using a Web Components library). An online ontology editor, using a simplified format, allows domain experts to modify aspects of the ontology and immediately see the consequences of their changes in the DSS. The method was validated by instantiating the SustenAgro DSS in the Decisioner Framework. The SustenAgro DSS evaluates the sustainability of sugarcane production systems in the center-south region of Brazil. Evaluations by sustainability experts from Embrapa Environment (partners in this project) showed that domain experts are able to change the ontology and the DSL program without the help of software developers, and that the system produces correct sustainability analyses.
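Reading the OWL ontology into the generator is the first step of such a framework. An illustrative sketch with rdflib, assuming a hypothetical ontology file name; Decisioner's actual loading code and schema are not given in the abstract:

```python
from rdflib import Graph, RDF, RDFS, OWL

g = Graph()
g.parse("sustenagro.owl", format="xml")  # hypothetical ontology file

# Enumerate OWL classes and their labels; a generator in the spirit of
# Decisioner could map each class to an input widget or analysis step.
for cls in g.subjects(RDF.type, OWL.Class):
    for label in g.objects(cls, RDFS.label):
        print(cls, label)
```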
APA, Harvard, Vancouver, ISO, and other styles
42

Bourgeois, Florent. "Système de Mesure Mobile Adaptif Qualifié." Thesis, Mulhouse, 2018. http://www.theses.fr/2018MULH8953/document.

Full text
Abstract:
Mobile devices offer measuring capabilities through embedded or connected sensors and are increasingly used in measuring processes. They are critical in the sense that the measurements performed must be reliable, since they are potentially used in demanding contexts. Despite real demand, few applications assist users with measuring processes that use these sensors. Such an assistant should provide visualisation and computation methods and measuring procedures, together with communication functions to handle connected sensors or generate reports. This scarcity stems from the knowledge required to define correct measuring procedures: it is brought by metrology and measurement theory and is rarely present in software development teams. Moreover, every user has measuring activities specific to their field of work, which implies the development of many high-quality, possibly expert-certified applications. These premises lead to the research question this work answers: what approach enables the design of applications suited to specific measurement procedures, where the procedures can be configured by the end user? The answer developed is a platform for building measurement-assistance applications that ensures the conformity of measuring procedures without involving metrology experts. It is built on concepts from metrology, Model-Driven Engineering, and first-order logic. A study of the metrology domain highlights the need for expert evaluation of the measuring procedures embedded in applications; this expertise comprises terms and rules that ensure the integrity and coherence of a measuring procedure. A conceptual model of the metrology domain is proposed and integrated into the application development process by encoding it as a first-order-logic knowledge scheme of metrology concepts. The scheme makes it possible to verify that metrology constraints hold in a given measuring procedure, by confronting procedures with the scheme in the form of requests expressed in a request language the scheme provides. Measurement-assistance applications must present the user with a measuring process that sequences measuring activities step by step, which requires describing the process, its interactive interfaces, and its sequencing. An application editor is therefore proposed, built around a domain-specific language dedicated to describing measurement-assistance applications. The language is built from the concepts, formalisms, and tools of the Diagrammatic Predicate Framework (DPF) metamodelling environment, and includes syntactic constraints that prevent construction errors at the software level while reducing the semantic gap between the software architect using it and a potential metrology expert. Mobile platforms then need to execute behaviour conforming to what the editor describes. An implementation modelling language is proposed that describes measuring procedures as sequences of activities involving measuring, computing, and presenting values; quantities are abstracted as numerical values, which eases computation and the use of sensors. The implementation model is made up of software agents, and a mobile application is proposed, built on an agent framework, an agent-network composer, and a runtime system. The application takes an implementation model and builds the corresponding agent network to provide behaviour matching end-user needs; any user need can thus be answered, given access to the implementation model, without downloading several applications.
APA, Harvard, Vancouver, ISO, and other styles
43

Hart, M. J. Alexandra. "Action in Chronic Fatigue Syndrome: an Enactive Psycho-phenomenological and Semiotic Analysis of Thirty New Zealand Women's Experiences of Suffering and Recovery." Thesis, University of Canterbury. Social and Political Sciences, 2010. http://hdl.handle.net/10092/5294.

Full text
Abstract:
This research into Chronic Fatigue Syndrome (CFS) presents the results of 60 first-person psycho-phenomenological interviews with 30 New Zealand women. The participants were recruited from the Canterbury and Wellington regions, 10 had recovered. Taking a non-dual, non-reductive embodied approach, the phenomenological data was analysed semiotically, using a graph-theoretical cluster analysis to elucidate the large number of resulting categories, and interpreted through the enactive approach to cognitive science. The initial result of the analysis is a comprehensive exploration of the experience of CFS which develops subject-specific categories of experience and explores the relation of the illness to universal categories of experience, including self, ‘energy’, action, and being-able-to-do. Transformations of the self surrounding being-able-to-do and not-being-able-to-do were shown to elucidate the illness process. It is proposed that the concept ‘energy’ in the participants’ discourse is equivalent to the Mahayana Buddhist concept of ‘contact’. This characterises CFS as a breakdown of contact. Narrative content from the recovered interviewees reflects a reestablishment of contact. The hypothesis that CFS is a disorder of action is investigated in detail. A general model for the phenomenology and functional architecture of action is proposed. This model is a recursive loop involving felt meaning, contact, action, and perception and appears to be phenomenologically supported. It is proposed that the CFS illness process is a dynamical decompensation of the subject’s action loop caused by a breakdown in the process of contact. On this basis, a new interpretation of neurological findings in relation to CFS becomes possible. A neurological phenomenon that correlates with the illness and involves a brain region that has a similar structure to the action model’s recursive loop is identified in previous research results and compared with the action model and the results of this research. This correspondence may identify the brain regions involved in the illness process, which may provide an objective diagnostic test for the condition and approaches to treatment. The implications of this model for cognitive science and CFS should be investigated through neurophenomenological research since the model stands to shed considerable light on the nature of consciousness, contact and agency. Phenomenologically based treatments are proposed, along with suggestions for future research on CFS. The research may clarify the diagnostic criteria for CFS and guide management and treatment programmes, particularly multidimensional and interdisciplinary approaches. Category theory is proposed as a foundation for a mathematisation of phenomenology.
APA, Harvard, Vancouver, ISO, and other styles
44

Liu, Li-Fang. "Using domain-specific knowledge to improve information retrieval performance." Thèse, 2003. http://hdl.handle.net/1866/14541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Gunanathan, Sudharsan. "SUPPORTING DOMAIN SPECIFIC WEB-BASED SEARCH USING HEURISTIC KNOWLEDGE EXTRACTION." 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2008-08-55.

Full text
Abstract:
Modern search engines like Google support domain-independent search over the vast information contained in web documents. However, domain-specific information access, such as finding less well-known people, locations, and events, is not performed efficiently unless users develop sophisticated query strategies. This thesis describes the design and development of an application to support one such domain-specific information activity: helping insurance (and related) companies identify weather and natural-disaster damage to better assess when and where personnel will be needed. The approach combines information extraction with an interactive presentation of results. Previous domain-specific search engines extract information about papers, people, and courses using rule-based or learning-based techniques, but they present the results in a typical query-and-result-list interface and fail to support interaction based on the extracted document features. The web-based search application developed in this project combines information extraction with an unconventional, interactive, tree-based display of results to facilitate rapid information location. Using a bank of search heuristics developed for this purpose, it extracts information about the place, date, and damages of a specific disaster. The application also offers user-specific result caching and allows the search and caching process to be modified. A heuristic evaluation was performed to determine whether the application met the design goals and to improve the design.
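Heuristic extraction of place, date, and damage information can be illustrated with simple patterns. A sketch with hypothetical regular expressions; the thesis's actual heuristic bank is richer:

```python
import re

# Illustrative heuristic patterns; a real heuristic bank would use richer
# gazetteers and date grammars.
PATTERNS = {
    "date": re.compile(r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.? \d{1,2}, \d{4}"),
    "place": re.compile(r"\bin ([A-Z][a-z]+(?: [A-Z][a-z]+)*)"),
    "damage": re.compile(r"\$[\d,.]+ (?:million|billion)"),
}

def extract(text):
    # Apply every heuristic and collect its matches per field.
    return {field: pat.findall(text) for field, pat in PATTERNS.items()}

report = "A tornado struck in Fort Worth on March 28, 2000, causing $450 million in damage."
print(extract(report))
# {'date': ['March 28, 2000'], 'place': ['Fort Worth'], 'damage': ['$450 million']}
```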
APA, Harvard, Vancouver, ISO, and other styles
46

Chang, Kai-Liang, and 張凱亮. "A Framework for Extracting Knowledge and Composing Domain Specific Contents." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/75590760912765289131.

Full text
Abstract:
Master's thesis, Department of Electrical Engineering, National Cheng Kung University, ROC year 91 (2002).
When authors compose documents, they usually start by collecting information pertinent to the composition context, either to understand a subject matter further or to acquire materials that can be applied in the document they are composing. Authors then need to devise fluent logical structures to organize the acquired materials and their personal statements into a composition. To streamline this process, we propose a framework, ExcDoc (a framework for Extracting knowledge and composing Domain Specific contents), which facilitates both the prior material preparation and the later composition. The framework employs agents to extract information from specific sources by consulting an ontology that captures each source's structure. We also iteratively elicit representative templates from documents of similar style, reflecting the logical structure of documents written from specific perspectives. An agent then adopts a strategy to deploy applicable materials into the templates for the author's composition.
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, Yi-Tung, and 陳怡彤. "The Effects of Specific Domain Knowledge on Chinese Text Classification." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/19568804731321840163.

Full text
Abstract:
Master's thesis, Graduate Institute of Computer Science and Information Engineering, National Chung Cheng University, ROC year 98 (2009).
In this study, we use domain knowledge to address the word ambiguity caused by synonyms and polysemes, which are difficult to analyze and may cause errors in text classification. The proposed method augments the standard bag of words with attributes generated from domain knowledge, using a different dataset to represent the classes. In other words, it helps classifiers learn more of the semantic relationships between synonyms and decreases the classification errors caused by polysemes. In our approach, we use domain knowledge to reweight the features in the training stage and in the testing stage, respectively. The experimental results show that using domain knowledge improves classification performance in both stages, and that applying it in the training stage works better than in the testing stage: with a small quantity of training data, the micro-average BEP improves by 0.02 (3%) and the macro-average by 0.04 (11%). In contrast, when domain knowledge is used in the training stage with a large amount of training data, the micro-average BEP improves by only 0.001 (0.1%) and the macro-average by only 0.02 (4%).
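Feature reweighting with a domain lexicon, the training-stage variant described above, can be sketched briefly. The lexicon and its weights here are hypothetical:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy training documents and a hypothetical domain-knowledge lexicon that
# boosts terms known to be discriminative for the target classes.
docs = ["bank approves the loan", "river bank floods again"]
domain_weights = {"loan": 2.0, "floods": 2.0}  # hypothetical weights

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()

# Reweight each feature column by its domain weight (1.0 if unknown),
# then train any classifier on the reweighted matrix.
w = np.array([domain_weights.get(t, 1.0) for t in vec.get_feature_names_out()])
X_reweighted = X * w
print(vec.get_feature_names_out(), X_reweighted, sep="\n")
```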
APA, Harvard, Vancouver, ISO, and other styles
48

"Effects of domain-specific knowledge on social sciences problem-solving performance." Chinese University of Hong Kong, 1990. http://library.cuhk.edu.hk/record=b5886581.

Full text
Abstract:
Yeung Kam-chuen Anthony.
Title also in Chinese.
Thesis (M.A.Ed.)--Chinese University of Hong Kong, 1990.
Bibliography: leaves 137-144.
Contents: Acknowledgements; Abstract; List of Tables; List of Figures.
Chapter 1. Introduction: Context of the Study Problem; Statement of the Problem; Significance of the Study.
Chapter 2. Review of Literature: From Concept Formation to Problem Solving; About Problem Solving; Information-processing Theory of Human Problem Solving; The Nature of Social Science Problems; Domain-specific Knowledge in Social Science Problem Solving; Social Science Problem Solving Strategies.
Chapter 3. The Social Science Problem-solving Model: Early Development of the Social Science Problem-solving Model; The Problem-solving-reasoning Model.
Chapter 4. Research Design: Statement of Hypotheses; Operational Definitions of Variables; Subjects; Instruments; Procedures.
Chapter 5. Results and Discussion: Statistical Analysis of Data; Qualitative Analysis of Data; Discussion.
Chapter 6. Conclusions and Recommendations: Conclusions; Implications; Limitations; Recommendations.
Bibliography.
Appendices: 1. The Knowledge Test; 2. The "Locating a Ball Pen Factory" Problem; 3. The "Locating an Oil Refinery" Problem.
APA, Harvard, Vancouver, ISO, and other styles
49

Chien, Sheng-Hui, and 簡勝輝. "Development of Domain-Specific Knowledge Acquisition Tools: a Primitives-Based Generic Approach." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/98060155168896215485.

Full text
Abstract:
Master's thesis, Graduate Institute of Engineering Technology, National Taiwan University of Science and Technology, ROC year 82 (1993).
This thesis describes the design and implementation of a tool builder that supports the knowledge engineer with an easy-to-follow process for generating a specific knowledge acquisition (KA) tool from a generic KA model. A generic KA model is a data structure that properly organizes primitive generic KA activities applicable in any domain. The tool builder specializes the generic KA structure into a specific KA structure that covers the KA activities applicable in a specific type of domain, i.e., a type-specific KA model. The tool builder then incorporates the domain knowledge schemas of a specific domain into the specific KA structure and generates a specific KA tool (i.e., a domain-specific KA model) applicable only in that domain. The operation of the tool builder is demonstrated through the development of a domain-specific KA tool, called mwKAT, which acquires knowledge for a consulting system that guides the user in selecting, from various existing multimedia components, those satisfying the user's requirements and in constructing them into a multimedia workstation. The advantages of using the tool builder to develop KA tools are as follows. First, a new specific KA tool can easily be generated by combining an existing specific KA structure (constructed via a structure constructor) with another set of domain symbols (generated through a tool generator). Second, in addition to a set of knowledge representation primitives supporting domain modeling, the system contains several pre-defined domain models for easy use; it therefore provides both flexible and efficient domain modeling mechanisms for the generated KA tools. Third, the construction and generation of KA tools are guided automatically by the system, which helps the development of correct specific KA tools.
APA, Harvard, Vancouver, ISO, and other styles
50

Hänig, Christian. "Unsupervised Natural Language Processing for Knowledge Extraction from Domain-specific Textual Resources." Doctoral thesis, 2012. https://ul.qucosa.de/id/qucosa%3A11900.

Full text
Abstract:
This thesis aims to develop a Relation Extraction algorithm to extract knowledge from automotive data. While most approaches to Relation Extraction are evaluated only on newspaper data dealing with general relations from the business world, their applicability to other data sets is not well studied. Part I of this thesis deals with the theoretical foundations of Information Extraction algorithms. Text mining cannot be seen as the simple application of data mining methods to textual data; instead, sophisticated methods have to be employed to accurately extract knowledge from text, which can then be mined using statistical methods from the field of data mining. Information Extraction itself divides into two subtasks: Entity Detection and Relation Extraction. The detection of entities is highly domain-dependent due to terminology, abbreviations, and the general language use within a given domain, so this task has to be solved for each domain using thesauri or another type of lexicon; supervised approaches to Named Entity Recognition will not achieve reasonable results unless they have been trained on the given type of data. Relation Extraction can basically be approached by pattern-based and kernel-based algorithms; the latter achieve state-of-the-art results on newspaper data and point out the importance of linguistic features. To analyze relations contained in textual data, syntactic features like part-of-speech tags and syntactic parses are essential. Chapter 4 presents the machine learning approaches and linguistic foundations essential for syntactic annotation of textual data and for Relation Extraction. Chapter 6 analyzes the performance of state-of-the-art algorithms for POS tagging, syntactic parsing, and Relation Extraction on automotive data. The finding is that supervised methods trained on newspaper corpora do not achieve accurate results when applied to automotive data, for several reasons: besides low-quality text, the nature of automotive relations poses the main challenge, since the relation types of interest (e.g., component – symptom) are rather arbitrary compared to well-studied relation types like is-a or is-head-of. To achieve acceptable results, algorithms have to be trained directly on this kind of data, and since manual annotation for each language and data type is too costly and inflexible, unsupervised methods are the ones to rely on.
Part II deals with the development of dedicated algorithms for all three essential tasks. Unsupervised POS tagging (Chapter 7) is a well-studied task for which accurate algorithms exist, but none of them disambiguates high-frequency words; only out-of-lexicon words are disambiguated. Most high-frequency words bear syntactic information, so it is very important to differentiate between their functions; domain languages in particular contain ambiguous, highly frequent words bearing semantic information (e.g., pump). To improve POS tagging, a disambiguation algorithm based on context clustering is developed, which detects a word type's different syntactic functions and is used to enhance an existing state-of-the-art tagger; evaluation shows that tagging accuracy is raised significantly. An approach to unsupervised syntactic parsing (Chapter 8) is developed to satisfy the requirements of Relation Extraction: high-precision results on nominal and prepositional phrases, since they contain the entities relevant for Relation Extraction, and accurate shallow parsing, which facilitates Relation Extraction more than deep binary parsing does. Endocentric and exocentric constructions can be distinguished, improving phrase labeling. unsuParse detects phrase candidates based on the preferred positions of word types within phrases; iterating the detection of simple phrases successively induces deeper structures. The proposed algorithm fulfills all demanded criteria and achieves competitive results on standard evaluation setups. Syntactic Relation Extraction (Chapter 9) exploits syntactic statistics and text characteristics to extract relations between previously annotated entities. It is based on the entity distributions in a corpus and thus makes it possible to extend text mining processes to new data in an unsupervised manner. Evaluation on two languages and two text types of the automotive domain shows accurate results on repair order data; results are less accurate on internet data, but sentiment analysis and extraction of the opinion target can be mastered, so incorporating internet data is possible and important, as it provides useful insight into the customer's thoughts.
To conclude, this thesis presents a complete unsupervised workflow for Relation Extraction – except for the highly domain-dependent Entity Detection task – improving the performance of each of the involved subtasks compared to state-of-the-art approaches. Furthermore, this work applies Natural Language Processing methods and Relation Extraction approaches to real-world data, unveiling challenges that do not occur in high-quality newspaper corpora.
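The context-clustering step used to disambiguate high-frequency words can be sketched compactly. A minimal illustration with scikit-learn, assuming a toy corpus; the thesis's feature set and clustering procedure are more elaborate:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus in which the high-frequency word "pump" occurs with two
# different syntactic functions (noun vs. verb).
sentences = [
    "the pump is broken",
    "the pump is leaking",
    "we pump the water out",
    "we pump the fuel out",
]

def context_of(sentence, target="pump", window=2):
    # Represent an occurrence of the ambiguous word by its neighbours.
    toks = sentence.split()
    i = toks.index(target)
    return " ".join(toks[max(0, i - window):i] + toks[i + 1:i + 1 + window])

X = CountVectorizer().fit_transform(context_of(s) for s in sentences)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # the two usages fall into different clusters, e.g. [1 1 0 0]
```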
APA, Harvard, Vancouver, ISO, and other styles
